+* Toshiba TC3589x multi-purpose expander
+
+The Toshiba TC3589x series are I2C-based MFD devices which may expose the
+following built-in devices: gpio, keypad, rotator (vibrator), PWM (for
+e.g. LEDs or vibrators). The included models are:
+
+- TC35890
+- TC35892
+- TC35893
+- TC35894
+- TC35895
+- TC35896
+
+Required properties:
+ - compatible : must be "toshiba,tc35890", "toshiba,tc35892", "toshiba,tc35893",
+   "toshiba,tc35894", "toshiba,tc35895" or "toshiba,tc35896"
+ - reg : I2C address of the device
+ - interrupt-parent : specifies which IRQ controller we're connected to
+ - interrupts : the interrupt on the parent the controller is connected to
+ - interrupt-controller : marks the device node as an interrupt controller
+ - #interrupt-cells : should be <1>, the first cell is the IRQ offset on this
+   TC3589x interrupt controller.
+
+Optional nodes:
+
+- GPIO
+  This GPIO module inside the TC3589x has 24 (TC35890, TC35892) or 20
+  (other models) GPIO lines.
+ - compatible : must be "toshiba,tc3589x-gpio"
+ - interrupts : interrupt on the parent, which must be the tc3589x MFD device
+ - interrupt-controller : marks the device node as an interrupt controller
+ - #interrupt-cells : should be <2>, the first cell is the IRQ offset on this
+   TC3589x GPIO interrupt controller, the second cell is the interrupt flags
+   in accordance with <dt-bindings/interrupt-controller/irq.h>. The following
+   flags are valid:
+   - IRQ_TYPE_LEVEL_LOW
+   - IRQ_TYPE_LEVEL_HIGH
+   - IRQ_TYPE_EDGE_RISING
+   - IRQ_TYPE_EDGE_FALLING
+   - IRQ_TYPE_EDGE_BOTH
+ - gpio-controller : marks the device node as a GPIO controller
+ - #gpio-cells : should be <2>, the first cell is the GPIO offset on this
+   GPIO controller, the second cell is the flags.
+
+- Keypad
+  This keypad is the same on all variants, supporting up to 96 different
+  keys. The linux-specific properties are modeled on those already existing
+  in other input drivers.
+ - compatible : must be "toshiba,tc3589x-keypad"
+ - debounce-delay-ms : debounce interval in milliseconds
+ - keypad,num-rows : number of rows in the matrix, see
+   bindings/input/matrix-keymap.txt
+ - keypad,num-columns : number of columns in the matrix, see
+   bindings/input/matrix-keymap.txt
+ - linux,keymap: the definition can be found in
+   bindings/input/matrix-keymap.txt
+ - linux,no-autorepeat: do not enable the autorepeat feature.
+ - linux,wakeup: use any event on keypad as wakeup event.
+
+Example:
+
+tc35893@44 {
+	compatible = "toshiba,tc35893";
+	reg = <0x44>;
+	interrupt-parent = <&gpio6>;
+	interrupts = <26 IRQ_TYPE_EDGE_RISING>;
+
+	interrupt-controller;
+	#interrupt-cells = <1>;
+
+	tc3589x_gpio {
+		compatible = "toshiba,tc3589x-gpio";
+		interrupts = <0>;
+
+		interrupt-controller;
+		#interrupt-cells = <2>;
+		gpio-controller;
+		#gpio-cells = <2>;
+	};
+	tc3589x_keypad {
+		compatible = "toshiba,tc3589x-keypad";
+		interrupts = <6>;
+		debounce-delay-ms = <4>;
+		keypad,num-columns = <8>;
+		keypad,num-rows = <8>;
+		linux,no-autorepeat;
+		linux,wakeup;
+		linux,keymap = <0x0301006b
+				0x04010066
+				0x06040072
+				0x040200d7
+				0x0303006a
+				0x0205000e
+				0x0607008b
+				0x0500001c
+				0x0403000b
+				0x03040034
+				0x05020067
+				0x0305006c
+				0x040500e7
+				0x0005009e
+				0x06020073
+				0x01030039
+				0x07060069
+				0x050500d9>;
+	};
+};
···
 	width of 8 is assumed.

 - ti,nand-ecc-opt: A string setting the ECC layout to use. One of:
-		"sw"		<deprecated> use "ham1" instead
+		"sw"		1-bit Hamming ecc code via software
 		"hw"		<deprecated> use "ham1" instead
 		"hw-romcode"	<deprecated> use "ham1" instead
 		"ham1"		1-bit Hamming ecc code
···
 ADI AXI-SPDIF controller

 Required properties:
- - compatible : Must be "adi,axi-spdif-1.00.a"
+ - compatible : Must be "adi,axi-spdif-tx-1.00.a"
 - reg : Must contain SPDIF core's registers location and length
 - clocks : Pairs of phandle and specifier referencing the controller's clocks.
   The controller expects two clocks, the clock used for the AXI interface and
+8 -6
Documentation/dma-buf-sharing.txt
···
 						size_t size, int flags,
 						const char *exp_name)

-   If this succeeds, dma_buf_export allocates a dma_buf structure, and returns a
-   pointer to the same. It also associates an anonymous file with this buffer,
-   so it can be exported. On failure to allocate the dma_buf object, it returns
-   NULL.
+   If this succeeds, dma_buf_export_named allocates a dma_buf structure, and
+   returns a pointer to the same. It also associates an anonymous file with this
+   buffer, so it can be exported. On failure to allocate the dma_buf object,
+   it returns NULL.

    'exp_name' is the name of exporter - to facilitate information while
    debugging.
···
    drivers and/or processes.

    Interface:
-      int dma_buf_fd(struct dma_buf *dmabuf)
+      int dma_buf_fd(struct dma_buf *dmabuf, int flags)

    This API installs an fd for the anonymous file associated with this buffer;
    returns either 'fd', or error.
···
    "dma_buf->ops->" indirection from the users of this interface.

    In struct dma_buf_ops, unmap_dma_buf is defined as
-      void (*unmap_dma_buf)(struct dma_buf_attachment *, struct sg_table *);
+      void (*unmap_dma_buf)(struct dma_buf_attachment *,
+                            struct sg_table *,
+                            enum dma_data_direction);

    unmap_dma_buf signifies the end-of-DMA for the attachment provided. Like
    map_dma_buf, this API also must be implemented by the exporter.
+33 -3
Documentation/kdump/kdump.txt
···
 a remote system.

 Kdump and kexec are currently supported on the x86, x86_64, ppc64, ia64,
-and s390x architectures.
+s390x and arm architectures.

 When the system kernel boots, it reserves a small section of memory for
 the dump-capture kernel. This ensures that ongoing Direct Memory Access
···
 2) Or use the system kernel binary itself as dump-capture kernel and there is
    no need to build a separate dump-capture kernel. This is possible
    only with the architectures which support a relocatable kernel. As
-   of today, i386, x86_64, ppc64 and ia64 architectures support relocatable
+   of today, i386, x86_64, ppc64, ia64 and arm architectures support relocatable
    kernel.

 Building a relocatable kernel is advantageous from the point of view that
···
    kernel will be aligned to 64Mb, so if the start address is not then
    any space below the alignment point will be wasted.

+Dump-capture kernel config options (Arch Dependent, arm)
+----------------------------------------------------------
+
+- To use a relocatable kernel,
+  Enable "AUTO_ZRELADDR" support under "Boot" options:
+
+    AUTO_ZRELADDR=y

 Extended crashkernel syntax
 ===========================
···
 The syntax is:

     crashkernel=<range1>:<size1>[,<range2>:<size2>,...][@offset]
+    range=start-[end]
+
+Please note, on arm, the offset is required.
+    crashkernel=<range1>:<size1>[,<range2>:<size2>,...]@offset
     range=start-[end]

 'start' is inclusive and 'end' is exclusive.
···
    on the memory consumption of the kdump system. In general this is not
    dependent on the memory size of the production system.

+   On arm, use "crashkernel=Y@X". Note that the start address of the kernel
+   will be aligned to 128MiB (0x08000000), so if the start address is not then
+   any space below the alignment point may be overwritten by the dump-capture
+   kernel, which means the vmcore may not be as precise as expected.
+
+
 Load the Dump-capture Kernel
 ============================
···
 		- Use vmlinux or vmlinuz.gz
 For s390x:
 		- Use image or bzImage
-
+For arm:
+		- Use zImage

 If you are using a uncompressed vmlinux image then use following command
 to load dump-capture kernel.
···
    kexec -p <dump-capture-kernel-bzImage> \
    --initrd=<initrd-for-dump-capture-kernel> \
    --append="root=<root-dev> <arch-specific-options>"
+
+If you are using a compressed zImage, then use following command
+to load dump-capture kernel.
+
+   kexec --type zImage -p <dump-capture-kernel-bzImage> \
+   --initrd=<initrd-for-dump-capture-kernel> \
+   --dtb=<dtb-for-dump-capture-kernel> \
+   --append="root=<root-dev> <arch-specific-options>"
+

 Please note, that --args-linux does not need to be specified for ia64.
 It is planned to make this a no-op on that architecture, but for now
···
 For s390x:
 	"1 maxcpus=1 cgroup_disable=memory"
+
+For arm:
+	"1 maxcpus=1 reset_devices"

 Notes on loading the dump-capture kernel:

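The 128MiB alignment rule noted above can be checked before picking a crashkernel offset. A small shell sketch, with a hypothetical start address:

```shell
# Hypothetical reservation start; 0x28000000 is exactly 5 * 128MiB.
start=$((0x28000000))
align=$((128 * 1024 * 1024))    # 0x08000000
if [ $((start % align)) -eq 0 ]; then
	echo "start is 128MiB-aligned"
else
	echo "start will be rounded down to a 128MiB boundary"
fi
```

An unaligned start such as 0x2a000000 would take the second branch, and the space below the boundary may be overwritten by the dump-capture kernel.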
+169 -40
Documentation/this_cpu_ops.txt
···
 -------------------

 this_cpu operations are a way of optimizing access to per cpu
-variables associated with the *currently* executing processor through
-the use of segment registers (or a dedicated register where the cpu
-permanently stored the beginning of the per cpu area for a specific
-processor).
+variables associated with the *currently* executing processor. This is
+done through the use of segment registers (or a dedicated register where
+the cpu permanently stored the beginning of the per cpu area for a
+specific processor).

-The this_cpu operations add a per cpu variable offset to the processor
-specific percpu base and encode that operation in the instruction
+this_cpu operations add a per cpu variable offset to the processor
+specific per cpu base and encode that operation in the instruction
 operating on the per cpu variable.

-This means there are no atomicity issues between the calculation of
+This means that there are no atomicity issues between the calculation of
 the offset and the operation on the data. Therefore it is not
-necessary to disable preempt or interrupts to ensure that the
+necessary to disable preemption or interrupts to ensure that the
 processor is not changed between the calculation of the address and
 the operation on the data.

 Read-modify-write operations are of particular interest. Frequently
 processors have special lower latency instructions that can operate
-without the typical synchronization overhead but still provide some
-sort of relaxed atomicity guarantee. The x86 for example can execute
-RMV (Read Modify Write) instructions like inc/dec/cmpxchg without the
+without the typical synchronization overhead, but still provide some
+sort of relaxed atomicity guarantees. The x86, for example, can execute
+RMW (Read Modify Write) instructions like inc/dec/cmpxchg without the
 lock prefix and the associated latency penalty.

 Access to the variable without the lock prefix is not synchronized but
···
 data specific to the currently executing processor. Only the current
 processor should be accessing that variable and therefore there are no
 concurrency issues with other processors in the system.
+
+Please note that accesses by remote processors to a per cpu area are
+exceptional situations and may impact performance and/or correctness
+(remote write operations) of local RMW operations via this_cpu_*.
+
+The main use of the this_cpu operations has been to optimize counter
+operations.
+
+The following this_cpu() operations with implied preemption protection
+are defined. These operations can be used without worrying about
+preemption and interrupts.
+
+	this_cpu_add()
+	this_cpu_read(pcp)
+	this_cpu_write(pcp, val)
+	this_cpu_add(pcp, val)
+	this_cpu_and(pcp, val)
+	this_cpu_or(pcp, val)
+	this_cpu_add_return(pcp, val)
+	this_cpu_xchg(pcp, nval)
+	this_cpu_cmpxchg(pcp, oval, nval)
+	this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
+	this_cpu_sub(pcp, val)
+	this_cpu_inc(pcp)
+	this_cpu_dec(pcp)
+	this_cpu_sub_return(pcp, val)
+	this_cpu_inc_return(pcp)
+	this_cpu_dec_return(pcp)
+
+
+Inner working of this_cpu operations
+------------------------------------

 On x86 the fs: or the gs: segment registers contain the base of the
 per cpu area. It is then possible to simply use the segment override
···
 	mov ax, gs:[x]

 instead of a sequence of calculation of the address and then a fetch
-from that address which occurs with the percpu operations. Before
+from that address which occurs with the per cpu operations. Before
 this_cpu_ops such sequence also required preempt disable/enable to
 prevent the kernel from moving the thread to a different processor
 while the calculation is performed.

-The main use of the this_cpu operations has been to optimize counter
-operations.
+Consider the following this_cpu operation:

 	this_cpu_inc(x)

-results in the following single instruction (no lock prefix!)
+The above results in the following single instruction (no lock prefix!)

 	inc gs:[x]

 instead of the following operations required if there is no segment
-register.
+register:

 	int *y;
 	int cpu;
···
 	(*y)++;
 	put_cpu();

-Note that these operations can only be used on percpu data that is
+Note that these operations can only be used on per cpu data that is
 reserved for a specific processor. Without disabling preemption in the
 surrounding code this_cpu_inc() will only guarantee that one of the
-percpu counters is correctly incremented. However, there is no
+per cpu counters is correctly incremented. However, there is no
 guarantee that the OS will not move the process directly before or
 after the this_cpu instruction is executed. In general this means that
 the value of the individual counters for each processor are
···
 Per cpu variables are used for performance reasons. Bouncing cache
 lines can be avoided if multiple processors concurrently go through
 the same code paths. Since each processor has its own per cpu
-variables no concurrent cacheline updates take place. The price that
+variables no concurrent cache line updates take place. The price that
 has to be paid for this optimization is the need to add up the per cpu
-counters when the value of the counter is needed.
+counters when the value of a counter is needed.


 Special operations:
···
 of the per cpu variable that belongs to the currently executing
 processor. this_cpu_ptr avoids multiple steps that the common
 get_cpu/put_cpu sequence requires. No processor number is
-available. Instead the offset of the local per cpu area is simply
-added to the percpu offset.
+available. Instead, the offset of the local per cpu area is simply
+added to the per cpu offset.

+Note that this operation is usually used in a code segment when
+preemption has been disabled. The pointer is then used to
+access local per cpu data in a critical section. When preemption
+is re-enabled this pointer is usually no longer useful since it may
+no longer point to per cpu data of the current processor.


 Per cpu variables and offsets
 -----------------------------

-Per cpu variables have *offsets* to the beginning of the percpu
+Per cpu variables have *offsets* to the beginning of the per cpu
 area. They do not have addresses although they look like that in the
 code. Offsets cannot be directly dereferenced. The offset must be
-added to a base pointer of a percpu area of a processor in order to
+added to a base pointer of a per cpu area of a processor in order to
 form a valid address.

 Therefore the use of x or &x outside of the context of per cpu
 operations is invalid and will generally be treated like a NULL
 pointer dereference.

-In the context of per cpu operations
+	DEFINE_PER_CPU(int, x);

-	x is a per cpu variable. Most this_cpu operations take a cpu
-	variable.
+In the context of per cpu operations the above implies that x is a per
+cpu variable. Most this_cpu operations take a cpu variable.

-	&x is the *offset* a per cpu variable. this_cpu_ptr() takes
-	the offset of a per cpu variable which makes this look a bit
-	strange.
+	int __percpu *p = &x;

+&x and hence p is the *offset* of a per cpu variable. this_cpu_ptr()
+takes the offset of a per cpu variable which makes this look a bit
+strange.


 Operations on a field of a per cpu structure
···
 	struct s __percpu *ps = &p;

-	z = this_cpu_dec(ps->m);
+	this_cpu_dec(ps->m);

 	z = this_cpu_inc_return(ps->n);
···
 Variants of this_cpu ops
 -------------------------

-this_cpu ops are interrupt safe. Some architecture do not support
+this_cpu ops are interrupt safe. Some architectures do not support
 these per cpu local operations. In that case the operation must be
 replaced by code that disables interrupts, then does the operations
-that are guaranteed to be atomic and then reenable interrupts. Doing
+that are guaranteed to be atomic and then re-enable interrupts. Doing
 so is expensive. If there are other reasons why the scheduler cannot
 change the processor we are executing on then there is no reason to
-disable interrupts. For that purpose the __this_cpu operations are
-provided. For example.
+disable interrupts. For that purpose the following __this_cpu operations
+are provided.

-	__this_cpu_inc(x);
+These operations have no guarantee against concurrent interrupts or
+preemption. If a per cpu variable is not used in an interrupt context
+and the scheduler cannot preempt, then they are safe. If any interrupts
+still occur while an operation is in progress and if the interrupt too
+modifies the variable, then RMW actions can not be guaranteed to be
+safe.

-Will increment x and will not fallback to code that disables
+	__this_cpu_add()
+	__this_cpu_read(pcp)
+	__this_cpu_write(pcp, val)
+	__this_cpu_add(pcp, val)
+	__this_cpu_and(pcp, val)
+	__this_cpu_or(pcp, val)
+	__this_cpu_add_return(pcp, val)
+	__this_cpu_xchg(pcp, nval)
+	__this_cpu_cmpxchg(pcp, oval, nval)
+	__this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
+	__this_cpu_sub(pcp, val)
+	__this_cpu_inc(pcp)
+	__this_cpu_dec(pcp)
+	__this_cpu_sub_return(pcp, val)
+	__this_cpu_inc_return(pcp)
+	__this_cpu_dec_return(pcp)
+
+
+__this_cpu_inc(x), for example, will increment x and will not fall back
+to code that disables
 interrupts on platforms that cannot accomplish atomicity through
 address relocation and a Read-Modify-Write operation in the same
 instruction.
-

 &this_cpu_ptr(pp)->n vs this_cpu_ptr(&pp->n)
 --------------------------------------------

 The first operation takes the offset and forms an address and then
-adds the offset of the n field.
+adds the offset of the n field. This may result in two add
+instructions emitted by the compiler.

 The second one first adds the two offsets and then does the
 relocation. IMHO the second form looks cleaner and has an easier time
···
 this_cpu_read() and friends are used.

-Christoph Lameter, April 3rd, 2013
+Remote access to per cpu data
+------------------------------
+
+Per cpu data structures are designed to be used by one cpu exclusively.
+If you use the variables as intended, this_cpu_ops() are guaranteed to
+be "atomic" as no other CPU has access to these data structures.
+
+There are special cases where you might need to access per cpu data
+structures remotely. It is usually safe to do a remote read access
+and that is frequently done to summarize counters. Remote write access
+is something which could be problematic because this_cpu ops do not
+have lock semantics. A remote write may interfere with a this_cpu
+RMW operation.
+
+Remote write accesses to percpu data structures are highly discouraged
+unless absolutely necessary. Please consider using an IPI to wake up
+the remote CPU and perform the update to its per cpu area.
+
+To access per-cpu data structures remotely, typically the per_cpu_ptr()
+function is used:
+
+
+	DEFINE_PER_CPU(struct data, datap);
+
+	struct data *p = per_cpu_ptr(&datap, cpu);
+
+This makes it explicit that we are getting ready to access a percpu
+area remotely.
+
+You can also do the following to convert the datap offset to an address
+
+	struct data *p = this_cpu_ptr(&datap);
+
+but, passing of pointers calculated via this_cpu_ptr to other cpus is
+unusual and should be avoided.
+
+Remote accesses are typically only for reading the status of another
+cpu's per cpu data. Write accesses can cause unique problems due to the
+relaxed synchronization requirements for this_cpu operations.
+
+One example that illustrates some concerns with write operations is
+the following scenario that occurs because two per cpu variables
+share a cache-line but the relaxed synchronization is applied to
+only one process updating the cache-line.
+
+Consider the following example
+
+
+	struct test {
+		atomic_t a;
+		int b;
+	};
+
+	DEFINE_PER_CPU(struct test, onecacheline);
+
+There is some concern about what would happen if the field 'a' is updated
+remotely from one processor and the local processor would use this_cpu ops
+to update field b. Care should be taken that such simultaneous accesses to
+data within the same cache line are avoided. Also costly synchronization
+may be necessary. IPIs are generally recommended in such scenarios instead
+of a remote write to the per cpu area of another processor.
+
+Even in cases where the remote writes are rare, please bear in
+mind that a remote write will evict the cache line from the processor
+that most likely will access it. If the processor wakes up and finds a
+missing local cache line of a per cpu area, its performance and hence
+the wake up times will be affected.
+
+Christoph Lameter, August 4th, 2014
+Pranith Kumar, Aug 2nd, 2014
+13
MAINTAINERS
···
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	linux-rockchip@lists.infradead.org
 S:	Maintained
+F:	arch/arm/boot/dts/rk3*
 F:	arch/arm/mach-rockchip/
+F:	drivers/clk/rockchip/
+F:	drivers/i2c/busses/i2c-rk3x.c
 F:	drivers/*/*rockchip*
+F:	drivers/*/*/*rockchip*
+F:	sound/soc/rockchip/

 ARM/SAMSUNG ARM ARCHITECTURES
 M:	Ben Dooks <ben-linux@fluff.org>
···
 S:	Maintained
 F:	Documentation/usb/ohci.txt
 F:	drivers/usb/host/ohci*
+
+USB OVER IP DRIVER
+M:	Valentina Manea <valentina.manea.m@gmail.com>
+M:	Shuah Khan <shuah.kh@samsung.com>
+L:	linux-usb@vger.kernel.org
+S:	Maintained
+F:	drivers/usb/usbip/
+F:	tools/usb/usbip/

 USB PEGASUS DRIVER
 M:	Petko Manolov <petkan@nucleusys.com>
···
 		tot_sz -= sz;
 	}
 }
+EXPORT_SYMBOL(flush_icache_range);

 /*
  * General purpose helper to make I and D cache lines consistent.
-2
arch/arm/Kconfig
···
 config KEXEC
 	bool "Kexec system call (EXPERIMENTAL)"
 	depends on (!SMP || PM_SLEEP_SMP)
-	select CRYPTO
-	select CRYPTO_SHA256
 	help
 	  kexec is a system call that implements the ability to shutdown your
 	  current kernel, and to start another kernel. It is like a reboot
···
 #include <linux/cpumask.h>
 #include <linux/err.h>

+#include <asm/cpu.h>
 #include <asm/cputype.h>

 /*
···
 #else
 	return true;
 #endif
+}
+
+/**
+ * smp_cpuid_part() - return part id for a given cpu
+ * @cpu:	logical cpu id.
+ *
+ * Return: part id of logical cpu passed as argument.
+ */
+static inline unsigned int smp_cpuid_part(int cpu)
+{
+	struct cpuinfo_arm *cpu_info = &per_cpu(cpu_data, cpu);
+
+	return is_smp() ? cpu_info->cpuid & ARM_CPU_PART_MASK :
+			  read_cpuid_part();
 }

 /* all SMP configurations have the extended CPUID registers */
+15 -14
arch/arm/kernel/entry-header.S
···
 #endif
 	.endif
 	msr	spsr_cxsf, \rpsr
-#if defined(CONFIG_CPU_V6)
-	ldr	r0, [sp]
-	strex	r1, r2, [sp]			@ clear the exclusive monitor
-	ldmib	sp, {r1 - pc}^			@ load r1 - pc, cpsr
-#elif defined(CONFIG_CPU_32v6K)
-	clrex					@ clear the exclusive monitor
-	ldmia	sp, {r0 - pc}^			@ load r0 - pc, cpsr
-#else
-	ldmia	sp, {r0 - pc}^			@ load r0 - pc, cpsr
+#if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_32v6K)
+	@ We must avoid clrex due to Cortex-A15 erratum #830321
+	sub	r0, sp, #4			@ uninhabited address
+	strex	r1, r2, [r0]			@ clear the exclusive monitor
 #endif
+	ldmia	sp, {r0 - pc}^			@ load r0 - pc, cpsr
 	.endm

 	.macro	restore_user_regs, fast = 0, offset = 0
 	ldr	r1, [sp, #\offset + S_PSR]	@ get calling cpsr
 	ldr	lr, [sp, #\offset + S_PC]!	@ get pc
 	msr	spsr_cxsf, r1			@ save in spsr_svc
-#if defined(CONFIG_CPU_V6)
+#if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_32v6K)
+	@ We must avoid clrex due to Cortex-A15 erratum #830321
 	strex	r1, r2, [sp]			@ clear the exclusive monitor
-#elif defined(CONFIG_CPU_32v6K)
-	clrex					@ clear the exclusive monitor
 #endif
 	.if	\fast
 	ldmdb	sp, {r1 - lr}^			@ get calling r1 - lr
···
 	.endif
 	ldr	lr, [sp, #S_SP]			@ top of the stack
 	ldrd	r0, r1, [sp, #S_LR]		@ calling lr and pc
-	clrex					@ clear the exclusive monitor
+
+	@ We must avoid clrex due to Cortex-A15 erratum #830321
+	strex	r2, r1, [sp, #S_LR]		@ clear the exclusive monitor
+
 	stmdb	lr!, {r0, r1, \rpsr}		@ calling lr and rfe context
 	ldmia	sp, {r0 - r12}
 	mov	sp, lr
···
 	.endm
 #else	/* ifdef CONFIG_CPU_V7M */
 	.macro	restore_user_regs, fast = 0, offset = 0
-	clrex					@ clear the exclusive monitor
 	mov	r2, sp
 	load_user_sp_lr r2, r3, \offset + S_SP	@ calling sp, lr
 	ldr	r1, [sp, #\offset + S_PSR]	@ get calling cpsr
 	ldr	lr, [sp, #\offset + S_PC]	@ get pc
 	add	sp, sp, #\offset + S_SP
 	msr	spsr_cxsf, r1			@ save in spsr_svc
+
+	@ We must avoid clrex due to Cortex-A15 erratum #830321
+	strex	r1, r2, [sp]			@ clear the exclusive monitor
+
 	.if	\fast
 	ldmdb	sp, {r1 - r12}			@ get calling r1 - r12
 	.else
+1
arch/arm/kernel/module.c
···
 		break;

 	case R_ARM_ABS32:
+	case R_ARM_TARGET1:
 		*(u32 *)loc += sym->st_value;
 		break;

-19
arch/arm/mach-bcm/brcmstb.h
···
-/*
- * Copyright (C) 2013-2014 Broadcom Corporation
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation version 2.
- *
- * This program is distributed "as is" WITHOUT ANY WARRANTY of any
- * kind, whether express or implied; without even the implied warranty
- * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#ifndef __BRCMSTB_H__
-#define __BRCMSTB_H__
-
-void brcmstb_secondary_startup(void);
-
-#endif /* __BRCMSTB_H__ */
-33
arch/arm/mach-bcm/headsmp-brcmstb.S
···
-/*
- * SMP boot code for secondary CPUs
- * Based on arch/arm/mach-tegra/headsmp.S
- *
- * Copyright (C) 2010 NVIDIA, Inc.
- * Copyright (C) 2013-2014 Broadcom Corporation
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation version 2.
- *
- * This program is distributed "as is" WITHOUT ANY WARRANTY of any
- * kind, whether express or implied; without even the implied warranty
- * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#include <asm/assembler.h>
-#include <linux/linkage.h>
-#include <linux/init.h>
-
-	.section ".text.head", "ax"
-
-ENTRY(brcmstb_secondary_startup)
-	/*
-	 * Ensure CPU is in a sane state by disabling all IRQs and switching
-	 * into SVC mode.
-	 */
-	setmode	PSR_I_BIT | PSR_F_BIT | SVC_MODE, r0
-
-	bl	v7_invalidate_l1
-	b	secondary_startup
-ENDPROC(brcmstb_secondary_startup)
-363
arch/arm/mach-bcm/platsmp-brcmstb.c
···
-/*
- * Broadcom STB CPU SMP and hotplug support for ARM
- *
- * Copyright (C) 2013-2014 Broadcom Corporation
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation version 2.
- *
- * This program is distributed "as is" WITHOUT ANY WARRANTY of any
- * kind, whether express or implied; without even the implied warranty
- * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#include <linux/delay.h>
-#include <linux/errno.h>
-#include <linux/init.h>
-#include <linux/io.h>
-#include <linux/of_address.h>
-#include <linux/of_platform.h>
-#include <linux/printk.h>
-#include <linux/regmap.h>
-#include <linux/smp.h>
-#include <linux/mfd/syscon.h>
-#include <linux/spinlock.h>
-
-#include <asm/cacheflush.h>
-#include <asm/cp15.h>
-#include <asm/mach-types.h>
-#include <asm/smp_plat.h>
-
-#include "brcmstb.h"
-
-enum {
-	ZONE_MAN_CLKEN_MASK		= BIT(0),
-	ZONE_MAN_RESET_CNTL_MASK	= BIT(1),
-	ZONE_MAN_MEM_PWR_MASK		= BIT(4),
-	ZONE_RESERVED_1_MASK		= BIT(5),
-	ZONE_MAN_ISO_CNTL_MASK		= BIT(6),
-	ZONE_MANUAL_CONTROL_MASK	= BIT(7),
-	ZONE_PWR_DN_REQ_MASK		= BIT(9),
-	ZONE_PWR_UP_REQ_MASK		= BIT(10),
-	ZONE_BLK_RST_ASSERT_MASK	= BIT(12),
-	ZONE_PWR_OFF_STATE_MASK		= BIT(25),
-	ZONE_PWR_ON_STATE_MASK		= BIT(26),
-	ZONE_DPG_PWR_STATE_MASK		= BIT(28),
-	ZONE_MEM_PWR_STATE_MASK		= BIT(29),
-	ZONE_RESET_STATE_MASK		= BIT(31),
-	CPU0_PWR_ZONE_CTRL_REG		= 1,
-	CPU_RESET_CONFIG_REG		= 2,
-};
-
-static void __iomem *cpubiuctrl_block;
-static void __iomem *hif_cont_block;
-static u32 cpu0_pwr_zone_ctrl_reg;
-static u32 cpu_rst_cfg_reg;
-static u32 hif_cont_reg;
-
-#ifdef CONFIG_HOTPLUG_CPU
-static DEFINE_PER_CPU_ALIGNED(int, per_cpu_sw_state);
-
-static int per_cpu_sw_state_rd(u32 cpu)
-{
-	sync_cache_r(SHIFT_PERCPU_PTR(&per_cpu_sw_state, per_cpu_offset(cpu)));
-	return per_cpu(per_cpu_sw_state, cpu);
-}
-
-static void per_cpu_sw_state_wr(u32 cpu, int val)
-{
-	per_cpu(per_cpu_sw_state, cpu) = val;
-	dmb();
-	sync_cache_w(SHIFT_PERCPU_PTR(&per_cpu_sw_state, per_cpu_offset(cpu)));
-	dsb_sev();
-}
-#else
-static inline void per_cpu_sw_state_wr(u32 cpu, int val) { }
-#endif
-
-static void __iomem *pwr_ctrl_get_base(u32 cpu)
-{
-	void __iomem *base = cpubiuctrl_block + cpu0_pwr_zone_ctrl_reg;
-	base += (cpu_logical_map(cpu) * 4);
-	return base;
-}
-
-static u32 pwr_ctrl_rd(u32 cpu)
-{
-	void __iomem *base = pwr_ctrl_get_base(cpu);
-	return readl_relaxed(base);
-}
-
-static void pwr_ctrl_wr(u32 cpu, u32 val)
-{
-	void __iomem *base = pwr_ctrl_get_base(cpu);
-	writel(val, base);
-}
-
-static void cpu_rst_cfg_set(u32 cpu, int set)
-{
-	u32 val;
-	val = readl_relaxed(cpubiuctrl_block + cpu_rst_cfg_reg);
-	if (set)
-		val |= BIT(cpu_logical_map(cpu));
-	else
-		val &= ~BIT(cpu_logical_map(cpu));
-	writel_relaxed(val, cpubiuctrl_block + cpu_rst_cfg_reg);
-}
-
-static void cpu_set_boot_addr(u32 cpu, unsigned long boot_addr)
-{
-	const int reg_ofs = cpu_logical_map(cpu) * 8;
-	writel_relaxed(0, hif_cont_block + hif_cont_reg + reg_ofs);
-	writel_relaxed(boot_addr, hif_cont_block + hif_cont_reg + 4 + reg_ofs);
-}
-
-static void brcmstb_cpu_boot(u32 cpu)
-{
-	pr_info("SMP: Booting CPU%d...\n", cpu);
-
-	/*
-	 * set the reset vector to point to the secondary_startup
-	 * routine
-	 */
-	cpu_set_boot_addr(cpu, virt_to_phys(brcmstb_secondary_startup));
-
-	/* unhalt the cpu */
-	cpu_rst_cfg_set(cpu, 0);
-}
-
-static void brcmstb_cpu_power_on(u32 cpu)
-{
-	/*
-	 * The secondary cores power was cut, so we must go through
-	 * power-on initialization.
-	 */
-	u32 tmp;
-
-	pr_info("SMP: Powering up CPU%d...\n", cpu);
-
-	/* Request zone power up */
-	pwr_ctrl_wr(cpu, ZONE_PWR_UP_REQ_MASK);
-
-	/* Wait for the power up FSM to complete */
-	do {
-		tmp = pwr_ctrl_rd(cpu);
-	} while (!(tmp & ZONE_PWR_ON_STATE_MASK));
-
-	per_cpu_sw_state_wr(cpu, 1);
-}
-
-static int brcmstb_cpu_get_power_state(u32 cpu)
-{
-	int tmp = pwr_ctrl_rd(cpu);
-	return (tmp & ZONE_RESET_STATE_MASK) ? 0 : 1;
-}
-
-#ifdef CONFIG_HOTPLUG_CPU
-
-static void brcmstb_cpu_die(u32 cpu)
-{
-	v7_exit_coherency_flush(all);
-
-	/* Prevent all interrupts from reaching this CPU. */
-	arch_local_irq_disable();
-
-	/*
-	 * Final full barrier to ensure everything before this instruction has
-	 * quiesced.
-	 */
-	isb();
-	dsb();
-
-	per_cpu_sw_state_wr(cpu, 0);
-
-	/* Sit and wait to die */
-	wfi();
-
-	/* We should never get here... */
-	panic("Spurious interrupt on CPU %d received!\n", cpu);
-}
-
-static int brcmstb_cpu_kill(u32 cpu)
-{
-	u32 tmp;
-
-	pr_info("SMP: Powering down CPU%d...\n", cpu);
-
-	while (per_cpu_sw_state_rd(cpu))
-		;
-
-	/* Program zone reset */
-	pwr_ctrl_wr(cpu, ZONE_RESET_STATE_MASK | ZONE_BLK_RST_ASSERT_MASK |
-			  ZONE_PWR_DN_REQ_MASK);
-
-	/* Verify zone reset */
-	tmp = pwr_ctrl_rd(cpu);
-	if (!(tmp & ZONE_RESET_STATE_MASK))
-		pr_err("%s: Zone reset bit for CPU %d not asserted!\n",
-			__func__, cpu);
-
-	/* Wait for power down */
-	do {
-		tmp = pwr_ctrl_rd(cpu);
-	} while (!(tmp & ZONE_PWR_OFF_STATE_MASK));
-
-	/* Settle-time from Broadcom-internal DVT reference code */
-	udelay(7);
-
-	/* Assert reset on the CPU */
-	cpu_rst_cfg_set(cpu, 1);
-
-	return 1;
-}
-
-#endif /* CONFIG_HOTPLUG_CPU */
-
-static int __init setup_hifcpubiuctrl_regs(struct device_node *np)
-{
-	int rc = 0;
-	char *name;
-	struct device_node *syscon_np = NULL;
-
-	name = "syscon-cpu";
-
-	syscon_np = of_parse_phandle(np, name, 0);
-	if (!syscon_np) {
-		pr_err("can't find phandle %s\n", name);
-		rc = -EINVAL;
-		goto cleanup;
-	}
-
-	cpubiuctrl_block = of_iomap(syscon_np, 0);
-	if (!cpubiuctrl_block) {
-		pr_err("iomap failed for cpubiuctrl_block\n");
-		rc = -EINVAL;
-		goto cleanup;
-	}
-
-	rc = of_property_read_u32_index(np, name, CPU0_PWR_ZONE_CTRL_REG,
-					&cpu0_pwr_zone_ctrl_reg);
-	if (rc) {
-		pr_err("failed to read 1st entry from %s property (%d)\n", name,
-			rc);
-		rc = -EINVAL;
-		goto cleanup;
-	}
-
-	rc = of_property_read_u32_index(np, name, CPU_RESET_CONFIG_REG,
-					&cpu_rst_cfg_reg);
-	if (rc) {
-		pr_err("failed to read 2nd entry from
%s property (%d)\n", name,253253- rc);254254- rc = -EINVAL;255255- goto cleanup;256256- }257257-258258-cleanup:259259- if (syscon_np)260260- of_node_put(syscon_np);261261-262262- return rc;263263-}264264-265265-static int __init setup_hifcont_regs(struct device_node *np)266266-{267267- int rc = 0;268268- char *name;269269- struct device_node *syscon_np = NULL;270270-271271- name = "syscon-cont";272272-273273- syscon_np = of_parse_phandle(np, name, 0);274274- if (!syscon_np) {275275- pr_err("can't find phandle %s\n", name);276276- rc = -EINVAL;277277- goto cleanup;278278- }279279-280280- hif_cont_block = of_iomap(syscon_np, 0);281281- if (!hif_cont_block) {282282- pr_err("iomap failed for hif_cont_block\n");283283- rc = -EINVAL;284284- goto cleanup;285285- }286286-287287- /* offset is at top of hif_cont_block */288288- hif_cont_reg = 0;289289-290290-cleanup:291291- if (syscon_np)292292- of_node_put(syscon_np);293293-294294- return rc;295295-}296296-297297-static void __init brcmstb_cpu_ctrl_setup(unsigned int max_cpus)298298-{299299- int rc;300300- struct device_node *np;301301- char *name;302302-303303- name = "brcm,brcmstb-smpboot";304304- np = of_find_compatible_node(NULL, NULL, name);305305- if (!np) {306306- pr_err("can't find compatible node %s\n", name);307307- return;308308- }309309-310310- rc = setup_hifcpubiuctrl_regs(np);311311- if (rc)312312- return;313313-314314- rc = setup_hifcont_regs(np);315315- if (rc)316316- return;317317-}318318-319319-static DEFINE_SPINLOCK(boot_lock);320320-321321-static void brcmstb_secondary_init(unsigned int cpu)322322-{323323- /*324324- * Synchronise with the boot thread.325325- */326326- spin_lock(&boot_lock);327327- spin_unlock(&boot_lock);328328-}329329-330330-static int brcmstb_boot_secondary(unsigned int cpu, struct task_struct *idle)331331-{332332- /*333333- * set synchronisation state between this boot processor334334- * and the secondary one335335- */336336- spin_lock(&boot_lock);337337-338338- /* Bring up power to 
the core if necessary */339339- if (brcmstb_cpu_get_power_state(cpu) == 0)340340- brcmstb_cpu_power_on(cpu);341341-342342- brcmstb_cpu_boot(cpu);343343-344344- /*345345- * now the secondary core is starting up let it run its346346- * calibrations, then wait for it to finish347347- */348348- spin_unlock(&boot_lock);349349-350350- return 0;351351-}352352-353353-static struct smp_operations brcmstb_smp_ops __initdata = {354354- .smp_prepare_cpus = brcmstb_cpu_ctrl_setup,355355- .smp_secondary_init = brcmstb_secondary_init,356356- .smp_boot_secondary = brcmstb_boot_secondary,357357-#ifdef CONFIG_HOTPLUG_CPU358358- .cpu_kill = brcmstb_cpu_kill,359359- .cpu_die = brcmstb_cpu_die,360360-#endif361361-};362362-363363-CPU_METHOD_OF_DECLARE(brcmstb_smp, "brcm,brahma-b15", &brcmstb_smp_ops);
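The power-up handshake removed above follows a common request-then-poll pattern: write a request bit to the power-zone register, then spin until the hardware FSM reports the power-on state. A minimal sketch, with a plain variable standing in for the memory-mapped register and a fake read that models the FSM completing:

```c
#include <assert.h>
#include <stdint.h>

#define ZONE_PWR_UP_REQ_MASK	(1u << 10)
#define ZONE_PWR_ON_STATE_MASK	(1u << 26)

static uint32_t fake_pwr_zone_reg;	/* stand-in for the MMIO register */

/* Model the hardware: once a power-up was requested, a subsequent read
 * observes the FSM having reached the power-on state. */
static uint32_t pwr_ctrl_rd(void)
{
	if (fake_pwr_zone_reg & ZONE_PWR_UP_REQ_MASK)
		fake_pwr_zone_reg |= ZONE_PWR_ON_STATE_MASK;
	return fake_pwr_zone_reg;
}

static void cpu_power_on(void)
{
	uint32_t tmp;

	fake_pwr_zone_reg = ZONE_PWR_UP_REQ_MASK;	/* request zone power up */

	/* Wait for the power up FSM to complete */
	do {
		tmp = pwr_ctrl_rd();
	} while (!(tmp & ZONE_PWR_ON_STATE_MASK));
}
```

The mask values mirror the enum in the removed file; everything else (`fake_pwr_zone_reg`, the self-completing read) is illustrative only.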
-1
arch/arm/mach-exynos/mcpm-exynos.c
···
 	"mcr	p15, 0, r0, c1, c0, 0	@ set SCTLR\n\t" \
 	"isb\n\t"\
 	"bl	v7_flush_dcache_"__stringify(level)"\n\t" \
-	"clrex\n\t"\
 	"mrc	p15, 0, r0, c1, c0, 1	@ get ACTLR\n\t" \
 	"bic	r0, r0, #(1 << 6)	@ disable local coherency\n\t" \
 	/* Dummy Load of a device register to avoid Erratum 799270 */ \
···
 
 static int ve_init_opp_table(struct device *cpu_dev)
 {
-	int cluster = topology_physical_package_id(cpu_dev->id);
-	int idx, ret = 0, max_opp = info->num_opps[cluster];
-	struct ve_spc_opp *opps = info->opps[cluster];
+	int cluster;
+	int idx, ret = 0, max_opp;
+	struct ve_spc_opp *opps;
+
+	cluster = topology_physical_package_id(cpu_dev->id);
+	cluster = cluster < 0 ? 0 : cluster;
+
+	max_opp = info->num_opps[cluster];
+	opps = info->opps[cluster];
 
 	for (idx = 0; idx < max_opp; idx++, opps++) {
 		ret = dev_pm_opp_add(cpu_dev, opps->freq * 1000, opps->u_volt);
···
 
 	spc->hw.init = &init;
 	spc->cluster = topology_physical_package_id(cpu_dev->id);
+
+	spc->cluster = spc->cluster < 0 ? 0 : spc->cluster;
 
 	init.name = dev_name(cpu_dev);
 	init.ops = &clk_spc_ops;
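The change above guards against `topology_physical_package_id()` returning a negative value (no topology information) before that value is used as an index into per-cluster arrays. A minimal sketch of the clamp, where `safe_cluster_id` is a hypothetical helper:

```c
#include <assert.h>

/* Clamp a possibly-negative (unknown) cluster id to 0 before it is
 * used as an array index, mirroring the `cluster < 0 ? 0 : cluster`
 * fix in the patch. */
static int safe_cluster_id(int raw_id)
{
	return raw_id < 0 ? 0 : raw_id;
}
```

Without the clamp, a -1 id would index one element before the array, which is undefined behavior in C.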
-6
arch/arm/mm/abort-ev6.S
···
 	 */
 	.align	5
 ENTRY(v6_early_abort)
-#ifdef CONFIG_CPU_V6
-	sub	r1, sp, #4		@ Get unused stack location
-	strex	r0, r1, [r1]		@ Clear the exclusive monitor
-#elif defined(CONFIG_CPU_32v6K)
-	clrex
-#endif
 	mrc	p15, 0, r1, c5, c0, 0	@ get FSR
 	mrc	p15, 0, r0, c6, c0, 0	@ get FAR
 /*
-6
arch/arm/mm/abort-ev7.S
···
 	 */
 	.align	5
 ENTRY(v7_early_abort)
-	/*
-	 * The effect of data aborts on on the exclusive access monitor are
-	 * UNPREDICTABLE. Do a CLREX to clear the state
-	 */
-	clrex
-
 	mrc	p15, 0, r1, c5, c0, 0	@ get FSR
 	mrc	p15, 0, r0, c6, c0, 0	@ get FAR
 
+1
arch/hexagon/mm/cache.c
···
 	);
 	local_irq_restore(flags);
 }
+EXPORT_SYMBOL(flush_icache_range);
 
 void hexagon_clean_dcache_range(unsigned long start, unsigned long end)
 {
-2
arch/ia64/Kconfig
···
 config KEXEC
 	bool "kexec system call"
 	depends on !IA64_HP_SIM && (!SMP || HOTPLUG_CPU)
-	select CRYPTO
-	select CRYPTO_SHA256
 	help
 	  kexec is a system call that implements the ability to shutdown your
 	  current kernel, and to start another kernel.  It is like a reboot
-2
arch/m68k/Kconfig
···
 config KEXEC
 	bool "kexec system call"
 	depends on M68KCLASSIC
-	select CRYPTO
-	select CRYPTO_SHA256
 	help
 	  kexec is a system call that implements the ability to shutdown your
 	  current kernel, and to start another kernel.  It is like a reboot
-2
arch/mips/Kconfig
···
 
 config KEXEC
 	bool "Kexec system call"
-	select CRYPTO
-	select CRYPTO_SHA256
 	help
 	  kexec is a system call that implements the ability to shutdown your
 	  current kernel, and to start another kernel.  It is like a reboot
-2
arch/powerpc/Kconfig
···
 config KEXEC
 	bool "kexec system call"
 	depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP))
-	select CRYPTO
-	select CRYPTO_SHA256
 	help
 	  kexec is a system call that implements the ability to shutdown your
 	  current kernel, and to start another kernel.  It is like a reboot
···
 #define __NR_sched_setattr	345
 #define __NR_sched_getattr	346
 #define __NR_renameat2		347
-#define NR_syscalls 348
+#define __NR_seccomp		348
+#define __NR_getrandom		349
+#define __NR_memfd_create	350
+#define NR_syscalls 351
 
 /* 
  * There are some system calls that are not present on 64 bit, some
···
 	S390_lowcore.program_new_psw.addr =
 		PSW_ADDR_AMODE | (unsigned long) s390_base_pgm_handler;
 
+	/*
+	 * Clear subchannel ID and number to signal new kernel that no CCW or
+	 * SCSI IPL has been done (for kexec and kdump)
+	 */
+	S390_lowcore.subchannel_id = 0;
+	S390_lowcore.subchannel_nr = 0;
+
 	/* Store status at absolute zero */
 	store_status();
 
+19
arch/s390/kernel/setup.c
···
 #include <linux/stddef.h>
 #include <linux/unistd.h>
 #include <linux/ptrace.h>
+#include <linux/random.h>
 #include <linux/user.h>
 #include <linux/tty.h>
 #include <linux/ioport.h>
···
 #include <asm/diag.h>
 #include <asm/os_info.h>
 #include <asm/sclp.h>
+#include <asm/sysinfo.h>
 #include "entry.h"
 
 /*
···
 #endif
 
 	get_cpu_id(&cpu_id);
+	add_device_randomness(&cpu_id, sizeof(cpu_id));
 	switch (cpu_id.machine) {
 	case 0x9672:
 #if !defined(CONFIG_64BIT)
···
 		strcpy(elf_platform, "zEC12");
 		break;
 	}
+}
+
+/*
+ * Add system information as device randomness
+ */
+static void __init setup_randomness(void)
+{
+	struct sysinfo_3_2_2 *vmms;
+
+	vmms = (struct sysinfo_3_2_2 *) alloc_page(GFP_KERNEL);
+	if (vmms && stsi(vmms, 3, 2, 2) == 0 && vmms->count)
+		add_device_randomness(&vmms, vmms->count);
+	free_page((unsigned long) vmms);
 }
 
 /*
···
 
 	/* Setup zfcpdump support */
 	setup_zfcpdump();
+
+	/* Add system specific data to the random pool */
+	setup_randomness();
 }
 
 #ifdef CONFIG_32BIT
···
 config KEXEC
 	bool "kexec system call (EXPERIMENTAL)"
 	depends on SUPERH32 && MMU
-	select CRYPTO
-	select CRYPTO_SHA256
 	help
 	  kexec is a system call that implements the ability to shutdown your
 	  current kernel, and to start another kernel.  It is like a reboot
···
 
 config KEXEC
 	bool "kexec system call"
-	select CRYPTO
-	select CRYPTO_SHA256
 	---help---
 	  kexec is a system call that implements the ability to shutdown your
 	  current kernel, and to start another kernel.  It is like a reboot
+1
arch/tile/kernel/smp.c
···
 		preempt_enable();
 	}
 }
+EXPORT_SYMBOL(flush_icache_range);
 
 
 /* Called when smp_send_reschedule() triggers IRQ_RESCHEDULE. */
···
 
 config KEXEC
 	bool "kexec system call"
-	select BUILD_BIN2C
-	select CRYPTO
-	select CRYPTO_SHA256
 	---help---
 	  kexec is a system call that implements the ability to shutdown your
 	  current kernel, and to start another kernel.  It is like a reboot
···
 	  interface is strongly in flux, so no good recommendation can be
 	  made.
 
+config KEXEC_FILE
+	bool "kexec file based system call"
+	select BUILD_BIN2C
+	depends on KEXEC
+	depends on X86_64
+	depends on CRYPTO=y
+	depends on CRYPTO_SHA256=y
+	---help---
+	  This is new version of kexec system call. This system call is
+	  file based and takes file descriptors as system call argument
+	  for kernel and initramfs as opposed to list of segments as
+	  accepted by previous system call.
+
 config KEXEC_VERIFY_SIG
 	bool "Verify kernel signature during kexec_file_load() syscall"
-	depends on KEXEC
+	depends on KEXEC_FILE
 	---help---
 	  This option makes kernel signature verification mandatory for
 	  kexec_file_load() syscall. If kernel is signature can not be
+2-4
arch/x86/Makefile
···
 	$(Q)$(MAKE) $(build)=arch/x86/syscalls all
 
 archprepare:
-ifeq ($(CONFIG_KEXEC),y)
-# Build only for 64bit. No loaders for 32bit yet.
- ifeq ($(CONFIG_X86_64),y)
+ifeq ($(CONFIG_KEXEC_FILE),y)
 	$(Q)$(MAKE) $(build)=arch/x86/purgatory arch/x86/purgatory/kexec-purgatory.c
- endif
 endif
 
 ###
···
 	$(Q)rm -rf $(objtree)/arch/x86_64
 	$(Q)$(MAKE) $(clean)=$(boot)
 	$(Q)$(MAKE) $(clean)=arch/x86/tools
+	$(Q)$(MAKE) $(clean)=arch/x86/purgatory
 
 PHONY += kvmconfig
 kvmconfig:
+2
arch/x86/include/asm/io_apic.h
···
 
 extern void io_apic_eoi(unsigned int apic, unsigned int vector);
 
+extern bool mp_should_keep_irq(struct device *dev);
+
 #else  /* !CONFIG_X86_IO_APIC */
 
 #define io_apic_assign_pci_irqs 0
+7-2
arch/x86/include/asm/pgtable.h
···
 
 static inline int pte_special(pte_t pte)
 {
-	return (pte_flags(pte) & (_PAGE_PRESENT|_PAGE_SPECIAL)) ==
-		 (_PAGE_PRESENT|_PAGE_SPECIAL);
+	/*
+	 * See CONFIG_NUMA_BALANCING pte_numa in include/asm-generic/pgtable.h.
+	 * On x86 we have _PAGE_BIT_NUMA == _PAGE_BIT_GLOBAL+1 ==
+	 * __PAGE_BIT_SOFTW1 == _PAGE_BIT_SPECIAL.
+	 */
+	return (pte_flags(pte) & _PAGE_SPECIAL) &&
+		(pte_flags(pte) & (_PAGE_PRESENT|_PAGE_PROTNONE));
 }
 
 static inline unsigned long pte_pfn(pte_t pte)
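The predicate change above matters for special mappings whose present bit has been cleared (e.g. prot-none NUMA-hinting ptes): the old test required PRESENT, the new one accepts PRESENT or PROTNONE alongside SPECIAL. A toy model with illustrative flag values (the real `_PAGE_*` constants live in the x86 headers):

```c
#include <assert.h>
#include <stdbool.h>

#define PAGE_PRESENT	(1u << 0)	/* illustrative bit positions */
#define PAGE_PROTNONE	(1u << 8)
#define PAGE_SPECIAL	(1u << 9)

/* Old predicate: special only if PRESENT and SPECIAL are both set. */
static bool pte_special_old(unsigned int flags)
{
	return (flags & (PAGE_PRESENT | PAGE_SPECIAL)) ==
	       (PAGE_PRESENT | PAGE_SPECIAL);
}

/* New predicate: SPECIAL plus either PRESENT or PROTNONE, so a
 * prot-none special mapping is still recognized as special. */
static bool pte_special_new(unsigned int flags)
{
	return (flags & PAGE_SPECIAL) &&
	       (flags & (PAGE_PRESENT | PAGE_PROTNONE));
}
```

The interesting case is `PAGE_SPECIAL | PAGE_PROTNONE`: the old test says "not special", the new one says "special".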
···
 	}
 
 	if (flags & IOAPIC_MAP_ALLOC) {
+		/* special handling for legacy IRQs */
+		if (irq < nr_legacy_irqs() && info->count == 1 &&
+		    mp_irqdomain_map(domain, irq, pin) != 0)
+			irq = -1;
+
 		if (irq > 0)
 			info->count++;
 		else if (info->count == 0)
···
 			info->polarity = 1;
 		}
 		info->node = NUMA_NO_NODE;
-		info->set = 1;
+
+		/*
+		 * setup_IO_APIC_irqs() programs all legacy IRQs with default
+		 * trigger and polarity attributes. Don't set the flag for that
+		 * case so the first legacy IRQ user could reprogram the pin
+		 * with real trigger and polarity attributes.
+		 */
+		if (virq >= nr_legacy_irqs() || info->count)
+			info->set = 1;
 	}
 	set_io_apic_irq_attr(&attr, ioapic, hwirq, info->trigger,
 			     info->polarity);
···
 	mutex_unlock(&ioapic_mutex);
 
 	return ret;
+}
+
+bool mp_should_keep_irq(struct device *dev)
+{
+	if (dev->power.is_prepared)
+		return true;
+#ifdef CONFIG_PM_RUNTIME
+	if (dev->power.runtime_status == RPM_SUSPENDING)
+		return true;
+#endif
+
+	return false;
 }
 
 /* Enable IOAPIC early just for system timer */
+2-4
arch/x86/kernel/crash.c
···
 	crash_save_cpu(regs, safe_smp_processor_id());
 }
 
-#ifdef CONFIG_X86_64
-
+#ifdef CONFIG_KEXEC_FILE
 static int get_nr_ram_ranges_callback(unsigned long start_pfn,
 				unsigned long nr_pfn, void *arg)
 {
···
 
 	return ret;
 }
-
-#endif /* CONFIG_X86_64 */
+#endif /* CONFIG_KEXEC_FILE */
+1-1
arch/x86/kernel/irqinit.c
···
 		set_intr_gate(i, interrupt[i - FIRST_EXTERNAL_VECTOR]);
 	}
 
-	if (!acpi_ioapic && !of_ioapic)
+	if (!acpi_ioapic && !of_ioapic && nr_legacy_irqs())
 		setup_irq(2, &irq2);
 
 #ifdef CONFIG_X86_32
+11
arch/x86/kernel/machine_kexec_64.c
···
 #include <asm/debugreg.h>
 #include <asm/kexec-bzimage64.h>
 
+#ifdef CONFIG_KEXEC_FILE
 static struct kexec_file_ops *kexec_file_loaders[] = {
 		&kexec_bzImage64_ops,
 };
+#endif
 
 static void free_transition_pgtable(struct kimage *image)
 {
···
 	);
 }
 
+#ifdef CONFIG_KEXEC_FILE
 /* Update purgatory as needed after various image segments have been prepared */
 static int arch_update_purgatory(struct kimage *image)
 {
···
 
 	return ret;
 }
+#else /* !CONFIG_KEXEC_FILE */
+static inline int arch_update_purgatory(struct kimage *image)
+{
+	return 0;
+}
+#endif /* CONFIG_KEXEC_FILE */
 
 int machine_kexec_prepare(struct kimage *image)
 {
···
 
 /* arch-dependent functionality related to kexec file-based syscall */
 
+#ifdef CONFIG_KEXEC_FILE
 int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
 				  unsigned long buf_len)
 {
···
 		(int)ELF64_R_TYPE(rel[i].r_info), value);
 	return -ENOEXEC;
 }
+#endif /* CONFIG_KEXEC_FILE */
+2
arch/x86/kernel/time.c
···
 
 void __init setup_default_timer_irq(void)
 {
+	if (!nr_legacy_irqs())
+		return;
 	setup_irq(0, &irq0);
 }
 
+1-1
arch/x86/pci/intel_mid_pci.c
···
 
 static void intel_mid_pci_irq_disable(struct pci_dev *dev)
 {
-	if (!dev->dev.power.is_prepared && dev->irq > 0)
+	if (!mp_should_keep_irq(&dev->dev) && dev->irq > 0)
 		mp_unmap_irq(dev->irq);
 }
 
···
 # sure how to relocate those. Like kexec-tools, use custom flags.
 
 KBUILD_CFLAGS := -fno-strict-aliasing -Wall -Wstrict-prototypes -fno-zero-initialized-in-bss -fno-builtin -ffreestanding -c -MD -Os -mcmodel=large
+KBUILD_CFLAGS += -m$(BITS)
 
 $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
 		$(call if_changed,ld)
···
 	$(call if_changed,bin2c)
 
 
-# No loaders for 32bits yet.
-ifeq ($(CONFIG_X86_64),y)
-	obj-$(CONFIG_KEXEC) += kexec-purgatory.o
-endif
+obj-$(CONFIG_KEXEC_FILE)	+= kexec-purgatory.o
+75-17
arch/xtensa/Kconfig
···
 config XTENSA
 	def_bool y
 	select ARCH_WANT_FRAME_POINTERS
-	select HAVE_IDE
-	select GENERIC_ATOMIC64
-	select GENERIC_CLOCKEVENTS
-	select VIRT_TO_BUS
-	select GENERIC_IRQ_SHOW
-	select GENERIC_SCHED_CLOCK
-	select MODULES_USE_ELF_RELA
-	select GENERIC_PCI_IOMAP
 	select ARCH_WANT_IPC_PARSE_VERSION
 	select ARCH_WANT_OPTIONAL_GPIOLIB
 	select BUILDTIME_EXTABLE_SORT
 	select CLONE_BACKWARDS
-	select IRQ_DOMAIN
-	select HAVE_OPROFILE
+	select COMMON_CLK
+	select GENERIC_ATOMIC64
+	select GENERIC_CLOCKEVENTS
+	select GENERIC_IRQ_SHOW
+	select GENERIC_PCI_IOMAP
+	select GENERIC_SCHED_CLOCK
 	select HAVE_FUNCTION_TRACER
 	select HAVE_IRQ_TIME_ACCOUNTING
+	select HAVE_OPROFILE
 	select HAVE_PERF_EVENTS
-	select COMMON_CLK
+	select IRQ_DOMAIN
+	select MODULES_USE_ELF_RELA
+	select VIRT_TO_BUS
 	help
 	  Xtensa processors are 32-bit RISC machines designed by Tensilica
 	  primarily for embedded systems. These processors are both
···
 	def_bool y
 
 config MMU
-	def_bool n
+	bool
+	default n if !XTENSA_VARIANT_CUSTOM
+	default XTENSA_VARIANT_MMU if XTENSA_VARIANT_CUSTOM
 
 config VARIANT_IRQ_SWITCH
 	def_bool n
···
 	select VARIANT_IRQ_SWITCH
 	select ARCH_REQUIRE_GPIOLIB
 	select XTENSA_CALIBRATE_CCOUNT
+
+config XTENSA_VARIANT_CUSTOM
+	bool "Custom Xtensa processor configuration"
+	select MAY_HAVE_SMP
+	select HAVE_XTENSA_GPIO32
+	help
+	  Select this variant to use a custom Xtensa processor configuration.
+	  You will be prompted for a processor variant CORENAME.
 endchoice
+
+config XTENSA_VARIANT_CUSTOM_NAME
+	string "Xtensa Processor Custom Core Variant Name"
+	depends on XTENSA_VARIANT_CUSTOM
+	help
+	  Provide the name of a custom Xtensa processor variant.
+	  This CORENAME selects arch/xtensa/variant/CORENAME.
+	  Dont forget you have to select MMU if you have one.
+
+config XTENSA_VARIANT_NAME
+	string
+	default "dc232b"			if XTENSA_VARIANT_DC232B
+	default "dc233c"			if XTENSA_VARIANT_DC233C
+	default "fsf"				if XTENSA_VARIANT_FSF
+	default "s6000"				if XTENSA_VARIANT_S6000
+	default XTENSA_VARIANT_CUSTOM_NAME	if XTENSA_VARIANT_CUSTOM
+
+config XTENSA_VARIANT_MMU
+	bool "Core variant has a Full MMU (TLB, Pages, Protection, etc)"
+	depends on XTENSA_VARIANT_CUSTOM
+	default y
+	help
+	  Build a Conventional Kernel with full MMU support,
+	  ie: it supports a TLB with auto-loading, page protection.
 
 config XTENSA_UNALIGNED_USER
 	bool "Unaligned memory access in use space"
···
 
 	  Say N if you want to disable CPU hotplug.
 
-config MATH_EMULATION
-	bool "Math emulation"
-	help
-	  Can we use information of configuration file?
-
 config INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX
 	bool "Initialize Xtensa MMU inside the Linux kernel code"
+	depends on MMU
 	default y
 	help
 	  Earlier version initialized the MMU in the exception vector
···
 
 config HIGHMEM
 	bool "High Memory Support"
+	depends on MMU
 	help
 	  Linux can use the full amount of RAM in the system by
 	  default. However, the default MMUv2 setup only maps the
···
 	  N here.
 
 	  If unsure, say Y.
+
+config FAST_SYSCALL_XTENSA
+	bool "Enable fast atomic syscalls"
+	default n
+	help
+	  fast_syscall_xtensa is a syscall that can make atomic operations
+	  on UP kernel when processor has no s32c1i support.
+
+	  This syscall is deprecated. It may have issues when called with
+	  invalid arguments. It is provided only for backwards compatibility.
+	  Only enable it if your userspace software requires it.
+
+	  If unsure, say N.
+
+config FAST_SYSCALL_SPILL_REGISTERS
+	bool "Enable spill registers syscall"
+	default n
+	help
+	  fast_syscall_spill_registers is a syscall that spills all active
+	  register windows of a calling userspace task onto its stack.
+
+	  This syscall is deprecated. It may have issues when called with
+	  invalid arguments. It is provided only for backwards compatibility.
+	  Only enable it if your userspace software requires it.
+
+	  If unsure, say N.
 
 endmenu
 
···
 
 config XTENSA_PLATFORM_XT2000
 	bool "XT2000"
+	select HAVE_IDE
 	help
 	  XT2000 is the name of Tensilica's feature-rich emulation platform.
 	  This hardware is capable of running a full Linux distribution.
 
 config XTENSA_PLATFORM_S6105
 	bool "S6105"
+	select HAVE_IDE
 	select SERIAL_CONSOLE
 	select NO_IOPORT_MAP
 
+2-5
arch/xtensa/Makefile
···
 # for more details.
 #
 # Copyright (C) 2001 - 2005  Tensilica Inc.
+# Copyright (C) 2014 Cadence Design Systems Inc.
 #
 # This file is included by the global makefile so that you can add your own
 # architecture-specific flags and dependencies. Remember to do have actions
···
 # Core configuration.
 # (Use VAR=<xtensa_config> to use another default compiler.)
 
-variant-$(CONFIG_XTENSA_VARIANT_FSF)		:= fsf
-variant-$(CONFIG_XTENSA_VARIANT_DC232B)		:= dc232b
-variant-$(CONFIG_XTENSA_VARIANT_DC233C)		:= dc233c
-variant-$(CONFIG_XTENSA_VARIANT_S6000)		:= s6000
-variant-$(CONFIG_XTENSA_VARIANT_LINUX_CUSTOM)	:= custom
+variant-y := $(patsubst "%",%,$(CONFIG_XTENSA_VARIANT_NAME))
 
 VARIANT = $(variant-y)
 export VARIANT
···
 CONFIG_MMU=y
 # CONFIG_XTENSA_UNALIGNED_USER is not set
 # CONFIG_PREEMPT is not set
-# CONFIG_MATH_EMULATION is not set
 # CONFIG_HIGHMEM is not set
 
 #
+1-2
arch/xtensa/configs/iss_defconfig
···
 # CONFIG_XTENSA_VARIANT_S6000 is not set
 # CONFIG_XTENSA_UNALIGNED_USER is not set
 # CONFIG_PREEMPT is not set
-# CONFIG_MATH_EMULATION is not set
 CONFIG_XTENSA_CALIBRATE_CCOUNT=y
 CONFIG_SERIAL_CONSOLE=y
 CONFIG_XTENSA_ISS_NETWORK=y
···
 # EEPROM support
 #
 # CONFIG_EEPROM_93CX6 is not set
-CONFIG_HAVE_IDE=y
+# CONFIG_HAVE_IDE is not set
 # CONFIG_IDE is not set
 
 #
-1
arch/xtensa/configs/s6105_defconfig
···
 CONFIG_XTENSA_VARIANT_S6000=y
 # CONFIG_XTENSA_UNALIGNED_USER is not set
 CONFIG_PREEMPT=y
-# CONFIG_MATH_EMULATION is not set
 # CONFIG_HIGHMEM is not set
 CONFIG_XTENSA_CALIBRATE_CCOUNT=y
 CONFIG_SERIAL_CONSOLE=y
···
  * Here we define all the compile-time 'special' virtual
  * addresses. The point is to have a constant address at
  * compile time, but to set the physical address only
- * in the boot process. We allocate these special addresses
- * from the end of the consistent memory region backwards.
+ * in the boot process. We allocate these special addresses
+ * from the start of the consistent memory region upwards.
  * Also this lets us do fail-safe vmalloc(), we
  * can guarantee that these special addresses and
  * vmalloc()-ed addresses never overlap.
···
 #ifdef CONFIG_HIGHMEM
 	/* reserved pte's for temporary kernel mappings */
 	FIX_KMAP_BEGIN,
-	FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1,
+	FIX_KMAP_END = FIX_KMAP_BEGIN +
+		(KM_TYPE_NR * NR_CPUS * DCACHE_N_COLORS) - 1,
 #endif
 	__end_of_fixed_addresses
 };
···
 #define FIXADDR_SIZE	(__end_of_fixed_addresses << PAGE_SHIFT)
 #define FIXADDR_START	((FIXADDR_TOP - FIXADDR_SIZE) & PMD_MASK)
 
-#include <asm-generic/fixmap.h>
+#define __fix_to_virt(x)	(FIXADDR_START + ((x) << PAGE_SHIFT))
+#define __virt_to_fix(x)	(((x) - FIXADDR_START) >> PAGE_SHIFT)
+
+#ifndef __ASSEMBLY__
+/*
+ * 'index to address' translation. If anyone tries to use the idx
+ * directly without translation, we catch the bug with a NULL-deference
+ * kernel oops. Illegal ranges of incoming indices are caught too.
+ */
+static __always_inline unsigned long fix_to_virt(const unsigned int idx)
+{
+	BUILD_BUG_ON(idx >= __end_of_fixed_addresses);
+	return __fix_to_virt(idx);
+}
+
+static inline unsigned long virt_to_fix(const unsigned long vaddr)
+{
+	BUG_ON(vaddr >= FIXADDR_TOP || vaddr < FIXADDR_START);
+	return __virt_to_fix(vaddr);
+}
+
+#endif
 
 #define kmap_get_fixmap_pte(vaddr) \
 	pte_offset_kernel( \
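The open-coded fixmap translation above is a pair of inverse linear maps between an index and a page-aligned virtual address. A standalone sketch with illustrative constants (the real `FIXADDR_START` is computed from the xtensa layout, not this value):

```c
#include <assert.h>

#define PAGE_SHIFT	12
#define FIXADDR_START	0xd0000000UL	/* illustrative base address */

/* Same shape as the macros the patch adds: index -> address and back. */
#define __fix_to_virt(x)	(FIXADDR_START + ((unsigned long)(x) << PAGE_SHIFT))
#define __virt_to_fix(x)	(((x) - FIXADDR_START) >> PAGE_SHIFT)
```

Because one map is the exact inverse of the other, `__virt_to_fix(__fix_to_virt(i)) == i` for any in-range index, which is what the `BUILD_BUG_ON`/`BUG_ON` range checks in the inline helpers rely on.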
···
  */
 void blk_mq_freeze_queue(struct request_queue *q)
 {
+	bool freeze;
+
 	spin_lock_irq(q->queue_lock);
-	q->mq_freeze_depth++;
+	freeze = !q->mq_freeze_depth++;
 	spin_unlock_irq(q->queue_lock);
 
-	percpu_ref_kill(&q->mq_usage_counter);
-	blk_mq_run_queues(q, false);
+	if (freeze) {
+		percpu_ref_kill(&q->mq_usage_counter);
+		blk_mq_run_queues(q, false);
+	}
 	wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->mq_usage_counter));
 }
 
 static void blk_mq_unfreeze_queue(struct request_queue *q)
 {
-	bool wake = false;
+	bool wake;
 
 	spin_lock_irq(q->queue_lock);
 	wake = !--q->mq_freeze_depth;
···
 	rq->special = NULL;
 	/* tag was already set */
 	rq->errors = 0;
+
+	rq->cmd = rq->__cmd;
 
 	rq->extra_len = 0;
 	rq->sense_len = 0;
···
 	blk_account_io_start(rq, 1);
 }
 
+static inline bool hctx_allow_merges(struct blk_mq_hw_ctx *hctx)
+{
+	return (hctx->flags & BLK_MQ_F_SHOULD_MERGE) &&
+		!blk_queue_nomerges(hctx->queue);
+}
+
 static inline bool blk_mq_merge_queue_io(struct blk_mq_hw_ctx *hctx,
 					 struct blk_mq_ctx *ctx,
 					 struct request *rq, struct bio *bio)
 {
-	struct request_queue *q = hctx->queue;
-
-	if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE)) {
+	if (!hctx_allow_merges(hctx)) {
 		blk_mq_bio_to_request(rq, bio);
 		spin_lock(&ctx->lock);
 insert_rq:
···
 		spin_unlock(&ctx->lock);
 		return false;
 	} else {
+		struct request_queue *q = hctx->queue;
+
 		spin_lock(&ctx->lock);
 		if (!blk_mq_attempt_merge(q, ctx, bio)) {
 			blk_mq_bio_to_request(rq, bio);
···
 		hctx->tags = set->tags[i];
 
 		/*
-		 * Allocate space for all possible cpus to avoid allocation in
+		 * Allocate space for all possible cpus to avoid allocation at
 		 * runtime
 		 */
 		hctx->ctxs = kmalloc_node(nr_cpu_ids * sizeof(void *),
···
 
 	queue_for_each_hw_ctx(q, hctx, i) {
 		/*
-		 * If not software queues are mapped to this hardware queue,
-		 * disable it and free the request entries
+		 * If no software queues are mapped to this hardware queue,
+		 * disable it and free the request entries.
 		 */
 		if (!hctx->nr_ctx) {
 			struct blk_mq_tag_set *set = q->tag_set;
···
 {
 	struct blk_mq_tag_set *set = q->tag_set;
 
-	blk_mq_freeze_queue(q);
-
 	mutex_lock(&set->tag_list_lock);
 	list_del_init(&q->tag_set_list);
 	blk_mq_update_tag_set_depth(set);
 	mutex_unlock(&set->tag_list_lock);
-
-	blk_mq_unfreeze_queue(q);
 }
 
 static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
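The first hunk above makes queue freezing properly reference-counted: only the 0 to 1 transition of `mq_freeze_depth` kills the usage counter, and only the 1 to 0 transition wakes waiters. A toy model of that counting logic (the struct and counters are illustrative, standing in for `percpu_ref_kill()` and the wake-up):

```c
#include <assert.h>
#include <stdbool.h>

struct toy_queue {
	int mq_freeze_depth;
	int kill_count;		/* times the usage counter was "killed" */
	int wake_count;		/* times waiters were "woken" */
};

static void toy_freeze_queue(struct toy_queue *q)
{
	/* Only the first freezer (depth 0 -> 1) kills the counter. */
	bool freeze = !q->mq_freeze_depth++;

	if (freeze)
		q->kill_count++;
}

static void toy_unfreeze_queue(struct toy_queue *q)
{
	/* Only the last unfreezer (depth 1 -> 0) wakes waiters. */
	bool wake = !--q->mq_freeze_depth;

	if (wake)
		q->wake_count++;
}
```

With nested freeze/unfreeze pairs the kill and wake each fire exactly once, which is the property the unconditional version lacked.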
+16-3
block/cfq-iosched.c
···
 	rb_insert_color(&cfqg->rb_node, &st->rb);
 }
 
+/*
+ * This has to be called only on activation of cfqg
+ */
 static void
 cfq_update_group_weight(struct cfq_group *cfqg)
 {
-	BUG_ON(!RB_EMPTY_NODE(&cfqg->rb_node));
-
 	if (cfqg->new_weight) {
 		cfqg->weight = cfqg->new_weight;
 		cfqg->new_weight = 0;
 	}
+}
+
+static void
+cfq_update_group_leaf_weight(struct cfq_group *cfqg)
+{
+	BUG_ON(!RB_EMPTY_NODE(&cfqg->rb_node));
 
 	if (cfqg->new_leaf_weight) {
 		cfqg->leaf_weight = cfqg->new_leaf_weight;
···
 	/* add to the service tree */
 	BUG_ON(!RB_EMPTY_NODE(&cfqg->rb_node));
 
-	cfq_update_group_weight(cfqg);
+	/*
+	 * Update leaf_weight. We cannot update weight at this point
+	 * because cfqg might already have been activated and is
+	 * contributing its current weight to the parent's child_weight.
+	 */
+	cfq_update_group_leaf_weight(cfqg);
 	__cfq_group_service_tree_add(st, cfqg);
 
 	/*
···
 	 */
 	while ((parent = cfqg_parent(pos))) {
 		if (propagate) {
+			cfq_update_group_weight(pos);
 			propagate = !parent->nr_active++;
 			parent->children_weight += pos->weight;
 		}
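The invariant this patch enforces is that while a group is active, the parent's `children_weight` must include the group's *current* weight, so a pending weight change may only be applied at the moment the group's contribution is (re)added to the parent. A toy model of that bookkeeping (struct and helpers are illustrative, not the real cfq types):

```c
#include <assert.h>

struct toy_group {
	int weight;
	int new_weight;		/* pending value from a config write */
	int children_weight;	/* parent-side sum of active children */
};

static void apply_pending_weight(struct toy_group *g)
{
	if (g->new_weight) {
		g->weight = g->new_weight;
		g->new_weight = 0;
	}
}

static void activate(struct toy_group *parent, struct toy_group *g)
{
	apply_pending_weight(g);	/* safe: g is not contributing yet */
	parent->children_weight += g->weight;
}

static void deactivate(struct toy_group *parent, struct toy_group *g)
{
	/* Must subtract the same value that was added at activation. */
	parent->children_weight -= g->weight;
}
```

If `apply_pending_weight()` ran while the group was active, the add and the subtract would use different values and `children_weight` would drift, which is exactly the bug the split into `cfq_update_group_weight()` / `cfq_update_group_leaf_weight()` avoids.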
+27-13
block/scsi_ioctl.c
···
 	r = blk_rq_unmap_user(bio);
 	if (!ret)
 		ret = r;
-	blk_put_request(rq);
 
 	return ret;
 }
···
 	struct bio *bio;
 
 	if (hdr->interface_id != 'S')
-		return -EINVAL;
-	if (hdr->cmd_len > BLK_MAX_CDB)
 		return -EINVAL;
 
 	if (hdr->dxfer_len > (queue_max_hw_sectors(q) << 9))
···
 	if (hdr->flags & SG_FLAG_Q_AT_HEAD)
 		at_head = 1;
 
+	ret = -ENOMEM;
 	rq = blk_get_request(q, writing ? WRITE : READ, GFP_KERNEL);
 	if (!rq)
-		return -ENOMEM;
+		goto out;
 	blk_rq_set_block_pc(rq);
 
-	if (blk_fill_sghdr_rq(q, rq, hdr, mode)) {
-		blk_put_request(rq);
-		return -EFAULT;
+	if (hdr->cmd_len > BLK_MAX_CDB) {
+		rq->cmd = kzalloc(hdr->cmd_len, GFP_KERNEL);
+		if (!rq->cmd)
+			goto out_put_request;
 	}
 
+	ret = -EFAULT;
+	if (blk_fill_sghdr_rq(q, rq, hdr, mode))
+		goto out_free_cdb;
+
+	ret = 0;
 	if (hdr->iovec_count) {
 		size_t iov_data_len;
 		struct iovec *iov = NULL;
···
 					    0, NULL, &iov);
 		if (ret < 0) {
 			kfree(iov);
-			goto out;
+			goto out_free_cdb;
 		}
 
 		iov_data_len = ret;
···
 				      GFP_KERNEL);
 
 	if (ret)
-		goto out;
+		goto out_free_cdb;
 
 	bio = rq->bio;
 	memset(sense, 0, sizeof(sense));
···
 
 	hdr->duration = jiffies_to_msecs(jiffies - start_time);
 
-	return blk_complete_sghdr_rq(rq, hdr, bio);
-out:
+	ret = blk_complete_sghdr_rq(rq, hdr, bio);
+
+out_free_cdb:
+	if (rq->cmd != rq->__cmd)
+		kfree(rq->cmd);
+out_put_request:
+	blk_put_request(rq);
+out:
 	return ret;
 }
 
···
 	}
 
 	rq = blk_get_request(q, in_len ? WRITE : READ, __GFP_WAIT);
+	if (!rq) {
+		err = -ENOMEM;
+		goto error;
+	}
+	blk_rq_set_block_pc(rq);
 
 	cmdlen = COMMAND_SIZE(opcode);
 
···
 	memset(sense, 0, sizeof(sense));
 	rq->sense = sense;
 	rq->sense_len = 0;
-	blk_rq_set_block_pc(rq);
 
 	blk_execute_rq(q, disk, rq, 0);
 
···
 
 error:
 	kfree(buffer);
-	blk_put_request(rq);
+	if (rq)
+		blk_put_request(rq);
 	return err;
 }
 EXPORT_SYMBOL_GPL(sg_scsi_ioctl);
···
 			t->rdata[t->ri++] = acpi_ec_read_data(ec);
 			if (t->rlen == t->ri) {
 				t->flags |= ACPI_EC_COMMAND_COMPLETE;
+				if (t->command == ACPI_EC_COMMAND_QUERY)
+					pr_debug("hardware QR_EC completion\n");
 				wakeup = true;
 			}
 		} else
···
 		}
 		return wakeup;
 	} else {
-		if ((status & ACPI_EC_FLAG_IBF) == 0) {
+		/*
+		 * There is firmware refusing to respond QR_EC when SCI_EVT
+		 * is not set, for which case, we complete the QR_EC
+		 * without issuing it to the firmware.
+		 * https://bugzilla.kernel.org/show_bug.cgi?id=86211
+		 */
+		if (!(status & ACPI_EC_FLAG_SCI) &&
+		    (t->command == ACPI_EC_COMMAND_QUERY)) {
+			t->flags |= ACPI_EC_COMMAND_POLL;
+			t->rdata[t->ri++] = 0x00;
+			t->flags |= ACPI_EC_COMMAND_COMPLETE;
+			pr_debug("software QR_EC completion\n");
+			wakeup = true;
+		} else if ((status & ACPI_EC_FLAG_IBF) == 0) {
 			acpi_ec_write_cmd(ec, t->command);
 			t->flags |= ACPI_EC_COMMAND_POLL;
 		} else
···
 	/* following two actions should be kept atomic */
 	ec->curr = t;
 	start_transaction(ec);
-	if (ec->curr->command == ACPI_EC_COMMAND_QUERY)
-		clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags);
 	spin_unlock_irqrestore(&ec->lock, tmp);
 	ret = ec_poll(ec);
 	spin_lock_irqsave(&ec->lock, tmp);
+	if (ec->curr->command == ACPI_EC_COMMAND_QUERY)
+		clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags);
 	ec->curr = NULL;
 	spin_unlock_irqrestore(&ec->lock, tmp);
 	return ret;
+4
drivers/acpi/pci_irq.c
···
 	/* Keep IOAPIC pin configuration when suspending */
 	if (dev->dev.power.is_prepared)
 		return;
+#ifdef CONFIG_PM_RUNTIME
+	if (dev->dev.power.runtime_status == RPM_SUSPENDING)
+		return;
+#endif
 
 	entry = acpi_pci_irq_lookup(dev, pin);
 	if (!entry)
+11-6
drivers/acpi/scan.c
···
 		device->driver->ops.notify(device, event);
 }
 
-static acpi_status acpi_device_notify_fixed(void *data)
+static void acpi_device_notify_fixed(void *data)
 {
 	struct acpi_device *device = data;
 
 	/* Fixed hardware devices have no handles */
 	acpi_device_notify(NULL, ACPI_FIXED_HARDWARE_EVENT, device);
+}
+
+static acpi_status acpi_device_fixed_event(void *data)
+{
+	acpi_os_execute(OSL_NOTIFY_HANDLER, acpi_device_notify_fixed, data);
 	return AE_OK;
 }
···
 	if (device->device_type == ACPI_BUS_TYPE_POWER_BUTTON)
 		status =
 		    acpi_install_fixed_event_handler(ACPI_EVENT_POWER_BUTTON,
-						     acpi_device_notify_fixed,
+						     acpi_device_fixed_event,
 						     device);
 	else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON)
 		status =
 		    acpi_install_fixed_event_handler(ACPI_EVENT_SLEEP_BUTTON,
-						     acpi_device_notify_fixed,
+						     acpi_device_fixed_event,
 						     device);
 	else
 		status = acpi_install_notify_handler(device->handle,
···
 {
 	if (device->device_type == ACPI_BUS_TYPE_POWER_BUTTON)
 		acpi_remove_fixed_event_handler(ACPI_EVENT_POWER_BUTTON,
-						acpi_device_notify_fixed);
+						acpi_device_fixed_event);
 	else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON)
 		acpi_remove_fixed_event_handler(ACPI_EVENT_SLEEP_BUTTON,
-						acpi_device_notify_fixed);
+						acpi_device_fixed_event);
 	else
 		acpi_remove_notify_handler(device->handle, ACPI_DEVICE_NOTIFY,
 					   acpi_device_notify);
···
 	struct acpi_driver *acpi_drv = to_acpi_driver(dev->driver);
 	int ret;
 
-	if (acpi_dev->handler)
+	if (acpi_dev->handler && !acpi_is_pnp_device(acpi_dev))
 		return -EINVAL;
 
 	if (!acpi_drv->ops.add)
+5-1
drivers/block/brd.c
···
 int rd_size = CONFIG_BLK_DEV_RAM_SIZE;
 static int max_part;
 static int part_shift;
+static int part_show = 0;
 module_param(rd_nr, int, S_IRUGO);
 MODULE_PARM_DESC(rd_nr, "Maximum number of brd devices");
 module_param(rd_size, int, S_IRUGO);
 MODULE_PARM_DESC(rd_size, "Size of each RAM disk in kbytes.");
 module_param(max_part, int, S_IRUGO);
 MODULE_PARM_DESC(max_part, "Maximum number of partitions per RAM disk");
+module_param(part_show, int, S_IRUGO);
+MODULE_PARM_DESC(part_show, "Control RAM disk visibility in /proc/partitions");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_BLOCKDEV_MAJOR(RAMDISK_MAJOR);
 MODULE_ALIAS("rd");
···
 	disk->fops		= &brd_fops;
 	disk->private_data	= brd;
 	disk->queue		= brd->brd_queue;
-	disk->flags |= GENHD_FL_SUPPRESS_PARTITION_INFO;
+	if (!part_show)
+		disk->flags |= GENHD_FL_SUPPRESS_PARTITION_INFO;
 	sprintf(disk->disk_name, "ram%d", i);
 	set_capacity(disk, rd_size * 2);
···
 	return val >> 8;
 }
 
-static int __init s5pv210_cpu_init(struct cpufreq_policy *policy)
+static int s5pv210_cpu_init(struct cpufreq_policy *policy)
 {
 	unsigned long mem_type;
 	int ret;
+3-10
drivers/cpuidle/cpuidle-big_little.c
···
 	return idx;
 }
 
-static int __init bl_idle_driver_init(struct cpuidle_driver *drv, int cpu_id)
+static int __init bl_idle_driver_init(struct cpuidle_driver *drv, int part_id)
 {
-	struct cpuinfo_arm *cpu_info;
 	struct cpumask *cpumask;
-	unsigned long cpuid;
 	int cpu;
 
 	cpumask = kzalloc(cpumask_size(), GFP_KERNEL);
 	if (!cpumask)
 		return -ENOMEM;
 
-	for_each_possible_cpu(cpu) {
-		cpu_info = &per_cpu(cpu_data, cpu);
-		cpuid = is_smp() ? cpu_info->cpuid : read_cpuid_id();
-
-		/* read cpu id part number */
-		if ((cpuid & 0xFFF0) == cpu_id)
+	for_each_possible_cpu(cpu)
+		if (smp_cpuid_part(cpu) == part_id)
 			cpumask_set_cpu(cpu, cpumask);
-	}
 
 	drv->cpumask = cpumask;
+1-1
drivers/dma-buf/fence.c
···
 EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
 EXPORT_TRACEPOINT_SYMBOL(fence_emit);
 
-/**
+/*
  * fence context counter: each execution context should have its own
  * fence context, this allows checking if fences belong to the same
  * context or not. One device can have multiple separate contexts,
···
 		return -EINVAL;
 
 	/* overflow checks for 32bit size calculations */
+	/* NOTE: DIV_ROUND_UP() can overflow */
 	cpp = DIV_ROUND_UP(args->bpp, 8);
-	if (cpp > 0xffffffffU / args->width)
+	if (!cpp || cpp > 0xffffffffU / args->width)
 		return -EINVAL;
 	stride = cpp * args->width;
 	if (args->height > 0xffffffffU / stride)
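The guard added above exists because a round-up division like `DIV_ROUND_UP(bpp, 8)` wraps to zero when `bpp` is within 7 of `UINT_MAX`, which would then defeat the `cpp > 0xffffffffU / width` overflow check. A standalone sketch of the failure mode; the macro body matches the kernel's definition, while `check_cpp()` is a hypothetical stand-in for the ioctl's validation:

```c
#include <limits.h>

/* Same definition as include/linux/kernel.h */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Returns 0 when cpp is usable, -1 when (bpp + 7) wrapped and cpp == 0 */
static int check_cpp(unsigned int bpp)
{
	unsigned int cpp = DIV_ROUND_UP(bpp, 8);

	/* bpp values near UINT_MAX make (bpp + 7) wrap around, so cpp == 0 */
	if (!cpp)
		return -1;
	return 0;
}
```

With `bpp = UINT_MAX`, `bpp + 7` wraps to 6 and `6 / 8 == 0`, so the later division-based bound check would pass trivially without the `!cpp` test.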
+2
drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c
···
 	struct mdp4_crtc *mdp4_crtc = to_mdp4_crtc(crtc);
 	DBG("%s", mdp4_crtc->name);
 	/* make sure we hold a ref to mdp clks while setting up mode: */
+	drm_crtc_vblank_get(crtc);
 	mdp4_enable(get_kms(crtc));
 	mdp4_crtc_dpms(crtc, DRM_MODE_DPMS_OFF);
 }
···
 	crtc_flush(crtc);
 	/* drop the ref to mdp clk's that we got in prepare: */
 	mdp4_disable(get_kms(crtc));
+	drm_crtc_vblank_put(crtc);
 }
 
 static int mdp4_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
+1-2
drivers/gpu/drm/msm/msm_drv.c
···
 
 	for (i = 0; i < ARRAY_SIZE(devnames); i++) {
 		struct device *dev;
-		int ret;
 
 		dev = bus_find_device_by_name(&platform_bus_type,
 				NULL, devnames[i]);
 		if (!dev) {
-			dev_info(master, "still waiting for %s\n", devnames[i]);
+			dev_info(&pdev->dev, "still waiting for %s\n", devnames[i]);
 			return -EPROBE_DEFER;
 		}
+1-1
drivers/gpu/drm/msm/msm_fbdev.c
···
 	ret = msm_gem_get_iova_locked(fbdev->bo, 0, &paddr);
 	if (ret) {
 		dev_err(dev->dev, "failed to get buffer obj iova: %d\n", ret);
-		goto fail;
+		goto fail_unlock;
 	}
 
 	fbi = framebuffer_alloc(0, dev->dev);
+2-2
drivers/gpu/drm/msm/msm_iommu.c
···
 static int msm_fault_handler(struct iommu_domain *iommu, struct device *dev,
 		unsigned long iova, int flags, void *arg)
 {
-	DBG("*** fault: iova=%08lx, flags=%d", iova, flags);
-	return -ENOSYS;
+	pr_warn_ratelimited("*** fault: iova=%08lx, flags=%d\n", iova, flags);
+	return 0;
 }
 
 static int msm_iommu_attach(struct msm_mmu *mmu, const char **names, int cnt)
+19-7
drivers/gpu/drm/radeon/cik.c
···
 	WREG32(0x15D8, 0);
 	WREG32(0x15DC, 0);
 
-	/* empty context1-15 */
-	/* FIXME start with 4G, once using 2 level pt switch to full
-	 * vm size space
-	 */
+	/* restore context1-15 */
 	/* set vm size, must be a multiple of 4 */
 	WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);
 	WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn);
 	for (i = 1; i < 16; i++) {
 		if (i < 8)
 			WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
-			       rdev->gart.table_addr >> 12);
+			       rdev->vm_manager.saved_table_addr[i]);
 		else
 			WREG32(VM_CONTEXT8_PAGE_TABLE_BASE_ADDR + ((i - 8) << 2),
-			       rdev->gart.table_addr >> 12);
+			       rdev->vm_manager.saved_table_addr[i]);
 	}
 
 	/* enable context1-15 */
···
  */
 static void cik_pcie_gart_disable(struct radeon_device *rdev)
 {
+	unsigned i;
+
+	for (i = 1; i < 16; ++i) {
+		uint32_t reg;
+		if (i < 8)
+			reg = VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2);
+		else
+			reg = VM_CONTEXT8_PAGE_TABLE_BASE_ADDR + ((i - 8) << 2);
+		rdev->vm_manager.saved_table_addr[i] = RREG32(reg);
+	}
+
 	/* Disable all tables */
 	WREG32(VM_CONTEXT0_CNTL, 0);
 	WREG32(VM_CONTEXT1_CNTL, 0);
···
 	int ret, i;
 	u16 tmp16;
 
+	if (pci_is_root_bus(rdev->pdev->bus))
+		return;
+
 	if (radeon_pcie_gen2 == 0)
 		return;
···
 	if (orig != data)
 		WREG32_PCIE_PORT(PCIE_LC_LINK_WIDTH_CNTL, data);
 
-	if (!disable_clkreq) {
+	if (!disable_clkreq &&
+	    !pci_is_root_bus(rdev->pdev->bus)) {
 		struct pci_dev *root = rdev->pdev->bus->self;
 		u32 lnkcap;
+8-1
drivers/gpu/drm/radeon/ni.c
···
 		WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR + (i << 2), 0);
 		WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR + (i << 2), rdev->vm_manager.max_pfn);
 		WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
-			rdev->gart.table_addr >> 12);
+			rdev->vm_manager.saved_table_addr[i]);
 	}
 
 	/* enable context1-7 */
···
 
 static void cayman_pcie_gart_disable(struct radeon_device *rdev)
 {
+	unsigned i;
+
+	for (i = 1; i < 8; ++i) {
+		rdev->vm_manager.saved_table_addr[i] = RREG32(
+			VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2));
+	}
+
 	/* Disable all tables */
 	WREG32(VM_CONTEXT0_CNTL, 0);
 	WREG32(VM_CONTEXT1_CNTL, 0);
+8-18
drivers/gpu/drm/radeon/r600.c
···
 {
 	u32 tiling_config;
 	u32 ramcfg;
-	u32 cc_rb_backend_disable;
 	u32 cc_gc_shader_pipe_config;
 	u32 tmp;
 	int i, j;
···
 	}
 	tiling_config |= BANK_SWAPS(1);
 
-	cc_rb_backend_disable = RREG32(CC_RB_BACKEND_DISABLE) & 0x00ff0000;
-	tmp = R6XX_MAX_BACKENDS -
-		r600_count_pipe_bits((cc_rb_backend_disable >> 16) & R6XX_MAX_BACKENDS_MASK);
-	if (tmp < rdev->config.r600.max_backends) {
-		rdev->config.r600.max_backends = tmp;
-	}
-
 	cc_gc_shader_pipe_config = RREG32(CC_GC_SHADER_PIPE_CONFIG) & 0x00ffff00;
-	tmp = R6XX_MAX_PIPES -
-		r600_count_pipe_bits((cc_gc_shader_pipe_config >> 8) & R6XX_MAX_PIPES_MASK);
-	if (tmp < rdev->config.r600.max_pipes) {
-		rdev->config.r600.max_pipes = tmp;
-	}
-	tmp = R6XX_MAX_SIMDS -
-		r600_count_pipe_bits((cc_gc_shader_pipe_config >> 16) & R6XX_MAX_SIMDS_MASK);
-	if (tmp < rdev->config.r600.max_simds) {
-		rdev->config.r600.max_simds = tmp;
-	}
 	tmp = rdev->config.r600.max_simds -
 		r600_count_pipe_bits((cc_gc_shader_pipe_config >> 16) & R6XX_MAX_SIMDS_MASK);
 	rdev->config.r600.active_simds = tmp;
 
 	disabled_rb_mask = (RREG32(CC_RB_BACKEND_DISABLE) >> 16) & R6XX_MAX_BACKENDS_MASK;
+	tmp = 0;
+	for (i = 0; i < rdev->config.r600.max_backends; i++)
+		tmp |= (1 << i);
+	/* if all the backends are disabled, fix it up here */
+	if ((disabled_rb_mask & tmp) == tmp) {
+		for (i = 0; i < rdev->config.r600.max_backends; i++)
+			disabled_rb_mask &= ~(1 << i);
+	}
 	tmp = (tiling_config & PIPE_TILING__MASK) >> PIPE_TILING__SHIFT;
 	tmp = r6xx_remap_render_backend(rdev, tmp, rdev->config.r600.max_backends,
 					R6XX_MAX_BACKENDS, disabled_rb_mask);
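The fixup added above can be read in isolation: build a mask of the first `max_backends` bits and, if every one of them is set in the disabled-render-backend mask, clear them so the remap helper is left with at least one usable backend. A minimal sketch of that logic with the register reads removed; the function name is ours, not the driver's:

```c
#include <stdint.h>

/*
 * If all of the low max_backends bits are set in disabled_mask
 * (i.e. the hardware claims every backend is disabled, which cannot
 * be right on working silicon), clear those bits.
 */
static uint32_t fixup_rb_mask(uint32_t disabled_mask, int max_backends)
{
	uint32_t tmp = 0;
	int i;

	for (i = 0; i < max_backends; i++)
		tmp |= (1u << i);
	if ((disabled_mask & tmp) == tmp)
		disabled_mask &= ~tmp;
	return disabled_mask;
}
```

Bits above `max_backends` are deliberately left untouched, matching the hunk's per-bit clearing loop.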
+2
drivers/gpu/drm/radeon/radeon.h
···
 	u64 vram_base_offset;
 	/* is vm enabled? */
 	bool enabled;
+	/* for hw to save the PD addr on suspend/resume */
+	uint32_t saved_table_addr[RADEON_NUM_VM];
 };
 
 /*
···
 	if (size < 4 || ((size - 4) % 9) != 0)
 		return 0;
 	npoints = (size - 4) / 9;
+	if (npoints > 15) {
+		hid_warn(hdev, "invalid size value (%d) for TRACKPAD_REPORT_ID\n",
+				size);
+		return 0;
+	}
 	msc->ntouches = 0;
 	for (ii = 0; ii < npoints; ii++)
 		magicmouse_emit_touch(msc, ii, data + ii * 9 + 4);
···
 	if (size < 6 || ((size - 6) % 8) != 0)
 		return 0;
 	npoints = (size - 6) / 8;
+	if (npoints > 15) {
+		hid_warn(hdev, "invalid size value (%d) for MOUSE_REPORT_ID\n",
+				size);
+		return 0;
+	}
 	msc->ntouches = 0;
 	for (ii = 0; ii < npoints; ii++)
 		magicmouse_emit_touch(msc, ii, data + ii * 8 + 6);
+6
drivers/hid/hid-picolcd_core.c
···
 	if (!data)
 		return 1;
 
+	if (size > 64) {
+		hid_warn(hdev, "invalid size value (%d) for picolcd raw event\n",
+				size);
+		return 0;
+	}
+
 	if (report->id == REPORT_KEY_STATE) {
 		if (data->input_keys)
 			ret = picolcd_raw_keypad(data, report, raw_data+1, size-1);
+19-6
drivers/md/dm-crypt.c
···
 	unsigned int key_size, opt_params;
 	unsigned long long tmpll;
 	int ret;
+	size_t iv_size_padding;
 	struct dm_arg_set as;
 	const char *opt_string;
 	char dummy;
···
 
 	cc->dmreq_start = sizeof(struct ablkcipher_request);
 	cc->dmreq_start += crypto_ablkcipher_reqsize(any_tfm(cc));
-	cc->dmreq_start = ALIGN(cc->dmreq_start, crypto_tfm_ctx_alignment());
-	cc->dmreq_start += crypto_ablkcipher_alignmask(any_tfm(cc)) &
-			   ~(crypto_tfm_ctx_alignment() - 1);
+	cc->dmreq_start = ALIGN(cc->dmreq_start, __alignof__(struct dm_crypt_request));
+
+	if (crypto_ablkcipher_alignmask(any_tfm(cc)) < CRYPTO_MINALIGN) {
+		/* Allocate the padding exactly */
+		iv_size_padding = -(cc->dmreq_start + sizeof(struct dm_crypt_request))
+				& crypto_ablkcipher_alignmask(any_tfm(cc));
+	} else {
+		/*
+		 * If the cipher requires greater alignment than kmalloc
+		 * alignment, we don't know the exact position of the
+		 * initialization vector. We must assume worst case.
+		 */
+		iv_size_padding = crypto_ablkcipher_alignmask(any_tfm(cc));
+	}
 
 	cc->req_pool = mempool_create_kmalloc_pool(MIN_IOS, cc->dmreq_start +
-			sizeof(struct dm_crypt_request) + cc->iv_size);
+			sizeof(struct dm_crypt_request) + iv_size_padding + cc->iv_size);
 	if (!cc->req_pool) {
 		ti->error = "Cannot allocate crypt request mempool";
 		goto bad;
 	}
 
 	cc->per_bio_data_size = ti->per_bio_data_size =
-		sizeof(struct dm_crypt_io) + cc->dmreq_start +
-		sizeof(struct dm_crypt_request) + cc->iv_size;
+		ALIGN(sizeof(struct dm_crypt_io) + cc->dmreq_start +
+		      sizeof(struct dm_crypt_request) + iv_size_padding + cc->iv_size,
+		      ARCH_KMALLOC_MINALIGN);
 
 	cc->page_pool = mempool_create_page_pool(MIN_POOL_PAGES, 0);
 	if (!cc->page_pool) {
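The key expression in the "allocate the padding exactly" branch is `-(offset) & alignmask`: for an `alignmask` of the form 2^n - 1 (as crypto alignmasks are), this yields exactly the number of bytes needed to round `offset` up to the next (alignmask + 1)-byte boundary, and 0 when `offset` is already aligned. A tiny sketch of that identity outside any crypto context; the helper name is ours:

```c
#include <stddef.h>

/*
 * Bytes of padding needed so that (offset + padding) is a multiple of
 * (alignmask + 1). Requires alignmask == 2^n - 1, so that the low bits
 * of the negated offset are exactly the distance to the next boundary.
 */
static size_t pad_to_align(size_t offset, size_t alignmask)
{
	return -offset & alignmask;
}
```

For example, padding an offset of 13 to an 8-byte boundary gives 3, and an already-aligned offset gives 0, which is why the branch can add `iv_size_padding` directly into the pool element size without over-allocating.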
+1-1
drivers/mfd/ab8500-core.c
···
 	if (ret)
 		return ret;
 
-#if CONFIG_DEBUG_FS
+#ifdef CONFIG_DEBUG_FS
 	/* Pass to debugfs */
 	ab8500_debug_resources[0].start = ab8500->irq;
 	ab8500_debug_resources[0].end = ab8500->irq;
···
 #include <linux/gpio.h>
 
 /* pinmux function number for pin as gpio output line */
+#define FUNC_INPUT	0x0
 #define FUNC_OUTPUT	0x1
 
 /**
···
 	}
 
 	/* div doesn't support odd number */
-	div = rs->max_freq / rs->speed;
+	div = max_t(u32, rs->max_freq / rs->speed, 1);
 	div = (div + 1) & 0xfffe;
 
 	spi_enable_chip(rs, 0);
···
 	rs->dma_tx.addr = (dma_addr_t)(mem->start + ROCKCHIP_SPI_TXDR);
 	rs->dma_rx.addr = (dma_addr_t)(mem->start + ROCKCHIP_SPI_RXDR);
 	rs->dma_tx.direction = DMA_MEM_TO_DEV;
-	rs->dma_tx.direction = DMA_DEV_TO_MEM;
+	rs->dma_rx.direction = DMA_DEV_TO_MEM;
 
 	master->can_dma = rockchip_spi_can_dma;
 	master->dma_tx = rs->dma_tx.ch;
+58-36
drivers/spi/spi-rspi.c
···
 	dma_cookie_t cookie;
 	int ret;
 
-	if (tx) {
-		desc_tx = dmaengine_prep_slave_sg(rspi->master->dma_tx,
-					tx->sgl, tx->nents, DMA_TO_DEVICE,
-					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
-		if (!desc_tx)
-			goto no_dma;
-
-		irq_mask |= SPCR_SPTIE;
-	}
+	/* First prepare and submit the DMA request(s), as this may fail */
 	if (rx) {
 		desc_rx = dmaengine_prep_slave_sg(rspi->master->dma_rx,
 					rx->sgl, rx->nents, DMA_FROM_DEVICE,
 					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
-		if (!desc_rx)
-			goto no_dma;
+		if (!desc_rx) {
+			ret = -EAGAIN;
+			goto no_dma_rx;
+		}
+
+		desc_rx->callback = rspi_dma_complete;
+		desc_rx->callback_param = rspi;
+		cookie = dmaengine_submit(desc_rx);
+		if (dma_submit_error(cookie)) {
+			ret = cookie;
+			goto no_dma_rx;
+		}
 
 		irq_mask |= SPCR_SPRIE;
+	}
+
+	if (tx) {
+		desc_tx = dmaengine_prep_slave_sg(rspi->master->dma_tx,
+					tx->sgl, tx->nents, DMA_TO_DEVICE,
+					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+		if (!desc_tx) {
+			ret = -EAGAIN;
+			goto no_dma_tx;
+		}
+
+		if (rx) {
+			/* No callback */
+			desc_tx->callback = NULL;
+		} else {
+			desc_tx->callback = rspi_dma_complete;
+			desc_tx->callback_param = rspi;
+		}
+		cookie = dmaengine_submit(desc_tx);
+		if (dma_submit_error(cookie)) {
+			ret = cookie;
+			goto no_dma_tx;
+		}
+
+		irq_mask |= SPCR_SPTIE;
 	}
 
 	/*
···
 	rspi_enable_irq(rspi, irq_mask);
 	rspi->dma_callbacked = 0;
 
-	if (rx) {
-		desc_rx->callback = rspi_dma_complete;
-		desc_rx->callback_param = rspi;
-		cookie = dmaengine_submit(desc_rx);
-		if (dma_submit_error(cookie))
-			return cookie;
+	/* Now start DMA */
+	if (rx)
 		dma_async_issue_pending(rspi->master->dma_rx);
-	}
-	if (tx) {
-		if (rx) {
-			/* No callback */
-			desc_tx->callback = NULL;
-		} else {
-			desc_tx->callback = rspi_dma_complete;
-			desc_tx->callback_param = rspi;
-		}
-		cookie = dmaengine_submit(desc_tx);
-		if (dma_submit_error(cookie))
-			return cookie;
+	if (tx)
 		dma_async_issue_pending(rspi->master->dma_tx);
-	}
 
 	ret = wait_event_interruptible_timeout(rspi->wait,
 					       rspi->dma_callbacked, HZ);
 	if (ret > 0 && rspi->dma_callbacked)
 		ret = 0;
-	else if (!ret)
+	else if (!ret) {
+		dev_err(&rspi->master->dev, "DMA timeout\n");
 		ret = -ETIMEDOUT;
+		if (tx)
+			dmaengine_terminate_all(rspi->master->dma_tx);
+		if (rx)
+			dmaengine_terminate_all(rspi->master->dma_rx);
+	}
 
 	rspi_disable_irq(rspi, irq_mask);
···
 
 	return ret;
 
-no_dma:
-	pr_warn_once("%s %s: DMA not available, falling back to PIO\n",
-		     dev_driver_string(&rspi->master->dev),
-		     dev_name(&rspi->master->dev));
-	return -EAGAIN;
+no_dma_tx:
+	if (rx)
+		dmaengine_terminate_all(rspi->master->dma_rx);
+no_dma_rx:
+	if (ret == -EAGAIN) {
+		pr_warn_once("%s %s: DMA not available, falling back to PIO\n",
+			     dev_driver_string(&rspi->master->dev),
+			     dev_name(&rspi->master->dev));
+	}
+	return ret;
 }
 
 static void rspi_receive_init(const struct rspi_data *rspi)
+43-38
drivers/spi/spi-sh-msiof.c
···
 	dma_cookie_t cookie;
 	int ret;
 
+	/* First prepare and submit the DMA request(s), as this may fail */
+	if (rx) {
+		ier_bits |= IER_RDREQE | IER_RDMAE;
+		desc_rx = dmaengine_prep_slave_single(p->master->dma_rx,
+					p->rx_dma_addr, len, DMA_FROM_DEVICE,
+					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+		if (!desc_rx) {
+			ret = -EAGAIN;
+			goto no_dma_rx;
+		}
+
+		desc_rx->callback = sh_msiof_dma_complete;
+		desc_rx->callback_param = p;
+		cookie = dmaengine_submit(desc_rx);
+		if (dma_submit_error(cookie)) {
+			ret = cookie;
+			goto no_dma_rx;
+		}
+	}
+
 	if (tx) {
 		ier_bits |= IER_TDREQE | IER_TDMAE;
 		dma_sync_single_for_device(p->master->dma_tx->device->dev,
···
 		desc_tx = dmaengine_prep_slave_single(p->master->dma_tx,
 					p->tx_dma_addr, len, DMA_TO_DEVICE,
 					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
-		if (!desc_tx)
-			return -EAGAIN;
-	}
+		if (!desc_tx) {
+			ret = -EAGAIN;
+			goto no_dma_tx;
+		}
 
-	if (rx) {
-		ier_bits |= IER_RDREQE | IER_RDMAE;
-		desc_rx = dmaengine_prep_slave_single(p->master->dma_rx,
-					p->rx_dma_addr, len, DMA_FROM_DEVICE,
-					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
-		if (!desc_rx)
-			return -EAGAIN;
+		if (rx) {
+			/* No callback */
+			desc_tx->callback = NULL;
+		} else {
+			desc_tx->callback = sh_msiof_dma_complete;
+			desc_tx->callback_param = p;
+		}
+		cookie = dmaengine_submit(desc_tx);
+		if (dma_submit_error(cookie)) {
+			ret = cookie;
+			goto no_dma_tx;
+		}
 	}
 
 	/* 1 stage FIFO watermarks for DMA */
···
 
 	reinit_completion(&p->done);
 
-	if (rx) {
-		desc_rx->callback = sh_msiof_dma_complete;
-		desc_rx->callback_param = p;
-		cookie = dmaengine_submit(desc_rx);
-		if (dma_submit_error(cookie)) {
-			ret = cookie;
-			goto stop_ier;
-		}
+	/* Now start DMA */
+	if (rx)
 		dma_async_issue_pending(p->master->dma_rx);
-	}
-
-	if (tx) {
-		if (rx) {
-			/* No callback */
-			desc_tx->callback = NULL;
-		} else {
-			desc_tx->callback = sh_msiof_dma_complete;
-			desc_tx->callback_param = p;
-		}
-		cookie = dmaengine_submit(desc_tx);
-		if (dma_submit_error(cookie)) {
-			ret = cookie;
-			goto stop_rx;
-		}
+	if (tx)
 		dma_async_issue_pending(p->master->dma_tx);
-	}
 
 	ret = sh_msiof_spi_start(p, rx);
 	if (ret) {
 		dev_err(&p->pdev->dev, "failed to start hardware\n");
-		goto stop_tx;
+		goto stop_dma;
 	}
 
 	/* wait for tx fifo to be emptied / rx fifo to be filled */
···
 stop_reset:
 	sh_msiof_reset_str(p);
 	sh_msiof_spi_stop(p, rx);
-stop_tx:
+stop_dma:
 	if (tx)
 		dmaengine_terminate_all(p->master->dma_tx);
-stop_rx:
+no_dma_tx:
 	if (rx)
 		dmaengine_terminate_all(p->master->dma_rx);
-stop_ier:
 	sh_msiof_write(p, IER, 0);
+no_dma_rx:
 	return ret;
 }
+1
drivers/spi/spi.c
···
 
 /**
  * spi_finalize_current_transfer - report completion of a transfer
+ * @master: the master reporting completion
  *
  * Called by SPI drivers using the core transfer_one_message()
  * implementation to notify it that the current interrupt driven
···
  */
 
 #define DEBUG_SUBSYSTEM S_CLASS
-# include <asm/atomic.h>
+# include <linux/atomic.h>
 
 #include "../include/obd_support.h"
 #include "../include/obd_class.h"
···
 
 	/* Activate hops. */
 	for (i = path->path_length - 1; i >= 0; i--) {
-		struct tb_regs_hop hop;
+		struct tb_regs_hop hop = { 0 };
+
+		/*
+		 * We do (currently) not tear down paths setup by the firmeware.
+		 * If a firmware device is unplugged and plugged in again then
+		 * it can happen that we reuse some of the hops from the (now
+		 * defunct) firmeware path. This causes the hotplug operation to
+		 * fail (the pci device does not show up). Clearing the hop
+		 * before overwriting it fixes the problem.
+		 *
+		 * Should be removed once we discover and tear down firmeware
+		 * paths.
+		 */
+		res = tb_port_write(path->hops[i].in_port, &hop, TB_CFG_HOPS,
+				    2 * path->hops[i].in_hop_index, 2);
+		if (res) {
+			__tb_path_deactivate_hops(path, i);
+			__tb_path_deallocate_nfc(path, 0);
+			goto err;
+		}
 
 		/* dword 0 */
 		hop.next_hop = path->hops[i].next_hop_index;
···
 	 * - Change autosuspend delay of hub can avoid unnecessary auto
 	 *   suspend timer for hub, also may decrease power consumption
 	 *   of USB bus.
+	 *
+	 * - If user has indicated to prevent autosuspend by passing
+	 *   usbcore.autosuspend = -1 then keep autosuspend disabled.
 	 */
-	pm_runtime_set_autosuspend_delay(&hdev->dev, 0);
+#ifdef CONFIG_PM_RUNTIME
+	if (hdev->dev.power.autosuspend_delay >= 0)
+		pm_runtime_set_autosuspend_delay(&hdev->dev, 0);
+#endif
 
 	/*
 	 * Hubs have proper suspend/resume support, except for root hubs
···
 {
 	struct usb_port *port_dev = NULL;
 	struct usb_device *udev = *pdev;
-	struct usb_hub *hub;
-	int port1;
+	struct usb_hub *hub = NULL;
+	int port1 = 1;
 
 	/* mark the device as inactive, so any further urb submissions for
 	 * this device (and any of its children) will fail immediately.
···
 			if (status != -ENODEV &&
 			    port1 != unreliable_port &&
 			    printk_ratelimit())
-				dev_err(&udev->dev, "connect-debounce failed, port %d disabled\n",
-					port1);
-
+				dev_err(&port_dev->dev, "connect-debounce failed\n");
 			portstatus &= ~USB_PORT_STAT_CONNECTION;
 			unreliable_port = port1;
 		} else {
+1-1
drivers/usb/dwc2/gadget.c
···
 static void s3c_hsotg_irq_enumdone(struct s3c_hsotg *hsotg)
 {
 	u32 dsts = readl(hsotg->regs + DSTS);
-	int ep0_mps = 0, ep_mps;
+	int ep0_mps = 0, ep_mps = 8;
 
 	/*
 	 * This should signal the finish of the enumeration phase
+1-1
drivers/usb/dwc3/dwc3-omap.c
···
 
 static int dwc3_omap_extcon_register(struct dwc3_omap *omap)
 {
-	u32			ret;
+	int			ret;
 	struct device_node	*node = omap->dev->of_node;
 	struct extcon_dev	*edev;
···
 
 	value = -ENOMEM;
 	kbuf = memdup_user(buf, len);
-	if (!kbuf) {
+	if (IS_ERR(kbuf)) {
 		value = PTR_ERR(kbuf);
 		goto free1;
 	}
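The bug fixed above is the classic error-pointer mix-up: `memdup_user()` never returns NULL on failure, it returns an encoded error pointer, so the check must be `IS_ERR()`, otherwise `PTR_ERR()` is applied to a valid pointer and the error path never triggers. A userspace sketch of the convention, simplified from the kernel's include/linux/err.h (the kernel versions carry additional annotations):

```c
#include <stdint.h>

#define MAX_ERRNO 4095

/* A negative errno encoded as a pointer into the top page of the address
 * space, which no valid allocation can ever occupy. */
static inline void *ERR_PTR(intptr_t error)
{
	return (void *)error;
}

static inline intptr_t PTR_ERR(const void *ptr)
{
	return (intptr_t)ptr;
}

/* True only for pointers in the reserved [-MAX_ERRNO, -1] range */
static inline int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

Because `ERR_PTR(-ENOMEM)` is a non-NULL value, a `if (!kbuf)` test silently passes and the caller goes on to dereference an invalid pointer.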
+2-1
drivers/usb/gadget/udc/Kconfig
···
 	  gadget drivers to also be dynamically linked.
 
 config USB_EG20T
-	tristate "Intel EG20T PCH/LAPIS Semiconductor IOH(ML7213/ML7831) UDC"
+	tristate "Intel QUARK X1000/EG20T PCH/LAPIS Semiconductor IOH(ML7213/ML7831) UDC"
 	depends on PCI
 	help
 	  This is a USB device driver for EG20T PCH.
···
 	  ML7213/ML7831 is companion chip for Intel Atom E6xx series.
 	  ML7213/ML7831 is completely compatible for Intel EG20T PCH.
 
+	  This driver can be used with Intel's Quark X1000 SOC platform
 #
 # LAST -- dummy/emulated controller
 #
+1-1
drivers/usb/gadget/udc/atmel_usba_udc.c
···
 	if (dma_status) {
 		int i;
 
-		for (i = 1; i < USBA_NR_DMAS; i++)
+		for (i = 1; i <= USBA_NR_DMAS; i++)
 			if (dma_status & (1 << i))
 				usba_dma_irq(udc, &udc->usba_ep[i]);
 	}
+6-2
drivers/usb/gadget/udc/fusb300_udc.c
···
 
 	/* initialize udc */
 	fusb300 = kzalloc(sizeof(struct fusb300), GFP_KERNEL);
-	if (fusb300 == NULL)
+	if (fusb300 == NULL) {
+		ret = -ENOMEM;
 		goto clean_up;
+	}
 
 	for (i = 0; i < FUSB300_MAX_NUM_EP; i++) {
 		_ep[i] = kzalloc(sizeof(struct fusb300_ep), GFP_KERNEL);
-		if (_ep[i] == NULL)
+		if (_ep[i] == NULL) {
+			ret = -ENOMEM;
 			goto clean_up;
+		}
 		fusb300->ep[i] = _ep[i];
 	}
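The fix above also illustrates the kernel's goto-unwind convention: set the return code at the failure site, then jump to a single cleanup label that releases whatever was allocated so far. A generic userspace sketch of the pattern; the function and label names are hypothetical, with -1 standing in for -ENOMEM:

```c
#include <stdlib.h>

/*
 * Allocate two buffers; on any failure, free what was already
 * allocated via one shared cleanup label and return the error code
 * that was set just before the goto.
 */
static int alloc_pair(size_t n, char **a, char **b)
{
	int ret = 0;

	*a = malloc(n);
	if (*a == NULL) {
		ret = -1;
		goto clean_up;
	}

	*b = malloc(n);
	if (*b == NULL) {
		ret = -1;
		goto clean_up;
	}
	return 0;

clean_up:
	free(*a);	/* free(NULL) is a no-op, so this is always safe */
	*a = NULL;
	*b = NULL;
	return ret;
}
```

The bug the hunk fixes is exactly the step this sketch makes explicit: jumping to `clean_up` without first assigning `ret`, which lets a stale (possibly zero) value be returned from a failed probe.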
+19-3
drivers/usb/gadget/udc/pch_udc.c
···
  * @setup_data:		Received setup data
  * @phys_addr:		of device memory
  * @base_addr:		for mapped device memory
+ * @bar:		Indicates which PCI BAR for USB regs
  * @irq:		IRQ line for the device
  * @cfg_data:		current cfg, intf, and alt in use
  * @vbus_gpio:		GPIO informaton for detecting VBUS
···
 	struct usb_ctrlrequest		setup_data;
 	unsigned long			phys_addr;
 	void __iomem			*base_addr;
+	unsigned			bar;
 	unsigned			irq;
 	struct pch_udc_cfg_data		cfg_data;
 	struct pch_vbus_gpio_data	vbus_gpio;
 };
 #define to_pch_udc(g)	(container_of((g), struct pch_udc_dev, gadget))
 
+#define PCH_UDC_PCI_BAR_QUARK_X1000	0
 #define PCH_UDC_PCI_BAR			1
 #define PCI_DEVICE_ID_INTEL_EG20T_UDC	0x8808
+#define PCI_DEVICE_ID_INTEL_QUARK_X1000_UDC	0x0939
 #define PCI_VENDOR_ID_ROHM		0x10DB
 #define PCI_DEVICE_ID_ML7213_IOH_UDC	0x801D
 #define PCI_DEVICE_ID_ML7831_IOH_UDC	0x8808
···
 	iounmap(dev->base_addr);
 	if (dev->mem_region)
 		release_mem_region(dev->phys_addr,
-				pci_resource_len(pdev, PCH_UDC_PCI_BAR));
+				pci_resource_len(pdev, dev->bar));
 	if (dev->active)
 		pci_disable_device(pdev);
 	kfree(dev);
···
 	dev->active = 1;
 	pci_set_drvdata(pdev, dev);
 
+	/* Determine BAR based on PCI ID */
+	if (id->device == PCI_DEVICE_ID_INTEL_QUARK_X1000_UDC)
+		dev->bar = PCH_UDC_PCI_BAR_QUARK_X1000;
+	else
+		dev->bar = PCH_UDC_PCI_BAR;
+
 	/* PCI resource allocation */
-	resource = pci_resource_start(pdev, 1);
-	len = pci_resource_len(pdev, 1);
+	resource = pci_resource_start(pdev, dev->bar);
+	len = pci_resource_len(pdev, dev->bar);
 
 	if (!request_mem_region(resource, len, KBUILD_MODNAME)) {
 		dev_err(&pdev->dev, "%s: pci device used already\n", __func__);
···
 }
 
 static const struct pci_device_id pch_udc_pcidev_id[] = {
+	{
+		PCI_DEVICE(PCI_VENDOR_ID_INTEL,
+			   PCI_DEVICE_ID_INTEL_QUARK_X1000_UDC),
+		.class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe,
+		.class_mask = 0xffffffff,
+	},
 	{
 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_EG20T_UDC),
 		.class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe,
+2-2
drivers/usb/gadget/udc/r8a66597-udc.c
···
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	reg = devm_ioremap_resource(&pdev->dev, res);
-	if (!reg)
-		return -ENODEV;
+	if (IS_ERR(reg))
+		return PTR_ERR(reg);
 
 	ires = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
 	irq = ires->start;
···
	/* AMD PLL quirk */
	if (pdev->vendor == PCI_VENDOR_ID_AMD && usb_amd_find_chipset_info())
		xhci->quirks |= XHCI_AMD_PLL_FIX;
+
+	if (pdev->vendor == PCI_VENDOR_ID_AMD)
+		xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+
	if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
		xhci->quirks |= XHCI_LPM_SUPPORT;
		xhci->quirks |= XHCI_INTEL_HOST;
···
		xhci->quirks |= XHCI_RESET_ON_RESUME;
	if (pdev->vendor == PCI_VENDOR_ID_VIA)
		xhci->quirks |= XHCI_RESET_ON_RESUME;
+
+	/* See https://bugzilla.kernel.org/show_bug.cgi?id=79511 */
+	if (pdev->vendor == PCI_VENDOR_ID_VIA &&
+			pdev->device == 0x3432)
+		xhci->quirks |= XHCI_BROKEN_STREAMS;

	if (xhci->quirks & XHCI_RESET_ON_RESUME)
		xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
+41-63
drivers/usb/host/xhci-ring.c
···
	}
}

-/*
- * Find the segment that trb is in.  Start searching in start_seg.
- * If we must move past a segment that has a link TRB with a toggle cycle state
- * bit set, then we will toggle the value pointed at by cycle_state.
- */
-static struct xhci_segment *find_trb_seg(
-		struct xhci_segment *start_seg,
-		union xhci_trb	*trb, int *cycle_state)
-{
-	struct xhci_segment *cur_seg = start_seg;
-	struct xhci_generic_trb *generic_trb;
-
-	while (cur_seg->trbs > trb ||
-			&cur_seg->trbs[TRBS_PER_SEGMENT - 1] < trb) {
-		generic_trb = &cur_seg->trbs[TRBS_PER_SEGMENT - 1].generic;
-		if (generic_trb->field[3] & cpu_to_le32(LINK_TOGGLE))
-			*cycle_state ^= 0x1;
-		cur_seg = cur_seg->next;
-		if (cur_seg == start_seg)
-			/* Looped over the entire list.  Oops! */
-			return NULL;
-	}
-	return cur_seg;
-}
-
-
static struct xhci_ring *xhci_triad_to_transfer_ring(struct xhci_hcd *xhci,
		unsigned int slot_id, unsigned int ep_index,
		unsigned int stream_id)
···
	struct xhci_virt_device *dev = xhci->devs[slot_id];
	struct xhci_virt_ep *ep = &dev->eps[ep_index];
	struct xhci_ring *ep_ring;
-	struct xhci_generic_trb *trb;
+	struct xhci_segment *new_seg;
+	union xhci_trb *new_deq;
	dma_addr_t addr;
	u64 hw_dequeue;
+	bool cycle_found = false;
+	bool td_last_trb_found = false;

	ep_ring = xhci_triad_to_transfer_ring(xhci, slot_id,
			ep_index, stream_id);
···
		hw_dequeue = le64_to_cpu(ep_ctx->deq);
	}

-	/* Find virtual address and segment of hardware dequeue pointer */
-	state->new_deq_seg = ep_ring->deq_seg;
-	state->new_deq_ptr = ep_ring->dequeue;
-	while (xhci_trb_virt_to_dma(state->new_deq_seg, state->new_deq_ptr)
-			!= (dma_addr_t)(hw_dequeue & ~0xf)) {
-		next_trb(xhci, ep_ring, &state->new_deq_seg,
-					&state->new_deq_ptr);
-		if (state->new_deq_ptr == ep_ring->dequeue) {
-			WARN_ON(1);
+	new_seg = ep_ring->deq_seg;
+	new_deq = ep_ring->dequeue;
+	state->new_cycle_state = hw_dequeue & 0x1;
+
+	/*
+	 * We want to find the pointer, segment and cycle state of the new trb
+	 * (the one after current TD's last_trb). We know the cycle state at
+	 * hw_dequeue, so walk the ring until both hw_dequeue and last_trb are
+	 * found.
+	 */
+	do {
+		if (!cycle_found && xhci_trb_virt_to_dma(new_seg, new_deq)
+		    == (dma_addr_t)(hw_dequeue & ~0xf)) {
+			cycle_found = true;
+			if (td_last_trb_found)
+				break;
+		}
+		if (new_deq == cur_td->last_trb)
+			td_last_trb_found = true;
+
+		if (cycle_found &&
+		    TRB_TYPE_LINK_LE32(new_deq->generic.field[3]) &&
+		    new_deq->generic.field[3] & cpu_to_le32(LINK_TOGGLE))
+			state->new_cycle_state ^= 0x1;
+
+		next_trb(xhci, ep_ring, &new_seg, &new_deq);
+
+		/* Search wrapped around, bail out */
+		if (new_deq == ep->ring->dequeue) {
+			xhci_err(xhci, "Error: Failed finding new dequeue state\n");
+			state->new_deq_seg = NULL;
+			state->new_deq_ptr = NULL;
			return;
		}
-	}
-	/*
-	 * Find cycle state for last_trb, starting at old cycle state of
-	 * hw_dequeue. If there is only one segment ring, find_trb_seg() will
-	 * return immediately and cannot toggle the cycle state if this search
-	 * wraps around, so add one more toggle manually in that case.
-	 */
-	state->new_cycle_state = hw_dequeue & 0x1;
-	if (ep_ring->first_seg == ep_ring->first_seg->next &&
-			cur_td->last_trb < state->new_deq_ptr)
-		state->new_cycle_state ^= 0x1;

-	state->new_deq_ptr = cur_td->last_trb;
-	xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
-			"Finding segment containing last TRB in TD.");
-	state->new_deq_seg = find_trb_seg(state->new_deq_seg,
-			state->new_deq_ptr, &state->new_cycle_state);
-	if (!state->new_deq_seg) {
-		WARN_ON(1);
-		return;
-	}
+	} while (!cycle_found || !td_last_trb_found);

-	/* Increment to find next TRB after last_trb. Cycle if appropriate. */
-	trb = &state->new_deq_ptr->generic;
-	if (TRB_TYPE_LINK_LE32(trb->field[3]) &&
-	    (trb->field[3] & cpu_to_le32(LINK_TOGGLE)))
-		state->new_cycle_state ^= 0x1;
-	next_trb(xhci, ep_ring, &state->new_deq_seg, &state->new_deq_ptr);
+	state->new_deq_seg = new_seg;
+	state->new_deq_ptr = new_deq;

	/* Don't update the ring cycle state for the producer (us). */
	xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
···
	 * last TRB of the previous TD. The command completion handle
	 * will take care the rest.
	 */
-	if (!event_seg && trb_comp_code == COMP_STOP_INVAL) {
+	if (!event_seg && (trb_comp_code == COMP_STOP ||
+			   trb_comp_code == COMP_STOP_INVAL)) {
		ret = 0;
		goto cleanup;
	}
+3
drivers/usb/host/xhci.c
···
			ep_index, ep->stopped_stream, ep->stopped_td,
			&deq_state);

+	if (!deq_state.new_deq_ptr || !deq_state.new_deq_seg)
+		return;
+
	/* HW with the reset endpoint quirk will use the saved dequeue state to
	 * issue a configure endpoint command later.
	 */
···
	 */
	if (motg->phy_number) {
		phy_select = devm_ioremap_nocache(&pdev->dev, USB2_PHY_SEL, 4);
-		if (IS_ERR(phy_select))
-			return PTR_ERR(phy_select);
+		if (!phy_select)
+			return -ENOMEM;
		/* Enable second PHY with the OTG port */
		writel(0x1, phy_select);
	}
···
		if (usb_endpoint_is_bulk_in(endpoint)) {
			/* we found a bulk in endpoint */
			dev_dbg(ddev, "found bulk in on endpoint %d\n", i);
-			bulk_in_endpoint[num_bulk_in] = endpoint;
-			++num_bulk_in;
+			if (num_bulk_in < MAX_NUM_PORTS) {
+				bulk_in_endpoint[num_bulk_in] = endpoint;
+				++num_bulk_in;
+			}
		}

		if (usb_endpoint_is_bulk_out(endpoint)) {
			/* we found a bulk out endpoint */
			dev_dbg(ddev, "found bulk out on endpoint %d\n", i);
-			bulk_out_endpoint[num_bulk_out] = endpoint;
-			++num_bulk_out;
+			if (num_bulk_out < MAX_NUM_PORTS) {
+				bulk_out_endpoint[num_bulk_out] = endpoint;
+				++num_bulk_out;
+			}
		}

		if (usb_endpoint_is_int_in(endpoint)) {
			/* we found a interrupt in endpoint */
			dev_dbg(ddev, "found interrupt in on endpoint %d\n", i);
-			interrupt_in_endpoint[num_interrupt_in] = endpoint;
-			++num_interrupt_in;
+			if (num_interrupt_in < MAX_NUM_PORTS) {
+				interrupt_in_endpoint[num_interrupt_in] =
+						endpoint;
+				++num_interrupt_in;
+			}
		}

		if (usb_endpoint_is_int_out(endpoint)) {
			/* we found an interrupt out endpoint */
			dev_dbg(ddev, "found interrupt out on endpoint %d\n", i);
-			interrupt_out_endpoint[num_interrupt_out] = endpoint;
-			++num_interrupt_out;
+			if (num_interrupt_out < MAX_NUM_PORTS) {
+				interrupt_out_endpoint[num_interrupt_out] =
+						endpoint;
+				++num_interrupt_out;
+			}
		}
	}
···
			if (usb_endpoint_is_int_in(endpoint)) {
				/* we found a interrupt in endpoint */
				dev_dbg(ddev, "found interrupt in for Prolific device on separate interface\n");
-				interrupt_in_endpoint[num_interrupt_in] = endpoint;
-				++num_interrupt_in;
+				if (num_interrupt_in < MAX_NUM_PORTS) {
+					interrupt_in_endpoint[num_interrupt_in] = endpoint;
+					++num_interrupt_in;
+				}
			}
		}
	}
···
		num_ports = type->calc_num_ports(serial);
		if (!num_ports)
			num_ports = type->num_ports;
+	}
+
+	if (num_ports > MAX_NUM_PORTS) {
+		dev_warn(ddev, "too many ports requested: %d\n", num_ports);
+		num_ports = MAX_NUM_PORTS;
	}

	serial->num_ports = num_ports;
+6-1
drivers/usb/serial/whiteheat.c
···
		dev_dbg(&urb->dev->dev, "%s - command_info is NULL, exiting.\n", __func__);
		return;
	}
+	if (!urb->actual_length) {
+		dev_dbg(&urb->dev->dev, "%s - empty response, exiting.\n", __func__);
+		return;
+	}
	if (status) {
		dev_dbg(&urb->dev->dev, "%s - nonzero urb status: %d\n", __func__, status);
		if (status != -ENOENT)
···
		/* These are unsolicited reports from the firmware, hence no
		   waiting command to wakeup */
		dev_dbg(&urb->dev->dev, "%s - event received\n", __func__);
-	} else if (data[0] == WHITEHEAT_GET_DTR_RTS) {
+	} else if ((data[0] == WHITEHEAT_GET_DTR_RTS) &&
+		   (urb->actual_length - 1 <= sizeof(command_info->result_buffer))) {
		memcpy(command_info->result_buffer, &data[1],
			urb->actual_length - 1);
		command_info->command_finished = WHITEHEAT_CMD_COMPLETE;
···
	caching_ctl->block_group = cache;
	caching_ctl->progress = cache->key.objectid;
	atomic_set(&caching_ctl->count, 1);
-	btrfs_init_work(&caching_ctl->work, caching_thread, NULL, NULL);
+	btrfs_init_work(&caching_ctl->work, btrfs_cache_helper,
+			caching_thread, NULL, NULL);

	spin_lock(&cache->lock);
	/*
···
	async->sync = 0;
	init_completion(&async->wait);

-	btrfs_init_work(&async->work, delayed_ref_async_start,
-			NULL, NULL);
+	btrfs_init_work(&async->work, btrfs_extent_refs_helper,
+			delayed_ref_async_start, NULL, NULL);

	btrfs_queue_work(root->fs_info->extent_workers, &async->work);
···
 */
static u64 btrfs_reduce_alloc_profile(struct btrfs_root *root, u64 flags)
{
-	/*
-	 * we add in the count of missing devices because we want
-	 * to make sure that any RAID levels on a degraded FS
-	 * continue to be honored.
-	 */
-	u64 num_devices = root->fs_info->fs_devices->rw_devices +
-		root->fs_info->fs_devices->missing_devices;
+	u64 num_devices = root->fs_info->fs_devices->rw_devices;
	u64 target;
	u64 tmp;
···
	if (stripped)
		return extended_to_chunk(stripped);

-	/*
-	 * we add in the count of missing devices because we want
-	 * to make sure that any RAID levels on a degraded FS
-	 * continue to be honored.
-	 */
-	num_devices = root->fs_info->fs_devices->rw_devices +
-		root->fs_info->fs_devices->missing_devices;
+	num_devices = root->fs_info->fs_devices->rw_devices;

	stripped = BTRFS_BLOCK_GROUP_RAID0 |
		BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6 |
+3-2
fs/btrfs/extent_io.c
···
				test_bit(BIO_UPTODATE, &bio->bi_flags);
			if (err)
				uptodate = 0;
+			offset += len;
			continue;
		}
	}
···
		return -ENOMEM;
	path->leave_spinning = 1;

-	start = ALIGN(start, BTRFS_I(inode)->root->sectorsize);
-	len = ALIGN(len, BTRFS_I(inode)->root->sectorsize);
+	start = round_down(start, BTRFS_I(inode)->root->sectorsize);
+	len = round_up(max, BTRFS_I(inode)->root->sectorsize) - start;

	/*
	 * lookup the last file extent. We're not using i_size here
+12-5
fs/btrfs/file.c
···
{
	if (filp->private_data)
		btrfs_ioctl_trans_end(filp);
-	filemap_flush(inode->i_mapping);
+	/*
+	 * ordered_data_close is set by settattr when we are about to truncate
+	 * a file from a non-zero size to a zero size.  This tries to
+	 * flush down new bytes that may have been written if the
+	 * application were using truncate to replace a file in place.
+	 */
+	if (test_and_clear_bit(BTRFS_INODE_ORDERED_DATA_CLOSE,
+			       &BTRFS_I(inode)->runtime_flags))
+		filemap_flush(inode->i_mapping);
	return 0;
}
···
		goto out;
	}

-	if (hole_mergeable(inode, leaf, path->slots[0]+1, offset, end)) {
+	if (hole_mergeable(inode, leaf, path->slots[0], offset, end)) {
		u64 num_bytes;

-		path->slots[0]++;
		key.offset = offset;
		btrfs_set_item_key_safe(root, path, &key);
		fi = btrfs_item_ptr(leaf, path->slots[0],
···
		goto out_only_mutex;
	}

-	lockstart = round_up(offset , BTRFS_I(inode)->root->sectorsize);
+	lockstart = round_up(offset, BTRFS_I(inode)->root->sectorsize);
	lockend = round_down(offset + len,
			     BTRFS_I(inode)->root->sectorsize) - 1;
	same_page = ((offset >> PAGE_CACHE_SHIFT) ==
···
					tail_start + tail_len, 0, 1);
				if (ret)
					goto out_only_mutex;
-			}
+				}
		}
	}
+89-20
fs/btrfs/inode.c
···
		async_cow->end = cur_end;
		INIT_LIST_HEAD(&async_cow->extents);

-		btrfs_init_work(&async_cow->work, async_cow_start,
-				async_cow_submit, async_cow_free);
+		btrfs_init_work(&async_cow->work,
+				btrfs_delalloc_helper,
+				async_cow_start, async_cow_submit,
+				async_cow_free);

		nr_pages = (cur_end - start + PAGE_CACHE_SIZE) >>
			PAGE_CACHE_SHIFT;
···

		SetPageChecked(page);
		page_cache_get(page);
-		btrfs_init_work(&fixup->work, btrfs_writepage_fixup_worker, NULL, NULL);
+		btrfs_init_work(&fixup->work, btrfs_fixup_helper,
+				btrfs_writepage_fixup_worker, NULL, NULL);
		fixup->page = page;
		btrfs_queue_work(root->fs_info->fixup_workers, &fixup->work);
		return -EBUSY;
···
	struct inode *inode = page->mapping->host;
	struct btrfs_root *root = BTRFS_I(inode)->root;
	struct btrfs_ordered_extent *ordered_extent = NULL;
-	struct btrfs_workqueue *workers;
+	struct btrfs_workqueue *wq;
+	btrfs_work_func_t func;

	trace_btrfs_writepage_end_io_hook(page, start, end, uptodate);
···
					    end - start + 1, uptodate))
		return 0;

-	btrfs_init_work(&ordered_extent->work, finish_ordered_fn, NULL, NULL);
+	if (btrfs_is_free_space_inode(inode)) {
+		wq = root->fs_info->endio_freespace_worker;
+		func = btrfs_freespace_write_helper;
+	} else {
+		wq = root->fs_info->endio_write_workers;
+		func = btrfs_endio_write_helper;
+	}

-	if (btrfs_is_free_space_inode(inode))
-		workers = root->fs_info->endio_freespace_worker;
-	else
-		workers = root->fs_info->endio_write_workers;
-	btrfs_queue_work(workers, &ordered_extent->work);
+	btrfs_init_work(&ordered_extent->work, func, finish_ordered_fn, NULL,
+			NULL);
+	btrfs_queue_work(wq, &ordered_extent->work);

	return 0;
}
···
		clear_bit(EXTENT_FLAG_LOGGING, &em->flags);
		remove_extent_mapping(map_tree, em);
		free_extent_map(em);
+		if (need_resched()) {
+			write_unlock(&map_tree->lock);
+			cond_resched();
+			write_lock(&map_tree->lock);
+		}
	}
	write_unlock(&map_tree->lock);
···
				 &cached_state, GFP_NOFS);
		free_extent_state(state);

+		cond_resched();
		spin_lock(&io_tree->lock);
	}
	spin_unlock(&io_tree->lock);
···
			iput(inode);
			inode = ERR_PTR(ret);
		}
+		/*
+		 * If orphan cleanup did remove any orphans, it means the tree
+		 * was modified and therefore the commit root is not the same as
+		 * the current root anymore. This is a problem, because send
+		 * uses the commit root and therefore can see inode items that
+		 * don't exist in the current root anymore, and for example make
+		 * calls to btrfs_iget, which will do tree lookups based on the
+		 * current root and not on the commit root. Those lookups will
+		 * fail, returning a -ESTALE error, and making send fail with
+		 * that error. So make sure a send does not see any orphans we
+		 * have just removed, and that it will see the same inodes
+		 * regardless of whether a transaction commit happened before
+		 * it started (meaning that the commit root will be the same as
+		 * the current root) or not.
+		 */
+		if (sub_root->node != sub_root->commit_root) {
+			u64 sub_flags = btrfs_root_flags(&sub_root->root_item);
+
+			if (sub_flags & BTRFS_ROOT_SUBVOL_RDONLY) {
+				struct extent_buffer *eb;
+
+				/*
+				 * Assert we can't have races between dentry
+				 * lookup called through the snapshot creation
+				 * ioctl and the VFS.
+				 */
+				ASSERT(mutex_is_locked(&dir->i_mutex));
+
+				down_write(&root->fs_info->commit_root_sem);
+				eb = sub_root->commit_root;
+				sub_root->commit_root =
+					btrfs_root_node(sub_root);
+				up_write(&root->fs_info->commit_root_sem);
+				free_extent_buffer(eb);
+			}
+		}
	}

	return inode;
···
		btrfs_free_path(path);
		return ERR_PTR(-ENOMEM);
	}

+	/*
+	 * O_TMPFILE, set link count to 0, so that after this point,
+	 * we fill in an inode item with the correct link count.
+	 */
+	if (!name)
+		set_nlink(inode, 0);

	/*
	 * we have to initialize this early, so we can reclaim the inode
···
static int merge_extent_mapping(struct extent_map_tree *em_tree,
				struct extent_map *existing,
				struct extent_map *em,
-				u64 map_start, u64 map_len)
+				u64 map_start)
{
	u64 start_diff;

	BUG_ON(map_start < em->start || map_start >= extent_map_end(em));
	start_diff = map_start - em->start;
	em->start = map_start;
-	em->len = map_len;
+	em->len = existing->start - em->start;
	if (em->block_start < EXTENT_MAP_LAST_BYTE &&
	    !test_bit(EXTENT_FLAG_COMPRESSED, &em->flags)) {
		em->block_start += start_diff;
···
			goto not_found;
		if (start + len <= found_key.offset)
			goto not_found;
+		if (start > found_key.offset)
+			goto next;
		em->start = start;
		em->orig_start = start;
		em->len = found_key.offset - start;
···
							 em->len);
		if (existing) {
			err = merge_extent_mapping(em_tree, existing,
-						   em, start,
-						   root->sectorsize);
+						   em, start);
			free_extent_map(existing);
			if (err) {
				free_extent_map(em);
···
	if (!ret)
		goto out_test;

-	btrfs_init_work(&ordered->work, finish_ordered_fn, NULL, NULL);
+	btrfs_init_work(&ordered->work, btrfs_endio_write_helper,
+			finish_ordered_fn, NULL, NULL);
	btrfs_queue_work(root->fs_info->endio_write_workers,
			 &ordered->work);
out_test:
···
	map_length = orig_bio->bi_iter.bi_size;
	ret = btrfs_map_block(root->fs_info, rw, start_sector << 9,
			      &map_length, NULL, 0);
-	if (ret) {
-		bio_put(orig_bio);
+	if (ret)
		return -EIO;
-	}

	if (map_length >= orig_bio->bi_iter.bi_size) {
		bio = orig_bio;
···
	bio = btrfs_dio_bio_alloc(orig_bio->bi_bdev, start_sector, GFP_NOFS);
	if (!bio)
		return -ENOMEM;
+
	bio->bi_private = dip;
	bio->bi_end_io = btrfs_end_dio_bio;
	atomic_inc(&dip->pending_bios);
···
	count = iov_iter_count(iter);
	if (test_bit(BTRFS_INODE_HAS_ASYNC_EXTENT,
		     &BTRFS_I(inode)->runtime_flags))
-		filemap_fdatawrite_range(inode->i_mapping, offset, count);
+		filemap_fdatawrite_range(inode->i_mapping, offset,
+					 offset + count - 1);

	if (rw & WRITE) {
		/*
···
	work->inode = inode;
	work->wait = wait;
	work->delay_iput = delay_iput;
-	btrfs_init_work(&work->work, btrfs_run_delalloc_work, NULL, NULL);
+	WARN_ON_ONCE(!inode);
+	btrfs_init_work(&work->work, btrfs_flush_delalloc_helper,
+			btrfs_run_delalloc_work, NULL, NULL);

	return work;
}
···
	if (ret)
		goto out;

+	/*
+	 * We set number of links to 0 in btrfs_new_inode(), and here we set
+	 * it to 1 because d_tmpfile() will issue a warning if the count is 0,
+	 * through:
+	 *
+	 *    d_tmpfile() -> inode_dec_link_count() -> drop_nlink()
+	 */
+	set_nlink(inode, 1);
	d_tmpfile(dentry, inode);
	mark_inode_dirty(inode);
+2-34
fs/btrfs/ioctl.c
···
	if (ret)
		goto fail;

-	ret = btrfs_orphan_cleanup(pending_snapshot->snap);
-	if (ret)
-		goto fail;
-
-	/*
-	 * If orphan cleanup did remove any orphans, it means the tree was
-	 * modified and therefore the commit root is not the same as the
-	 * current root anymore. This is a problem, because send uses the
-	 * commit root and therefore can see inode items that don't exist
-	 * in the current root anymore, and for example make calls to
-	 * btrfs_iget, which will do tree lookups based on the current root
-	 * and not on the commit root. Those lookups will fail, returning a
-	 * -ESTALE error, and making send fail with that error. So make sure
-	 * a send does not see any orphans we have just removed, and that it
-	 * will see the same inodes regardless of whether a transaction
-	 * commit happened before it started (meaning that the commit root
-	 * will be the same as the current root) or not.
-	 */
-	if (readonly && pending_snapshot->snap->node !=
-	    pending_snapshot->snap->commit_root) {
-		trans = btrfs_join_transaction(pending_snapshot->snap);
-		if (IS_ERR(trans) && PTR_ERR(trans) != -ENOENT) {
-			ret = PTR_ERR(trans);
-			goto fail;
-		}
-		if (!IS_ERR(trans)) {
-			ret = btrfs_commit_transaction(trans,
-						       pending_snapshot->snap);
-			if (ret)
-				goto fail;
-		}
-	}
-
	inode = btrfs_lookup_dentry(dentry->d_parent->d_inode, dentry);
	if (IS_ERR(inode)) {
		ret = PTR_ERR(inode);
···
	btrfs_mark_buffer_dirty(leaf);
	btrfs_release_path(path);

-	last_dest_end = new_key.offset + datal;
+	last_dest_end = ALIGN(new_key.offset + datal,
+			      root->sectorsize);
	ret = clone_finish_inode_update(trans, inode,
					last_dest_end,
					destoff, olen);
···
	if (!fs_info->device_dir_kobj)
		return -EINVAL;

-	if (one_device) {
+	if (one_device && one_device->bdev) {
		disk = one_device->bdev->bd_part;
		disk_kobj = &part_to_dev(disk)->kobj;
+13-4
fs/btrfs/tree-log.c
···
	struct list_head ordered_sums;
	int skip_csum = BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM;
	bool has_extents = false;
-	bool need_find_last_extent = (*last_extent == 0);
+	bool need_find_last_extent = true;
	bool done = false;

	INIT_LIST_HEAD(&ordered_sums);
···
		 */
		if (ins_keys[i].type == BTRFS_EXTENT_DATA_KEY) {
			has_extents = true;
-			if (need_find_last_extent &&
-			    first_key.objectid == (u64)-1)
+			if (first_key.objectid == (u64)-1)
				first_key = ins_keys[i];
		} else {
			need_find_last_extent = false;
···

	if (!has_extents)
		return ret;
+
+	if (need_find_last_extent && *last_extent == first_key.offset) {
+		/*
+		 * We don't have any leafs between our current one and the one
+		 * we processed before that can have file extent items for our
+		 * inode (and have a generation number smaller than our current
+		 * transaction id).
+		 */
+		need_find_last_extent = false;
+	}

	/*
	 * Because we use btrfs_search_forward we could skip leaves that were
···
				       0, 0);
		if (ret)
			break;
-		*last_extent = offset + len;
+		*last_extent = extent_end;
	}
	/*
	 * Need to let the callers know we dropped the path so they should
+60-5
fs/btrfs/volumes.c
···
		ret = 1;
		device->fs_devices = fs_devices;
	} else if (!device->name || strcmp(device->name->str, path)) {
+		/*
+		 * When FS is already mounted.
+		 * 1. If you are here and if the device->name is NULL that
+		 *    means this device was missing at time of FS mount.
+		 * 2. If you are here and if the device->name is different
+		 *    from 'path' that means either
+		 *    a. The same device disappeared and reappeared with
+		 *       different name. or
+		 *    b. The missing-disk-which-was-replaced, has
+		 *       reappeared now.
+		 *
+		 * We must allow 1 and 2a above. But 2b would be a spurious
+		 * and unintentional.
+		 *
+		 * Further in case of 1 and 2a above, the disk at 'path'
+		 * would have missed some transaction when it was away and
+		 * in case of 2a the stale bdev has to be updated as well.
+		 * 2b must not be allowed at all time.
+		 */
+
+		/*
+		 * As of now don't allow update to btrfs_fs_device through
+		 * the btrfs dev scan cli, after FS has been mounted.
+		 */
+		if (fs_devices->opened) {
+			return -EBUSY;
+		} else {
+			/*
+			 * That is if the FS is _not_ mounted and if you
+			 * are here, that means there is more than one
+			 * disk with same uuid and devid. We keep the one
+			 * with larger generation number or the last-in if
+			 * generation are equal.
+			 */
+			if (found_transid < device->generation)
+				return -EEXIST;
+		}
+
		name = rcu_string_strdup(path, GFP_NOFS);
		if (!name)
			return -ENOMEM;
···
			device->missing = 0;
		}
	}

+	/*
+	 * Unmount does not free the btrfs_device struct but would zero
+	 * generation along with most of the other members. So just update
+	 * it back. We need it to pick the disk with largest generation
+	 * (as above).
+	 */
+	if (!fs_devices->opened)
+		device->generation = found_transid;

	if (found_transid > fs_devices->latest_trans) {
		fs_devices->latest_devid = devid;
···
	btrfs_set_device_io_align(leaf, dev_item, device->io_align);
	btrfs_set_device_io_width(leaf, dev_item, device->io_width);
	btrfs_set_device_sector_size(leaf, dev_item, device->sector_size);
-	btrfs_set_device_total_bytes(leaf, dev_item, device->total_bytes);
+	btrfs_set_device_total_bytes(leaf, dev_item, device->disk_total_bytes);
	btrfs_set_device_bytes_used(leaf, dev_item, device->bytes_used);
	btrfs_set_device_group(leaf, dev_item, 0);
	btrfs_set_device_seek_speed(leaf, dev_item, 0);
···
	device->fs_devices->total_devices--;

	if (device->missing)
-		root->fs_info->fs_devices->missing_devices--;
+		device->fs_devices->missing_devices--;

	next_device = list_entry(root->fs_info->fs_devices->devices.next,
				 struct btrfs_device, dev_list);
···
	if (srcdev->bdev) {
		fs_info->fs_devices->open_devices--;

-		/* zero out the old super */
-		btrfs_scratch_superblock(srcdev);
+		/*
+		 * zero out the old super if it is not writable
+		 * (e.g. seed device)
+		 */
+		if (srcdev->writeable)
+			btrfs_scratch_superblock(srcdev);
	}

	call_rcu(&srcdev->rcu, free_device);
···
	fs_devices->seeding = 0;
	fs_devices->num_devices = 0;
	fs_devices->open_devices = 0;
+	fs_devices->missing_devices = 0;
+	fs_devices->num_can_discard = 0;
+	fs_devices->rotating = 0;
	fs_devices->seed = seed_devices;

	generate_random_uuid(fs_devices->fsid);
···
	else
		generate_random_uuid(dev->uuid);

-	btrfs_init_work(&dev->work, pending_bios_fn, NULL, NULL);
+	btrfs_init_work(&dev->work, btrfs_submit_helper,
+			pending_bios_fn, NULL, NULL);

	return dev;
}
+17-1
fs/ext4/ext4.h
···
/*
 * Special error return code only used by dx_probe() and its callers.
 */
-#define ERR_BAD_DX_DIR	-75000
+#define ERR_BAD_DX_DIR	(-(MAX_ERRNO - 1))

/*
 * Timeout and state flag for lazy initialization inode thread.
···
	if (newsize > EXT4_I(inode)->i_disksize)
		EXT4_I(inode)->i_disksize = newsize;
	up_write(&EXT4_I(inode)->i_data_sem);
+}
+
+/* Update i_size, i_disksize. Requires i_mutex to avoid races with truncate */
+static inline int ext4_update_inode_size(struct inode *inode, loff_t newsize)
+{
+	int changed = 0;
+
+	if (newsize > inode->i_size) {
+		i_size_write(inode, newsize);
+		changed = 1;
+	}
+	if (newsize > EXT4_I(inode)->i_disksize) {
+		ext4_update_i_disksize(inode, newsize);
+		changed |= 2;
+	}
+	return changed;
+}

struct ext4_group_info {
+44-44
fs/ext4/extents.c
···
}

static int ext4_alloc_file_blocks(struct file *file, ext4_lblk_t offset,
-				  ext4_lblk_t len, int flags, int mode)
+				  ext4_lblk_t len, loff_t new_size,
+				  int flags, int mode)
{
	struct inode *inode = file_inode(file);
	handle_t *handle;
···
	int retries = 0;
	struct ext4_map_blocks map;
	unsigned int credits;
+	loff_t epos;

	map.m_lblk = offset;
+	map.m_len = len;
	/*
	 * Don't normalize the request if it can fit in one extent so
	 * that it doesn't get unnecessarily split into multiple
···
	credits = ext4_chunk_trans_blocks(inode, len);

retry:
-	while (ret >= 0 && ret < len) {
-		map.m_lblk = map.m_lblk + ret;
-		map.m_len = len = len - ret;
+	while (ret >= 0 && len) {
		handle = ext4_journal_start(inode, EXT4_HT_MAP_BLOCKS,
					    credits);
		if (IS_ERR(handle)) {
···
			ret2 = ext4_journal_stop(handle);
			break;
		}
+		map.m_lblk += ret;
+		map.m_len = len = len - ret;
+		epos = (loff_t)map.m_lblk << inode->i_blkbits;
+		inode->i_ctime = ext4_current_time(inode);
+		if (new_size) {
+			if (epos > new_size)
+				epos = new_size;
+			if (ext4_update_inode_size(inode, epos) & 0x1)
+				inode->i_mtime = inode->i_ctime;
+		} else {
+			if (epos > inode->i_size)
+				ext4_set_inode_flag(inode,
+						    EXT4_INODE_EOFBLOCKS);
+		}
+		ext4_mark_inode_dirty(handle, inode);
		ret2 = ext4_journal_stop(handle);
		if (ret2)
			break;
···
	loff_t new_size = 0;
	int ret = 0;
	int flags;
-	int partial;
+	int credits;
+	int partial_begin, partial_end;
	loff_t start, end;
	ext4_lblk_t lblk;
	struct address_space *mapping = inode->i_mapping;
···

	if (start < offset || end > offset + len)
		return -EINVAL;
-	partial = (offset + len) & ((1 << blkbits) - 1);
+	partial_begin = offset & ((1 << blkbits) - 1);
+	partial_end = (offset + len) & ((1 << blkbits) - 1);

	lblk = start >> blkbits;
	max_blocks = (end >> blkbits);
···
		 * If we have a partial block after EOF we have to allocate
		 * the entire block.
		 */
-		if (partial)
+		if (partial_end)
			max_blocks += 1;
	}
···

		/* Now release the pages and zero block aligned part of pages*/
		truncate_pagecache_range(inode, start, end - 1);
+		inode->i_mtime = inode->i_ctime = ext4_current_time(inode);

		/* Wait all existing dio workers, newcomers will block on i_mutex */
		ext4_inode_block_unlocked_dio(inode);
···
		if (ret)
			goto out_dio;

-		ret = ext4_alloc_file_blocks(file, lblk, max_blocks, flags,
-					     mode);
+		ret = ext4_alloc_file_blocks(file, lblk, max_blocks, new_size,
+					     flags, mode);
		if (ret)
			goto out_dio;
	}
+	if (!partial_begin && !partial_end)
+		goto out_dio;

-	handle = ext4_journal_start(inode, EXT4_HT_MISC, 4);
+	/*
+	 * In worst case we have to writeout two nonadjacent unwritten
+	 * blocks and update the inode
+	 */
+	credits = (2 * ext4_ext_index_trans_blocks(inode, 2)) + 1;
+	if (ext4_should_journal_data(inode))
+		credits += 2;
+	handle = ext4_journal_start(inode, EXT4_HT_MISC, credits);
	if (IS_ERR(handle)) {
		ret = PTR_ERR(handle);
		ext4_std_error(inode->i_sb, ret);
···
	}

	inode->i_mtime = inode->i_ctime = ext4_current_time(inode);
-
	if (new_size) {
-		if (new_size > i_size_read(inode))
-			i_size_write(inode, new_size);
-		if (new_size > EXT4_I(inode)->i_disksize)
-			ext4_update_i_disksize(inode, new_size);
+		ext4_update_inode_size(inode, new_size);
	} else {
		/*
		 * Mark that we allocate beyond EOF so the subsequent truncate
···
		if ((offset + len) > i_size_read(inode))
			ext4_set_inode_flag(inode, EXT4_INODE_EOFBLOCKS);
	}
-
	ext4_mark_inode_dirty(handle, inode);

	/* Zero out partial block at the edges of the range */
···
long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
{
	struct inode *inode = file_inode(file);
-	handle_t *handle;
	loff_t new_size = 0;
	unsigned int max_blocks;
	int ret = 0;
	int flags;
	ext4_lblk_t lblk;
-	struct timespec tv;
	unsigned int blkbits = inode->i_blkbits;

	/* Return error if mode is not supported */
···
		goto out;
	}

-	ret = ext4_alloc_file_blocks(file, lblk, max_blocks, flags, mode);
+	ret = ext4_alloc_file_blocks(file, lblk, max_blocks, new_size,
+				     flags, mode);
	if (ret)
		goto out;

-	handle = ext4_journal_start(inode, EXT4_HT_INODE, 2);
-	if (IS_ERR(handle))
-		goto out;
-
-	tv = inode->i_ctime = ext4_current_time(inode);
-
-	if (new_size) {
-		if (new_size > i_size_read(inode)) {
-			i_size_write(inode, new_size);
-			inode->i_mtime = tv;
-		}
-		if (new_size > EXT4_I(inode)->i_disksize)
-			ext4_update_i_disksize(inode, new_size);
-	} else {
-		/*
-		 * Mark that we allocate beyond EOF so the subsequent truncate
-		 * can proceed even if the new size is the same as i_size.
-		 */
-		if ((offset + len) > i_size_read(inode))
-			ext4_set_inode_flag(inode, EXT4_INODE_EOFBLOCKS);
+	if (file->f_flags & O_SYNC && EXT4_SB(inode->i_sb)->s_journal) {
+		ret = jbd2_complete_transaction(EXT4_SB(inode->i_sb)->s_journal,
+						EXT4_I(inode)->i_sync_tid);
}49864986- ext4_mark_inode_dirty(handle, inode);49874987- if (file->f_flags & O_SYNC)49884988- ext4_handle_sync(handle);49894989-49904990- ext4_journal_stop(handle);49914949out:49924950 mutex_unlock(&inode->i_mutex);49934951 trace_ext4_fallocate_exit(inode, offset, max_blocks, ret);
fs/ext4/inode.c | +16 -28

···
     } else
         copied = block_write_end(file, mapping, pos,
                                  len, copied, page, fsdata);
-
     /*
-     * No need to use i_size_read() here, the i_size
-     * cannot change under us because we hole i_mutex.
-     *
-     * But it's important to update i_size while still holding page lock:
+     * it's important to update i_size while still holding page lock:
      * page writeout could otherwise come in and zero beyond i_size.
      */
-    if (pos + copied > inode->i_size) {
-        i_size_write(inode, pos + copied);
-        i_size_changed = 1;
-    }
-
-    if (pos + copied > EXT4_I(inode)->i_disksize) {
-        /* We need to mark inode dirty even if
-         * new_i_size is less that inode->i_size
-         * but greater than i_disksize. (hint delalloc)
-         */
-        ext4_update_i_disksize(inode, (pos + copied));
-        i_size_changed = 1;
-    }
+    i_size_changed = ext4_update_inode_size(inode, pos + copied);
     unlock_page(page);
     page_cache_release(page);
···
     int ret = 0, ret2;
     int partial = 0;
     unsigned from, to;
-    loff_t new_i_size;
+    int size_changed = 0;
 
     trace_ext4_journalled_write_end(inode, pos, len, copied);
     from = pos & (PAGE_CACHE_SIZE - 1);
···
         if (!partial)
             SetPageUptodate(page);
     }
-    new_i_size = pos + copied;
-    if (new_i_size > inode->i_size)
-        i_size_write(inode, pos+copied);
+    size_changed = ext4_update_inode_size(inode, pos + copied);
     ext4_set_inode_state(inode, EXT4_STATE_JDATA);
     EXT4_I(inode)->i_datasync_tid = handle->h_transaction->t_tid;
-    if (new_i_size > EXT4_I(inode)->i_disksize) {
-        ext4_update_i_disksize(inode, new_i_size);
+    unlock_page(page);
+    page_cache_release(page);
+
+    if (size_changed) {
         ret2 = ext4_mark_inode_dirty(handle, inode);
         if (!ret)
             ret = ret2;
     }
 
-    unlock_page(page);
-    page_cache_release(page);
     if (pos + len > inode->i_size && ext4_can_truncate(inode))
         /* if we have allocated more blocks and copied
          * less. We will have blocks allocated outside
···
     struct ext4_map_blocks *map = &mpd->map;
     int err;
     loff_t disksize;
+    int progress = 0;
 
     mpd->io_submit.io_end->offset =
                 ((loff_t)map->m_lblk) << inode->i_blkbits;
···
          * is non-zero, a commit should free up blocks.
          */
         if ((err == -ENOMEM) ||
-            (err == -ENOSPC && ext4_count_free_clusters(sb)))
+            (err == -ENOSPC && ext4_count_free_clusters(sb))) {
+            if (progress)
+                goto update_disksize;
             return err;
+        }
         ext4_msg(sb, KERN_CRIT,
                  "Delayed block allocation failed for "
                  "inode %lu at logical offset %llu with"
···
             *give_up_on_write = true;
             return err;
         }
+        progress = 1;
         /*
          * Update buffer state, submit mapped pages, and get us new
          * extent to map
          */
         err = mpage_map_and_submit_buffers(mpd);
         if (err < 0)
-            return err;
+            goto update_disksize;
     } while (map->m_len);
 
+update_disksize:
     /*
      * Update on-disk size after IO is submitted. Races with
      * truncate are avoided by checking i_size under i_data_sem.
fs/ext4/mballoc.c | +5

···
     int last = first + count - 1;
     struct super_block *sb = e4b->bd_sb;
 
+    if (WARN_ON(count == 0))
+        return;
     BUG_ON(last >= (sb->s_blocksize << 3));
     assert_spin_locked(ext4_group_lock_ptr(sb, e4b->bd_group));
     /* Don't bother if the block group is corrupt. */
···
     int err;
 
     if (pa == NULL) {
+        if (ac->ac_f_ex.fe_len == 0)
+            return;
         err = ext4_mb_load_buddy(ac->ac_sb, ac->ac_f_ex.fe_group, &e4b);
         if (err) {
             /*
···
         mb_free_blocks(ac->ac_inode, &e4b, ac->ac_f_ex.fe_start,
                        ac->ac_f_ex.fe_len);
         ext4_unlock_group(ac->ac_sb, ac->ac_f_ex.fe_group);
+        ext4_mb_unload_buddy(&e4b);
         return;
     }
     if (pa->pa_type == MB_INODE_PA)
fs/ext4/namei.c | +51 -5

···
                    buffer */
     int num = 0;
     ext4_lblk_t nblocks;
-    int i, err;
+    int i, err = 0;
     int namelen;
 
     *res_dir = NULL;
···
          * return.  Otherwise, fall back to doing a search the
          * old fashioned way.
          */
-        if (bh || (err != ERR_BAD_DX_DIR))
+        if (err == -ENOENT)
+            return NULL;
+        if (err && err != ERR_BAD_DX_DIR)
+            return ERR_PTR(err);
+        if (bh)
             return bh;
         dxtrace(printk(KERN_DEBUG "ext4_find_entry: dx failed, "
                        "falling back\n"));
···
                 }
                 num++;
                 bh = ext4_getblk(NULL, dir, b++, 0, &err);
+                if (unlikely(err)) {
+                    if (ra_max == 0)
+                        return ERR_PTR(err);
+                    break;
+                }
                 bh_use[ra_max] = bh;
                 if (bh)
                     ll_rw_block(READ | REQ_META | REQ_PRIO,
···
         return ERR_PTR(-ENAMETOOLONG);
 
     bh = ext4_find_entry(dir, &dentry->d_name, &de, NULL);
+    if (IS_ERR(bh))
+        return (struct dentry *) bh;
     inode = NULL;
     if (bh) {
         __u32 ino = le32_to_cpu(de->inode);
···
     struct buffer_head *bh;
 
     bh = ext4_find_entry(child->d_inode, &dotdot, &de, NULL);
+    if (IS_ERR(bh))
+        return (struct dentry *) bh;
     if (!bh)
         return ERR_PTR(-ENOENT);
     ino = le32_to_cpu(de->inode);
···
 
     retval = -ENOENT;
     bh = ext4_find_entry(dir, &dentry->d_name, &de, NULL);
+    if (IS_ERR(bh))
+        return PTR_ERR(bh);
     if (!bh)
         goto end_rmdir;
 
···
 
     retval = -ENOENT;
     bh = ext4_find_entry(dir, &dentry->d_name, &de, NULL);
+    if (IS_ERR(bh))
+        return PTR_ERR(bh);
     if (!bh)
         goto end_unlink;
 
···
     struct ext4_dir_entry_2 *de;
 
     bh = ext4_find_entry(dir, d_name, &de, NULL);
+    if (IS_ERR(bh))
+        return PTR_ERR(bh);
     if (bh) {
         retval = ext4_delete_entry(handle, dir, de, bh);
         brelse(bh);
···
     return retval;
 }
 
-static void ext4_rename_delete(handle_t *handle, struct ext4_renament *ent)
+static void ext4_rename_delete(handle_t *handle, struct ext4_renament *ent,
+                               int force_reread)
 {
     int retval;
     /*
···
     if (le32_to_cpu(ent->de->inode) != ent->inode->i_ino ||
         ent->de->name_len != ent->dentry->d_name.len ||
         strncmp(ent->de->name, ent->dentry->d_name.name,
-                ent->de->name_len)) {
+                ent->de->name_len) ||
+        force_reread) {
         retval = ext4_find_delete_entry(handle, ent->dir,
                                         &ent->dentry->d_name);
     } else {
···
         .dentry = new_dentry,
         .inode = new_dentry->d_inode,
     };
+    int force_reread;
     int retval;
 
     dquot_initialize(old.dir);
···
         dquot_initialize(new.inode);
 
     old.bh = ext4_find_entry(old.dir, &old.dentry->d_name, &old.de, NULL);
+    if (IS_ERR(old.bh))
+        return PTR_ERR(old.bh);
     /*
      * Check for inode number is _not_ due to possible IO errors.
      * We might rmdir the source, keep it as pwd of some process
···
 
     new.bh = ext4_find_entry(new.dir, &new.dentry->d_name,
                              &new.de, &new.inlined);
+    if (IS_ERR(new.bh)) {
+        retval = PTR_ERR(new.bh);
+        goto end_rename;
+    }
     if (new.bh) {
         if (!new.inode) {
             brelse(new.bh);
···
         if (retval)
             goto end_rename;
     }
+    /*
+     * If we're renaming a file within an inline_data dir and adding or
+     * setting the new dirent causes a conversion from inline_data to
+     * extents/blockmap, we need to force the dirent delete code to
+     * re-read the directory, or else we end up trying to delete a dirent
+     * from what is now the extent tree root (or a block map).
+     */
+    force_reread = (new.dir->i_ino == old.dir->i_ino &&
+                    ext4_test_inode_flag(new.dir, EXT4_INODE_INLINE_DATA));
     if (!new.bh) {
         retval = ext4_add_entry(handle, new.dentry, old.inode);
         if (retval)
···
         if (retval)
             goto end_rename;
     }
+    if (force_reread)
+        force_reread = !ext4_test_inode_flag(new.dir,
+                                             EXT4_INODE_INLINE_DATA);
 
     /*
      * Like most other Unix systems, set the ctime for inodes on a
···
     /*
      * ok, that's it
      */
-    ext4_rename_delete(handle, &old);
+    ext4_rename_delete(handle, &old, force_reread);
 
     if (new.inode) {
         ext4_dec_count(handle, new.inode);
···
 
     old.bh = ext4_find_entry(old.dir, &old.dentry->d_name,
                              &old.de, &old.inlined);
+    if (IS_ERR(old.bh))
+        return PTR_ERR(old.bh);
     /*
      * Check for inode number is _not_ due to possible IO errors.
      * We might rmdir the source, keep it as pwd of some process
···
 
     new.bh = ext4_find_entry(new.dir, &new.dentry->d_name,
                              &new.de, &new.inlined);
+    if (IS_ERR(new.bh)) {
+        retval = PTR_ERR(new.bh);
+        goto end_rename;
+    }
 
     /* RENAME_EXCHANGE case: old *and* new must both exist */
     if (!new.bh || le32_to_cpu(new.de->inode) != new.inode->i_ino)
fs/jbd2/commit.c

···
     struct commit_header *h;
     __u32 csum;
 
-    if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+    if (!jbd2_journal_has_csum_v2or3(j))
         return;
 
     h = (struct commit_header *)(bh->b_data);
···
     return checksum;
 }
 
-static void write_tag_block(int tag_bytes, journal_block_tag_t *tag,
+static void write_tag_block(journal_t *j, journal_block_tag_t *tag,
                             unsigned long long block)
 {
     tag->t_blocknr = cpu_to_be32(block & (u32)~0);
-    if (tag_bytes > JBD2_TAG_SIZE32)
+    if (JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_64BIT))
         tag->t_blocknr_high = cpu_to_be32((block >> 31) >> 1);
 }
 
···
     struct jbd2_journal_block_tail *tail;
     __u32 csum;
 
-    if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+    if (!jbd2_journal_has_csum_v2or3(j))
         return;
 
     tail = (struct jbd2_journal_block_tail *)(bh->b_data + j->j_blocksize -
···
 static void jbd2_block_tag_csum_set(journal_t *j, journal_block_tag_t *tag,
                                     struct buffer_head *bh, __u32 sequence)
 {
+    journal_block_tag3_t *tag3 = (journal_block_tag3_t *)tag;
     struct page *page = bh->b_page;
     __u8 *addr;
     __u32 csum32;
     __be32 seq;
 
-    if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+    if (!jbd2_journal_has_csum_v2or3(j))
         return;
 
     seq = cpu_to_be32(sequence);
···
                          bh->b_size);
     kunmap_atomic(addr);
 
-    /* We only have space to store the lower 16 bits of the crc32c. */
-    tag->t_checksum = cpu_to_be16(csum32);
+    if (JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V3))
+        tag3->t_checksum = cpu_to_be32(csum32);
+    else
+        tag->t_checksum = cpu_to_be16(csum32);
 }
 /*
  * jbd2_journal_commit_transaction
···
     LIST_HEAD(io_bufs);
     LIST_HEAD(log_bufs);
 
-    if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+    if (jbd2_journal_has_csum_v2or3(journal))
         csum_size = sizeof(struct jbd2_journal_block_tail);
 
     /*
···
             tag_flag |= JBD2_FLAG_SAME_UUID;
 
         tag = (journal_block_tag_t *) tagp;
-        write_tag_block(tag_bytes, tag, jh2bh(jh)->b_blocknr);
+        write_tag_block(journal, tag, jh2bh(jh)->b_blocknr);
         tag->t_flags = cpu_to_be16(tag_flag);
         jbd2_block_tag_csum_set(journal, tag, wbuf[bufs],
                                 commit_transaction->t_tid);
fs/jbd2/journal.c | +37 -19

···
 /* Checksumming functions */
 static int jbd2_verify_csum_type(journal_t *j, journal_superblock_t *sb)
 {
-    if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+    if (!jbd2_journal_has_csum_v2or3(j))
         return 1;
 
     return sb->s_checksum_type == JBD2_CRC32C_CHKSUM;
···
 
 static int jbd2_superblock_csum_verify(journal_t *j, journal_superblock_t *sb)
 {
-    if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+    if (!jbd2_journal_has_csum_v2or3(j))
         return 1;
 
     return sb->s_checksum == jbd2_superblock_csum(j, sb);
···
 
 static void jbd2_superblock_csum_set(journal_t *j, journal_superblock_t *sb)
 {
-    if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+    if (!jbd2_journal_has_csum_v2or3(j))
         return;
 
     sb->s_checksum = jbd2_superblock_csum(j, sb);
···
         goto out;
     }
 
-    if (JBD2_HAS_COMPAT_FEATURE(journal, JBD2_FEATURE_COMPAT_CHECKSUM) &&
-        JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2)) {
+    if (jbd2_journal_has_csum_v2or3(journal) &&
+        JBD2_HAS_COMPAT_FEATURE(journal, JBD2_FEATURE_COMPAT_CHECKSUM)) {
         /* Can't have checksum v1 and v2 on at the same time! */
         printk(KERN_ERR "JBD2: Can't enable checksumming v1 and v2 "
+               "at the same time!\n");
+        goto out;
+    }
+
+    if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2) &&
+        JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V3)) {
+        /* Can't have checksum v2 and v3 at the same time! */
+        printk(KERN_ERR "JBD2: Can't enable checksumming v2 and v3 "
                "at the same time!\n");
         goto out;
     }
···
     }
 
     /* Load the checksum driver */
-    if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2)) {
+    if (jbd2_journal_has_csum_v2or3(journal)) {
         journal->j_chksum_driver = crypto_alloc_shash("crc32c", 0, 0);
         if (IS_ERR(journal->j_chksum_driver)) {
             printk(KERN_ERR "JBD2: Cannot load crc32c driver.\n");
···
     }
 
     /* Precompute checksum seed for all metadata */
-    if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+    if (jbd2_journal_has_csum_v2or3(journal))
         journal->j_csum_seed = jbd2_chksum(journal, ~0, sb->s_uuid,
                                            sizeof(sb->s_uuid));
···
     if (!jbd2_journal_check_available_features(journal, compat, ro, incompat))
         return 0;
 
-    /* Asking for checksumming v2 and v1?  Only give them v2. */
-    if (incompat & JBD2_FEATURE_INCOMPAT_CSUM_V2 &&
+    /* If enabling v2 checksums, turn on v3 instead */
+    if (incompat & JBD2_FEATURE_INCOMPAT_CSUM_V2) {
+        incompat &= ~JBD2_FEATURE_INCOMPAT_CSUM_V2;
+        incompat |= JBD2_FEATURE_INCOMPAT_CSUM_V3;
+    }
+
+    /* Asking for checksumming v3 and v1?  Only give them v3. */
+    if (incompat & JBD2_FEATURE_INCOMPAT_CSUM_V3 &&
         compat & JBD2_FEATURE_COMPAT_CHECKSUM)
         compat &= ~JBD2_FEATURE_COMPAT_CHECKSUM;
···
 
     sb = journal->j_superblock;
 
-    /* If enabling v2 checksums, update superblock */
-    if (INCOMPAT_FEATURE_ON(JBD2_FEATURE_INCOMPAT_CSUM_V2)) {
+    /* If enabling v3 checksums, update superblock */
+    if (INCOMPAT_FEATURE_ON(JBD2_FEATURE_INCOMPAT_CSUM_V3)) {
         sb->s_checksum_type = JBD2_CRC32C_CHKSUM;
         sb->s_feature_compat &=
             ~cpu_to_be32(JBD2_FEATURE_COMPAT_CHECKSUM);
···
         }
 
         /* Precompute checksum seed for all metadata */
-        if (JBD2_HAS_INCOMPAT_FEATURE(journal,
-                                      JBD2_FEATURE_INCOMPAT_CSUM_V2))
+        if (jbd2_journal_has_csum_v2or3(journal))
             journal->j_csum_seed = jbd2_chksum(journal, ~0,
                                                sb->s_uuid,
                                                sizeof(sb->s_uuid));
···
     /* If enabling v1 checksums, downgrade superblock */
     if (COMPAT_FEATURE_ON(JBD2_FEATURE_COMPAT_CHECKSUM))
         sb->s_feature_incompat &=
-            ~cpu_to_be32(JBD2_FEATURE_INCOMPAT_CSUM_V2);
+            ~cpu_to_be32(JBD2_FEATURE_INCOMPAT_CSUM_V2 |
+                         JBD2_FEATURE_INCOMPAT_CSUM_V3);
 
     sb->s_feature_compat    |= cpu_to_be32(compat);
     sb->s_feature_ro_compat |= cpu_to_be32(ro);
···
  */
 size_t journal_tag_bytes(journal_t *journal)
 {
-    journal_block_tag_t tag;
-    size_t x = 0;
+    size_t sz;
+
+    if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V3))
+        return sizeof(journal_block_tag3_t);
+
+    sz = sizeof(journal_block_tag_t);
 
     if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2))
-        x += sizeof(tag.t_checksum);
+        sz += sizeof(__u16);
 
     if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
-        return x + JBD2_TAG_SIZE64;
+        return sz;
     else
-        return x + JBD2_TAG_SIZE32;
+        return sz - sizeof(__u32);
 }
 
 /*
fs/jbd2/recovery.c | +20 -13

···
     __be32 provided;
     __u32 calculated;
 
-    if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+    if (!jbd2_journal_has_csum_v2or3(j))
         return 1;
 
     tail = (struct jbd2_journal_block_tail *)(buf + j->j_blocksize -
···
     int nr = 0, size = journal->j_blocksize;
     int tag_bytes = journal_tag_bytes(journal);
 
-    if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+    if (jbd2_journal_has_csum_v2or3(journal))
         size -= sizeof(struct jbd2_journal_block_tail);
 
     tagp = &bh->b_data[sizeof(journal_header_t)];
···
     return err;
 }
 
-static inline unsigned long long read_tag_block(int tag_bytes, journal_block_tag_t *tag)
+static inline unsigned long long read_tag_block(journal_t *journal,
+                                                journal_block_tag_t *tag)
 {
     unsigned long long block = be32_to_cpu(tag->t_blocknr);
-    if (tag_bytes > JBD2_TAG_SIZE32)
+    if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
         block |= (u64)be32_to_cpu(tag->t_blocknr_high) << 32;
     return block;
 }
···
     __be32 provided;
     __u32 calculated;
 
-    if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+    if (!jbd2_journal_has_csum_v2or3(j))
         return 1;
 
     h = buf;
···
 static int jbd2_block_tag_csum_verify(journal_t *j, journal_block_tag_t *tag,
                                       void *buf, __u32 sequence)
 {
+    journal_block_tag3_t *tag3 = (journal_block_tag3_t *)tag;
     __u32 csum32;
     __be32 seq;
 
-    if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+    if (!jbd2_journal_has_csum_v2or3(j))
         return 1;
 
     seq = cpu_to_be32(sequence);
     csum32 = jbd2_chksum(j, j->j_csum_seed, (__u8 *)&seq, sizeof(seq));
     csum32 = jbd2_chksum(j, csum32, buf, j->j_blocksize);
 
-    return tag->t_checksum == cpu_to_be16(csum32);
+    if (JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V3))
+        return tag3->t_checksum == cpu_to_be32(csum32);
+    else
+        return tag->t_checksum == cpu_to_be16(csum32);
 }
 
 static int do_one_pass(journal_t *journal,
···
     int tag_bytes = journal_tag_bytes(journal);
     __u32 crc32_sum = ~0; /* Transactional Checksums */
     int descr_csum_size = 0;
+    int block_error = 0;
 
     /*
      * First thing is to establish what we expect to find in the log
···
         switch(blocktype) {
         case JBD2_DESCRIPTOR_BLOCK:
             /* Verify checksum first */
-            if (JBD2_HAS_INCOMPAT_FEATURE(journal,
-                    JBD2_FEATURE_INCOMPAT_CSUM_V2))
+            if (jbd2_journal_has_csum_v2or3(journal))
                 descr_csum_size =
                     sizeof(struct jbd2_journal_block_tail);
             if (descr_csum_size > 0 &&
···
                     unsigned long long blocknr;
 
                     J_ASSERT(obh != NULL);
-                    blocknr = read_tag_block(tag_bytes,
+                    blocknr = read_tag_block(journal,
                                              tag);
 
                     /* If the block has been
···
                                "checksum recovering "
                                "block %llu in log\n",
                                blocknr);
-                        continue;
+                        block_error = 1;
+                        goto skip_write;
                     }
 
                     /* Find a buffer for the new
···
             success = -EIO;
         }
     }
-
+    if (block_error && success == 0)
+        success = -EIO;
     return success;
 
  failed:
···
     __be32 provided;
     __u32 calculated;
 
-    if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+    if (!jbd2_journal_has_csum_v2or3(j))
         return 1;
 
     tail = (struct jbd2_journal_revoke_tail *)(buf + j->j_blocksize -
fs/jbd2/revoke.c | +3 -3

···
 #include <linux/list.h>
 #include <linux/init.h>
 #include <linux/bio.h>
-#endif
 #include <linux/log2.h>
+#endif
 
 static struct kmem_cache *jbd2_revoke_record_cache;
 static struct kmem_cache *jbd2_revoke_table_cache;
···
     offset = *offsetp;
 
     /* Do we need to leave space at the end for a checksum? */
-    if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+    if (jbd2_journal_has_csum_v2or3(journal))
         csum_size = sizeof(struct jbd2_journal_revoke_tail);
 
     /* Make sure we have a descriptor with space left for the record */
···
     struct jbd2_journal_revoke_tail *tail;
     __u32 csum;
 
-    if (!JBD2_HAS_INCOMPAT_FEATURE(j, JBD2_FEATURE_INCOMPAT_CSUM_V2))
+    if (!jbd2_journal_has_csum_v2or3(j))
         return;
 
     tail = (struct jbd2_journal_revoke_tail *)(bh->b_data + j->j_blocksize -
fs/locks.c | +1 -1

···
     smp_mb();
     error = check_conflicting_open(dentry, arg);
     if (error)
-        locks_unlink_lock(flp);
+        locks_unlink_lock(before);
 out:
     if (is_deleg)
         mutex_unlock(&inode->i_mutex);
fs/nfs/nfs3acl.c | +4 -1

···
         .rpc_argp	= &args,
         .rpc_resp	= &fattr,
     };
-    int status;
+    int status = 0;
+
+    if (acl == NULL && (!S_ISDIR(inode->i_mode) || dfacl == NULL))
+        goto out;
 
     status = -EOPNOTSUPP;
     if (!nfs_server_capable(inode, NFS_CAP_ACLS))
include/linux/jbd2.h

···
  * journal_block_tag (in the descriptor).  The other h_chksum* fields are
  * not used.
  *
- * Checksum v1 and v2 are mutually exclusive features.
+ * If FEATURE_INCOMPAT_CSUM_V3 is set, the descriptor block uses
+ * journal_block_tag3_t to store a full 32-bit checksum.  Everything else
+ * is the same as v2.
+ *
+ * Checksum v1, v2, and v3 are mutually exclusive features.
  */
 struct commit_header {
     __be32		h_magic;
···
  * raw struct shouldn't be used for pointer math or sizeof() - use
  * journal_tag_bytes(journal) instead to compute this.
  */
+typedef struct journal_block_tag3_s
+{
+    __be32		t_blocknr;	/* The on-disk block number */
+    __be32		t_flags;	/* See below */
+    __be32		t_blocknr_high; /* most-significant high 32bits. */
+    __be32		t_checksum;	/* crc32c(uuid+seq+block) */
+} journal_block_tag3_t;
+
 typedef struct journal_block_tag_s
 {
     __be32		t_blocknr;	/* The on-disk block number */
···
     __be16		t_flags;	/* See below */
     __be32		t_blocknr_high; /* most-significant high 32bits. */
 } journal_block_tag_t;
-
-#define JBD2_TAG_SIZE32 (offsetof(journal_block_tag_t, t_blocknr_high))
-#define JBD2_TAG_SIZE64 (sizeof(journal_block_tag_t))
 
 /* Tail of descriptor block, for checksumming */
 struct jbd2_journal_block_tail {
···
 #define JBD2_FEATURE_INCOMPAT_64BIT		0x00000002
 #define JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT	0x00000004
 #define JBD2_FEATURE_INCOMPAT_CSUM_V2		0x00000008
+#define JBD2_FEATURE_INCOMPAT_CSUM_V3		0x00000010
 
 /* Features known to this kernel version: */
 #define JBD2_KNOWN_COMPAT_FEATURES	JBD2_FEATURE_COMPAT_CHECKSUM
···
 #define JBD2_KNOWN_INCOMPAT_FEATURES	(JBD2_FEATURE_INCOMPAT_REVOKE | \
                                         JBD2_FEATURE_INCOMPAT_64BIT | \
                                         JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT | \
-                                        JBD2_FEATURE_INCOMPAT_CSUM_V2)
+                                        JBD2_FEATURE_INCOMPAT_CSUM_V2 | \
+                                        JBD2_FEATURE_INCOMPAT_CSUM_V3)
 
 #ifdef __KERNEL__
···
 
 extern int jbd2_journal_blocks_per_page(struct inode *inode);
 extern size_t journal_tag_bytes(journal_t *journal);
+
+static inline int jbd2_journal_has_csum_v2or3(journal_t *journal)
+{
+    if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V2) ||
+        JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_CSUM_V3))
+        return 1;
+
+    return 0;
+}
 
 /*
  * We reserve t_outstanding_credits >> JBD2_CONTROL_BLOCKS_SHIFT for
include/linux/platform_data/mtd-nand-omap2.h | +11 -2

···
 };
 
 enum omap_ecc {
-    /* 1-bit ECC calculation by GPMC, Error detection by Software */
-    OMAP_ECC_HAM1_CODE_HW = 0,
+    /*
+     * 1-bit ECC: calculation and correction by SW
+     * ECC stored at end of spare area
+     */
+    OMAP_ECC_HAM1_CODE_SW = 0,
+
+    /*
+     * 1-bit ECC: calculation by GPMC, Error detection by Software
+     * ECC layout compatible with ROM code layout
+     */
+    OMAP_ECC_HAM1_CODE_HW,
     /* 4-bit ECC calculation by GPMC, Error detection by Software */
     OMAP_ECC_BCH4_CODE_HW_DETECTION_SW,
     /* 4-bit ECC calculation by GPMC, Error detection by ELM */
include/linux/seqno-fence.h | +1

···
  * @context: the execution context this fence is a part of
  * @seqno_ofs: the offset within @sync_buf
  * @seqno: the sequence # to signal on
+ * @cond: fence wait condition
  * @ops: the fence_ops for operations on this seqno fence
  *
  * This function initializes a struct seqno_fence with passed parameters,
include/linux/spi/spi.h | +7

···
  *	the device whose settings are being modified.
  * @transfer: adds a message to the controller's transfer queue.
  * @cleanup: frees controller-specific state
+ * @can_dma: determine whether this master supports DMA
  * @queued: whether this master is providing an internal message queue
  * @kworker: thread struct for message pump
  * @kworker_task: pointer to task for message pump kworker thread
···
  * @cur_msg: the currently in-flight message
  * @cur_msg_prepared: spi_prepare_message was called for the currently
  *                    in-flight message
+ * @cur_msg_mapped: message has been mapped for DMA
  * @xfer_completion: used by core transfer_one_message()
  * @busy: message pump is busy
  * @running: message pump is running
···
  * @cs_gpios: Array of GPIOs to use as chip select lines; one per CS
  *	number. Any individual value may be -ENOENT for CS lines that
  *	are not GPIOs (driven by the SPI controller itself).
+ * @dma_tx: DMA transmit channel
+ * @dma_rx: DMA receive channel
+ * @dummy_rx: dummy receive buffer for full-duplex devices
+ * @dummy_tx: dummy transmit buffer for full-duplex devices
  *
  * Each SPI master controller can communicate with one or more @spi_device
  * children. These make a small bus, sharing MOSI, MISO and SCK signals
···
  *      addresses for each transfer buffer
  * @complete: called to report transaction completions
  * @context: the argument to complete() when it's called
+ * @frame_length: the total number of bytes in the message
  * @actual_length: the total number of bytes that were transferred in all
  *	successful segments
  * @status: zero for success, else negative errno
include/uapi/linux/xattr.h | +1 -1
···
 #ifndef _UAPI_LINUX_XATTR_H
 #define _UAPI_LINUX_XATTR_H

-#ifdef __UAPI_DEF_XATTR
+#if __UAPI_DEF_XATTR
 #define __USE_KERNEL_XATTR_DEFS

 #define XATTR_CREATE	0x1	/* set value, fail if attr already exists */
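The `#ifdef` → `#if` change in the xattr guard matters because a libc can define the coordination macro to 0 to say "I provide these definitions myself"; `#ifdef` cannot tell "defined to 0" from "defined to 1". A minimal sketch of the difference — the macro name below is a stand-in, not the real `__UAPI_DEF_XATTR`:

```c
/* Demonstrates why the guard must test the macro's value (#if), not
 * merely its existence (#ifdef). DEMO_UAPI_DEF_XATTR stands in for
 * __UAPI_DEF_XATTR, defined to 0 as a libc opting out would do. */
#include <assert.h>

#define DEMO_UAPI_DEF_XATTR 0	/* libc opted out of the kernel defs */

#ifdef DEMO_UAPI_DEF_XATTR	/* wrong test: true even for 0 */
#define SEEN_BY_IFDEF 1
#else
#define SEEN_BY_IFDEF 0
#endif

#if DEMO_UAPI_DEF_XATTR		/* right test: respects the value */
#define SEEN_BY_IF 1
#else
#define SEEN_BY_IF 0
#endif

static int seen_by_ifdef(void) { return SEEN_BY_IFDEF; }
static int seen_by_if(void) { return SEEN_BY_IF; }
```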
kernel/kexec.c | +11
···
 char __weak kexec_purgatory[0];
 size_t __weak kexec_purgatory_size = 0;

+#ifdef CONFIG_KEXEC_FILE
 static int kexec_calculate_store_digests(struct kimage *image);
+#endif

 /* Location of the reserved area for the crash kernel */
 struct resource crashk_res = {
···
 	return ret;
 }

+#ifdef CONFIG_KEXEC_FILE
 static int copy_file_from_fd(int fd, void **buf, unsigned long *buf_len)
 {
 	struct fd f = fdget(fd);
···
 	kfree(image);
 	return ret;
 }
+#else /* CONFIG_KEXEC_FILE */
+static inline void kimage_file_post_load_cleanup(struct kimage *image) { }
+#endif /* CONFIG_KEXEC_FILE */

 static int kimage_is_destination_range(struct kimage *image,
 					unsigned long start,
···
 }
 #endif

+#ifdef CONFIG_KEXEC_FILE
 SYSCALL_DEFINE5(kexec_file_load, int, kernel_fd, int, initrd_fd,
 		unsigned long, cmdline_len, const char __user *, cmdline_ptr,
 		unsigned long, flags)
···
 	kimage_free(image);
 	return ret;
 }
+
+#endif /* CONFIG_KEXEC_FILE */

 void crash_kexec(struct pt_regs *regs)
 {
···

 subsys_initcall(crash_save_vmcoreinfo_init);

+#ifdef CONFIG_KEXEC_FILE
 static int __kexec_add_segment(struct kimage *image, char *buf,
 			       unsigned long bufsz, unsigned long mem,
 			       unsigned long memsz)
···
 	return 0;
 }
+#endif /* CONFIG_KEXEC_FILE */

 /*
  * Move into place and start executing a preloaded standalone
kernel/resource.c | +4 -7
···
 	end = res->end;
 	BUG_ON(start >= end);

+	if (first_level_children_only)
+		sibling_only = true;
+
 	read_lock(&resource_lock);

-	if (first_level_children_only) {
-		p = iomem_resource.child;
-		sibling_only = true;
-	} else
-		p = &iomem_resource;
-
-	while ((p = next_resource(p, sibling_only))) {
+	for (p = iomem_resource.child; p; p = next_resource(p, sibling_only)) {
 		if (p->flags != res->flags)
 			continue;
 		if (name && strcmp(p->name, name))
kernel/trace/ring_buffer.c | +15 -1
···
 		work = &cpu_buffer->irq_work;
 	}

-	work->waiters_pending = true;
 	poll_wait(filp, &work->waiters, poll_table);
+	work->waiters_pending = true;
+	/*
+	 * There's a tight race between setting the waiters_pending and
+	 * checking if the ring buffer is empty. Once the waiters_pending bit
+	 * is set, the next event will wake the task up, but we can get stuck
+	 * if there's only a single event in.
+	 *
+	 * FIXME: Ideally, we need a memory barrier on the writer side as well,
+	 * but adding a memory barrier to all events will cause too much of a
+	 * performance hit in the fast path. We only need a memory barrier when
+	 * the buffer goes from empty to having content. But as this race is
+	 * extremely small, and it's not a problem if another event comes in, we
+	 * will fix it later.
+	 */
+	smp_mb();

 	if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) ||
 	    (cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu)))
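The ordering this hunk establishes can be modeled with C11 atomics: the waiter publishes `waiters_pending` and only then, after a full fence (the `smp_mb()` analogue), checks for data, while the writer publishes data and then checks the flag. With both sequences fenced, at least one side must observe the other, so a waiter can no longer sleep forever on a non-empty buffer. A single-threaded sketch — the names are illustrative, not the kernel API:

```c
/* Minimal model of the poll/write ordering protocol. File-scope
 * atomics start at zero/false, as in C. */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool waiters_pending;
static atomic_int nr_events;

/* Waiter side: returns true if it decides to sleep (saw no data). */
static bool poll_side_may_sleep(void)
{
	atomic_store(&waiters_pending, true);
	atomic_thread_fence(memory_order_seq_cst);	/* smp_mb() analogue */
	return atomic_load(&nr_events) == 0;
}

/* Writer side: returns true if it must wake the waiter. */
static bool write_side_must_wake(void)
{
	atomic_fetch_add(&nr_events, 1);
	atomic_thread_fence(memory_order_seq_cst);
	return atomic_load(&waiters_pending);
}
```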
lib/Kconfig.debug | +10 -1
···
 	  the full mutex checks enabled with (CONFIG_PROVE_LOCKING) this
 	  will test all possible w/w mutex interface abuse with the
 	  exception of simply not acquiring all the required locks.
+	  Note that this feature can introduce significant overhead, so
+	  it really should not be enabled in a production or distro kernel,
+	  even a debug kernel.  If you are a driver writer, enable it.  If
+	  you are a distro, do not.

 config DEBUG_LOCK_ALLOC
 	bool "Lock debugging: detect incorrect freeing of live locks"
···
 	  either tracing or lock debugging.

 config STACKTRACE
-	bool
+	bool "Stack backtrace support"
 	depends on STACKTRACE_SUPPORT
+	help
+	  This option causes the kernel to create a /proc/pid/stack for
+	  every process, showing its current stack trace.
+	  It is also used by various kernel debugging features that require
+	  stack trace generation.

 config DEBUG_KOBJECT
 	bool "kobject debugging"
mm/hugetlb_cgroup.c | +1 -1
···

 	if (hugetlb_cgroup_disabled())
 		return;
-	VM_BUG_ON(!spin_is_locked(&hugetlb_lock));
+	lockdep_assert_held(&hugetlb_lock);
 	h_cg = hugetlb_cgroup_from_page(page);
 	if (unlikely(!h_cg))
 		return;
mm/memblock.c | +1 -2
···
 					phys_addr_t align, phys_addr_t start,
 					phys_addr_t end, int nid)
 {
-	int ret;
-	phys_addr_t kernel_end;
+	phys_addr_t kernel_end, ret;

 	/* pump up @end */
 	if (end == MEMBLOCK_ALLOC_ACCESSIBLE)
mm/memory.c | +3 -4
···
 	unsigned long pfn = pte_pfn(pte);

 	if (HAVE_PTE_SPECIAL) {
-		if (likely(!pte_special(pte) || pte_numa(pte)))
+		if (likely(!pte_special(pte)))
 			goto check_pfn;
 		if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
 			return NULL;
···
 		}
 	}

+	if (is_zero_pfn(pfn))
+		return NULL;
 check_pfn:
 	if (unlikely(pfn > highest_memmap_pfn)) {
 		print_bad_pte(vma, addr, pte, NULL);
 		return NULL;
 	}
-
-	if (is_zero_pfn(pfn))
-		return NULL;

 	/*
 	 * NOTE! We still have PageReserved() pages in the page tables.
mm/zsmalloc.c | +1
···
 	.total_size =	zs_zpool_total_size,
 };

+MODULE_ALIAS("zpool-zsmalloc");
 #endif /* CONFIG_ZPOOL */

 /* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
scripts/checkpatch.pl | +2 -2
···
 # Check for improperly formed commit descriptions
 		if ($in_commit_log &&
 		    $line =~ /\bcommit\s+[0-9a-f]{5,}/i &&
-		    $line !~ /\b[Cc]ommit [0-9a-f]{12,16} \("/) {
+		    $line !~ /\b[Cc]ommit [0-9a-f]{12,40} \("/) {
 			$line =~ /\b(c)ommit\s+([0-9a-f]{5,})/i;
 			my $init_char = $1;
 			my $orig_commit = lc($2);
···
 			my $desc = 'commit description';
 			($id, $desc) = git_commit_info($orig_commit, $id, $desc);
 			ERROR("GIT_COMMIT_ID",
-			      "Please use 12 to 16 chars for the git commit ID like: '${init_char}ommit $id (\"$desc\")'\n" . $herecurr);
+			      "Please use 12 or more chars for the git commit ID like: '${init_char}ommit $id (\"$desc\")'\n" . $herecurr);
 		}

 # Check for added, moved or deleted files
security/tomoyo/realpath.c | +3 -2
···
 		 * Use filesystem name if filesystem does not support rename()
 		 * operation.
 		 */
-		if (!inode->i_op->rename)
+		if (!inode->i_op->rename && !inode->i_op->rename2)
 			goto prepend_filesystem_name;
 	}
 	/* Prepend device name. */
···
 	 * Get local name for filesystems without rename() operation
 	 * or dentry without vfsmount.
 	 */
-	if (!path->mnt || !inode->i_op->rename)
+	if (!path->mnt ||
+	    (!inode->i_op->rename && !inode->i_op->rename2))
 		pos = tomoyo_get_local_path(path->dentry, buf,
 					    buf_len - 1);
 	/* Get absolute name for the rest. */
sound/core/info.c | +2 -2
···
  * snd_info_get_line - read one line from the procfs buffer
  * @buffer: the procfs buffer
  * @line: the buffer to store
- * @len: the max. buffer size - 1
+ * @len: the max. buffer size
  *
  * Reads one line from the buffer and stores the string.
  *
···
 		buffer->stop = 1;
 	if (c == '\n')
 		break;
-	if (len) {
+	if (len > 1) {
 		len--;
 		*line++ = c;
 	}
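With `@len` now meaning the full buffer size, the `len > 1` test keeps the last byte free for the terminating NUL, where the old `if (len)` relied on every caller passing `size - 1`. A simplified, self-contained sketch of the corrected loop (reading from a string rather than a procfs buffer; not the driver API):

```c
/* Copy one line from src into line[], which is len bytes large.
 * Truncates long lines but always leaves room to NUL-terminate. */
#include <string.h>

static void get_line(const char *src, char *line, int len)
{
	int c;

	while ((c = *src++) != '\0') {
		if (c == '\n')
			break;
		if (len > 1) {		/* reserve the last byte for NUL */
			len--;
			*line++ = c;
		}
	}
	*line = '\0';
}
```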
sound/firewire/amdtp.c | +10 -1
···
 static void update_pcm_pointers(struct amdtp_stream *s,
 				struct snd_pcm_substream *pcm,
 				unsigned int frames)
-{	unsigned int ptr;
+{
+	unsigned int ptr;
+
+	/*
+	 * In IEC 61883-6, one data block represents one event. In ALSA, one
+	 * event equals to one PCM frame. But Dice has a quirk to transfer
+	 * two PCM frames in one data block.
+	 */
+	if (s->double_pcm_frames)
+		frames *= 2;

 	ptr = s->pcm_buffer_pointer + frames;
 	if (ptr >= pcm->runtime->buffer_size)
sound/firewire/amdtp.h | +1
···
 	unsigned int pcm_buffer_pointer;
 	unsigned int pcm_period_pointer;
 	bool pointer_flush;
+	bool double_pcm_frames;

 	struct snd_rawmidi_substream *midi[AMDTP_MAX_CHANNELS_FOR_MIDI * 8];
sound/firewire/dice.c | +20 -9
···
 		return err;

 	/*
-	 * At rates above 96 kHz, pretend that the stream runs at half the
-	 * actual sample rate with twice the number of channels; two samples
-	 * of a channel are stored consecutively in the packet. Requires
-	 * blocking mode and PCM buffer size should be aligned to SYT_INTERVAL.
+	 * At 176.4/192.0 kHz, Dice has a quirk to transfer two PCM frames in
+	 * one data block of AMDTP packet. Thus sampling transfer frequency is
+	 * a half of PCM sampling frequency, i.e. PCM frames at 192.0 kHz are
+	 * transferred on AMDTP packets at 96 kHz. Two successive samples of a
+	 * channel are stored consecutively in the packet. This quirk is called
+	 * as 'Dual Wire'.
+	 * For this quirk, blocking mode is required and PCM buffer size should
+	 * be aligned to SYT_INTERVAL.
 	 */
 	channels = params_channels(hw_params);
 	if (rate_index > 4) {
···
 			return err;
 		}

-		for (i = 0; i < channels; i++) {
-			dice->stream.pcm_positions[i * 2] = i;
-			dice->stream.pcm_positions[i * 2 + 1] = i + channels;
-		}
-
 		rate /= 2;
 		channels *= 2;
+		dice->stream.double_pcm_frames = true;
+	} else {
+		dice->stream.double_pcm_frames = false;
 	}

 	mode = rate_index_to_mode(rate_index);
 	amdtp_stream_set_parameters(&dice->stream, rate, channels,
 				    dice->rx_midi_ports[mode]);
+	if (rate_index > 4) {
+		channels /= 2;
+
+		for (i = 0; i < channels; i++) {
+			dice->stream.pcm_positions[i] = i * 2;
+			dice->stream.pcm_positions[i + channels] = i * 2 + 1;
+		}
+	}
+
 	amdtp_stream_set_pcm_format(&dice->stream,
 				    params_format(hw_params));
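The `pcm_positions` loop above interleaves the two PCM frames of a dual-wire pair per channel: PCM channel `i` lands at data-block position `i * 2` for the first frame and `i * 2 + 1` for the second. A standalone sketch of that mapping (`dual_wire_positions` is a hypothetical helper, not the driver API):

```c
/* Fill pos[] (2 * pcm_channels entries, the doubled stream channels)
 * with the dual-wire position of each PCM channel's frame pair. */
static void dual_wire_positions(unsigned int pcm_channels, unsigned int *pos)
{
	unsigned int i;

	for (i = 0; i < pcm_channels; i++) {
		pos[i] = i * 2;				/* first PCM frame */
		pos[i + pcm_channels] = i * 2 + 1;	/* second PCM frame */
	}
}
```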
···
 	};

 	/* it shouldn't happen */
-	if (use_dvc & !use_src)
+	if (use_dvc && !use_src)
 		dev_err(dev, "DVC is selected without SRC\n");

 	/* use SSIU or SSI ? */
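The fix above is the classic bitwise-vs-logical slip: `!use_src` is always 0 or 1, so `use_dvc & !use_src` tests only bit 0 of `use_dvc` and is false for any even non-zero value. A self-contained illustration:

```c
/* Contrast the buggy bitwise test with the intended logical one. */
#include <stdbool.h>

static bool buggy_check(int use_dvc, int use_src)
{
	return use_dvc & !use_src;	/* bitwise: sees bit 0 only */
}

static bool fixed_check(int use_dvc, int use_src)
{
	return use_dvc && !use_src;	/* logical: any non-zero counts */
}
```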