···
+Pinctrl-based I2C Bus Mux
+
+This binding describes an I2C bus multiplexer that uses pin multiplexing to
+route the I2C signals, and represents the pin multiplexing configuration
+using the pinctrl device tree bindings.
+
+                                 +-----+  +-----+
+                                 | dev |  | dev |
+    +------------------------+   +-----+  +-----+
+    | SoC                    |      |        |
+    |                   /----|------+--------+
+    |   +---+   +------+     |  child bus A, on first set of pins
+    |   |I2C|---|Pinmux|     |
+    |   +---+   +------+     |  child bus B, on second set of pins
+    |                   \----|------+--------+--------+
+    |                        |      |        |        |
+    +------------------------+  +-----+  +-----+  +-----+
+                                | dev |  | dev |  | dev |
+                                +-----+  +-----+  +-----+
+
+Required properties:
+- compatible: i2c-mux-pinctrl
+- i2c-parent: The phandle of the I2C bus that this multiplexer's master-side
+  port is connected to.
+
+Also required are:
+
+* Standard pinctrl properties that specify the pin mux state for each child
+  bus. See ../pinctrl/pinctrl-bindings.txt.
+
+* Standard I2C mux properties. See mux.txt in this directory.
+
+* I2C child bus nodes. See mux.txt in this directory.
+
+For each named state defined in the pinctrl-names property, an I2C child bus
+will be created. I2C child bus numbers are assigned based on the index into
+the pinctrl-names property.
+
+The only exception is that no bus will be created for a state named "idle". If
+such a state is defined, it must be the last entry in pinctrl-names. For
+example:
+
+	pinctrl-names = "ddc", "pta", "idle"  ->  ddc = bus 0, pta = bus 1
+	pinctrl-names = "ddc", "idle", "pta"  ->  Invalid ("idle" not last)
+	pinctrl-names = "idle", "ddc", "pta"  ->  Invalid ("idle" not last)
+
+Whenever an access is made to a device on a child bus, the relevant pinctrl
+state will be programmed into hardware.
+
+If an idle state is defined, whenever an access is not being made to a device
+on a child bus, the idle pinctrl state will be programmed into hardware.
+
+If an idle state is not defined, the most recently used pinctrl state will be
+left programmed into hardware whenever no access is being made of a device on
+a child bus.
+
+Example:
+
+	i2cmux {
+		compatible = "i2c-mux-pinctrl";
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		i2c-parent = <&i2c1>;
+
+		pinctrl-names = "ddc", "pta", "idle";
+		pinctrl-0 = <&state_i2cmux_ddc>;
+		pinctrl-1 = <&state_i2cmux_pta>;
+		pinctrl-2 = <&state_i2cmux_idle>;
+
+		i2c@0 {
+			reg = <0>;
+			#address-cells = <1>;
+			#size-cells = <0>;
+
+			eeprom {
+				compatible = "eeprom";
+				reg = <0x50>;
+			};
+		};
+
+		i2c@1 {
+			reg = <1>;
+			#address-cells = <1>;
+			#size-cells = <0>;
+
+			eeprom {
+				compatible = "eeprom";
+				reg = <0x50>;
+			};
+		};
+	};
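The bus-numbering rule in the binding above (each pinctrl-names entry maps to a child bus at its index, except a trailing "idle" which creates no bus and is invalid anywhere else) can be sketched in userspace C. This is an illustrative model only, not driver code; `count_child_buses` is our own hypothetical helper name.

```c
#include <assert.h>
#include <string.h>

/*
 * Model of the i2c-mux-pinctrl numbering rule: returns the number of
 * child buses created from a pinctrl-names list, or -1 if an "idle"
 * entry appears anywhere other than last.
 */
static int count_child_buses(const char *names[], int n)
{
	int i;

	for (i = 0; i < n; i++) {
		if (strcmp(names[i], "idle") == 0)
			return (i == n - 1) ? n - 1 : -1;
	}
	return n;		/* no idle state: every entry is a bus */
}
```

With `{"ddc", "pta", "idle"}` this yields two buses (ddc = bus 0, pta = bus 1), matching the examples in the binding text.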
+25-19
Documentation/networking/stmmac.txt
···
 (i.e. 7xxx/5xxx SoCs), SPEAr (arm), Loongson1B (mips) and XLINX XC2V3000
 FF1152AMT0221 D1215994A VIRTEX FPGA board.
 
-DWC Ether MAC 10/100/1000 Universal version 3.60a (and older) and DWC Ether MAC 10/100
-Universal version 4.0 have been used for developing this driver.
+DWC Ether MAC 10/100/1000 Universal version 3.60a (and older) and DWC Ether
+MAC 10/100 Universal version 4.0 have been used for developing this driver.
 
 This driver supports both the platform bus and PCI.
···
 When one or more packets are received, an interrupt happens. The interrupts
 are not queued so the driver has to scan all the descriptors in the ring during
 the receive process.
-This is based on NAPI so the interrupt handler signals only if there is work to be
-done, and it exits.
+This is based on NAPI so the interrupt handler signals only if there is work
+to be done, and it exits.
 Then the poll method will be scheduled at some future point.
 The incoming packets are stored, by the DMA, in a list of pre-allocated socket
 buffers in order to avoid the memcpy (Zero-copy).
 
 4.3) Timer-Driver Interrupt
-Instead of having the device that asynchronously notifies the frame receptions, the
-driver configures a timer to generate an interrupt at regular intervals.
-Based on the granularity of the timer, the frames that are received by the device
-will experience different levels of latency. Some NICs have dedicated timer
-device to perform this task. STMMAC can use either the RTC device or the TMU
-channel 2 on STLinux platforms.
+Instead of having the device that asynchronously notifies the frame receptions,
+the driver configures a timer to generate an interrupt at regular intervals.
+Based on the granularity of the timer, the frames that are received by the
+device will experience different levels of latency. Some NICs have dedicated
+timer device to perform this task. STMMAC can use either the RTC device or the
+TMU channel 2 on STLinux platforms.
 The timers frequency can be passed to the driver as parameter; when changing it,
 take care of both hardware capability and network stability/performance impact.
-Several performance tests on STM platforms showed this optimisation allows to spare
-the CPU while having the maximum throughput.
+Several performance tests on STM platforms showed this optimisation allows to
+spare the CPU while having the maximum throughput.
 
 4.4) WOL
-Wake up on Lan feature through Magic and Unicast frames are supported for the GMAC
-core.
+Wake up on Lan feature through Magic and Unicast frames are supported for the
+GMAC core.
 
 4.5) DMA descriptors
 Driver handles both normal and enhanced descriptors. The latter has been only
···
 These are included in the include/linux/stmmac.h header file
 and detailed below as well:
 
- struct plat_stmmacenet_data {
+struct plat_stmmacenet_data {
+	char *phy_bus_name;
 	int bus_id;
 	int phy_addr;
 	int interface;
···
 	void (*bus_setup)(void __iomem *ioaddr);
 	int (*init)(struct platform_device *pdev);
 	void (*exit)(struct platform_device *pdev);
+	void *custom_cfg;
+	void *custom_data;
 	void *bsp_priv;
 };
 
 Where:
+ o phy_bus_name: phy bus name to attach to the stmmac.
 o bus_id: bus identifier.
 o phy_addr: the physical address can be passed from the platform.
	    If it is set to -1 the driver will automatically
	    detect it at run-time by probing all the 32 addresses.
 o interface: PHY device's interface.
 o mdio_bus_data: specific platform fields for the MDIO bus.
- o pbl: the Programmable Burst Length is maximum number of beats to
+ o dma_cfg: internal DMA parameters
+ o pbl: the Programmable Burst Length is maximum number of beats to
       be transferred in one DMA transaction.
       GMAC also enables the 4xPBL by default.
+ o fixed_burst/mixed_burst/burst_len
 o clk_csr: fixed CSR Clock range selection.
 o has_gmac: uses the GMAC core.
 o enh_desc: if sets the MAC will use the enhanced descriptor structure.
···
	     this is sometime necessary on some platforms (e.g. ST boxes)
	     where the HW needs to have set some PIO lines or system cfg
	     registers.
- o custom_cfg: this is a custom configuration that can be passed while
-	      initialising the resources.
+ o custom_cfg/custom_data: this is a custom configuration that can be passed
+			   while initialising the resources.
+ o bsp_priv: another private pointer.
 
 For the MDIO bus we have:
···
 o phy_mask: phy mask passed when register the MDIO bus within the driver.
 o irqs: list of IRQs, one per PHY.
 o probed_phy_irq: if irqs is NULL, use this for probed PHY.
-
 
 For DMA engine we have the following internal fields that should be
 tuned according to the HW capabilities.
···
 		goto err;
 	}
 
-	r = omap_device_register(pdev);
+	r = platform_device_add(pdev);
 	if (r) {
-		pr_err("Could not register omap_device for %s\n", pdev_name);
+		pr_err("Could not register platform_device for %s\n", pdev_name);
 		goto err;
 	}
 
+6-8
arch/arm/mm/dma-mapping.c
···
 
 #define DEFAULT_CONSISTENT_DMA_SIZE SZ_2M
 
-unsigned long consistent_base = CONSISTENT_END - DEFAULT_CONSISTENT_DMA_SIZE;
+static unsigned long consistent_base = CONSISTENT_END - DEFAULT_CONSISTENT_DMA_SIZE;
 
 void __init init_consistent_dma_size(unsigned long size)
 {
···
 	unsigned long base = consistent_base;
 	unsigned long num_ptes = (CONSISTENT_END - base) >> PMD_SHIFT;
 
-#ifndef CONFIG_ARM_DMA_USE_IOMMU
-	if (cpu_architecture() >= CPU_ARCH_ARMv6)
+	if (IS_ENABLED(CONFIG_CMA) && !IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU))
 		return 0;
-#endif
 
 	consistent_pte = kmalloc(num_ptes * sizeof(pte_t), GFP_KERNEL);
 	if (!consistent_pte) {
···
 	.vm_list = LIST_HEAD_INIT(coherent_head.vm_list),
 };
 
-size_t coherent_pool_size = DEFAULT_CONSISTENT_DMA_SIZE / 8;
+static size_t coherent_pool_size = DEFAULT_CONSISTENT_DMA_SIZE / 8;
 
 static int __init early_coherent_pool(char *p)
 {
···
 	struct page *page;
 	void *ptr;
 
-	if (cpu_architecture() < CPU_ARCH_ARMv6)
+	if (!IS_ENABLED(CONFIG_CMA))
 		return 0;
 
 	ptr = __alloc_from_contiguous(NULL, size, prot, &page);
···
 
 	if (arch_is_coherent() || nommu())
 		addr = __alloc_simple_buffer(dev, size, gfp, &page);
-	else if (cpu_architecture() < CPU_ARCH_ARMv6)
+	else if (!IS_ENABLED(CONFIG_CMA))
 		addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, caller);
 	else if (gfp & GFP_ATOMIC)
 		addr = __alloc_from_pool(dev, size, &page, caller);
···
 
 	if (arch_is_coherent() || nommu()) {
 		__dma_free_buffer(page, size);
-	} else if (cpu_architecture() < CPU_ARCH_ARMv6) {
+	} else if (!IS_ENABLED(CONFIG_CMA)) {
 		__dma_free_remap(cpu_addr, size);
 		__dma_free_buffer(page, size);
 	} else {
+1-1
arch/arm/mm/init.c
···
 * allocations.  This must be the smallest DMA mask in the system,
 * so a successful GFP_DMA allocation will always satisfy this.
 */
-u32 arm_dma_limit;
+phys_addr_t arm_dma_limit;
 
 static void __init arm_adjust_dma_zone(unsigned long *size, unsigned long *hole,
	unsigned long dma_size)
···
 #define copy_from_user(to, from, n)	__copy_from_user(to, from, n)
 #define copy_to_user(to, from, n)	__copy_to_user(to, from, n)
 
-long strncpy_from_user(char *dst, const char __user *src, long count);
-long strnlen_user(const char __user *src, long n);
+#define user_addr_max() \
+	(segment_eq(get_fs(), USER_DS) ? TASK_SIZE : ~0UL)
+
+extern long strncpy_from_user(char *dst, const char __user *src, long count);
+extern __must_check long strlen_user(const char __user *str);
+extern __must_check long strnlen_user(const char __user *str, long n);
+
 unsigned long __clear_user(void __user *to, unsigned long n);
 
 #define clear_user	__clear_user
-
-#define strlen_user(str) strnlen_user(str, 32767)
 
 #endif /* _M68K_UACCESS_H */
+1-1
arch/m68k/kernel/ptrace.c
···
 	}
 }
 
-#ifdef CONFIG_COLDFIRE
+#if defined(CONFIG_COLDFIRE) || !defined(CONFIG_MMU)
 asmlinkage int syscall_trace_enter(void)
 {
 	int ret = 0;
···
 #ifndef _PARISC_BUG_H
 #define _PARISC_BUG_H
 
+#include <linux/kernel.h>	/* for BUGFLAG_TAINT */
+
 /*
  * Tell the user there is some problem.
  * The offending file and line are encoded in the __bug_table section.
+3
arch/powerpc/include/asm/hw_irq.h
···
 	get_paca()->irq_happened |= PACA_IRQ_HARD_DIS;
 }
 
+/* include/linux/interrupt.h needs hard_irq_disable to be a macro */
+#define hard_irq_disable	hard_irq_disable
+
 /*
  * This is called by asynchronous interrupts to conditionally
  * re-enable hard interrupts when soft-disabled after having
···
 	struct pt_regs *old_regs;
 	u64 *next_tb = &__get_cpu_var(decrementers_next_tb);
 	struct clock_event_device *evt = &__get_cpu_var(decrementers);
+	u64 now;
 
 	/* Ensure a positive value is written to the decrementer, or else
 	 * some CPUs will continue to take decrementer exceptions.
···
 		irq_work_run();
 	}
 
-	*next_tb = ~(u64)0;
-	if (evt->event_handler)
-		evt->event_handler(evt);
+	now = get_tb_or_rtc();
+	if (now >= *next_tb) {
+		*next_tb = ~(u64)0;
+		if (evt->event_handler)
+			evt->event_handler(evt);
+	} else {
+		now = *next_tb - now;
+		if (now <= DECREMENTER_MAX)
+			set_dec((int)now);
+	}
 
 #ifdef CONFIG_PPC64
 	/* collect purr register values often, for accurate calculations */
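The hunk above makes the decrementer interrupt handler check whether the next timebase deadline has actually arrived before firing the clock event, and otherwise re-arm the decrementer with the clamped remaining delta. A userspace sketch of just that decision (not kernel code; `decrementer_check` is our own name, and `DECREMENTER_MAX` is expanded by hand):

```c
#include <assert.h>

#define DECREMENTER_MAX 0x7fffffffULL	/* 31-bit decrementer limit */

/*
 * Returns 1 if the clock event should fire now; otherwise returns 0
 * and, when the remaining delta fits, stores the value to reprogram
 * into the decrementer via *dec.
 */
static int decrementer_check(unsigned long long now,
			     unsigned long long next_tb,
			     unsigned long long *dec)
{
	if (now >= next_tb)
		return 1;		/* deadline reached: run handler */
	if (next_tb - now <= DECREMENTER_MAX)
		*dec = next_tb - now;	/* re-arm with the remaining ticks */
	return 0;
}
```

This mirrors why the patch avoids spurious event-handler invocations: an early decrementer exception simply re-arms the hardware instead of firing the event.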
+2
arch/sh/Kconfig
···
 	select GENERIC_SMP_IDLE_THREAD
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_CMOS_UPDATE if SH_SH03 || SH_DREAMCAST
+	select GENERIC_STRNCPY_FROM_USER
+	select GENERIC_STRNLEN_USER
 	help
 	  The SuperH is a RISC processor targeted for use in embedded systems
 	  and consumer electronics; it was also used in the Sega Dreamcast
+8-8
arch/sh/Makefile
···
 # License.  See the file "COPYING" in the main directory of this archive
 # for more details.
 #
+ifneq ($(SUBARCH),$(ARCH))
+  ifeq ($(CROSS_COMPILE),)
+    CROSS_COMPILE := $(call cc-cross-prefix, $(UTS_MACHINE)-linux- $(UTS_MACHINE)-linux-gnu- $(UTS_MACHINE)-unknown-linux-gnu-)
+  endif
+endif
+
 isa-y			:= any
 isa-$(CONFIG_SH_DSP)	:= sh
 isa-$(CONFIG_CPU_SH2)	:= sh2
···
 KBUILD_DEFCONFIG	:= cayman_defconfig
 endif
 
-ifneq ($(SUBARCH),$(ARCH))
-  ifeq ($(CROSS_COMPILE),)
-    CROSS_COMPILE := $(call cc-cross-prefix, $(UTS_MACHINE)-linux- $(UTS_MACHINE)-linux-gnu- $(UTS_MACHINE)-unknown-linux-gnu-)
-  endif
-endif
-
 ifdef CONFIG_CPU_LITTLE_ENDIAN
 ld-bfd			:= elf32-$(UTS_MACHINE)-linux
-LDFLAGS_vmlinux		+= --defsym 'jiffies=jiffies_64' --oformat $(ld-bfd)
+LDFLAGS_vmlinux		+= --defsym jiffies=jiffies_64 --oformat $(ld-bfd)
 LDFLAGS			+= -EL
 else
 ld-bfd			:= elf32-$(UTS_MACHINE)big-linux
-LDFLAGS_vmlinux		+= --defsym 'jiffies=jiffies_64+4' --oformat $(ld-bfd)
+LDFLAGS_vmlinux		+= --defsym jiffies=jiffies_64+4 --oformat $(ld-bfd)
 LDFLAGS			+= -EB
 endif
 
···
	 (__chk_user_ptr(addr),		\
	  __access_ok((unsigned long __force)(addr), (size)))
 
+#define user_addr_max()	(current_thread_info()->addr_limit.seg)
+
 /*
  * Uh, these should become the main single-value transfer routines ...
  * They automatically use the right size if we just have the right
···
 # include "uaccess_64.h"
 #endif
 
+extern long strncpy_from_user(char *dest, const char __user *src, long count);
+
+extern __must_check long strlen_user(const char __user *str);
+extern __must_check long strnlen_user(const char __user *str, long n);
+
 /* Generic arbitrary sized copy.  */
 /* Return the number of bytes NOT copied */
 __kernel_size_t __copy_user(void *to, const void *from, __kernel_size_t n);
···
	__cl_size;						\
 })
 
-/**
- * strncpy_from_user: - Copy a NUL terminated string from userspace.
- * @dst:   Destination address, in kernel space.  This buffer must be at
- *         least @count bytes long.
- * @src:   Source address, in user space.
- * @count: Maximum number of bytes to copy, including the trailing NUL.
- *
- * Copies a NUL-terminated string from userspace to kernel space.
- *
- * On success, returns the length of the string (not including the trailing
- * NUL).
- *
- * If access to userspace fails, returns -EFAULT (some data may have been
- * copied).
- *
- * If @count is smaller than the length of the string, copies @count bytes
- * and returns @count.
- */
-#define strncpy_from_user(dest,src,count)			\
-({								\
-	unsigned long __sfu_src = (unsigned long)(src);		\
-	int __sfu_count = (int)(count);				\
-	long __sfu_res = -EFAULT;				\
-								\
-	if (__access_ok(__sfu_src, __sfu_count))		\
-		__sfu_res = __strncpy_from_user((unsigned long)(dest), \
-				__sfu_src, __sfu_count);	\
-								\
-	__sfu_res;						\
-})
-
 static inline unsigned long
 copy_from_user(void *to, const void __user *from, unsigned long n)
 {
···
 
 	return __copy_size;
 }
-
-/**
- * strnlen_user: - Get the size of a string in user space.
- * @s: The string to measure.
- * @n: The maximum valid length
- *
- * Context: User context only.  This function may sleep.
- *
- * Get the size of a NUL-terminated string in user space.
- *
- * Returns the size of the string INCLUDING the terminating NUL.
- * On exception, returns 0.
- * If the string is too long, returns a value greater than @n.
- */
-static inline long strnlen_user(const char __user *s, long n)
-{
-	if (!__addr_ok(s))
-		return 0;
-	else
-		return __strnlen_user(s, n);
-}
-
-/**
- * strlen_user: - Get the size of a string in user space.
- * @str: The string to measure.
- *
- * Context: User context only.  This function may sleep.
- *
- * Get the size of a NUL-terminated string in user space.
- *
- * Returns the size of the string INCLUDING the terminating NUL.
- * On exception, returns 0.
- *
- * If there is a limit on the length of a valid string, you may wish to
- * consider using strnlen_user() instead.
- */
-#define strlen_user(str)	strnlen_user(str, ~0UL >> 1)
 
 /*
  * The exception table consists of pairs of addresses: the first is the
···
 extern long __put_user_asm_q(void *, long);
 extern void __put_user_unknown(void);
 
-extern long __strnlen_user(const char *__s, long __n);
-extern int __strncpy_from_user(unsigned long __dest,
-	       unsigned long __user __src, int __count);
-
 #endif /* __ASM_SH_UACCESS_64_H */
···
+#ifndef __ASM_SH_WORD_AT_A_TIME_H
+#define __ASM_SH_WORD_AT_A_TIME_H
+
+#ifdef CONFIG_CPU_BIG_ENDIAN
+# include <asm-generic/word-at-a-time.h>
+#else
+/*
+ * Little-endian version cribbed from x86.
+ */
+struct word_at_a_time {
+	const unsigned long one_bits, high_bits;
+};
+
+#define WORD_AT_A_TIME_CONSTANTS { REPEAT_BYTE(0x01), REPEAT_BYTE(0x80) }
+
+/* Carl Chatfield / Jan Achrenius G+ version for 32-bit */
+static inline long count_masked_bytes(long mask)
+{
+	/* (000000 0000ff 00ffff ffffff) -> ( 1 1 2 3 ) */
+	long a = (0x0ff0001+mask) >> 23;
+	/* Fix the 1 for 00 case */
+	return a & mask;
+}
+
+/* Return nonzero if it has a zero */
+static inline unsigned long has_zero(unsigned long a, unsigned long *bits, const struct word_at_a_time *c)
+{
+	unsigned long mask = ((a - c->one_bits) & ~a) & c->high_bits;
+	*bits = mask;
+	return mask;
+}
+
+static inline unsigned long prep_zero_mask(unsigned long a, unsigned long bits, const struct word_at_a_time *c)
+{
+	return bits;
+}
+
+static inline unsigned long create_zero_mask(unsigned long bits)
+{
+	bits = (bits - 1) & ~bits;
+	return bits >> 7;
+}
+
+/* The mask we created is directly usable as a bytemask */
+#define zero_bytemask(mask) (mask)
+
+static inline unsigned long find_zero(unsigned long mask)
+{
+	return count_masked_bytes(mask);
+}
+#endif
+
+#endif
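The word-at-a-time helpers added above locate the first zero byte in a word without a byte loop: `has_zero` sets the high bit of every zero byte, `create_zero_mask` turns that into a bytemask covering the bytes before the zero, and `count_masked_bytes` converts the mask to a byte index. A self-contained userspace sketch of the 32-bit little-endian variant (constants expanded by hand; not the kernel header itself):

```c
#include <assert.h>

/* REPEAT_BYTE(0x01) and REPEAT_BYTE(0x80) for a 32-bit word */
#define ONE_BITS  0x01010101UL
#define HIGH_BITS 0x80808080UL

/* Sets the high bit of each zero byte of 'a'; nonzero iff 'a' has one. */
static unsigned long has_zero(unsigned long a, unsigned long *bits)
{
	unsigned long mask = ((a - ONE_BITS) & ~a) & HIGH_BITS;
	*bits = mask;
	return mask;
}

/* Turn the high-bit mask into a bytemask of the bytes before the zero. */
static unsigned long create_zero_mask(unsigned long bits)
{
	bits = (bits - 1) & ~bits;
	return bits >> 7;
}

/* Carl Chatfield / Jan Achrenius trick: bytemask -> first-zero index. */
static long count_masked_bytes(long mask)
{
	/* (000000 0000ff 00ffff ffffff) -> ( 0 1 2 3 ) */
	long a = (0x0ff0001 + mask) >> 23;
	return a & mask;	/* the & fixes the mask == 0 case */
}
```

For the word 0x00636261 ("abc" followed by NUL, read little-endian), the chain reports a zero byte at index 3, which is how the generic `strnlen_user` consumes these primitives.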
···
-/*
- * SH-2A UBC definitions
- *
- * Copyright (C) 2008 Kieran Bingham
- *
- * This file is subject to the terms and conditions of the GNU General Public
- * License.  See the file "COPYING" in the main directory of this archive
- * for more details.
- */
-
-#ifndef __ASM_CPU_SH2A_UBC_H
-#define __ASM_CPU_SH2A_UBC_H
-
-#define UBC_BARA	0xfffc0400
-#define UBC_BAMRA	0xfffc0404
-#define UBC_BBRA	0xfffc04a0	/* 16 bit access */
-#define UBC_BDRA	0xfffc0408
-#define UBC_BDMRA	0xfffc040c
-
-#define UBC_BARB	0xfffc0410
-#define UBC_BAMRB	0xfffc0414
-#define UBC_BBRB	0xfffc04b0	/* 16 bit access */
-#define UBC_BDRB	0xfffc0418
-#define UBC_BDMRB	0xfffc041c
-
-#define UBC_BRCR	0xfffc04c0
-
-#endif /* __ASM_CPU_SH2A_UBC_H */
-82
arch/sh/kernel/cpu/sh5/entry.S
···
 #endif /* CONFIG_MMU */
 
 /*
- * int __strncpy_from_user(unsigned long __dest, unsigned long __src,
- *			   int __count)
- *
- * Inputs:
- * (r2)  target address
- * (r3)  source address
- * (r4)  maximum size in bytes
- *
- * Ouputs:
- * (*r2) copied data
- * (r2)  -EFAULT (in case of faulting)
- *       copied data (otherwise)
- */
-	.global	__strncpy_from_user
-__strncpy_from_user:
-	pta	___strncpy_from_user1, tr0
-	pta	___strncpy_from_user_done, tr1
-	or	r4, ZERO, r5		/* r5 = original count */
-	beq/u	r4, r63, tr1		/* early exit if r4==0 */
-	movi	-(EFAULT), r6		/* r6 = reply, no real fixup */
-	or	ZERO, ZERO, r7		/* r7 = data, clear top byte of data */
-
-___strncpy_from_user1:
-	ld.b	r3, 0, r7		/* Fault address: only in reading */
-	st.b	r2, 0, r7
-	addi	r2, 1, r2
-	addi	r3, 1, r3
-	beq/u	ZERO, r7, tr1
-	addi	r4, -1, r4		/* return real number of copied bytes */
-	bne/l	ZERO, r4, tr0
-
-___strncpy_from_user_done:
-	sub	r5, r4, r6		/* If done, return copied */
-
-___strncpy_from_user_exit:
-	or	r6, ZERO, r2
-	ptabs	LINK, tr0
-	blink	tr0, ZERO
-
-/*
- * extern long __strnlen_user(const char *__s, long __n)
- *
- * Inputs:
- * (r2)  source address
- * (r3)  source size in bytes
- *
- * Ouputs:
- * (r2)  -EFAULT (in case of faulting)
- *       string length (otherwise)
- */
-	.global	__strnlen_user
-__strnlen_user:
-	pta	___strnlen_user_set_reply, tr0
-	pta	___strnlen_user1, tr1
-	or	ZERO, ZERO, r5		/* r5 = counter */
-	movi	-(EFAULT), r6		/* r6 = reply, no real fixup */
-	or	ZERO, ZERO, r7		/* r7 = data, clear top byte of data */
-	beq	r3, ZERO, tr0
-
-___strnlen_user1:
-	ldx.b	r2, r5, r7		/* Fault address: only in reading */
-	addi	r3, -1, r3		/* No real fixup */
-	addi	r5, 1, r5
-	beq	r3, ZERO, tr0
-	bne	r7, ZERO, tr1
-! The line below used to be active.  This meant led to a junk byte lying between each pair
-! of entries in the argv & envp structures in memory.  Whilst the program saw the right data
-! via the argv and envp arguments to main, it meant the 'flat' representation visible through
-! /proc/$pid/cmdline was corrupt, causing trouble with ps, for example.
-!	addi	r5, 1, r5		/* Include '\0' */
-
-___strnlen_user_set_reply:
-	or	r5, ZERO, r6		/* If done, return counter */
-
-___strnlen_user_exit:
-	or	r6, ZERO, r2
-	ptabs	LINK, tr0
-	blink	tr0, ZERO
-
-/*
  * extern long __get_user_asm_?(void *val, long addr)
  *
  * Inputs:
···
 	.long	___copy_user2, ___copy_user_exit
 	.long	___clear_user1, ___clear_user_exit
 #endif
-	.long	___strncpy_from_user1, ___strncpy_from_user_exit
-	.long	___strnlen_user1, ___strnlen_user_exit
 	.long	___get_user_asm_b1, ___get_user_asm_b_exit
 	.long	___get_user_asm_w1, ___get_user_asm_w_exit
 	.long	___get_user_asm_l1, ___get_user_asm_l_exit
···
-/*
- * mpmbox.h:  Interface and defines for the OpenProm mailbox
- *           facilities for MP machines under Linux.
- *
- * Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
- */
-
-#ifndef _SPARC_MPMBOX_H
-#define _SPARC_MPMBOX_H
-
-/* The prom allocates, for each CPU on the machine an unsigned
- * byte in physical ram.  You probe the device tree prom nodes
- * for these values.  The purpose of this byte is to be able to
- * pass messages from one cpu to another.
- */
-
-/* These are the main message types we have to look for in our
- * Cpu mailboxes, based upon these values we decide what course
- * of action to take.
- */
-
-/* The CPU is executing code in the kernel. */
-#define MAILBOX_ISRUNNING     0xf0
-
-/* Another CPU called romvec->pv_exit(), you should call
- * prom_stopcpu() when you see this in your mailbox.
- */
-#define MAILBOX_EXIT          0xfb
-
-/* Another CPU called romvec->pv_enter(), you should call
- * prom_cpuidle() when this is seen.
- */
-#define MAILBOX_GOSPIN        0xfc
-
-/* Another CPU has hit a breakpoint either into kadb or the prom
- * itself.  Just like MAILBOX_GOSPIN, you should call prom_cpuidle()
- * at this point.
- */
-#define MAILBOX_BPT_SPIN      0xfd
-
-/* Oh geese, some other nitwit got a damn watchdog reset.  The party's
- * over so go call prom_stopcpu().
- */
-#define MAILBOX_WDOG_STOP     0xfe
-
-#ifndef __ASSEMBLY__
-
-/* Handy macro's to determine a cpu's state. */
-
-/* Is the cpu still in Power On Self Test? */
-#define MBOX_POST_P(letter)  ((letter) >= 0x00 && (letter) <= 0x7f)
-
-/* Is the cpu at the 'ok' prompt of the PROM? */
-#define MBOX_PROMPROMPT_P(letter) ((letter) >= 0x80 && (letter) <= 0x8f)
-
-/* Is the cpu spinning in the PROM? */
-#define MBOX_PROMSPIN_P(letter) ((letter) >= 0x90 && (letter) <= 0xef)
-
-/* Sanity check... This is junk mail, throw it out. */
-#define MBOX_BOGON_P(letter) ((letter) >= 0xf1 && (letter) <= 0xfa)
-
-/* Is the cpu actively running an application/kernel-code? */
-#define MBOX_RUNNING_P(letter) ((letter) == MAILBOX_ISRUNNING)
-
-#endif /* !(__ASSEMBLY__) */
-
-#endif /* !(_SPARC_MPMBOX_H) */
-5
arch/tile/include/asm/thread_info.h
···
 /* Enable interrupts racelessly and nap forever: helper for cpu_idle(). */
 extern void _cpu_idle(void);
 
-/* Switch boot idle thread to a freshly-allocated stack and free old stack. */
-extern void cpu_idle_on_new_stack(struct thread_info *old_ti,
-				  unsigned long new_sp,
-				  unsigned long new_ss10);
-
 #else /* __ASSEMBLY__ */
 
 /*
···
	jrp lr   /* keep backtracer happy */
	STD_ENDPROC(KBacktraceIterator_init_current)
 
-/*
- * Reset our stack to r1/r2 (sp and ksp0+cpu respectively), then
- * free the old stack (passed in r0) and re-invoke cpu_idle().
- * We update sp and ksp0 simultaneously to avoid backtracer warnings.
- */
-STD_ENTRY(cpu_idle_on_new_stack)
-	{
-	 move sp, r1
-	 mtspr SPR_SYSTEM_SAVE_K_0, r2
-	}
-	jal free_thread_info
-	j cpu_idle
-	STD_ENDPROC(cpu_idle_on_new_stack)
-
 /* Loop forever on a nap during SMP boot. */
 STD_ENTRY(smp_nap)
	nap
···
	__register_nmi_handler((t), &fn##_na);	\
 })
 
+/*
+ * For special handlers that register/unregister in the
+ * init section only.  This should be considered rare.
+ */
+#define register_nmi_handler_initonly(t, fn, fg, n)	\
+({							\
+	static struct nmiaction fn##_na __initdata = {	\
+		.handler = (fn),			\
+		.name = (n),				\
+		.flags = (fg),				\
+	};						\
+	__register_nmi_handler((t), &fn##_na);		\
+})
+
 int __register_nmi_handler(unsigned int, struct nmiaction *);
 
 void unregister_nmi_handler(unsigned int, const char *);
+6-6
arch/x86/include/asm/uaccess.h
···
 #define segment_eq(a, b)	((a).seg == (b).seg)
 
 #define user_addr_max() (current_thread_info()->addr_limit.seg)
-#define __addr_ok(addr)					\
-	((unsigned long __force)(addr) <		\
-	 (current_thread_info()->addr_limit.seg))
+#define __addr_ok(addr)					\
+	((unsigned long __force)(addr) < user_addr_max())
 
 /*
  * Test whether a block of memory is a valid user space address.
···
 * This needs 33-bit (65-bit for x86_64) arithmetic. We have a carry...
 */
 
-#define __range_not_ok(addr, size)					\
+#define __range_not_ok(addr, size, limit)				\
 ({									\
	unsigned long flag, roksum;					\
	__chk_user_ptr(addr);						\
	asm("add %3,%1 ; sbb %0,%0 ; cmp %1,%4 ; sbb $0,%0"		\
	    : "=&r" (flag), "=r" (roksum)				\
	    : "1" (addr), "g" ((long)(size)),				\
-	      "rm" (current_thread_info()->addr_limit.seg));		\
+	      "rm" (limit));						\
	flag;								\
 })
 
···
 * checks that the pointer is in the user space range - after calling
 * this function, memory access functions may still return -EFAULT.
 */
-#define access_ok(type, addr, size) (likely(__range_not_ok(addr, size) == 0))
+#define access_ok(type, addr, size)	\
+	(likely(__range_not_ok(addr, size, user_addr_max()) == 0))
 
 /*
  * The exception table consists of pairs of addresses relative to the
-1
arch/x86/include/asm/uv/uv_bau.h
···
 /* 4 bits of software ack period */
 #define UV2_ACK_MASK			0x7UL
 #define UV2_ACK_UNITS_SHFT		3
-#define UV2_LEG_SHFT UV2H_LB_BAU_MISC_CONTROL_USE_LEGACY_DESCRIPTOR_FORMATS_SHFT
 #define UV2_EXT_SHFT UV2H_LB_BAU_MISC_CONTROL_ENABLE_EXTENDED_SB_STATUS_SHFT
 
 /*
-6
arch/x86/kernel/aperture_64.c
···
 #include <linux/bitops.h>
 #include <linux/ioport.h>
 #include <linux/suspend.h>
-#include <linux/kmemleak.h>
 #include <asm/e820.h>
 #include <asm/io.h>
 #include <asm/iommu.h>
···
 		return 0;
 	}
 	memblock_reserve(addr, aper_size);
-	/*
-	 * Kmemleak should not scan this block as it may not be mapped via the
-	 * kernel direct mapping.
-	 */
-	kmemleak_ignore(phys_to_virt(addr));
 	printk(KERN_INFO "Mapping aperture over %d KB of RAM @ %lx\n",
 			aper_size >> 10, addr);
 	insert_aperture_resource((u32)addr, aper_size);
···
	 */
	iv = __this_cpu_read(mce_next_interval);
	if (mce_notify_irq())
-		iv = max(iv, (unsigned long) HZ/100);
+		iv = max(iv / 2, (unsigned long) HZ/100);
	else
		iv = min(iv * 2, round_jiffies_relative(check_interval * HZ));
	__this_cpu_write(mce_next_interval, iv);
···
 static void __mcheck_cpu_init_timer(void)
 {
	struct timer_list *t = &__get_cpu_var(mce_timer);
-	unsigned long iv = __this_cpu_read(mce_next_interval);
+	unsigned long iv = check_interval * HZ;
 
	setup_timer(t, mce_timer_fn, smp_processor_id());
 
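The MCE hunk above changes the polling timer into a proper exponential back-off: halve the interval (floored at HZ/100) when the last poll found work, double it (capped at the configured check interval) when it did not. A userspace sketch of that policy, with `round_jiffies_relative` omitted and the names `next_interval`/`CHECK_INTERVAL` being our own:

```c
#include <assert.h>

#define HZ		1000UL	/* jiffies per second, for illustration */
#define CHECK_INTERVAL	300UL	/* seconds between polls at the slowest rate */

/*
 * Adaptive polling interval: speed up (halve) after finding work,
 * back off (double) after an idle poll.  All values in jiffies.
 */
static unsigned long next_interval(unsigned long iv, int found_work)
{
	if (found_work)
		return iv / 2 > HZ / 100 ? iv / 2 : HZ / 100;	/* floor */
	return iv * 2 < CHECK_INTERVAL * HZ
		? iv * 2 : CHECK_INTERVAL * HZ;			/* cap */
}
```

The companion change to `__mcheck_cpu_init_timer` resets the starting interval to `check_interval * HZ` so a CPU always begins at the slow rate rather than inheriting a previously shortened one.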
+9-2
arch/x86/kernel/cpu/perf_event.c
···
		if (!cpuc->shared_regs)
			goto error;
	}
+	cpuc->is_fake = 1;
	return cpuc;
 error:
	free_fake_cpuc(cpuc);
···
	dump_trace(NULL, regs, NULL, 0, &backtrace_ops, entry);
 }
 
+static inline int
+valid_user_frame(const void __user *fp, unsigned long size)
+{
+	return (__range_not_ok(fp, size, TASK_SIZE) == 0);
+}
+
 #ifdef CONFIG_COMPAT
 
 #include <asm/compat.h>
···
		if (bytes != sizeof(frame))
			break;
 
-		if (fp < compat_ptr(regs->sp))
+		if (!valid_user_frame(fp, sizeof(frame)))
			break;
 
		perf_callchain_store(entry, frame.return_address);
···
		if (bytes != sizeof(frame))
			break;
 
-		if ((unsigned long)fp < regs->sp)
+		if (!valid_user_frame(fp, sizeof(frame)))
			break;
 
		perf_callchain_store(entry, frame.return_address);
+2
arch/x86/kernel/cpu/perf_event.h
···
	struct perf_event	*event_list[X86_PMC_IDX_MAX]; /* in enabled order */
 
	unsigned int		group_flag;
+	int			is_fake;
 
	/*
	 * Intel DebugStore bits
···
	int		pebs_record_size;
	void		(*drain_pebs)(struct pt_regs *regs);
	struct event_constraint *pebs_constraints;
+	void		(*pebs_aliases)(struct perf_event *event);
 
	/*
	 * Intel LBR
+108-37
arch/x86/kernel/cpu/perf_event_intel.c
···
 	return NULL;
 }
 
-static bool intel_try_alt_er(struct perf_event *event, int orig_idx)
+static int intel_alt_er(int idx)
 {
 	if (!(x86_pmu.er_flags & ERF_HAS_RSP_1))
-		return false;
+		return idx;
 
-	if (event->hw.extra_reg.idx == EXTRA_REG_RSP_0) {
-		event->hw.config &= ~INTEL_ARCH_EVENT_MASK;
-		event->hw.config |= 0x01bb;
-		event->hw.extra_reg.idx = EXTRA_REG_RSP_1;
-		event->hw.extra_reg.reg = MSR_OFFCORE_RSP_1;
-	} else if (event->hw.extra_reg.idx == EXTRA_REG_RSP_1) {
+	if (idx == EXTRA_REG_RSP_0)
+		return EXTRA_REG_RSP_1;
+
+	if (idx == EXTRA_REG_RSP_1)
+		return EXTRA_REG_RSP_0;
+
+	return idx;
+}
+
+static void intel_fixup_er(struct perf_event *event, int idx)
+{
+	event->hw.extra_reg.idx = idx;
+
+	if (idx == EXTRA_REG_RSP_0) {
 		event->hw.config &= ~INTEL_ARCH_EVENT_MASK;
 		event->hw.config |= 0x01b7;
-		event->hw.extra_reg.idx = EXTRA_REG_RSP_0;
 		event->hw.extra_reg.reg = MSR_OFFCORE_RSP_0;
+	} else if (idx == EXTRA_REG_RSP_1) {
+		event->hw.config &= ~INTEL_ARCH_EVENT_MASK;
+		event->hw.config |= 0x01bb;
+		event->hw.extra_reg.reg = MSR_OFFCORE_RSP_1;
 	}
-
-	if (event->hw.extra_reg.idx == orig_idx)
-		return false;
-
-	return true;
 }
 
 /*
···
 	struct event_constraint *c = &emptyconstraint;
 	struct er_account *era;
 	unsigned long flags;
-	int orig_idx = reg->idx;
+	int idx = reg->idx;
 
-	/* already allocated shared msr */
-	if (reg->alloc)
+	/*
+	 * reg->alloc can be set due to existing state, so for fake cpuc we
+	 * need to ignore this, otherwise we might fail to allocate proper fake
+	 * state for this extra reg constraint. Also see the comment below.
+	 */
+	if (reg->alloc && !cpuc->is_fake)
 		return NULL; /* call x86_get_event_constraint() */
 
 again:
-	era = &cpuc->shared_regs->regs[reg->idx];
+	era = &cpuc->shared_regs->regs[idx];
 	/*
 	 * we use spin_lock_irqsave() to avoid lockdep issues when
 	 * passing a fake cpuc
···
 
 	if (!atomic_read(&era->ref) || era->config == reg->config) {
 
+		/*
+		 * If its a fake cpuc -- as per validate_{group,event}() we
+		 * shouldn't touch event state and we can avoid doing so
+		 * since both will only call get_event_constraints() once
+		 * on each event, this avoids the need for reg->alloc.
+		 *
+		 * Not doing the ER fixup will only result in era->reg being
+		 * wrong, but since we won't actually try and program hardware
+		 * this isn't a problem either.
+		 */
+		if (!cpuc->is_fake) {
+			if (idx != reg->idx)
+				intel_fixup_er(event, idx);
+
+			/*
+			 * x86_schedule_events() can call get_event_constraints()
+			 * multiple times on events in the case of incremental
+			 * scheduling(). reg->alloc ensures we only do the ER
+			 * allocation once.
+			 */
+			reg->alloc = 1;
+		}
+
 		/* lock in msr value */
 		era->config = reg->config;
 		era->reg = reg->reg;
···
 		/* one more user */
 		atomic_inc(&era->ref);
 
-		/* no need to reallocate during incremental event scheduling */
-		reg->alloc = 1;
-
 		/*
 		 * need to call x86_get_event_constraint()
 		 * to check if associated event has constraints
 		 */
 		c = NULL;
-	} else if (intel_try_alt_er(event, orig_idx)) {
-		raw_spin_unlock_irqrestore(&era->lock, flags);
-		goto again;
+	} else {
+		idx = intel_alt_er(idx);
+		if (idx != reg->idx) {
+			raw_spin_unlock_irqrestore(&era->lock, flags);
+			goto again;
+		}
 	}
 	raw_spin_unlock_irqrestore(&era->lock, flags);
···
 	struct er_account *era;
 
 	/*
-	 * only put constraint if extra reg was actually
-	 * allocated. Also takes care of event which do
-	 * not use an extra shared reg
+	 * Only put constraint if extra reg was actually allocated. Also takes
+	 * care of event which do not use an extra shared reg.
+	 *
+	 * Also, if this is a fake cpuc we shouldn't touch any event state
+	 * (reg->alloc) and we don't care about leaving inconsistent cpuc state
+	 * either since it'll be thrown out.
 	 */
-	if (!reg->alloc)
+	if (!reg->alloc || cpuc->is_fake)
 		return;
 
 	era = &cpuc->shared_regs->regs[reg->idx];
···
 	intel_put_shared_regs_event_constraints(cpuc, event);
 }
 
-static int intel_pmu_hw_config(struct perf_event *event)
+static void intel_pebs_aliases_core2(struct perf_event *event)
 {
-	int ret = x86_pmu_hw_config(event);
-
-	if (ret)
-		return ret;
-
-	if (event->attr.precise_ip &&
-	    (event->hw.config & X86_RAW_EVENT_MASK) == 0x003c) {
+	if ((event->hw.config & X86_RAW_EVENT_MASK) == 0x003c) {
 		/*
 		 * Use an alternative encoding for CPU_CLK_UNHALTED.THREAD_P
 		 * (0x003c) so that we can use it with PEBS.
···
 		 */
 		u64 alt_config = X86_CONFIG(.event=0xc0, .inv=1, .cmask=16);
 
+		alt_config |= (event->hw.config & ~X86_RAW_EVENT_MASK);
+		event->hw.config = alt_config;
+	}
+}
+
+static void intel_pebs_aliases_snb(struct perf_event *event)
+{
+	if ((event->hw.config & X86_RAW_EVENT_MASK) == 0x003c) {
+		/*
+		 * Use an alternative encoding for CPU_CLK_UNHALTED.THREAD_P
+		 * (0x003c) so that we can use it with PEBS.
+		 *
+		 * The regular CPU_CLK_UNHALTED.THREAD_P event (0x003c) isn't
+		 * PEBS capable. However we can use UOPS_RETIRED.ALL
+		 * (0x01c2), which is a PEBS capable event, to get the same
+		 * count.
+		 *
+		 * UOPS_RETIRED.ALL counts the number of cycles that retires
+		 * CNTMASK micro-ops. By setting CNTMASK to a value (16)
+		 * larger than the maximum number of micro-ops that can be
+		 * retired per cycle (4) and then inverting the condition, we
+		 * count all cycles that retire 16 or less micro-ops, which
+		 * is every cycle.
+		 *
+		 * Thereby we gain a PEBS capable cycle counter.
+		 */
+		u64 alt_config = X86_CONFIG(.event=0xc2, .umask=0x01, .inv=1, .cmask=16);
 
 		alt_config |= (event->hw.config & ~X86_RAW_EVENT_MASK);
 		event->hw.config = alt_config;
 	}
+}
+
+static int intel_pmu_hw_config(struct perf_event *event)
+{
+	int ret = x86_pmu_hw_config(event);
+
+	if (ret)
+		return ret;
+
+	if (event->attr.precise_ip && x86_pmu.pebs_aliases)
+		x86_pmu.pebs_aliases(event);
 
 	if (intel_pmu_needs_lbr_smpl(event)) {
 		ret = intel_pmu_setup_lbr_filter(event);
···
 	.max_period		= (1ULL << 31) - 1,
 	.get_event_constraints	= intel_get_event_constraints,
 	.put_event_constraints	= intel_put_event_constraints,
+	.pebs_aliases		= intel_pebs_aliases_core2,
 
 	.format_attrs		= intel_arch3_formats_attr,
 
···
 		break;
 
 	case 42: /* SandyBridge */
-		x86_add_quirk(intel_sandybridge_quirk);
 	case 45: /* SandyBridge, "Romely-EP" */
+		x86_add_quirk(intel_sandybridge_quirk);
+	case 58: /* IvyBridge */
 		memcpy(hw_cache_event_ids, snb_hw_cache_event_ids,
 		       sizeof(hw_cache_event_ids));
 
···
 
 		x86_pmu.event_constraints = intel_snb_event_constraints;
 		x86_pmu.pebs_constraints = intel_snb_pebs_event_constraints;
+		x86_pmu.pebs_aliases = intel_pebs_aliases_snb;
 		x86_pmu.extra_regs = intel_snb_extra_regs;
 		/* all extra regs are per-cpu when HT is on */
 		x86_pmu.er_flags |= ERF_HAS_RSP_1;
···
 	bool ret = false;
 	struct pvclock_vcpu_time_info *src;
 
-	/*
-	 * per_cpu() is safe here because this function is only called from
-	 * timer functions where preemption is already disabled.
-	 */
-	WARN_ON(!in_atomic());
 	src = &__get_cpu_var(hv_clock);
 	if ((src->flags & PVCLOCK_GUEST_STOPPED) != 0) {
 		__this_cpu_and(hv_clock.flags, ~PVCLOCK_GUEST_STOPPED);
+2-2
arch/x86/kernel/nmi_selftest.c
···
 static void __init init_nmi_testsuite(void)
 {
 	/* trap all the unknown NMIs we may generate */
-	register_nmi_handler(NMI_UNKNOWN, nmi_unk_cb, 0, "nmi_selftest_unk");
+	register_nmi_handler_initonly(NMI_UNKNOWN, nmi_unk_cb, 0, "nmi_selftest_unk");
 }
 
 static void __init cleanup_nmi_testsuite(void)
···
 {
 	unsigned long timeout;
 
-	if (register_nmi_handler(NMI_LOCAL, test_nmi_ipi_callback,
+	if (register_nmi_handler_initonly(NMI_LOCAL, test_nmi_ipi_callback,
 				 NMI_FLAG_FIRST, "nmi_selftest")) {
 		nmi_fail = FAILURE;
 		return;
+2-1
arch/x86/kernel/pci-dma.c
···
 				 struct dma_attrs *attrs)
 {
 	unsigned long dma_mask;
-	struct page *page = NULL;
+	struct page *page;
 	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
 	dma_addr_t addr;
···
 
 	flag |= __GFP_ZERO;
 again:
+	page = NULL;
 	if (!(flag & GFP_ATOMIC))
 		page = dma_alloc_from_contiguous(dev, count, get_order(size));
 	if (!page)
+4-2
arch/x86/kernel/reboot.c
···
 	set_cpus_allowed_ptr(current, cpumask_of(reboot_cpu_id));
 
 	/*
-	 * O.K Now that I'm on the appropriate processor,
-	 * stop all of the others.
+	 * O.K Now that I'm on the appropriate processor, stop all of the
+	 * others. Also disable the local irq to not receive the per-cpu
+	 * timer interrupt which may trigger scheduler's load balance.
 	 */
+	local_irq_disable();
 	stop_other_cpus();
 #endif
 
+14-2
arch/x86/kernel/smpboot.c
···
 
 static bool __cpuinit match_mc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
 {
-	if (c->phys_proc_id == o->phys_proc_id)
-		return topology_sane(c, o, "mc");
+	if (c->phys_proc_id == o->phys_proc_id) {
+		if (cpu_has(c, X86_FEATURE_AMD_DCM))
+			return true;
 
+		return topology_sane(c, o, "mc");
+	}
 	return false;
 }
 
···
 
 		if ((i == cpu) || (has_mc && match_llc(c, o)))
 			link_mask(llc_shared, cpu, i);
+
+	}
+
+	/*
+	 * This needs a separate iteration over the cpus because we rely on all
+	 * cpu_sibling_mask links to be set-up.
+	 */
+	for_each_cpu(i, cpu_sibling_setup_mask) {
+		o = &cpu_data(i);
 
 		if ((i == cpu) || (has_mc && match_mc(c, o))) {
 			link_mask(core, cpu, i);
+4
arch/x86/lib/usercopy.c
···
 #include <linux/module.h>
 
 #include <asm/word-at-a-time.h>
+#include <linux/sched.h>
 
 /*
  * best effort, GUP based copy_from_user() that is NMI-safe
···
 	struct page *page;
 	void *map;
 	int ret;
+
+	if (__range_not_ok(from, n, TASK_SIZE))
+		return len;
 
 	do {
 		ret = __get_user_pages_fast(addr, 1, 0, &page);
+4-4
arch/x86/lib/x86-opcode-map.txt
···
 # - (66): the last prefix is 0x66
 # - (F3): the last prefix is 0xF3
 # - (F2): the last prefix is 0xF2
-#
+# - (!F3) : the last prefix is not 0xF3 (including non-last prefix case)
 
 Table: one byte opcode
 Referrer:
···
 b5: LGS Gv,Mp
 b6: MOVZX Gv,Eb
 b7: MOVZX Gv,Ew
-b8: JMPE | POPCNT Gv,Ev (F3)
+b8: JMPE (!F3) | POPCNT Gv,Ev (F3)
 b9: Grp10 (1A)
 ba: Grp8 Ev,Ib (1A)
 bb: BTC Ev,Gv
-bc: BSF Gv,Ev | TZCNT Gv,Ev (F3)
-bd: BSR Gv,Ev | LZCNT Gv,Ev (F3)
+bc: BSF Gv,Ev (!F3) | TZCNT Gv,Ev (F3)
+bd: BSR Gv,Ev (!F3) | LZCNT Gv,Ev (F3)
 be: MOVSX Gv,Eb
 bf: MOVSX Gv,Ew
 # 0x0f 0xc0-0xcf
+2-1
arch/x86/mm/init.c
···
 			extra += PMD_SIZE;
 #endif
 		/* The first 2/4M doesn't use large pages. */
-		extra += mr->end - mr->start;
+		if (mr->start < PMD_SIZE)
+			extra += mr->end - mr->start;
 
 		ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	} else
+2-2
arch/x86/mm/ioremap.c
···
 
 /**
  * ioremap_nocache     -   map bus memory into CPU space
- * @offset:    bus address of the memory
+ * @phys_addr:    bus address of the memory
  * @size:      size of the resource to map
  *
  * ioremap_nocache performs a platform specific sequence of operations to
···
 
 /**
  * ioremap_wc	-	map memory into CPU space write combined
- * @offset:	bus address of the memory
+ * @phys_addr:	bus address of the memory
  * @size:	size of the resource to map
  *
  * This version of ioremap ensures that the memory is marked write combining.
+1-1
arch/x86/mm/pageattr.c
···
 
 /**
  * clflush_cache_range - flush a cache range with clflush
- * @addr:	virtual start address
+ * @vaddr:	virtual start address
  * @size:	number of bytes to flush
  *
  * clflush is an unordered instruction which needs fencing with mfence
+2
arch/x86/mm/srat.c
···
 		return;
 	}
 
+	node_set(node, numa_nodes_parsed);
+
 	printk(KERN_INFO "SRAT: Node %u PXM %u [mem %#010Lx-%#010Lx]\n",
 	       node, pxm,
 	       (unsigned long long) start, (unsigned long long) end - 1);
+1-1
arch/x86/platform/mrst/mrst.c
···
 EXPORT_SYMBOL_GPL(intel_scu_notifier);
 
 /* Called by IPC driver */
-void intel_scu_devices_create(void)
+void __devinit intel_scu_devices_create(void)
 {
 	int i;
 
···
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
+	int ret = 0;
 
 	pfn = page_to_pfn(page);
 	if (!PageHighMem(page)) {
···
 	list_add(&page->lru, &m2p_overrides[mfn_hash(mfn)]);
 	spin_unlock_irqrestore(&m2p_override_lock, flags);
 
+	/* p2m(m2p(mfn)) == mfn: the mfn is already present somewhere in
+	 * this domain. Set the FOREIGN_FRAME_BIT in the p2m for the other
+	 * pfn so that the following mfn_to_pfn(mfn) calls will return the
+	 * pfn from the m2p_override (the backend pfn) instead.
+	 * We need to do this because the pages shared by the frontend
+	 * (xen-blkfront) can be already locked (lock_page, called by
+	 * do_read_cache_page); when the userspace backend tries to use them
+	 * with direct_IO, mfn_to_pfn returns the pfn of the frontend, so
+	 * do_blockdev_direct_IO is going to try to lock the same pages
+	 * again resulting in a deadlock.
+	 * As a side effect get_user_pages_fast might not be safe on the
+	 * frontend pages while they are being shared with the backend,
+	 * because mfn_to_pfn (that ends up being called by GUPF) will
+	 * return the backend pfn rather than the frontend pfn. */
+	ret = __get_user(pfn, &machine_to_phys_mapping[mfn]);
+	if (ret == 0 && get_phys_to_machine(pfn) == mfn)
+		set_phys_to_machine(pfn, FOREIGN_FRAME(mfn));
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(m2p_add_override);
···
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
+	int ret = 0;
 
 	pfn = page_to_pfn(page);
 	mfn = get_phys_to_machine(pfn);
···
 		}
 	} else
 		set_phys_to_machine(pfn, page->index);
+
+	/* p2m(m2p(mfn)) == FOREIGN_FRAME(mfn): the mfn is already present
+	 * somewhere in this domain, even before being added to the
+	 * m2p_override (see comment above in m2p_add_override).
+	 * If there are no other entries in the m2p_override corresponding
+	 * to this mfn, then remove the FOREIGN_FRAME_BIT from the p2m for
+	 * the original pfn (the one shared by the frontend): the backend
+	 * cannot do any IO on this page anymore because it has been
+	 * unshared. Removing the FOREIGN_FRAME_BIT from the p2m entry of
+	 * the original pfn causes mfn_to_pfn(mfn) to return the frontend
+	 * pfn again. */
+	mfn &= ~FOREIGN_FRAME_BIT;
+	ret = __get_user(pfn, &machine_to_phys_mapping[mfn]);
+	if (ret == 0 && get_phys_to_machine(pfn) == FOREIGN_FRAME(mfn) &&
+			m2p_find_override(mfn) == NULL)
+		set_phys_to_machine(pfn, mfn);
 
 	return 0;
 }
···
 
 config ACPI_HOTPLUG_CPU
 	bool
-	depends on ACPI_PROCESSOR && HOTPLUG_CPU
+	depends on EXPERIMENTAL && ACPI_PROCESSOR && HOTPLUG_CPU
 	select ACPI_CONTAINER
 	default y
 
+9-1
drivers/acpi/battery.c
···
 
 static void acpi_battery_refresh(struct acpi_battery *battery)
 {
+	int power_unit;
+
 	if (!battery->bat.dev)
 		return;
 
+	power_unit = battery->power_unit;
+
 	acpi_battery_get_info(battery);
-	/* The battery may have changed its reporting units. */
+
+	if (power_unit == battery->power_unit)
+		return;
+
+	/* The battery has changed its reporting units. */
 	sysfs_remove_battery(battery);
 	sysfs_add_battery(battery);
 }
+25-5
drivers/acpi/processor_perflib.c
···
 	struct acpi_buffer state = { 0, NULL };
 	union acpi_object *pss = NULL;
 	int i;
+	int last_invalid = -1;
 
 
 	status = acpi_evaluate_object(pr->handle, "_PSS", NULL, &buffer);
···
 		    ((u32)(px->core_frequency * 1000) !=
 		     (px->core_frequency * 1000))) {
 			printk(KERN_ERR FW_BUG PREFIX
-			       "Invalid BIOS _PSS frequency: 0x%llx MHz\n",
-			       px->core_frequency);
-			result = -EFAULT;
-			kfree(pr->performance->states);
-			goto end;
+			       "Invalid BIOS _PSS frequency found for processor %d: 0x%llx MHz\n",
+			       pr->id, px->core_frequency);
+			if (last_invalid == -1)
+				last_invalid = i;
+		} else {
+			if (last_invalid != -1) {
+				/*
+				 * Copy this valid entry over last_invalid entry
+				 */
+				memcpy(&(pr->performance->states[last_invalid]),
+				       px, sizeof(struct acpi_processor_px));
+				++last_invalid;
+			}
 		}
 	}
+
+	if (last_invalid == 0) {
+		printk(KERN_ERR FW_BUG PREFIX
+		       "No valid BIOS _PSS frequency found for processor %d\n", pr->id);
+		result = -EFAULT;
+		kfree(pr->performance->states);
+		pr->performance->states = NULL;
+	}
+
+	if (last_invalid > 0)
+		pr->performance->state_count = last_invalid;
 
  end:
 	kfree(buffer.pointer);
+22-11
drivers/acpi/video.c
···
 	set_bit(KEY_BRIGHTNESS_ZERO, input->keybit);
 	set_bit(KEY_DISPLAY_OFF, input->keybit);
 
-	error = input_register_device(input);
-	if (error)
-		goto err_stop_video;
-
 	printk(KERN_INFO PREFIX "%s [%s] (multi-head: %s  rom: %s  post: %s)\n",
 	       ACPI_VIDEO_DEVICE_NAME, acpi_device_bid(device),
 	       video->flags.multihead ? "yes" : "no",
···
 	video->pm_nb.priority = 0;
 	error = register_pm_notifier(&video->pm_nb);
 	if (error)
-		goto err_unregister_input_dev;
+		goto err_stop_video;
+
+	error = input_register_device(input);
+	if (error)
+		goto err_unregister_pm_notifier;
 
 	return 0;
 
- err_unregister_input_dev:
-	input_unregister_device(input);
+ err_unregister_pm_notifier:
+	unregister_pm_notifier(&video->pm_nb);
  err_stop_video:
 	acpi_video_bus_stop_devices(video);
  err_free_input_dev:
···
 	return 0;
 }
 
+static int __init is_i740(struct pci_dev *dev)
+{
+	if (dev->device == 0x00D1)
+		return 1;
+	if (dev->device == 0x7000)
+		return 1;
+	return 0;
+}
+
 static int __init intel_opregion_present(void)
 {
-#if defined(CONFIG_DRM_I915) || defined(CONFIG_DRM_I915_MODULE)
+	int opregion = 0;
 	struct pci_dev *dev = NULL;
 	u32 address;
 
···
 			continue;
 		if (dev->vendor != PCI_VENDOR_ID_INTEL)
 			continue;
+		/* We don't want to poke around undefined i740 registers */
+		if (is_i740(dev))
+			continue;
 		pci_read_config_dword(dev, 0xfc, &address);
 		if (!address)
 			continue;
-		return 1;
+		opregion = 1;
 	}
-#endif
-	return 0;
+	return opregion;
 }
 
 int acpi_video_register(void)
···
 		bcma_chipco_chipctl_maskset(cc, 0, ~0, 0x7);
 		break;
 	case 0x4331:
-		/* BCM4331 workaround is SPROM-related, we put it in sprom.c */
+	case 43431:
+		/* Ext PA lines must be enabled for tx on BCM4331 */
+		bcma_chipco_bcm4331_ext_pa_lines_ctl(cc, true);
 		break;
 	case 43224:
 		if (bus->chipinfo.rev == 0) {
+4-2
drivers/bcma/driver_pci.c
···
 int bcma_core_pci_irq_ctl(struct bcma_drv_pci *pc, struct bcma_device *core,
 			  bool enable)
 {
-	struct pci_dev *pdev = pc->core->bus->host_pci;
+	struct pci_dev *pdev;
 	u32 coremask, tmp;
 	int err = 0;
 
-	if (core->bus->hosttype != BCMA_HOSTTYPE_PCI) {
+	if (!pc || core->bus->hosttype != BCMA_HOSTTYPE_PCI) {
 		/* This bcma device is not on a PCI host-bus. So the IRQs are
 		 * not routed through the PCI core.
 		 * So we must not enable routing through the PCI core. */
 		goto out;
 	}
+
+	pdev = pc->core->bus->host_pci;
 
 	err = pci_read_config_dword(pdev, BCMA_PCI_IRQMASK, &tmp);
 	if (err)
+2-2
drivers/bcma/sprom.c
···
 	if (!sprom)
 		return -ENOMEM;
 
-	if (bus->chipinfo.id == 0x4331)
+	if (bus->chipinfo.id == 0x4331 || bus->chipinfo.id == 43431)
 		bcma_chipco_bcm4331_ext_pa_lines_ctl(&bus->drv_cc, false);
 
 	pr_debug("SPROM offset 0x%x\n", offset);
 	bcma_sprom_read(bus, offset, sprom);
 
-	if (bus->chipinfo.id == 0x4331)
+	if (bus->chipinfo.id == 0x4331 || bus->chipinfo.id == 43431)
 		bcma_chipco_bcm4331_ext_pa_lines_ctl(&bus->drv_cc, true);
 
 	err = bcma_sprom_valid(sprom);
···
 	/* data ready? */
 	if (readl(trng->base + TRNG_ODATA) & 1) {
 		*data = readl(trng->base + TRNG_ODATA);
+		/*
+		 ensure data ready is only set again AFTER the next data
+		 word is ready in case it got set between checking ISR
+		 and reading ODATA, so we don't risk re-reading the
+		 same word
+		 */
+		readl(trng->base + TRNG_ISR);
 		return 4;
 	} else
 		return 0;
+13-13
drivers/clocksource/sh_cmt.c
···
 	unsigned long next_match_value;
 	unsigned long max_match_value;
 	unsigned long rate;
-	spinlock_t lock;
+	raw_spinlock_t lock;
 	struct clock_event_device ced;
 	struct clocksource cs;
 	unsigned long total_cycles;
 };
 
-static DEFINE_SPINLOCK(sh_cmt_lock);
+static DEFINE_RAW_SPINLOCK(sh_cmt_lock);
 
 #define CMSTR -1 /* shared register */
 #define CMCSR 0 /* channel register */
···
 	unsigned long flags, value;
 
 	/* start stop register shared by multiple timer channels */
-	spin_lock_irqsave(&sh_cmt_lock, flags);
+	raw_spin_lock_irqsave(&sh_cmt_lock, flags);
 	value = sh_cmt_read(p, CMSTR);
 
 	if (start)
···
 		value &= ~(1 << cfg->timer_bit);
 
 	sh_cmt_write(p, CMSTR, value);
-	spin_unlock_irqrestore(&sh_cmt_lock, flags);
+	raw_spin_unlock_irqrestore(&sh_cmt_lock, flags);
 }
 
 static int sh_cmt_enable(struct sh_cmt_priv *p, unsigned long *rate)
···
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&p->lock, flags);
+	raw_spin_lock_irqsave(&p->lock, flags);
 	__sh_cmt_set_next(p, delta);
-	spin_unlock_irqrestore(&p->lock, flags);
+	raw_spin_unlock_irqrestore(&p->lock, flags);
 }
 
 static irqreturn_t sh_cmt_interrupt(int irq, void *dev_id)
···
 	int ret = 0;
 	unsigned long flags;
 
-	spin_lock_irqsave(&p->lock, flags);
+	raw_spin_lock_irqsave(&p->lock, flags);
 
 	if (!(p->flags & (FLAG_CLOCKEVENT | FLAG_CLOCKSOURCE)))
 		ret = sh_cmt_enable(p, &p->rate);
···
 	if ((flag == FLAG_CLOCKSOURCE) && (!(p->flags & FLAG_CLOCKEVENT)))
 		__sh_cmt_set_next(p, p->max_match_value);
  out:
-	spin_unlock_irqrestore(&p->lock, flags);
+	raw_spin_unlock_irqrestore(&p->lock, flags);
 
 	return ret;
 }
···
 	unsigned long flags;
 	unsigned long f;
 
-	spin_lock_irqsave(&p->lock, flags);
+	raw_spin_lock_irqsave(&p->lock, flags);
 
 	f = p->flags & (FLAG_CLOCKEVENT | FLAG_CLOCKSOURCE);
 	p->flags &= ~flag;
···
 	if ((flag == FLAG_CLOCKEVENT) && (p->flags & FLAG_CLOCKSOURCE))
 		__sh_cmt_set_next(p, p->max_match_value);
 
-	spin_unlock_irqrestore(&p->lock, flags);
+	raw_spin_unlock_irqrestore(&p->lock, flags);
 }
 
 static struct sh_cmt_priv *cs_to_sh_cmt(struct clocksource *cs)
···
 	unsigned long value;
 	int has_wrapped;
 
-	spin_lock_irqsave(&p->lock, flags);
+	raw_spin_lock_irqsave(&p->lock, flags);
 	value = p->total_cycles;
 	raw = sh_cmt_get_counter(p, &has_wrapped);
 
 	if (unlikely(has_wrapped))
 		raw += p->match_value + 1;
-	spin_unlock_irqrestore(&p->lock, flags);
+	raw_spin_unlock_irqrestore(&p->lock, flags);
 
 	return value + raw;
 }
···
 	p->max_match_value = (1 << p->width) - 1;
 
 	p->match_value = p->max_match_value;
-	spin_lock_init(&p->lock);
+	raw_spin_lock_init(&p->lock);
 
 	if (clockevent_rating)
 		sh_cmt_register_clockevent(p, name, clockevent_rating);
+3-3
drivers/clocksource/sh_mtu2.c
···
 	struct clock_event_device ced;
 };
 
-static DEFINE_SPINLOCK(sh_mtu2_lock);
+static DEFINE_RAW_SPINLOCK(sh_mtu2_lock);
 
 #define TSTR -1 /* shared register */
 #define TCR 0 /* channel register */
···
 	unsigned long flags, value;
 
 	/* start stop register shared by multiple timer channels */
-	spin_lock_irqsave(&sh_mtu2_lock, flags);
+	raw_spin_lock_irqsave(&sh_mtu2_lock, flags);
 	value = sh_mtu2_read(p, TSTR);
 
 	if (start)
···
 		value &= ~(1 << cfg->timer_bit);
 
 	sh_mtu2_write(p, TSTR, value);
-	spin_unlock_irqrestore(&sh_mtu2_lock, flags);
+	raw_spin_unlock_irqrestore(&sh_mtu2_lock, flags);
 }
 
 static int sh_mtu2_enable(struct sh_mtu2_priv *p)
+6-10
drivers/clocksource/sh_tmu.c
···
 	struct clocksource cs;
 };
 
-static DEFINE_SPINLOCK(sh_tmu_lock);
+static DEFINE_RAW_SPINLOCK(sh_tmu_lock);
 
 #define TSTR -1 /* shared register */
 #define TCOR 0 /* channel register */
···
 	unsigned long flags, value;
 
 	/* start stop register shared by multiple timer channels */
-	spin_lock_irqsave(&sh_tmu_lock, flags);
+	raw_spin_lock_irqsave(&sh_tmu_lock, flags);
 	value = sh_tmu_read(p, TSTR);
 
 	if (start)
···
 		value &= ~(1 << cfg->timer_bit);
 
 	sh_tmu_write(p, TSTR, value);
-	spin_unlock_irqrestore(&sh_tmu_lock, flags);
+	raw_spin_unlock_irqrestore(&sh_tmu_lock, flags);
 }
 
 static int sh_tmu_enable(struct sh_tmu_priv *p)
···
 
 	sh_tmu_enable(p);
 
-	/* TODO: calculate good shift from rate and counter bit width */
-
-	ced->shift = 32;
-	ced->mult = div_sc(p->rate, NSEC_PER_SEC, ced->shift);
-	ced->max_delta_ns = clockevent_delta2ns(0xffffffff, ced);
-	ced->min_delta_ns = 5000;
+	clockevents_config(ced, p->rate);
 
 	if (periodic) {
 		p->periodic = (p->rate + HZ/2) / HZ;
···
 	ced->set_mode = sh_tmu_clock_event_mode;
 
 	dev_info(&p->pdev->dev, "used for clock events\n");
-	clockevents_register_device(ced);
+
+	clockevents_config_and_register(ced, 1, 0x300, 0xffffffff);
 
 	ret = setup_irq(p->irqaction.irq, &p->irqaction);
 	if (ret) {
···
 static void exynos_drm_fb_destroy(struct drm_framebuffer *fb)
 {
 	struct exynos_drm_fb *exynos_fb = to_exynos_fb(fb);
+	unsigned int i;
 
 	DRM_DEBUG_KMS("%s\n", __FILE__);
 
 	drm_framebuffer_cleanup(fb);
+
+	for (i = 0; i < ARRAY_SIZE(exynos_fb->exynos_gem_obj); i++) {
+		struct drm_gem_object *obj;
+
+		if (exynos_fb->exynos_gem_obj[i] == NULL)
+			continue;
+
+		obj = &exynos_fb->exynos_gem_obj[i]->base;
+		drm_gem_object_unreference_unlocked(obj);
+	}
 
 	kfree(exynos_fb);
 	exynos_fb = NULL;
···
 		return ERR_PTR(-ENOENT);
 	}
 
-	drm_gem_object_unreference_unlocked(obj);
-
 	fb = exynos_drm_framebuffer_init(dev, mode_cmd, obj);
-	if (IS_ERR(fb))
+	if (IS_ERR(fb)) {
+		drm_gem_object_unreference_unlocked(obj);
 		return fb;
+	}
 
 	exynos_fb = to_exynos_fb(fb);
 	nr = exynos_drm_format_num_buffers(fb->pixel_format);
···
 			exynos_drm_fb_destroy(fb);
 			return ERR_PTR(-ENOENT);
 		}
-
-		drm_gem_object_unreference_unlocked(obj);
 
 		exynos_fb->exynos_gem_obj[i] = to_exynos_gem_obj(obj);
 	}
+2-2
drivers/gpu/drm/exynos/exynos_drm_fb.h
···
 static inline int exynos_drm_format_num_buffers(uint32_t format)
 {
 	switch (format) {
-	case DRM_FORMAT_NV12M:
+	case DRM_FORMAT_NV12:
 	case DRM_FORMAT_NV12MT:
 		return 2;
-	case DRM_FORMAT_YUV420M:
+	case DRM_FORMAT_YUV420:
 		return 3;
 	default:
 		return 1;
+3-6
drivers/gpu/drm/exynos/exynos_drm_gem.c
···
 				   struct drm_device *dev, uint32_t handle,
 				   uint64_t *offset)
 {
-	struct exynos_drm_gem_obj *exynos_gem_obj;
 	struct drm_gem_object *obj;
 	int ret = 0;
 
···
 		goto unlock;
 	}
 
-	exynos_gem_obj = to_exynos_gem_obj(obj);
-
-	if (!exynos_gem_obj->base.map_list.map) {
-		ret = drm_gem_create_mmap_offset(&exynos_gem_obj->base);
+	if (!obj->map_list.map) {
+		ret = drm_gem_create_mmap_offset(obj);
 		if (ret)
 			goto out;
 	}
 
-	*offset = (u64)exynos_gem_obj->base.map_list.hash.key << PAGE_SHIFT;
+	*offset = (u64)obj->map_list.hash.key << PAGE_SHIFT;
 	DRM_DEBUG_KMS("offset = 0x%lx\n", (unsigned long)*offset);
 
 out:
+7-5
drivers/gpu/drm/exynos/exynos_mixer.c
···
 	switch (win_data->pixel_format) {
 	case DRM_FORMAT_NV12MT:
 		tiled_mode = true;
-	case DRM_FORMAT_NV12M:
+	case DRM_FORMAT_NV12:
 		crcb_mode = false;
 		buf_num = 2;
 		break;
···
 	mixer_reg_write(res, MXR_BG_COLOR2, 0x008080);
 
 	/* setting graphical layers */
-
 	val  = MXR_GRP_CFG_COLOR_KEY_DISABLE; /* no blank key */
 	val |= MXR_GRP_CFG_WIN_BLEND_EN;
+	val |= MXR_GRP_CFG_BLEND_PRE_MUL;
+	val |= MXR_GRP_CFG_PIXEL_BLEND_EN;
 	val |= MXR_GRP_CFG_ALPHA_VAL(0xff); /* non-transparent alpha */
 
 	/* the same configuration for both layers */
 	mixer_reg_write(res, MXR_GRAPHIC_CFG(0), val);
-
-	val |= MXR_GRP_CFG_BLEND_PRE_MUL;
-	val |= MXR_GRP_CFG_PIXEL_BLEND_EN;
 	mixer_reg_write(res, MXR_GRAPHIC_CFG(1), val);
+
+	/* setting video layers */
+	val = MXR_GRP_CFG_ALPHA_VAL(0);
+	mixer_reg_write(res, MXR_VIDEO_CFG, val);
 
 	/* configuration of Video Processor Registers */
 	vp_win_reset(ctx);
···
 		WREG32(HDMI0_AUDIO_PACKET_CONTROL + offset,
 		       HDMI0_AUDIO_SAMPLE_SEND | /* send audio packets */
 		       HDMI0_AUDIO_DELAY_EN(1) | /* default audio delay */
-		       HDMI0_AUDIO_SEND_MAX_PACKETS | /* send NULL packets if no audio is available */
 		       HDMI0_AUDIO_PACKETS_PER_LINE(3) | /* should be suffient for all audio modes and small enough for all hblanks */
 		       HDMI0_60958_CS_UPDATE); /* allow 60958 channel status fields to be updated */
 	}
···
 
 	mutex_lock(&vm->mutex);
 	if (last_pfn > vm->last_pfn) {
-		/* grow va space 32M by 32M */
-		unsigned align = ((32 << 20) >> 12) - 1;
+		/* release mutex and lock in right order */
+		mutex_unlock(&vm->mutex);
 		radeon_mutex_lock(&rdev->cs_mutex);
-		radeon_vm_unbind_locked(rdev, vm);
+		mutex_lock(&vm->mutex);
+		/* and check again */
+		if (last_pfn > vm->last_pfn) {
+			/* grow va space 32M by 32M */
+			unsigned align = ((32 << 20) >> 12) - 1;
+			radeon_vm_unbind_locked(rdev, vm);
+			vm->last_pfn = (last_pfn + align) & ~align;
+		}
 		radeon_mutex_unlock(&rdev->cs_mutex);
-		vm->last_pfn = (last_pfn + align) & ~align;
 	}
 	head = &vm->va;
 	last_offset = 0;
···
 	if (bo_va == NULL)
 		return 0;
 
-	mutex_lock(&vm->mutex);
 	radeon_mutex_lock(&rdev->cs_mutex);
+	mutex_lock(&vm->mutex);
 	radeon_vm_bo_update_pte(rdev, vm, bo, NULL);
 	radeon_mutex_unlock(&rdev->cs_mutex);
 	list_del(&bo_va->vm_list);
···
 	struct radeon_bo_va *bo_va, *tmp;
 	int r;
 
-	mutex_lock(&vm->mutex);
-
 	radeon_mutex_lock(&rdev->cs_mutex);
+	mutex_lock(&vm->mutex);
 	radeon_vm_unbind_locked(rdev, vm);
 	radeon_mutex_unlock(&rdev->cs_mutex);
 
drivers/gpu/drm/radeon/radeon_kms.c (+1, -1)
···
 		break;
 	case RADEON_INFO_MAX_PIPES:
 		if (rdev->family >= CHIP_TAHITI)
-			value = rdev->config.si.max_pipes_per_simd;
+			value = rdev->config.si.max_cu_per_sh;
 		else if (rdev->family >= CHIP_CAYMAN)
 			value = rdev->config.cayman.max_pipes_per_simd;
 		else if (rdev->family >= CHIP_CEDAR)
drivers/gpu/drm/radeon/rs600.c (+6, -6)
···
 		return r;
 	}
 
-	r = r600_audio_init(rdev);
-	if (r) {
-		dev_err(rdev->dev, "failed initializing audio\n");
-		return r;
-	}
-
 	r = radeon_ib_pool_start(rdev);
 	if (r)
 		return r;
···
 	r = radeon_ib_ring_tests(rdev);
 	if (r)
 		return r;
+
+	r = r600_audio_init(rdev);
+	if (r) {
+		dev_err(rdev->dev, "failed initializing audio\n");
+		return r;
+	}
 
 	return 0;
 }
drivers/gpu/drm/radeon/rs690.c (+6, -6)
···
 		return r;
 	}
 
-	r = r600_audio_init(rdev);
-	if (r) {
-		dev_err(rdev->dev, "failed initializing audio\n");
-		return r;
-	}
-
 	r = radeon_ib_pool_start(rdev);
 	if (r)
 		return r;
···
 	r = radeon_ib_ring_tests(rdev);
 	if (r)
 		return r;
+
+	r = r600_audio_init(rdev);
+	if (r) {
+		dev_err(rdev->dev, "failed initializing audio\n");
+		return r;
+	}
 
 	return 0;
 }
drivers/gpu/drm/radeon/rv770.c (+6, -12)
···
 	if (r)
 		return r;
 
+	r = r600_audio_init(rdev);
+	if (r) {
+		DRM_ERROR("radeon: audio init failed\n");
+		return r;
+	}
+
 	return 0;
 }
 
···
 	if (r) {
 		DRM_ERROR("r600 startup failed on resume\n");
 		rdev->accel_working = false;
-		return r;
-	}
-
-	r = r600_audio_init(rdev);
-	if (r) {
-		dev_err(rdev->dev, "radeon: audio init failed\n");
 		return r;
 	}
 
···
 		radeon_irq_kms_fini(rdev);
 		rv770_pcie_gart_fini(rdev);
 		rdev->accel_working = false;
-	}
-
-	r = r600_audio_init(rdev);
-	if (r) {
-		dev_err(rdev->dev, "radeon: audio init failed\n");
-		return r;
 	}
 
 	return 0;
drivers/gpu/drm/radeon/si.c (+164, -317)
···
 /*
  * Core functions
  */
-static u32 si_get_tile_pipe_to_backend_map(struct radeon_device *rdev,
-					   u32 num_tile_pipes,
-					   u32 num_backends_per_asic,
-					   u32 *backend_disable_mask_per_asic,
-					   u32 num_shader_engines)
-{
-	u32 backend_map = 0;
-	u32 enabled_backends_mask = 0;
-	u32 enabled_backends_count = 0;
-	u32 num_backends_per_se;
-	u32 cur_pipe;
-	u32 swizzle_pipe[SI_MAX_PIPES];
-	u32 cur_backend = 0;
-	u32 i;
-	bool force_no_swizzle;
-
-	/* force legal values */
-	if (num_tile_pipes < 1)
-		num_tile_pipes = 1;
-	if (num_tile_pipes > rdev->config.si.max_tile_pipes)
-		num_tile_pipes = rdev->config.si.max_tile_pipes;
-	if (num_shader_engines < 1)
-		num_shader_engines = 1;
-	if (num_shader_engines > rdev->config.si.max_shader_engines)
-		num_shader_engines = rdev->config.si.max_shader_engines;
-	if (num_backends_per_asic < num_shader_engines)
-		num_backends_per_asic = num_shader_engines;
-	if (num_backends_per_asic > (rdev->config.si.max_backends_per_se * num_shader_engines))
-		num_backends_per_asic = rdev->config.si.max_backends_per_se * num_shader_engines;
-
-	/* make sure we have the same number of backends per se */
-	num_backends_per_asic = ALIGN(num_backends_per_asic, num_shader_engines);
-	/* set up the number of backends per se */
-	num_backends_per_se = num_backends_per_asic / num_shader_engines;
-	if (num_backends_per_se > rdev->config.si.max_backends_per_se) {
-		num_backends_per_se = rdev->config.si.max_backends_per_se;
-		num_backends_per_asic = num_backends_per_se * num_shader_engines;
-	}
-
-	/* create enable mask and count for enabled backends */
-	for (i = 0; i < SI_MAX_BACKENDS; ++i) {
-		if (((*backend_disable_mask_per_asic >> i) & 1) == 0) {
-			enabled_backends_mask |= (1 << i);
-			++enabled_backends_count;
-		}
-		if (enabled_backends_count == num_backends_per_asic)
-			break;
-	}
-
-	/* force the backends mask to match the current number of backends */
-	if (enabled_backends_count != num_backends_per_asic) {
-		u32 this_backend_enabled;
-		u32 shader_engine;
-		u32 backend_per_se;
-
-		enabled_backends_mask = 0;
-		enabled_backends_count = 0;
-		*backend_disable_mask_per_asic = SI_MAX_BACKENDS_MASK;
-		for (i = 0; i < SI_MAX_BACKENDS; ++i) {
-			/* calc the current se */
-			shader_engine = i / rdev->config.si.max_backends_per_se;
-			/* calc the backend per se */
-			backend_per_se = i % rdev->config.si.max_backends_per_se;
-			/* default to not enabled */
-			this_backend_enabled = 0;
-			if ((shader_engine < num_shader_engines) &&
-			    (backend_per_se < num_backends_per_se))
-				this_backend_enabled = 1;
-			if (this_backend_enabled) {
-				enabled_backends_mask |= (1 << i);
-				*backend_disable_mask_per_asic &= ~(1 << i);
-				++enabled_backends_count;
-			}
-		}
-	}
-
-
-	memset((uint8_t *)&swizzle_pipe[0], 0, sizeof(u32) * SI_MAX_PIPES);
-	switch (rdev->family) {
-	case CHIP_TAHITI:
-	case CHIP_PITCAIRN:
-	case CHIP_VERDE:
-		force_no_swizzle = true;
-		break;
-	default:
-		force_no_swizzle = false;
-		break;
-	}
-	if (force_no_swizzle) {
-		bool last_backend_enabled = false;
-
-		force_no_swizzle = false;
-		for (i = 0; i < SI_MAX_BACKENDS; ++i) {
-			if (((enabled_backends_mask >> i) & 1) == 1) {
-				if (last_backend_enabled)
-					force_no_swizzle = true;
-				last_backend_enabled = true;
-			} else
-				last_backend_enabled = false;
-		}
-	}
-
-	switch (num_tile_pipes) {
-	case 1:
-	case 3:
-	case 5:
-	case 7:
-		DRM_ERROR("odd number of pipes!\n");
-		break;
-	case 2:
-		swizzle_pipe[0] = 0;
-		swizzle_pipe[1] = 1;
-		break;
-	case 4:
-		if (force_no_swizzle) {
-			swizzle_pipe[0] = 0;
-			swizzle_pipe[1] = 1;
-			swizzle_pipe[2] = 2;
-			swizzle_pipe[3] = 3;
-		} else {
-			swizzle_pipe[0] = 0;
-			swizzle_pipe[1] = 2;
-			swizzle_pipe[2] = 1;
-			swizzle_pipe[3] = 3;
-		}
-		break;
-	case 6:
-		if (force_no_swizzle) {
-			swizzle_pipe[0] = 0;
-			swizzle_pipe[1] = 1;
-			swizzle_pipe[2] = 2;
-			swizzle_pipe[3] = 3;
-			swizzle_pipe[4] = 4;
-			swizzle_pipe[5] = 5;
-		} else {
-			swizzle_pipe[0] = 0;
-			swizzle_pipe[1] = 2;
-			swizzle_pipe[2] = 4;
-			swizzle_pipe[3] = 1;
-			swizzle_pipe[4] = 3;
-			swizzle_pipe[5] = 5;
-		}
-		break;
-	case 8:
-		if (force_no_swizzle) {
-			swizzle_pipe[0] = 0;
-			swizzle_pipe[1] = 1;
-			swizzle_pipe[2] = 2;
-			swizzle_pipe[3] = 3;
-			swizzle_pipe[4] = 4;
-			swizzle_pipe[5] = 5;
-			swizzle_pipe[6] = 6;
-			swizzle_pipe[7] = 7;
-		} else {
-			swizzle_pipe[0] = 0;
-			swizzle_pipe[1] = 2;
-			swizzle_pipe[2] = 4;
-			swizzle_pipe[3] = 6;
-			swizzle_pipe[4] = 1;
-			swizzle_pipe[5] = 3;
-			swizzle_pipe[6] = 5;
-			swizzle_pipe[7] = 7;
-		}
-		break;
-	}
-
-	for (cur_pipe = 0; cur_pipe < num_tile_pipes; ++cur_pipe) {
-		while (((1 << cur_backend) & enabled_backends_mask) == 0)
-			cur_backend = (cur_backend + 1) % SI_MAX_BACKENDS;
-
-		backend_map |= (((cur_backend & 0xf) << (swizzle_pipe[cur_pipe] * 4)));
-
-		cur_backend = (cur_backend + 1) % SI_MAX_BACKENDS;
-	}
-
-	return backend_map;
-}
-
-static u32 si_get_disable_mask_per_asic(struct radeon_device *rdev,
-					u32 disable_mask_per_se,
-					u32 max_disable_mask_per_se,
-					u32 num_shader_engines)
-{
-	u32 disable_field_width_per_se = r600_count_pipe_bits(disable_mask_per_se);
-	u32 disable_mask_per_asic = disable_mask_per_se & max_disable_mask_per_se;
-
-	if (num_shader_engines == 1)
-		return disable_mask_per_asic;
-	else if (num_shader_engines == 2)
-		return disable_mask_per_asic | (disable_mask_per_asic << disable_field_width_per_se);
-	else
-		return 0xffffffff;
-}
-
 static void si_tiling_mode_table_init(struct radeon_device *rdev)
 {
 	const u32 num_tile_mode_states = 32;
···
 		DRM_ERROR("unknown asic: 0x%x\n", rdev->family);
 }
 
+static void si_select_se_sh(struct radeon_device *rdev,
+			    u32 se_num, u32 sh_num)
+{
+	u32 data = INSTANCE_BROADCAST_WRITES;
+
+	if ((se_num == 0xffffffff) && (sh_num == 0xffffffff))
+		data = SH_BROADCAST_WRITES | SE_BROADCAST_WRITES;
+	else if (se_num == 0xffffffff)
+		data |= SE_BROADCAST_WRITES | SH_INDEX(sh_num);
+	else if (sh_num == 0xffffffff)
+		data |= SH_BROADCAST_WRITES | SE_INDEX(se_num);
+	else
+		data |= SH_INDEX(sh_num) | SE_INDEX(se_num);
+	WREG32(GRBM_GFX_INDEX, data);
+}
+
+static u32 si_create_bitmask(u32 bit_width)
+{
+	u32 i, mask = 0;
+
+	for (i = 0; i < bit_width; i++) {
+		mask <<= 1;
+		mask |= 1;
+	}
+	return mask;
+}
+
+static u32 si_get_cu_enabled(struct radeon_device *rdev, u32 cu_per_sh)
+{
+	u32 data, mask;
+
+	data = RREG32(CC_GC_SHADER_ARRAY_CONFIG);
+	if (data & 1)
+		data &= INACTIVE_CUS_MASK;
+	else
+		data = 0;
+	data |= RREG32(GC_USER_SHADER_ARRAY_CONFIG);
+
+	data >>= INACTIVE_CUS_SHIFT;
+
+	mask = si_create_bitmask(cu_per_sh);
+
+	return ~data & mask;
+}
+
+static void si_setup_spi(struct radeon_device *rdev,
+			 u32 se_num, u32 sh_per_se,
+			 u32 cu_per_sh)
+{
+	int i, j, k;
+	u32 data, mask, active_cu;
+
+	for (i = 0; i < se_num; i++) {
+		for (j = 0; j < sh_per_se; j++) {
+			si_select_se_sh(rdev, i, j);
+			data = RREG32(SPI_STATIC_THREAD_MGMT_3);
+			active_cu = si_get_cu_enabled(rdev, cu_per_sh);
+
+			mask = 1;
+			for (k = 0; k < 16; k++) {
+				mask <<= k;
+				if (active_cu & mask) {
+					data &= ~mask;
+					WREG32(SPI_STATIC_THREAD_MGMT_3, data);
+					break;
+				}
+			}
+		}
+	}
+	si_select_se_sh(rdev, 0xffffffff, 0xffffffff);
+}
+
+static u32 si_get_rb_disabled(struct radeon_device *rdev,
+			      u32 max_rb_num, u32 se_num,
+			      u32 sh_per_se)
+{
+	u32 data, mask;
+
+	data = RREG32(CC_RB_BACKEND_DISABLE);
+	if (data & 1)
+		data &= BACKEND_DISABLE_MASK;
+	else
+		data = 0;
+	data |= RREG32(GC_USER_RB_BACKEND_DISABLE);
+
+	data >>= BACKEND_DISABLE_SHIFT;
+
+	mask = si_create_bitmask(max_rb_num / se_num / sh_per_se);
+
+	return data & mask;
+}
+
+static void si_setup_rb(struct radeon_device *rdev,
+			u32 se_num, u32 sh_per_se,
+			u32 max_rb_num)
+{
+	int i, j;
+	u32 data, mask;
+	u32 disabled_rbs = 0;
+	u32 enabled_rbs = 0;
+
+	for (i = 0; i < se_num; i++) {
+		for (j = 0; j < sh_per_se; j++) {
+			si_select_se_sh(rdev, i, j);
+			data = si_get_rb_disabled(rdev, max_rb_num, se_num, sh_per_se);
+			disabled_rbs |= data << ((i * sh_per_se + j) * TAHITI_RB_BITMAP_WIDTH_PER_SH);
+		}
+	}
+	si_select_se_sh(rdev, 0xffffffff, 0xffffffff);
+
+	mask = 1;
+	for (i = 0; i < max_rb_num; i++) {
+		if (!(disabled_rbs & mask))
+			enabled_rbs |= mask;
+		mask <<= 1;
+	}
+
+	for (i = 0; i < se_num; i++) {
+		si_select_se_sh(rdev, i, 0xffffffff);
+		data = 0;
+		for (j = 0; j < sh_per_se; j++) {
+			switch (enabled_rbs & 3) {
+			case 1:
+				data |= (RASTER_CONFIG_RB_MAP_0 << (i * sh_per_se + j) * 2);
+				break;
+			case 2:
+				data |= (RASTER_CONFIG_RB_MAP_3 << (i * sh_per_se + j) * 2);
+				break;
+			case 3:
+			default:
+				data |= (RASTER_CONFIG_RB_MAP_2 << (i * sh_per_se + j) * 2);
+				break;
+			}
+			enabled_rbs >>= 2;
+		}
+		WREG32(PA_SC_RASTER_CONFIG, data);
+	}
+	si_select_se_sh(rdev, 0xffffffff, 0xffffffff);
+}
+
 static void si_gpu_init(struct radeon_device *rdev)
 {
-	u32 cc_rb_backend_disable = 0;
-	u32 cc_gc_shader_array_config;
 	u32 gb_addr_config = 0;
 	u32 mc_shared_chmap, mc_arb_ramcfg;
-	u32 gb_backend_map;
-	u32 cgts_tcc_disable;
 	u32 sx_debug_1;
-	u32 gc_user_shader_array_config;
-	u32 gc_user_rb_backend_disable;
-	u32 cgts_user_tcc_disable;
 	u32 hdp_host_path_cntl;
 	u32 tmp;
 	int i, j;
···
 	switch (rdev->family) {
 	case CHIP_TAHITI:
 		rdev->config.si.max_shader_engines = 2;
-		rdev->config.si.max_pipes_per_simd = 4;
 		rdev->config.si.max_tile_pipes = 12;
-		rdev->config.si.max_simds_per_se = 8;
+		rdev->config.si.max_cu_per_sh = 8;
+		rdev->config.si.max_sh_per_se = 2;
 		rdev->config.si.max_backends_per_se = 4;
 		rdev->config.si.max_texture_channel_caches = 12;
 		rdev->config.si.max_gprs = 256;
···
 		rdev->config.si.sc_prim_fifo_size_backend = 0x100;
 		rdev->config.si.sc_hiz_tile_fifo_size = 0x30;
 		rdev->config.si.sc_earlyz_tile_fifo_size = 0x130;
+		gb_addr_config = TAHITI_GB_ADDR_CONFIG_GOLDEN;
 		break;
 	case CHIP_PITCAIRN:
 		rdev->config.si.max_shader_engines = 2;
-		rdev->config.si.max_pipes_per_simd = 4;
 		rdev->config.si.max_tile_pipes = 8;
-		rdev->config.si.max_simds_per_se = 5;
+		rdev->config.si.max_cu_per_sh = 5;
+		rdev->config.si.max_sh_per_se = 2;
 		rdev->config.si.max_backends_per_se = 4;
 		rdev->config.si.max_texture_channel_caches = 8;
 		rdev->config.si.max_gprs = 256;
···
 		rdev->config.si.sc_prim_fifo_size_backend = 0x100;
 		rdev->config.si.sc_hiz_tile_fifo_size = 0x30;
 		rdev->config.si.sc_earlyz_tile_fifo_size = 0x130;
+		gb_addr_config = TAHITI_GB_ADDR_CONFIG_GOLDEN;
 		break;
 	case CHIP_VERDE:
 	default:
 		rdev->config.si.max_shader_engines = 1;
-		rdev->config.si.max_pipes_per_simd = 4;
 		rdev->config.si.max_tile_pipes = 4;
-		rdev->config.si.max_simds_per_se = 2;
+		rdev->config.si.max_cu_per_sh = 2;
+		rdev->config.si.max_sh_per_se = 2;
 		rdev->config.si.max_backends_per_se = 4;
 		rdev->config.si.max_texture_channel_caches = 4;
 		rdev->config.si.max_gprs = 256;
···
 		rdev->config.si.sc_prim_fifo_size_backend = 0x40;
 		rdev->config.si.sc_hiz_tile_fifo_size = 0x30;
 		rdev->config.si.sc_earlyz_tile_fifo_size = 0x130;
+		gb_addr_config = VERDE_GB_ADDR_CONFIG_GOLDEN;
 		break;
 	}
···
 	mc_shared_chmap = RREG32(MC_SHARED_CHMAP);
 	mc_arb_ramcfg = RREG32(MC_ARB_RAMCFG);
 
-	cc_rb_backend_disable = RREG32(CC_RB_BACKEND_DISABLE);
-	cc_gc_shader_array_config = RREG32(CC_GC_SHADER_ARRAY_CONFIG);
-	cgts_tcc_disable = 0xffff0000;
-	for (i = 0; i < rdev->config.si.max_texture_channel_caches; i++)
-		cgts_tcc_disable &= ~(1 << (16 + i));
-	gc_user_rb_backend_disable = RREG32(GC_USER_RB_BACKEND_DISABLE);
-	gc_user_shader_array_config = RREG32(GC_USER_SHADER_ARRAY_CONFIG);
-	cgts_user_tcc_disable = RREG32(CGTS_USER_TCC_DISABLE);
-
-	rdev->config.si.num_shader_engines = rdev->config.si.max_shader_engines;
 	rdev->config.si.num_tile_pipes = rdev->config.si.max_tile_pipes;
-	tmp = ((~gc_user_rb_backend_disable) & BACKEND_DISABLE_MASK) >> BACKEND_DISABLE_SHIFT;
-	rdev->config.si.num_backends_per_se = r600_count_pipe_bits(tmp);
-	tmp = (gc_user_rb_backend_disable & BACKEND_DISABLE_MASK) >> BACKEND_DISABLE_SHIFT;
-	rdev->config.si.backend_disable_mask_per_asic =
-		si_get_disable_mask_per_asic(rdev, tmp, SI_MAX_BACKENDS_PER_SE_MASK,
-					     rdev->config.si.num_shader_engines);
-	rdev->config.si.backend_map =
-		si_get_tile_pipe_to_backend_map(rdev, rdev->config.si.num_tile_pipes,
-						rdev->config.si.num_backends_per_se *
-						rdev->config.si.num_shader_engines,
-						&rdev->config.si.backend_disable_mask_per_asic,
-						rdev->config.si.num_shader_engines);
-	tmp = ((~cgts_user_tcc_disable) & TCC_DISABLE_MASK) >> TCC_DISABLE_SHIFT;
-	rdev->config.si.num_texture_channel_caches = r600_count_pipe_bits(tmp);
 	rdev->config.si.mem_max_burst_length_bytes = 256;
 	tmp = (mc_arb_ramcfg & NOOFCOLS_MASK) >> NOOFCOLS_SHIFT;
 	rdev->config.si.mem_row_size_in_kb = (4 * (1 << (8 + tmp))) / 1024;
···
 	rdev->config.si.num_gpus = 1;
 	rdev->config.si.multi_gpu_tile_size = 64;
 
-	gb_addr_config = 0;
-	switch (rdev->config.si.num_tile_pipes) {
-	case 1:
-		gb_addr_config |= NUM_PIPES(0);
-		break;
-	case 2:
-		gb_addr_config |= NUM_PIPES(1);
-		break;
-	case 4:
-		gb_addr_config |= NUM_PIPES(2);
-		break;
-	case 8:
-	default:
-		gb_addr_config |= NUM_PIPES(3);
-		break;
-	}
-
-	tmp = (rdev->config.si.mem_max_burst_length_bytes / 256) - 1;
-	gb_addr_config |= PIPE_INTERLEAVE_SIZE(tmp);
-	gb_addr_config |= NUM_SHADER_ENGINES(rdev->config.si.num_shader_engines - 1);
-	tmp = (rdev->config.si.shader_engine_tile_size / 16) - 1;
-	gb_addr_config |= SHADER_ENGINE_TILE_SIZE(tmp);
-	switch (rdev->config.si.num_gpus) {
-	case 1:
-	default:
-		gb_addr_config |= NUM_GPUS(0);
-		break;
-	case 2:
-		gb_addr_config |= NUM_GPUS(1);
-		break;
-	case 4:
-		gb_addr_config |= NUM_GPUS(2);
-		break;
-	}
-	switch (rdev->config.si.multi_gpu_tile_size) {
-	case 16:
-		gb_addr_config |= MULTI_GPU_TILE_SIZE(0);
-		break;
-	case 32:
-	default:
-		gb_addr_config |= MULTI_GPU_TILE_SIZE(1);
-		break;
-	case 64:
-		gb_addr_config |= MULTI_GPU_TILE_SIZE(2);
-		break;
-	case 128:
-		gb_addr_config |= MULTI_GPU_TILE_SIZE(3);
-		break;
-	}
+	/* fix up row size */
+	gb_addr_config &= ~ROW_SIZE_MASK;
 	switch (rdev->config.si.mem_row_size_in_kb) {
 	case 1:
 	default:
···
 		gb_addr_config |= ROW_SIZE(2);
 		break;
 	}
-
-	tmp = (gb_addr_config & NUM_PIPES_MASK) >> NUM_PIPES_SHIFT;
-	rdev->config.si.num_tile_pipes = (1 << tmp);
-	tmp = (gb_addr_config & PIPE_INTERLEAVE_SIZE_MASK) >> PIPE_INTERLEAVE_SIZE_SHIFT;
-	rdev->config.si.mem_max_burst_length_bytes = (tmp + 1) * 256;
-	tmp = (gb_addr_config & NUM_SHADER_ENGINES_MASK) >> NUM_SHADER_ENGINES_SHIFT;
-	rdev->config.si.num_shader_engines = tmp + 1;
-	tmp = (gb_addr_config & NUM_GPUS_MASK) >> NUM_GPUS_SHIFT;
-	rdev->config.si.num_gpus = tmp + 1;
-	tmp = (gb_addr_config & MULTI_GPU_TILE_SIZE_MASK) >> MULTI_GPU_TILE_SIZE_SHIFT;
-	rdev->config.si.multi_gpu_tile_size = 1 << tmp;
-	tmp = (gb_addr_config & ROW_SIZE_MASK) >> ROW_SIZE_SHIFT;
-	rdev->config.si.mem_row_size_in_kb = 1 << tmp;
-
-	gb_backend_map =
-		si_get_tile_pipe_to_backend_map(rdev, rdev->config.si.num_tile_pipes,
-						rdev->config.si.num_backends_per_se *
-						rdev->config.si.num_shader_engines,
-						&rdev->config.si.backend_disable_mask_per_asic,
-						rdev->config.si.num_shader_engines);
 
 	/* setup tiling info dword.  gb_addr_config is not adequate since it does
 	 * not have bank info, so create a custom tiling dword.
···
 		rdev->config.si.tile_config |= (3 << 0);
 		break;
 	}
-	rdev->config.si.tile_config |=
-		((mc_arb_ramcfg & NOOFBANK_MASK) >> NOOFBANK_SHIFT) << 4;
+	if ((mc_arb_ramcfg & NOOFBANK_MASK) >> NOOFBANK_SHIFT)
+		rdev->config.si.tile_config |= 1 << 4;
+	else
+		rdev->config.si.tile_config |= 0 << 4;
 	rdev->config.si.tile_config |=
 		((gb_addr_config & PIPE_INTERLEAVE_SIZE_MASK) >> PIPE_INTERLEAVE_SIZE_SHIFT) << 8;
 	rdev->config.si.tile_config |=
 		((gb_addr_config & ROW_SIZE_MASK) >> ROW_SIZE_SHIFT) << 12;
 
-	rdev->config.si.backend_map = gb_backend_map;
 	WREG32(GB_ADDR_CONFIG, gb_addr_config);
 	WREG32(DMIF_ADDR_CONFIG, gb_addr_config);
 	WREG32(HDP_ADDR_CONFIG, gb_addr_config);
 
-	/* primary versions */
-	WREG32(CC_RB_BACKEND_DISABLE, cc_rb_backend_disable);
-	WREG32(CC_SYS_RB_BACKEND_DISABLE, cc_rb_backend_disable);
-	WREG32(CC_GC_SHADER_ARRAY_CONFIG, cc_gc_shader_array_config);
-
-	WREG32(CGTS_TCC_DISABLE, cgts_tcc_disable);
-
-	/* user versions */
-	WREG32(GC_USER_RB_BACKEND_DISABLE, cc_rb_backend_disable);
-	WREG32(GC_USER_SYS_RB_BACKEND_DISABLE, cc_rb_backend_disable);
-	WREG32(GC_USER_SHADER_ARRAY_CONFIG, cc_gc_shader_array_config);
-
-	WREG32(CGTS_USER_TCC_DISABLE, cgts_tcc_disable);
-
 	si_tiling_mode_table_init(rdev);
+
+	si_setup_rb(rdev, rdev->config.si.max_shader_engines,
+		    rdev->config.si.max_sh_per_se,
+		    rdev->config.si.max_backends_per_se);
+
+	si_setup_spi(rdev, rdev->config.si.max_shader_engines,
+		     rdev->config.si.max_sh_per_se,
+		     rdev->config.si.max_cu_per_sh);
+
 
 	/* set HW defaults for 3D engine */
 	WREG32(CP_QUEUE_THRESHOLDS, (ROQ_IB1_START(0x16) |
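The `si_create_bitmask()` helper introduced by the si.c patch above simply builds a mask of the `bit_width` low bits, which `si_get_rb_disabled()` then uses to trim a disable bitfield down to the render backends one shader array can own. A minimal user-space sketch of that arithmetic (a model with plain arguments in place of register reads, not the kernel code itself; `rb_disabled_in_sh` is a name invented here for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Same loop as the kernel helper: sets the low bit_width bits, i.e. (1 << n) - 1. */
static uint32_t si_create_bitmask(uint32_t bit_width)
{
	uint32_t i, mask = 0;

	for (i = 0; i < bit_width; i++) {
		mask <<= 1;
		mask |= 1;
	}
	return mask;
}

/*
 * Models the masking step of si_get_rb_disabled(): only the RBs that fall
 * within one shader array (max_rb_num / se_num / sh_per_se of them) survive.
 */
static uint32_t rb_disabled_in_sh(uint32_t disable_bits, uint32_t max_rb_num,
				  uint32_t se_num, uint32_t sh_per_se)
{
	return disable_bits & si_create_bitmask(max_rb_num / se_num / sh_per_se);
}
```

The loop form avoids the undefined behaviour that `(1 << 32) - 1` would invoke for a full-width mask.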
···
 	return NULL;
 }
 
+int vga_switcheroo_get_client_state(struct pci_dev *pdev)
+{
+	struct vga_switcheroo_client *client;
+
+	client = find_client_from_pci(&vgasr_priv.clients, pdev);
+	if (!client)
+		return VGA_SWITCHEROO_NOT_FOUND;
+	if (!vgasr_priv.active)
+		return VGA_SWITCHEROO_INIT;
+	return client->pwr_state;
+}
+EXPORT_SYMBOL(vga_switcheroo_get_client_state);
+
 void vga_switcheroo_unregister_client(struct pci_dev *pdev)
 {
 	struct vga_switcheroo_client *client;
···
 		vga_switchon(new_client);
 
 	vga_set_default_device(new_client->pdev);
-	set_audio_state(new_client->id, VGA_SWITCHEROO_ON);
-
 	return 0;
 }
···
 
 	active->active = false;
 
+	set_audio_state(active->id, VGA_SWITCHEROO_OFF);
+
 	if (new_client->fb_info) {
 		struct fb_event event;
 		event.info = new_client->fb_info;
···
 	if (new_client->ops->reprobe)
 		new_client->ops->reprobe(new_client->pdev);
 
-	set_audio_state(active->id, VGA_SWITCHEROO_OFF);
-
 	if (active->pwr_state == VGA_SWITCHEROO_ON)
 		vga_switchoff(active);
+
+	set_audio_state(new_client->id, VGA_SWITCHEROO_ON);
 
 	new_client->active = true;
 	return 0;
···
 	/* pwr off the device not in use */
 	if (strncmp(usercmd, "OFF", 3) == 0) {
 		list_for_each_entry(client, &vgasr_priv.clients, list) {
-			if (client->active)
+			if (client->active || client_is_audio(client))
 				continue;
+			set_audio_state(client->id, VGA_SWITCHEROO_OFF);
 			if (client->pwr_state == VGA_SWITCHEROO_ON)
 				vga_switchoff(client);
 		}
···
 	/* pwr on the device not in use */
 	if (strncmp(usercmd, "ON", 2) == 0) {
 		list_for_each_entry(client, &vgasr_priv.clients, list) {
-			if (client->active)
+			if (client->active || client_is_audio(client))
 				continue;
 			if (client->pwr_state == VGA_SWITCHEROO_OFF)
 				vga_switchon(client);
+			set_audio_state(client->id, VGA_SWITCHEROO_ON);
 		}
 		goto out;
 	}
drivers/i2c/muxes/Kconfig (+12)
···
 	  This driver can also be built as a module.  If so, the module
 	  will be called i2c-mux-pca954x.
 
+config I2C_MUX_PINCTRL
+	tristate "pinctrl-based I2C multiplexer"
+	depends on PINCTRL
+	help
+	  If you say yes to this option, support will be included for an I2C
+	  multiplexer that uses the pinctrl subsystem, i.e. pin multiplexing.
+	  This is useful for SoCs whose I2C module's signals can be routed to
+	  different sets of pins at run-time.
+
+	  This driver can also be built as a module. If so, the module will be
+	  called pinctrl-i2cmux.
+
 endmenu
···
 	spin_unlock_irqrestore(&iommu->lock, flags);
 }
 
-static void iommu_handle_ppr_entry(struct amd_iommu *iommu, u32 head)
+static void iommu_handle_ppr_entry(struct amd_iommu *iommu, u64 *raw)
 {
 	struct amd_iommu_fault fault;
-	volatile u64 *raw;
-	int i;
 
 	INC_STATS_COUNTER(pri_requests);
-
-	raw = (u64 *)(iommu->ppr_log + head);
-
-	/*
-	 * Hardware bug: Interrupt may arrive before the entry is written to
-	 * memory. If this happens we need to wait for the entry to arrive.
-	 */
-	for (i = 0; i < LOOP_TIMEOUT; ++i) {
-		if (PPR_REQ_TYPE(raw[0]) != 0)
-			break;
-		udelay(1);
-	}
 
 	if (PPR_REQ_TYPE(raw[0]) != PPR_REQ_FAULT) {
 		pr_err_ratelimited("AMD-Vi: Unknown PPR request received\n");
···
 	fault.tag = PPR_TAG(raw[0]);
 	fault.flags = PPR_FLAGS(raw[0]);
 
-	/*
-	 * To detect the hardware bug we need to clear the entry
-	 * to back to zero.
-	 */
-	raw[0] = raw[1] = 0;
-
 	atomic_notifier_call_chain(&ppr_notifier, 0, &fault);
 }
···
 	if (iommu->ppr_log == NULL)
 		return;
 
+	/* enable ppr interrupts again */
+	writel(MMIO_STATUS_PPR_INT_MASK, iommu->mmio_base + MMIO_STATUS_OFFSET);
+
 	spin_lock_irqsave(&iommu->lock, flags);
 
 	head = readl(iommu->mmio_base + MMIO_PPR_HEAD_OFFSET);
 	tail = readl(iommu->mmio_base + MMIO_PPR_TAIL_OFFSET);
 
 	while (head != tail) {
+		volatile u64 *raw;
+		u64 entry[2];
+		int i;
 
-		/* Handle PPR entry */
-		iommu_handle_ppr_entry(iommu, head);
+		raw = (u64 *)(iommu->ppr_log + head);
 
-		/* Update and refresh ring-buffer state*/
+		/*
+		 * Hardware bug: Interrupt may arrive before the entry is
+		 * written to memory. If this happens we need to wait for the
+		 * entry to arrive.
+		 */
+		for (i = 0; i < LOOP_TIMEOUT; ++i) {
+			if (PPR_REQ_TYPE(raw[0]) != 0)
+				break;
+			udelay(1);
+		}
+
+		/* Avoid memcpy function-call overhead */
+		entry[0] = raw[0];
+		entry[1] = raw[1];
+
+		/*
+		 * To detect the hardware bug we need to clear the entry
+		 * back to zero.
+		 */
+		raw[0] = raw[1] = 0UL;
+
+		/* Update head pointer of hardware ring-buffer */
 		head = (head + PPR_ENTRY_SIZE) % PPR_LOG_SIZE;
 		writel(head, iommu->mmio_base + MMIO_PPR_HEAD_OFFSET);
+
+		/*
+		 * Release iommu->lock because ppr-handling might need to
+		 * re-aquire it
+		 */
+		spin_unlock_irqrestore(&iommu->lock, flags);
+
+		/* Handle PPR entry */
+		iommu_handle_ppr_entry(iommu, entry);
+
+		spin_lock_irqsave(&iommu->lock, flags);
+
+		/* Refresh ring-buffer information */
+		head = readl(iommu->mmio_base + MMIO_PPR_HEAD_OFFSET);
 		tail = readl(iommu->mmio_base + MMIO_PPR_TAIL_OFFSET);
 	}
-
-	/* enable ppr interrupts again */
-	writel(MMIO_STATUS_PPR_INT_MASK, iommu->mmio_base + MMIO_STATUS_OFFSET);
 
 	spin_unlock_irqrestore(&iommu->lock, flags);
 }
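The rework above keeps the PPR log's head-pointer bookkeeping, `head = (head + PPR_ENTRY_SIZE) % PPR_LOG_SIZE`, inside the loop while the lock is dropped around the handler call. A tiny standalone model of that wrap-around arithmetic (the 16-byte entry size follows from the two u64 words per entry seen above; the log size used here is an assumed example value, not the driver's constant):

```c
#include <assert.h>

#define PPR_ENTRY_SIZE	16	/* two u64 words per log entry */
#define PPR_LOG_SIZE	512	/* example ring size; an assumption */

/* Advance the ring-buffer head by one entry, wrapping at the end of the log. */
static unsigned int ppr_next_head(unsigned int head)
{
	return (head + PPR_ENTRY_SIZE) % PPR_LOG_SIZE;
}
```

Because the size is a power of two, the modulo reduces to a cheap mask in practice.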
drivers/iommu/amd_iommu_init.c (+5, -8)
···
 	if (!iommu->dev)
 		return 1;
 
+	iommu->root_pdev = pci_get_bus_and_slot(iommu->dev->bus->number,
+						PCI_DEVFN(0, 0));
+
 	iommu->cap_ptr = h->cap_ptr;
 	iommu->pci_seg = h->pci_seg;
 	iommu->mmio_phys = h->mmio_phys;
···
 {
 	int i, j;
 	u32 ioc_feature_control;
-	struct pci_dev *pdev = NULL;
+	struct pci_dev *pdev = iommu->root_pdev;
 
 	/* RD890 BIOSes may not have completely reconfigured the iommu */
-	if (!is_rd890_iommu(iommu->dev))
+	if (!is_rd890_iommu(iommu->dev) || !pdev)
 		return;
 
 	/*
 	 * First, we need to ensure that the iommu is enabled. This is
 	 * controlled by a register in the northbridge
 	 */
-	pdev = pci_get_bus_and_slot(iommu->dev->bus->number, PCI_DEVFN(0, 0));
-
-	if (!pdev)
-		return;
 
 	/* Select Northbridge indirect register 0x75 and enable writing */
 	pci_write_config_dword(pdev, 0x60, 0x75 | (1 << 7));
···
 	/* Enable the iommu */
 	if (!(ioc_feature_control & 0x1))
 		pci_write_config_dword(pdev, 0x64, ioc_feature_control | 1);
-
-	pci_dev_put(pdev);
 
 	/* Restore the iommu BAR */
 	pci_write_config_dword(iommu->dev, iommu->cap_ptr + 4,
drivers/iommu/amd_iommu_types.h (+3)
···
 	/* Pointer to PCI device of this IOMMU */
 	struct pci_dev *dev;
 
+	/* Cache pdev to root device for resume quirks */
+	struct pci_dev *root_pdev;
+
 	/* physical address of MMIO space */
 	u64 mmio_phys;
 	/* virtual address of MMIO space */
drivers/leds/Kconfig (+2, -2)
···
 
 config LEDS_ASIC3
 	bool "LED support for the HTC ASIC3"
-	depends on LEDS_CLASS
+	depends on LEDS_CLASS=y
 	depends on MFD_ASIC3
 	default y
 	help
···
 
 config LEDS_RENESAS_TPU
 	bool "LED support for Renesas TPU"
-	depends on LEDS_CLASS && HAVE_CLK && GENERIC_GPIO
+	depends on LEDS_CLASS=y && HAVE_CLK && GENERIC_GPIO
 	help
 	  This option enables build of the LED TPU platform driver,
 	  suitable to drive any TPU channel on newer Renesas SoCs.
···
 #include <net/route.h>
 #include <net/net_namespace.h>
 #include <net/netns/generic.h>
+#include <net/pkt_sched.h>
 #include "bonding.h"
 #include "bond_3ad.h"
 #include "bond_alb.h"
···
 	return next;
 }
 
-#define bond_queue_mapping(skb) (*(u16 *)((skb)->cb))
-
 /**
  * bond_dev_queue_xmit - Prepare skb for xmit.
  *
···
 {
 	skb->dev = slave_dev;
 
-	skb->queue_mapping = bond_queue_mapping(skb);
+	BUILD_BUG_ON(sizeof(skb->queue_mapping) !=
+		     sizeof(qdisc_skb_cb(skb)->bond_queue_mapping));
+	skb->queue_mapping = qdisc_skb_cb(skb)->bond_queue_mapping;
 
 	if (unlikely(netpoll_tx_running(slave_dev)))
 		bond_netpoll_send_skb(bond_get_slave_by_dev(bond, slave_dev), skb);
···
 	/*
 	 * Save the original txq to restore before passing to the driver
 	 */
-	bond_queue_mapping(skb) = skb->queue_mapping;
+	qdisc_skb_cb(skb)->bond_queue_mapping = skb->queue_mapping;
 
 	if (unlikely(txq >= dev->real_num_tx_queues)) {
 		do {
drivers/net/bonding/bond_sysfs.c (+6, -2)
···
 		}
 	}
 
-	pr_info("%s: Unable to set %.*s as primary slave.\n",
-		bond->dev->name, (int)strlen(buf) - 1, buf);
+	strncpy(bond->params.primary, ifname, IFNAMSIZ);
+	bond->params.primary[IFNAMSIZ - 1] = 0;
+
+	pr_info("%s: Recording %s as primary, "
+		"but it has not been enslaved to %s yet.\n",
+		bond->dev->name, ifname, bond->dev->name);
 out:
 	write_unlock_bh(&bond->curr_slave_lock);
 	read_unlock(&bond->lock);
+9-7
drivers/net/can/c_can/c_can.c
···
  *
  * We iterate from priv->tx_echo to priv->tx_next and check if the
  * packet has been transmitted, echo it back to the CAN framework.
- * If we discover a not yet transmitted package, stop looking for more.
+ * If we discover a not yet transmitted packet, stop looking for more.
  */
 static void c_can_do_tx(struct net_device *dev)
 {
···
 	for (/* nix */; (priv->tx_next - priv->tx_echo) > 0; priv->tx_echo++) {
 		msg_obj_no = get_tx_echo_msg_obj(priv);
 		val = c_can_read_reg32(priv, &priv->regs->txrqst1);
-		if (!(val & (1 << msg_obj_no))) {
+		if (!(val & (1 << (msg_obj_no - 1)))) {
 			can_get_echo_skb(dev,
 					msg_obj_no - C_CAN_MSG_OBJ_TX_FIRST);
 			stats->tx_bytes += priv->read_reg(priv,
···
 					& IF_MCONT_DLC_MASK;
 			stats->tx_packets++;
 			c_can_inval_msg_object(dev, 0, msg_obj_no);
+		} else {
+			break;
 		}
 	}
···
 	struct net_device *dev = napi->dev;
 	struct c_can_priv *priv = netdev_priv(dev);
 
-	irqstatus = priv->read_reg(priv, &priv->regs->interrupt);
+	irqstatus = priv->irqstatus;
 	if (!irqstatus)
 		goto end;
···
 
 static irqreturn_t c_can_isr(int irq, void *dev_id)
 {
-	u16 irqstatus;
 	struct net_device *dev = (struct net_device *)dev_id;
 	struct c_can_priv *priv = netdev_priv(dev);
 
-	irqstatus = priv->read_reg(priv, &priv->regs->interrupt);
-	if (!irqstatus)
+	priv->irqstatus = priv->read_reg(priv, &priv->regs->interrupt);
+	if (!priv->irqstatus)
 		return IRQ_NONE;
 
 	/* disable all interrupts and schedule the NAPI */
···
 		goto exit_irq_fail;
 	}
 
+	napi_enable(&priv->napi);
+
 	/* start the c_can controller */
 	c_can_start(dev);
 
-	napi_enable(&priv->napi);
 	netif_start_queue(dev);
 
 	return 0;
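The corrected pending-bit test above accounts for C_CAN message objects being numbered from 1 while TXRQST register bits count from 0. A small sketch of that mapping (register value and numbering are the only assumptions; this is not the driver's API):

```c
#include <assert.h>

/* Return nonzero if transmission is still pending for message
 * object 'msg_obj_no' (numbered from 1), given the TXRQST register
 * value: object N corresponds to register bit N-1. */
static int tx_pending(unsigned int txrqst, unsigned int msg_obj_no)
{
	return !!(txrqst & (1u << (msg_obj_no - 1)));
}
```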
+1
drivers/net/can/c_can/c_can.h
···
 	unsigned int tx_next;
 	unsigned int tx_echo;
 	void *priv;		/* for board-specific data */
+	u16 irqstatus;
 };
 
 struct net_device *alloc_c_can_dev(void);
···
 	return 0;
 }
 
+static void bnx2x_csum_validate(struct sk_buff *skb, union eth_rx_cqe *cqe,
+				struct bnx2x_fastpath *fp)
+{
+	/* Do nothing if no IP/L4 csum validation was done */
+
+	if (cqe->fast_path_cqe.status_flags &
+	    (ETH_FAST_PATH_RX_CQE_IP_XSUM_NO_VALIDATION_FLG |
+	     ETH_FAST_PATH_RX_CQE_L4_XSUM_NO_VALIDATION_FLG))
+		return;
+
+	/* If both IP/L4 validation were done, check if an error was found. */
+
+	if (cqe->fast_path_cqe.type_error_flags &
+	    (ETH_FAST_PATH_RX_CQE_IP_BAD_XSUM_FLG |
+	     ETH_FAST_PATH_RX_CQE_L4_BAD_XSUM_FLG))
+		fp->eth_q_stats.hw_csum_err++;
+	else
+		skb->ip_summed = CHECKSUM_UNNECESSARY;
+}
 
 int bnx2x_rx_int(struct bnx2x_fastpath *fp, int budget)
 {
···
 
 		skb_checksum_none_assert(skb);
 
-		if (bp->dev->features & NETIF_F_RXCSUM) {
+		if (bp->dev->features & NETIF_F_RXCSUM)
+			bnx2x_csum_validate(skb, cqe, fp);
 
-			if (likely(BNX2X_RX_CSUM_OK(cqe)))
-				skb->ip_summed = CHECKSUM_UNNECESSARY;
-			else
-				fp->eth_q_stats.hw_csum_err++;
-		}
 
 		skb_record_rx_queue(skb, fp->rx_queue);
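The new `bnx2x_csum_validate()` distinguishes "hardware skipped validation" from "hardware found an error". A sketch of that decision flow with illustrative flag values (the real `ETH_FAST_PATH_RX_CQE_*` bit definitions differ):

```c
#include <assert.h>

#define NO_VALIDATION_FLG 0x1	/* hypothetical: HW did no csum check */
#define BAD_XSUM_FLG      0x2	/* hypothetical: HW found a csum error */

enum csum_result { CSUM_NONE, CSUM_ERROR, CSUM_UNNECESSARY };

static enum csum_result csum_validate(unsigned int status_flags,
				      unsigned int type_error_flags)
{
	/* Do nothing if no validation was done. */
	if (status_flags & NO_VALIDATION_FLG)
		return CSUM_NONE;

	/* Validation was done: report any error found. */
	if (type_error_flags & BAD_XSUM_FLG)
		return CSUM_ERROR;

	return CSUM_UNNECESSARY;
}
```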
+2-1
drivers/net/ethernet/broadcom/tg3.c
···
 		}
 	}
 
-	if (tg3_flag(tp, 5755_PLUS))
+	if (tg3_flag(tp, 5755_PLUS) ||
+	    GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5906)
 		tg3_flag_set(tp, SHORT_DMA_BUG);
 
 	if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719)
+3-2
drivers/net/ethernet/emulex/benet/be_main.c
···
 
 	copied = make_tx_wrbs(adapter, txq, skb, wrb_cnt, dummy_wrb);
 	if (copied) {
+		int gso_segs = skb_shinfo(skb)->gso_segs;
+
 		/* record the sent skb in the sent_skb table */
 		BUG_ON(txo->sent_skb_list[start]);
 		txo->sent_skb_list[start] = skb;
···
 
 		be_txq_notify(adapter, txq->id, wrb_cnt);
 
-		be_tx_stats_update(txo, wrb_cnt, copied,
-				skb_shinfo(skb)->gso_segs, stopped);
+		be_tx_stats_update(txo, wrb_cnt, copied, gso_segs, stopped);
 	} else {
 		txq->head = start;
 		dev_kfree_skb_any(skb);
+4-2
drivers/net/ethernet/intel/e1000e/ethtool.c
···
 	 * When SoL/IDER sessions are active, autoneg/speed/duplex
 	 * cannot be changed
 	 */
-	if (hw->phy.ops.check_reset_block(hw)) {
+	if (hw->phy.ops.check_reset_block &&
+	    hw->phy.ops.check_reset_block(hw)) {
 		e_err("Cannot change link characteristics when SoL/IDER is active.\n");
 		return -EINVAL;
 	}
···
 	 * PHY loopback cannot be performed if SoL/IDER
 	 * sessions are active
 	 */
-	if (hw->phy.ops.check_reset_block(hw)) {
+	if (hw->phy.ops.check_reset_block &&
+	    hw->phy.ops.check_reset_block(hw)) {
 		e_err("Cannot do PHY loopback test when SoL/IDER is active.\n");
 		*data = 0;
 		goto out;
+1-1
drivers/net/ethernet/intel/e1000e/mac.c
···
 	 * In the case of the phy reset being blocked, we already have a link.
 	 * We do not need to set it up again.
 	 */
-	if (hw->phy.ops.check_reset_block(hw))
+	if (hw->phy.ops.check_reset_block && hw->phy.ops.check_reset_block(hw))
 		return 0;
 
 	/*
+2-2
drivers/net/ethernet/intel/e1000e/netdev.c
···
 		adapter->hw.phy.ms_type = e1000_ms_hw_default;
 	}
 
-	if (hw->phy.ops.check_reset_block(hw))
+	if (hw->phy.ops.check_reset_block && hw->phy.ops.check_reset_block(hw))
 		e_info("PHY reset is blocked due to SOL/IDER session.\n");
 
 	/* Set initial default active device features */
···
 	if (!(adapter->flags & FLAG_HAS_AMT))
 		e1000e_release_hw_control(adapter);
 err_eeprom:
-	if (!hw->phy.ops.check_reset_block(hw))
+	if (hw->phy.ops.check_reset_block && !hw->phy.ops.check_reset_block(hw))
 		e1000_phy_hw_reset(&adapter->hw);
 err_hw_init:
 	kfree(adapter->tx_ring);
+5-3
drivers/net/ethernet/intel/e1000e/phy.c
···
 	s32 ret_val;
 	u32 ctrl;
 
-	ret_val = phy->ops.check_reset_block(hw);
-	if (ret_val)
-		return 0;
+	if (phy->ops.check_reset_block) {
+		ret_val = phy->ops.check_reset_block(hw);
+		if (ret_val)
+			return 0;
+	}
 
 	ret_val = phy->ops.acquire(hw);
 	if (ret_val)
+10-12
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
···
 			      union ixgbe_adv_rx_desc *rx_desc,
 			      struct sk_buff *skb)
 {
+	struct net_device *dev = rx_ring->netdev;
+
 	ixgbe_update_rsc_stats(rx_ring, skb);
 
 	ixgbe_rx_hash(rx_ring, rx_desc, skb);
···
 		ixgbe_ptp_rx_hwtstamp(rx_ring->q_vector, skb);
 #endif
 
-	if (ixgbe_test_staterr(rx_desc, IXGBE_RXD_STAT_VP)) {
+	if ((dev->features & NETIF_F_HW_VLAN_RX) &&
+	    ixgbe_test_staterr(rx_desc, IXGBE_RXD_STAT_VP)) {
 		u16 vid = le16_to_cpu(rx_desc->wb.upper.vlan);
 		__vlan_hwaccel_put_tag(skb, vid);
 	}
 
 	skb_record_rx_queue(skb, rx_ring->queue_index);
 
-	skb->protocol = eth_type_trans(skb, rx_ring->netdev);
+	skb->protocol = eth_type_trans(skb, dev);
 }
 
 static void ixgbe_rx_skb(struct ixgbe_q_vector *q_vector,
···
 
 	if (hw->mac.type == ixgbe_mac_82598EB)
 		netif_set_gso_max_size(adapter->netdev, 32768);
-
-
-	/* Enable VLAN tag insert/strip */
-	adapter->netdev->features |= NETIF_F_HW_VLAN_RX;
 
 	hw->mac.ops.set_vfta(&adapter->hw, 0, 0, true);
 
···
 {
 	struct ixgbe_adapter *adapter = netdev_priv(netdev);
 
-#ifdef CONFIG_DCB
-	if (adapter->flags & IXGBE_FLAG_DCB_ENABLED)
-		features &= ~NETIF_F_HW_VLAN_RX;
-#endif
-
 	/* return error if RXHASH is being enabled when RSS is not supported */
 	if (!(adapter->flags & IXGBE_FLAG_RSS_ENABLED))
 		features &= ~NETIF_F_RXHASH;
···
 	/* Turn off LRO if not RSC capable */
 	if (!(adapter->flags2 & IXGBE_FLAG2_RSC_CAPABLE))
 		features &= ~NETIF_F_LRO;
-
 
 	return features;
 }
···
 		adapter->flags |= IXGBE_FLAG_FDIR_PERFECT_CAPABLE;
 		need_reset = true;
 	}
+
+	if (features & NETIF_F_HW_VLAN_RX)
+		ixgbe_vlan_strip_enable(adapter);
+	else
+		ixgbe_vlan_strip_disable(adapter);
 
 	if (changed & NETIF_F_RXALL)
 		need_reset = true;
+10-5
drivers/net/ethernet/marvell/mv643xx_eth.c
···
 	/*
 	 * Hardware-specific parameters.
 	 */
+#if defined(CONFIG_HAVE_CLK)
 	struct clk *clk;
+#endif
 	unsigned int t_clk;
 };
 
···
 	mp->dev = dev;
 
 	/*
-	 * Get the clk rate, if there is one, otherwise use the default.
+	 * Start with a default rate, and if there is a clock, allow
+	 * it to override the default.
 	 */
+	mp->t_clk = 133000000;
+#if defined(CONFIG_HAVE_CLK)
 	mp->clk = clk_get(&pdev->dev, (pdev->id ? "1" : "0"));
 	if (!IS_ERR(mp->clk)) {
 		clk_prepare_enable(mp->clk);
 		mp->t_clk = clk_get_rate(mp->clk);
-	} else {
-		mp->t_clk = 133000000;
-		printk(KERN_WARNING "Unable to get clock");
 	}
-
+#endif
 	set_params(mp, pd);
 	netif_set_real_num_tx_queues(dev, mp->txq_count);
 	netif_set_real_num_rx_queues(dev, mp->rxq_count);
···
 		phy_detach(mp->phy);
 	cancel_work_sync(&mp->tx_timeout_task);
 
+#if defined(CONFIG_HAVE_CLK)
 	if (!IS_ERR(mp->clk)) {
 		clk_disable_unprepare(mp->clk);
 		clk_put(mp->clk);
 	}
+#endif
+
 	free_netdev(mp->dev);
 
 	platform_set_drvdata(pdev, NULL);
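The mv643xx_eth change above replaces "try the clock, fall back on error" with "set the default first, let a present clock override it", which lets the whole `CONFIG_HAVE_CLK` block compile out cleanly. The pattern in isolation (the rates mirror the driver's 133 MHz default; the function name is a stand-in):

```c
#include <assert.h>

/* Start from a default rate; a usable clock overrides it. */
static unsigned int pick_t_clk(int have_clk, unsigned int clk_rate)
{
	unsigned int t_clk = 133000000;	/* default when no clock exists */

	if (have_clk)
		t_clk = clk_rate;

	return t_clk;
}
```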
+6-4
drivers/net/ethernet/marvell/sky2.c
···
 	struct sky2_port *sky2 = netdev_priv(dev);
 	netdev_features_t changed = dev->features ^ features;
 
-	if (changed & NETIF_F_RXCSUM) {
-		bool on = features & NETIF_F_RXCSUM;
-		sky2_write32(sky2->hw, Q_ADDR(rxqaddr[sky2->port], Q_CSR),
-			     on ? BMU_ENA_RX_CHKSUM : BMU_DIS_RX_CHKSUM);
+	if ((changed & NETIF_F_RXCSUM) &&
+	    !(sky2->hw->flags & SKY2_HW_NEW_LE)) {
+		sky2_write32(sky2->hw,
+			     Q_ADDR(rxqaddr[sky2->port], Q_CSR),
+			     (features & NETIF_F_RXCSUM)
+			     ? BMU_ENA_RX_CHKSUM : BMU_DIS_RX_CHKSUM);
 	}
 
 	if (changed & NETIF_F_RXHASH)
···
 if STMMAC_ETH
 
 config STMMAC_PLATFORM
-	tristate "STMMAC platform bus support"
+	bool "STMMAC Platform bus support"
 	depends on STMMAC_ETH
 	default y
 	---help---
···
 	  If unsure, say N.
 
 config STMMAC_PCI
-	tristate "STMMAC support on PCI bus (EXPERIMENTAL)"
+	bool "STMMAC PCI bus support (EXPERIMENTAL)"
 	depends on STMMAC_ETH && PCI && EXPERIMENTAL
 	---help---
 	  This is to select the Synopsys DWMAC available on PCI devices,
+60-3
drivers/net/ethernet/stmicro/stmmac/stmmac.h
···
 #include <linux/clk.h>
 #include <linux/stmmac.h>
 #include <linux/phy.h>
+#include <linux/pci.h>
 #include "common.h"
 #ifdef CONFIG_STMMAC_TIMER
 #include "stmmac_timer.h"
···
 extern void stmmac_set_ethtool_ops(struct net_device *netdev);
 extern const struct stmmac_desc_ops enh_desc_ops;
 extern const struct stmmac_desc_ops ndesc_ops;
-
 int stmmac_freeze(struct net_device *ndev);
 int stmmac_restore(struct net_device *ndev);
 int stmmac_resume(struct net_device *ndev);
···
 static inline int stmmac_clk_enable(struct stmmac_priv *priv)
 {
 	if (!IS_ERR(priv->stmmac_clk))
-		return clk_enable(priv->stmmac_clk);
+		return clk_prepare_enable(priv->stmmac_clk);
 
 	return 0;
 }
···
 	if (IS_ERR(priv->stmmac_clk))
 		return;
 
-	clk_disable(priv->stmmac_clk);
+	clk_disable_unprepare(priv->stmmac_clk);
 }
 static inline int stmmac_clk_get(struct stmmac_priv *priv)
 {
···
 	return 0;
 }
 #endif /* CONFIG_HAVE_CLK */
+
+
+#ifdef CONFIG_STMMAC_PLATFORM
+extern struct platform_driver stmmac_pltfr_driver;
+static inline int stmmac_register_platform(void)
+{
+	int err;
+
+	err = platform_driver_register(&stmmac_pltfr_driver);
+	if (err)
+		pr_err("stmmac: failed to register the platform driver\n");
+
+	return err;
+}
+static inline void stmmac_unregister_platform(void)
+{
+	platform_driver_unregister(&stmmac_pltfr_driver);
+}
+#else
+static inline int stmmac_register_platform(void)
+{
+	pr_debug("stmmac: do not register the platf driver\n");
+
+	return -EINVAL;
+}
+static inline void stmmac_unregister_platform(void)
+{
+}
+#endif /* CONFIG_STMMAC_PLATFORM */
+
+#ifdef CONFIG_STMMAC_PCI
+extern struct pci_driver stmmac_pci_driver;
+static inline int stmmac_register_pci(void)
+{
+	int err;
+
+	err = pci_register_driver(&stmmac_pci_driver);
+	if (err)
+		pr_err("stmmac: failed to register the PCI driver\n");
+
+	return err;
+}
+static inline void stmmac_unregister_pci(void)
+{
+	pci_unregister_driver(&stmmac_pci_driver);
+}
+#else
+static inline int stmmac_register_pci(void)
+{
+	pr_debug("stmmac: do not register the PCI driver\n");
+
+	return -EINVAL;
+}
+static inline void stmmac_unregister_pci(void)
+{
+}
+#endif /* CONFIG_STMMAC_PCI */
+33-2
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···
 
 /**
  * stmmac_selec_desc_mode
- * @dev : device pointer
- * Description: select the Enhanced/Alternate or Normal descriptors */
+ * @priv : private structure
+ * Description: select the Enhanced/Alternate or Normal descriptors
+ */
 static void stmmac_selec_desc_mode(struct stmmac_priv *priv)
 {
 	if (priv->plat->enh_desc) {
···
 /**
  * stmmac_dvr_probe
  * @device: device pointer
+ * @plat_dat: platform data pointer
+ * @addr: iobase memory address
  * Description: this is the main probe function used to
  * call the alloc_etherdev, allocate the priv structure.
  */
···
 	return stmmac_open(ndev);
 }
 #endif /* CONFIG_PM */
+
+/* Driver can be configured w/ and w/o both the PCI and Platf drivers
+ * depending on the configuration selected.
+ */
+static int __init stmmac_init(void)
+{
+	int err_plt = 0;
+	int err_pci = 0;
+
+	err_plt = stmmac_register_platform();
+	err_pci = stmmac_register_pci();
+
+	if ((err_pci) && (err_plt)) {
+		pr_err("stmmac: driver registration failed\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void __exit stmmac_exit(void)
+{
+	stmmac_unregister_platform();
+	stmmac_unregister_pci();
+}
+
+module_init(stmmac_init);
+module_exit(stmmac_exit);
 
 #ifndef MODULE
 static int __init stmmac_cmdline_opt(char *str)
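`stmmac_init()` above succeeds as long as at least one of the two backends registers; a compiled-out backend's stub simply returns `-EINVAL`. The policy in isolation (the error value is a stand-in for `-EINVAL`):

```c
#include <assert.h>

/* Overall registration fails only if both backends failed. */
static int registration_result(int err_plt, int err_pci)
{
	if (err_plt && err_pci)
		return -22;	/* stand-in for -EINVAL */

	return 0;
}
```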
+1-28
drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
···
 
 MODULE_DEVICE_TABLE(pci, stmmac_id_table);
 
-static struct pci_driver stmmac_driver = {
+struct pci_driver stmmac_pci_driver = {
 	.name = STMMAC_RESOURCE_NAME,
 	.id_table = stmmac_id_table,
 	.probe = stmmac_pci_probe,
···
 	.resume = stmmac_pci_resume,
 #endif
 };
-
-/**
- * stmmac_init_module - Entry point for the driver
- * Description: This function is the entry point for the driver.
- */
-static int __init stmmac_init_module(void)
-{
-	int ret;
-
-	ret = pci_register_driver(&stmmac_driver);
-	if (ret < 0)
-		pr_err("%s: ERROR: driver registration failed\n", __func__);
-
-	return ret;
-}
-
-/**
- * stmmac_cleanup_module - Cleanup routine for the driver
- * Description: This function is the cleanup routine for the driver.
- */
-static void __exit stmmac_cleanup_module(void)
-{
-	pci_unregister_driver(&stmmac_driver);
-}
-
-module_init(stmmac_init_module);
-module_exit(stmmac_cleanup_module);
 
 MODULE_DESCRIPTION("STMMAC 10/100/1000 Ethernet PCI driver");
 MODULE_AUTHOR("Rayagond Kokatanur <rayagond.kokatanur@vayavyalabs.com>");
···
 	depends on TILE
 	default y
 	select CRC32
+	select TILE_GXIO_MPIPE if TILEGX
+	select HIGH_RES_TIMERS if TILEGX
 	---help---
 	  This is a standard Linux network device driver for the
 	  on-chip Tilera Gigabit Ethernet and XAUI interfaces.
···11+/*22+ * Copyright 2012 Tilera Corporation. All Rights Reserved.33+ *44+ * This program is free software; you can redistribute it and/or55+ * modify it under the terms of the GNU General Public License66+ * as published by the Free Software Foundation, version 2.77+ *88+ * This program is distributed in the hope that it will be useful, but99+ * WITHOUT ANY WARRANTY; without even the implied warranty of1010+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or1111+ * NON INFRINGEMENT. See the GNU General Public License for1212+ * more details.1313+ */1414+1515+#include <linux/module.h>1616+#include <linux/init.h>1717+#include <linux/moduleparam.h>1818+#include <linux/sched.h>1919+#include <linux/kernel.h> /* printk() */2020+#include <linux/slab.h> /* kmalloc() */2121+#include <linux/errno.h> /* error codes */2222+#include <linux/types.h> /* size_t */2323+#include <linux/interrupt.h>2424+#include <linux/in.h>2525+#include <linux/irq.h>2626+#include <linux/netdevice.h> /* struct device, and other headers */2727+#include <linux/etherdevice.h> /* eth_type_trans */2828+#include <linux/skbuff.h>2929+#include <linux/ioctl.h>3030+#include <linux/cdev.h>3131+#include <linux/hugetlb.h>3232+#include <linux/in6.h>3333+#include <linux/timer.h>3434+#include <linux/hrtimer.h>3535+#include <linux/ktime.h>3636+#include <linux/io.h>3737+#include <linux/ctype.h>3838+#include <linux/ip.h>3939+#include <linux/tcp.h>4040+4141+#include <asm/checksum.h>4242+#include <asm/homecache.h>4343+#include <gxio/mpipe.h>4444+#include <arch/sim.h>4545+4646+/* Default transmit lockup timeout period, in jiffies. */4747+#define TILE_NET_TIMEOUT (5 * HZ)4848+4949+/* The maximum number of distinct channels (idesc.channel is 5 bits). */5050+#define TILE_NET_CHANNELS 325151+5252+/* Maximum number of idescs to handle per "poll". */5353+#define TILE_NET_BATCH 1285454+5555+/* Maximum number of packets to handle per "poll". 
*/5656+#define TILE_NET_WEIGHT 645757+5858+/* Number of entries in each iqueue. */5959+#define IQUEUE_ENTRIES 5126060+6161+/* Number of entries in each equeue. */6262+#define EQUEUE_ENTRIES 20486363+6464+/* Total header bytes per equeue slot. Must be big enough for 2 bytes6565+ * of NET_IP_ALIGN alignment, plus 14 bytes (?) of L2 header, plus up to6666+ * 60 bytes of actual TCP header. We round up to align to cache lines.6767+ */6868+#define HEADER_BYTES 1286969+7070+/* Maximum completions per cpu per device (must be a power of two).7171+ * ISSUE: What is the right number here? If this is too small, then7272+ * egress might block waiting for free space in a completions array.7373+ * ISSUE: At the least, allocate these only for initialized echannels.7474+ */7575+#define TILE_NET_MAX_COMPS 647676+7777+#define MAX_FRAGS (MAX_SKB_FRAGS + 1)7878+7979+/* Size of completions data to allocate.8080+ * ISSUE: Probably more than needed since we don't use all the channels.8181+ */8282+#define COMPS_SIZE (TILE_NET_CHANNELS * sizeof(struct tile_net_comps))8383+8484+/* Size of NotifRing data to allocate. */8585+#define NOTIF_RING_SIZE (IQUEUE_ENTRIES * sizeof(gxio_mpipe_idesc_t))8686+8787+/* Timeout to wake the per-device TX timer after we stop the queue.8888+ * We don't want the timeout too short (adds overhead, and might end8989+ * up causing stop/wake/stop/wake cycles) or too long (affects performance).9090+ * For the 10 Gb NIC, 30 usec means roughly 30+ 1500-byte packets.9191+ */9292+#define TX_TIMER_DELAY_USEC 309393+9494+/* Timeout to wake the per-cpu egress timer to free completions. */9595+#define EGRESS_TIMER_DELAY_USEC 10009696+9797+MODULE_AUTHOR("Tilera Corporation");9898+MODULE_LICENSE("GPL");9999+100100+/* A "packet fragment" (a chunk of memory). */101101+struct frag {102102+ void *buf;103103+ size_t length;104104+};105105+106106+/* A single completion. */107107+struct tile_net_comp {108108+ /* The "complete_count" when the completion will be complete. 
*/109109+ s64 when;110110+ /* The buffer to be freed when the completion is complete. */111111+ struct sk_buff *skb;112112+};113113+114114+/* The completions for a given cpu and echannel. */115115+struct tile_net_comps {116116+ /* The completions. */117117+ struct tile_net_comp comp_queue[TILE_NET_MAX_COMPS];118118+ /* The number of completions used. */119119+ unsigned long comp_next;120120+ /* The number of completions freed. */121121+ unsigned long comp_last;122122+};123123+124124+/* The transmit wake timer for a given cpu and echannel. */125125+struct tile_net_tx_wake {126126+ struct hrtimer timer;127127+ struct net_device *dev;128128+};129129+130130+/* Info for a specific cpu. */131131+struct tile_net_info {132132+ /* The NAPI struct. */133133+ struct napi_struct napi;134134+ /* Packet queue. */135135+ gxio_mpipe_iqueue_t iqueue;136136+ /* Our cpu. */137137+ int my_cpu;138138+ /* True if iqueue is valid. */139139+ bool has_iqueue;140140+ /* NAPI flags. */141141+ bool napi_added;142142+ bool napi_enabled;143143+ /* Number of small sk_buffs which must still be provided. */144144+ unsigned int num_needed_small_buffers;145145+ /* Number of large sk_buffs which must still be provided. */146146+ unsigned int num_needed_large_buffers;147147+ /* A timer for handling egress completions. */148148+ struct hrtimer egress_timer;149149+ /* True if "egress_timer" is scheduled. */150150+ bool egress_timer_scheduled;151151+ /* Comps for each egress channel. */152152+ struct tile_net_comps *comps_for_echannel[TILE_NET_CHANNELS];153153+ /* Transmit wake timer for each egress channel. */154154+ struct tile_net_tx_wake tx_wake[TILE_NET_CHANNELS];155155+};156156+157157+/* Info for egress on a particular egress channel. */158158+struct tile_net_egress {159159+ /* The "equeue". */160160+ gxio_mpipe_equeue_t *equeue;161161+ /* The headers for TSO. */162162+ unsigned char *headers;163163+};164164+165165+/* Info for a specific device. 
*/166166+struct tile_net_priv {167167+ /* Our network device. */168168+ struct net_device *dev;169169+ /* The primary link. */170170+ gxio_mpipe_link_t link;171171+ /* The primary channel, if open, else -1. */172172+ int channel;173173+ /* The "loopify" egress link, if needed. */174174+ gxio_mpipe_link_t loopify_link;175175+ /* The "loopify" egress channel, if open, else -1. */176176+ int loopify_channel;177177+ /* The egress channel (channel or loopify_channel). */178178+ int echannel;179179+ /* Total stats. */180180+ struct net_device_stats stats;181181+};182182+183183+/* Egress info, indexed by "priv->echannel" (lazily created as needed). */184184+static struct tile_net_egress egress_for_echannel[TILE_NET_CHANNELS];185185+186186+/* Devices currently associated with each channel.187187+ * NOTE: The array entry can become NULL after ifconfig down, but188188+ * we do not free the underlying net_device structures, so it is189189+ * safe to use a pointer after reading it from this array.190190+ */191191+static struct net_device *tile_net_devs_for_channel[TILE_NET_CHANNELS];192192+193193+/* A mutex for "tile_net_devs_for_channel". */194194+static DEFINE_MUTEX(tile_net_devs_for_channel_mutex);195195+196196+/* The per-cpu info. */197197+static DEFINE_PER_CPU(struct tile_net_info, per_cpu_info);198198+199199+/* The "context" for all devices. */200200+static gxio_mpipe_context_t context;201201+202202+/* Buffer sizes and mpipe enum codes for buffer stacks.203203+ * See arch/tile/include/gxio/mpipe.h for the set of possible values.204204+ */205205+#define BUFFER_SIZE_SMALL_ENUM GXIO_MPIPE_BUFFER_SIZE_128206206+#define BUFFER_SIZE_SMALL 128207207+#define BUFFER_SIZE_LARGE_ENUM GXIO_MPIPE_BUFFER_SIZE_1664208208+#define BUFFER_SIZE_LARGE 1664209209+210210+/* The small/large "buffer stacks". */211211+static int small_buffer_stack = -1;212212+static int large_buffer_stack = -1;213213+214214+/* Amount of memory allocated for each buffer stack. 
*/215215+static size_t buffer_stack_size;216216+217217+/* The actual memory allocated for the buffer stacks. */218218+static void *small_buffer_stack_va;219219+static void *large_buffer_stack_va;220220+221221+/* The buckets. */222222+static int first_bucket = -1;223223+static int num_buckets = 1;224224+225225+/* The ingress irq. */226226+static int ingress_irq = -1;227227+228228+/* Text value of tile_net.cpus if passed as a module parameter. */229229+static char *network_cpus_string;230230+231231+/* The actual cpus in "network_cpus". */232232+static struct cpumask network_cpus_map;233233+234234+/* If "loopify=LINK" was specified, this is "LINK". */235235+static char *loopify_link_name;236236+237237+/* If "tile_net.custom" was specified, this is non-NULL. */238238+static char *custom_str;239239+240240+/* The "tile_net.cpus" argument specifies the cpus that are dedicated241241+ * to handle ingress packets.242242+ *243243+ * The parameter should be in the form "tile_net.cpus=m-n[,x-y]", where244244+ * m, n, x, y are integer numbers that represent the cpus that can be245245+ * neither a dedicated cpu nor a dataplane cpu.246246+ */247247+static bool network_cpus_init(void)248248+{249249+ char buf[1024];250250+ int rc;251251+252252+ if (network_cpus_string == NULL)253253+ return false;254254+255255+ rc = cpulist_parse_crop(network_cpus_string, &network_cpus_map);256256+ if (rc != 0) {257257+ pr_warn("tile_net.cpus=%s: malformed cpu list\n",258258+ network_cpus_string);259259+ return false;260260+ }261261+262262+ /* Remove dedicated cpus. 
*/263263+ cpumask_and(&network_cpus_map, &network_cpus_map, cpu_possible_mask);264264+265265+ if (cpumask_empty(&network_cpus_map)) {266266+ pr_warn("Ignoring empty tile_net.cpus='%s'.\n",267267+ network_cpus_string);268268+ return false;269269+ }270270+271271+ cpulist_scnprintf(buf, sizeof(buf), &network_cpus_map);272272+ pr_info("Linux network CPUs: %s\n", buf);273273+ return true;274274+}275275+276276+module_param_named(cpus, network_cpus_string, charp, 0444);277277+MODULE_PARM_DESC(cpus, "cpulist of cores that handle network interrupts");278278+279279+/* The "tile_net.loopify=LINK" argument causes the named device to280280+ * actually use "loop0" for ingress, and "loop1" for egress. This281281+ * allows an app to sit between the actual link and linux, passing282282+ * (some) packets along to linux, and forwarding (some) packets sent283283+ * out by linux.284284+ */285285+module_param_named(loopify, loopify_link_name, charp, 0444);286286+MODULE_PARM_DESC(loopify, "name the device to use loop0/1 for ingress/egress");287287+288288+/* The "tile_net.custom" argument causes us to ignore the "conventional"289289+ * classifier metadata, in particular, the "l2_offset".290290+ */291291+module_param_named(custom, custom_str, charp, 0444);292292+MODULE_PARM_DESC(custom, "indicates a (heavily) customized classifier");293293+294294+/* Atomically update a statistics field.295295+ * Note that on TILE-Gx, this operation is fire-and-forget on the296296+ * issuing core (single-cycle dispatch) and takes only a few cycles297297+ * longer than a regular store when the request reaches the home cache.298298+ * No expensive bus management overhead is required.299299+ */300300+static void tile_net_stats_add(unsigned long value, unsigned long *field)301301+{302302+ BUILD_BUG_ON(sizeof(atomic_long_t) != sizeof(unsigned long));303303+ atomic_long_add(value, (atomic_long_t *)field);304304+}305305+306306+/* Allocate and push a buffer. 
*/307307+static bool tile_net_provide_buffer(bool small)308308+{309309+ int stack = small ? small_buffer_stack : large_buffer_stack;310310+ const unsigned long buffer_alignment = 128;311311+ struct sk_buff *skb;312312+ int len;313313+314314+ len = sizeof(struct sk_buff **) + buffer_alignment;315315+ len += (small ? BUFFER_SIZE_SMALL : BUFFER_SIZE_LARGE);316316+ skb = dev_alloc_skb(len);317317+ if (skb == NULL)318318+ return false;319319+320320+ /* Make room for a back-pointer to 'skb' and guarantee alignment. */321321+ skb_reserve(skb, sizeof(struct sk_buff **));322322+ skb_reserve(skb, -(long)skb->data & (buffer_alignment - 1));323323+324324+ /* Save a back-pointer to 'skb'. */325325+ *(struct sk_buff **)(skb->data - sizeof(struct sk_buff **)) = skb;326326+327327+ /* Make sure "skb" and the back-pointer have been flushed. */328328+ wmb();329329+330330+ gxio_mpipe_push_buffer(&context, stack,331331+ (void *)va_to_tile_io_addr(skb->data));332332+333333+ return true;334334+}335335+336336+/* Convert a raw mpipe buffer to its matching skb pointer. */337337+static struct sk_buff *mpipe_buf_to_skb(void *va)338338+{339339+ /* Acquire the associated "skb". */340340+ struct sk_buff **skb_ptr = va - sizeof(*skb_ptr);341341+ struct sk_buff *skb = *skb_ptr;342342+343343+ /* Paranoia. */344344+ if (skb->data != va) {345345+ /* Panic here since there's a reasonable chance346346+ * that corrupt buffers means generic memory347347+ * corruption, with unpredictable system effects.348348+ */349349+ panic("Corrupt linux buffer! va=%p, skb=%p, skb->data=%p",350350+ va, skb, skb->data);351351+ }352352+353353+ return skb;354354+}355355+356356+static void tile_net_pop_all_buffers(int stack)357357+{358358+ for (;;) {359359+ tile_io_addr_t addr =360360+ (tile_io_addr_t)gxio_mpipe_pop_buffer(&context, stack);361361+ if (addr == 0)362362+ break;363363+ dev_kfree_skb_irq(mpipe_buf_to_skb(tile_io_addr_to_va(addr)));364364+ }365365+}366366+367367+/* Provide linux buffers to mPIPE. 
*/368368+static void tile_net_provide_needed_buffers(void)369369+{370370+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);371371+372372+ while (info->num_needed_small_buffers != 0) {373373+ if (!tile_net_provide_buffer(true))374374+ goto oops;375375+ info->num_needed_small_buffers--;376376+ }377377+378378+ while (info->num_needed_large_buffers != 0) {379379+ if (!tile_net_provide_buffer(false))380380+ goto oops;381381+ info->num_needed_large_buffers--;382382+ }383383+384384+ return;385385+386386+oops:387387+ /* Add a description to the page allocation failure dump. */388388+ pr_notice("Tile %d still needs some buffers\n", info->my_cpu);389389+}390390+391391+static inline bool filter_packet(struct net_device *dev, void *buf)392392+{393393+ /* Filter packets received before we're up. */394394+ if (dev == NULL || !(dev->flags & IFF_UP))395395+ return true;396396+397397+ /* Filter out packets that aren't for us. */398398+ if (!(dev->flags & IFF_PROMISC) &&399399+ !is_multicast_ether_addr(buf) &&400400+ compare_ether_addr(dev->dev_addr, buf) != 0)401401+ return true;402402+403403+ return false;404404+}405405+406406+static void tile_net_receive_skb(struct net_device *dev, struct sk_buff *skb,407407+ gxio_mpipe_idesc_t *idesc, unsigned long len)408408+{409409+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);410410+ struct tile_net_priv *priv = netdev_priv(dev);411411+412412+ /* Encode the actual packet length. */413413+ skb_put(skb, len);414414+415415+ skb->protocol = eth_type_trans(skb, dev);416416+417417+ /* Acknowledge "good" hardware checksums. */418418+ if (idesc->cs && idesc->csum_seed_val == 0xFFFF)419419+ skb->ip_summed = CHECKSUM_UNNECESSARY;420420+421421+ netif_receive_skb(skb);422422+423423+ /* Update stats. */424424+ tile_net_stats_add(1, &priv->stats.rx_packets);425425+ tile_net_stats_add(len, &priv->stats.rx_bytes);426426+427427+ /* Need a new buffer. 
 */
	if (idesc->size == BUFFER_SIZE_SMALL_ENUM)
		info->num_needed_small_buffers++;
	else
		info->num_needed_large_buffers++;
}

/* Handle a packet.  Return true if "processed", false if "filtered". */
static bool tile_net_handle_packet(gxio_mpipe_idesc_t *idesc)
{
	struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
	struct net_device *dev = tile_net_devs_for_channel[idesc->channel];
	uint8_t l2_offset;
	void *va;
	void *buf;
	unsigned long len;
	bool filter;

	/* Drop packets for which no buffer was available.
	 * NOTE: This happens under heavy load.
	 */
	if (idesc->be) {
		struct tile_net_priv *priv = netdev_priv(dev);
		tile_net_stats_add(1, &priv->stats.rx_dropped);
		gxio_mpipe_iqueue_consume(&info->iqueue, idesc);
		if (net_ratelimit())
			pr_info("Dropping packet (insufficient buffers).\n");
		return false;
	}

	/* Get the "l2_offset", if allowed. */
	l2_offset = custom_str ? 0 : gxio_mpipe_idesc_get_l2_offset(idesc);

	/* Get the raw buffer VA (includes "headroom"). */
	va = tile_io_addr_to_va((unsigned long)(long)idesc->va);

	/* Get the actual packet start/length. */
	buf = va + l2_offset;
	len = idesc->l2_size - l2_offset;

	/* Point "va" at the raw buffer. */
	va -= NET_IP_ALIGN;

	filter = filter_packet(dev, buf);
	if (filter) {
		gxio_mpipe_iqueue_drop(&info->iqueue, idesc);
	} else {
		struct sk_buff *skb = mpipe_buf_to_skb(va);

		/* Skip headroom, and any custom header.
 */
		skb_reserve(skb, NET_IP_ALIGN + l2_offset);

		tile_net_receive_skb(dev, skb, idesc, len);
	}

	gxio_mpipe_iqueue_consume(&info->iqueue, idesc);
	return !filter;
}

/* Handle some packets for the current CPU.
 *
 * This function handles up to TILE_NET_BATCH idescs per call.
 *
 * ISSUE: Since we do not provide new buffers until this function is
 * complete, we must initially provide enough buffers for each network
 * cpu to fill its iqueue and also its batched idescs.
 *
 * ISSUE: The "rotting packet" race condition occurs if a packet
 * arrives after the queue appears to be empty, and before the
 * hypervisor interrupt is re-enabled.
 */
static int tile_net_poll(struct napi_struct *napi, int budget)
{
	struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
	unsigned int work = 0;
	gxio_mpipe_idesc_t *idesc;
	int i, n;

	/* Process packets. */
	while ((n = gxio_mpipe_iqueue_try_peek(&info->iqueue, &idesc)) > 0) {
		for (i = 0; i < n; i++) {
			if (i == TILE_NET_BATCH)
				goto done;
			if (tile_net_handle_packet(idesc + i)) {
				if (++work >= budget)
					goto done;
			}
		}
	}

	/* There are no packets left. */
	napi_complete(&info->napi);

	/* Re-enable hypervisor interrupts. */
	gxio_mpipe_enable_notif_ring_interrupt(&context, info->iqueue.ring);

	/* HACK: Avoid the "rotting packet" problem. */
	if (gxio_mpipe_iqueue_try_peek(&info->iqueue, &idesc) > 0)
		napi_schedule(&info->napi);

	/* ISSUE: Handle completions? */

done:
	tile_net_provide_needed_buffers();

	return work;
}

/* Handle an ingress interrupt on the current cpu.
 */
static irqreturn_t tile_net_handle_ingress_irq(int irq, void *unused)
{
	struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
	napi_schedule(&info->napi);
	return IRQ_HANDLED;
}

/* Free some completions.  This must be called with interrupts blocked. */
static int tile_net_free_comps(gxio_mpipe_equeue_t *equeue,
			       struct tile_net_comps *comps,
			       int limit, bool force_update)
{
	int n = 0;
	while (comps->comp_last < comps->comp_next) {
		unsigned int cid = comps->comp_last % TILE_NET_MAX_COMPS;
		struct tile_net_comp *comp = &comps->comp_queue[cid];
		if (!gxio_mpipe_equeue_is_complete(equeue, comp->when,
						   force_update || n == 0))
			break;
		dev_kfree_skb_irq(comp->skb);
		comps->comp_last++;
		if (++n == limit)
			break;
	}
	return n;
}

/* Add a completion.  This must be called with interrupts blocked.
 * tile_net_equeue_try_reserve() will have ensured a free completion entry.
 */
static void add_comp(gxio_mpipe_equeue_t *equeue,
		     struct tile_net_comps *comps,
		     uint64_t when, struct sk_buff *skb)
{
	int cid = comps->comp_next % TILE_NET_MAX_COMPS;
	comps->comp_queue[cid].when = when;
	comps->comp_queue[cid].skb = skb;
	comps->comp_next++;
}

static void tile_net_schedule_tx_wake_timer(struct net_device *dev)
{
	struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
	struct tile_net_priv *priv = netdev_priv(dev);

	hrtimer_start(&info->tx_wake[priv->echannel].timer,
		      ktime_set(0, TX_TIMER_DELAY_USEC * 1000UL),
		      HRTIMER_MODE_REL_PINNED);
}

static enum hrtimer_restart tile_net_handle_tx_wake_timer(struct hrtimer *t)
{
	struct tile_net_tx_wake *tx_wake =
		container_of(t, struct tile_net_tx_wake, timer);
	netif_wake_subqueue(tx_wake->dev, smp_processor_id());
	return HRTIMER_NORESTART;
}

/* Make sure the egress timer is scheduled. */
static void tile_net_schedule_egress_timer(void)
{
	struct tile_net_info *info = &__get_cpu_var(per_cpu_info);

	if (!info->egress_timer_scheduled) {
		hrtimer_start(&info->egress_timer,
			      ktime_set(0, EGRESS_TIMER_DELAY_USEC * 1000UL),
			      HRTIMER_MODE_REL_PINNED);
		info->egress_timer_scheduled = true;
	}
}

/* The "function" for "info->egress_timer".
 *
 * This timer will reschedule itself as long as there are any pending
 * completions expected for this tile.
 */
static enum hrtimer_restart tile_net_handle_egress_timer(struct hrtimer *t)
{
	struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
	unsigned long irqflags;
	bool pending = false;
	int i;

	local_irq_save(irqflags);

	/* The timer is no longer scheduled. */
	info->egress_timer_scheduled = false;

	/* Free all possible comps for this tile. */
	for (i = 0; i < TILE_NET_CHANNELS; i++) {
		struct tile_net_egress *egress = &egress_for_echannel[i];
		struct tile_net_comps *comps = info->comps_for_echannel[i];
		if (comps->comp_last >= comps->comp_next)
			continue;
		tile_net_free_comps(egress->equeue, comps, -1, true);
		pending = pending || (comps->comp_last < comps->comp_next);
	}

	/* Reschedule timer if needed. */
	if (pending)
		tile_net_schedule_egress_timer();

	local_irq_restore(irqflags);

	return HRTIMER_NORESTART;
}

/* Helper function for "tile_net_update()".
 * "dev" (i.e. arg) is the device being brought up or down,
 * or NULL if all devices are now down.
 */
static void tile_net_update_cpu(void *arg)
{
	struct tile_net_info *info = &__get_cpu_var(per_cpu_info);
	struct net_device *dev = arg;

	if (!info->has_iqueue)
		return;

	if (dev != NULL) {
		if (!info->napi_added) {
			netif_napi_add(dev, &info->napi,
				       tile_net_poll, TILE_NET_WEIGHT);
			info->napi_added = true;
		}
		if (!info->napi_enabled) {
			napi_enable(&info->napi);
			info->napi_enabled = true;
		}
		enable_percpu_irq(ingress_irq, 0);
	} else {
		disable_percpu_irq(ingress_irq);
		if (info->napi_enabled) {
			napi_disable(&info->napi);
			info->napi_enabled = false;
		}
		/* FIXME: Drain the iqueue. */
	}
}

/* Helper function for tile_net_open() and tile_net_stop().
 * Always called under tile_net_devs_for_channel_mutex.
 */
static int tile_net_update(struct net_device *dev)
{
	static gxio_mpipe_rules_t rules;  /* too big to fit on the stack */
	bool saw_channel = false;
	int channel;
	int rc;
	int cpu;

	gxio_mpipe_rules_init(&rules, &context);

	for (channel = 0; channel < TILE_NET_CHANNELS; channel++) {
		if (tile_net_devs_for_channel[channel] == NULL)
			continue;
		if (!saw_channel) {
			saw_channel = true;
			gxio_mpipe_rules_begin(&rules, first_bucket,
					       num_buckets, NULL);
			gxio_mpipe_rules_set_headroom(&rules, NET_IP_ALIGN);
		}
		gxio_mpipe_rules_add_channel(&rules, channel);
	}

	/* NOTE: This can fail if there is no classifier.
	 * ISSUE: Can anything else cause it to fail?
	 */
	rc = gxio_mpipe_rules_commit(&rules);
	if (rc != 0) {
		netdev_warn(dev, "gxio_mpipe_rules_commit failed: %d\n", rc);
		return -EIO;
	}

	/* Update all cpus, sequentially (to protect "netif_napi_add()"). */
	for_each_online_cpu(cpu)
		smp_call_function_single(cpu, tile_net_update_cpu,
					 (saw_channel ? dev : NULL), 1);

	/* HACK: Allow packets to flow in the simulator. */
	if (saw_channel)
		sim_enable_mpipe_links(0, -1);

	return 0;
}

/* Allocate and initialize mpipe buffer stacks, and register them in
 * the mPIPE TLBs, for both small and large packet sizes.
 * This routine supports tile_net_init_mpipe(), below.
 */
static int init_buffer_stacks(struct net_device *dev, int num_buffers)
{
	pte_t hash_pte = pte_set_home((pte_t) { 0 }, PAGE_HOME_HASH);
	int rc;

	/* Compute stack bytes; we round up to 64KB and then use
	 * alloc_pages() so we get the required 64KB alignment as well.
	 */
	buffer_stack_size =
		ALIGN(gxio_mpipe_calc_buffer_stack_bytes(num_buffers),
		      64 * 1024);

	/* Allocate two buffer stack indices. */
	rc = gxio_mpipe_alloc_buffer_stacks(&context, 2, 0, 0);
	if (rc < 0) {
		netdev_err(dev, "gxio_mpipe_alloc_buffer_stacks failed: %d\n",
			   rc);
		return rc;
	}
	small_buffer_stack = rc;
	large_buffer_stack = rc + 1;

	/* Allocate the small memory stack.
 */
	small_buffer_stack_va =
		alloc_pages_exact(buffer_stack_size, GFP_KERNEL);
	if (small_buffer_stack_va == NULL) {
		netdev_err(dev,
			   "Could not alloc %zd bytes for buffer stacks\n",
			   buffer_stack_size);
		return -ENOMEM;
	}
	rc = gxio_mpipe_init_buffer_stack(&context, small_buffer_stack,
					  BUFFER_SIZE_SMALL_ENUM,
					  small_buffer_stack_va,
					  buffer_stack_size, 0);
	if (rc != 0) {
		netdev_err(dev, "gxio_mpipe_init_buffer_stack: %d\n", rc);
		return rc;
	}
	rc = gxio_mpipe_register_client_memory(&context, small_buffer_stack,
					       hash_pte, 0);
	if (rc != 0) {
		netdev_err(dev,
			   "gxio_mpipe_register_buffer_memory failed: %d\n",
			   rc);
		return rc;
	}

	/* Allocate the large buffer stack. */
	large_buffer_stack_va =
		alloc_pages_exact(buffer_stack_size, GFP_KERNEL);
	if (large_buffer_stack_va == NULL) {
		netdev_err(dev,
			   "Could not alloc %zd bytes for buffer stacks\n",
			   buffer_stack_size);
		return -ENOMEM;
	}
	rc = gxio_mpipe_init_buffer_stack(&context, large_buffer_stack,
					  BUFFER_SIZE_LARGE_ENUM,
					  large_buffer_stack_va,
					  buffer_stack_size, 0);
	if (rc != 0) {
		netdev_err(dev, "gxio_mpipe_init_buffer_stack failed: %d\n",
			   rc);
		return rc;
	}
	rc = gxio_mpipe_register_client_memory(&context, large_buffer_stack,
					       hash_pte, 0);
	if (rc != 0) {
		netdev_err(dev,
			   "gxio_mpipe_register_buffer_memory failed: %d\n",
			   rc);
		return rc;
	}

	return 0;
}

/* Allocate per-cpu resources (memory for completions and idescs).
 * This routine supports tile_net_init_mpipe(), below.
 */
static int alloc_percpu_mpipe_resources(struct net_device *dev,
					int cpu, int ring)
{
	struct tile_net_info *info = &per_cpu(per_cpu_info, cpu);
	int order, i, rc;
	struct page *page;
	void *addr;

	/* Allocate the "comps". */
	order = get_order(COMPS_SIZE);
	page = homecache_alloc_pages(GFP_KERNEL, order, cpu);
	if (page == NULL) {
		netdev_err(dev, "Failed to alloc %zd bytes comps memory\n",
			   COMPS_SIZE);
		return -ENOMEM;
	}
	addr = pfn_to_kaddr(page_to_pfn(page));
	memset(addr, 0, COMPS_SIZE);
	for (i = 0; i < TILE_NET_CHANNELS; i++)
		info->comps_for_echannel[i] =
			addr + i * sizeof(struct tile_net_comps);

	/* If this is a network cpu, create an iqueue. */
	if (cpu_isset(cpu, network_cpus_map)) {
		order = get_order(NOTIF_RING_SIZE);
		page = homecache_alloc_pages(GFP_KERNEL, order, cpu);
		if (page == NULL) {
			netdev_err(dev,
				   "Failed to alloc %zd bytes iqueue memory\n",
				   NOTIF_RING_SIZE);
			return -ENOMEM;
		}
		addr = pfn_to_kaddr(page_to_pfn(page));
		rc = gxio_mpipe_iqueue_init(&info->iqueue, &context, ring++,
					    addr, NOTIF_RING_SIZE, 0);
		if (rc < 0) {
			netdev_err(dev,
				   "gxio_mpipe_iqueue_init failed: %d\n", rc);
			return rc;
		}
		info->has_iqueue = true;
	}

	return ring;
}

/* Initialize NotifGroup and buckets.
 * This routine supports tile_net_init_mpipe(), below.
 */
static int init_notif_group_and_buckets(struct net_device *dev,
					int ring, int network_cpus_count)
{
	int group, rc;

	/* Allocate one NotifGroup. */
	rc = gxio_mpipe_alloc_notif_groups(&context, 1, 0, 0);
	if (rc < 0) {
		netdev_err(dev, "gxio_mpipe_alloc_notif_groups failed: %d\n",
			   rc);
		return rc;
	}
	group = rc;

	/* Initialize global num_buckets value.
 */
	if (network_cpus_count > 4)
		num_buckets = 256;
	else if (network_cpus_count > 1)
		num_buckets = 16;

	/* Allocate some buckets, and set global first_bucket value. */
	rc = gxio_mpipe_alloc_buckets(&context, num_buckets, 0, 0);
	if (rc < 0) {
		netdev_err(dev, "gxio_mpipe_alloc_buckets failed: %d\n", rc);
		return rc;
	}
	first_bucket = rc;

	/* Init group and buckets. */
	rc = gxio_mpipe_init_notif_group_and_buckets(
		&context, group, ring, network_cpus_count,
		first_bucket, num_buckets,
		GXIO_MPIPE_BUCKET_STICKY_FLOW_LOCALITY);
	if (rc != 0) {
		netdev_err(
			dev,
			"gxio_mpipe_init_notif_group_and_buckets failed: %d\n",
			rc);
		return rc;
	}

	return 0;
}

/* Create an irq and register it, then activate the irq and request
 * interrupts on all cores.  Note that "ingress_irq" being initialized
 * is how we know not to call tile_net_init_mpipe() again.
 * This routine supports tile_net_init_mpipe(), below.
 */
static int tile_net_setup_interrupts(struct net_device *dev)
{
	int cpu, rc;

	rc = create_irq();
	if (rc < 0) {
		netdev_err(dev, "create_irq failed: %d\n", rc);
		return rc;
	}
	ingress_irq = rc;
	tile_irq_activate(ingress_irq, TILE_IRQ_PERCPU);
	rc = request_irq(ingress_irq, tile_net_handle_ingress_irq,
			 0, NULL, NULL);
	if (rc != 0) {
		netdev_err(dev, "request_irq failed: %d\n", rc);
		destroy_irq(ingress_irq);
		ingress_irq = -1;
		return rc;
	}

	for_each_online_cpu(cpu) {
		struct tile_net_info *info = &per_cpu(per_cpu_info, cpu);
		if (info->has_iqueue) {
			gxio_mpipe_request_notif_ring_interrupt(
				&context, cpu_x(cpu), cpu_y(cpu),
				1, ingress_irq, info->iqueue.ring);
		}
	}

	return 0;
}

/* Undo any state set up partially by a failed call to tile_net_init_mpipe. */
static void tile_net_init_mpipe_fail(void)
{
	int cpu;

	/* Do cleanups that require the mpipe context first. */
	if (small_buffer_stack >= 0)
		tile_net_pop_all_buffers(small_buffer_stack);
	if (large_buffer_stack >= 0)
		tile_net_pop_all_buffers(large_buffer_stack);

	/* Destroy mpipe context so the hardware no longer owns any memory. */
	gxio_mpipe_destroy(&context);

	for_each_online_cpu(cpu) {
		struct tile_net_info *info = &per_cpu(per_cpu_info, cpu);
		free_pages((unsigned long)(info->comps_for_echannel[0]),
			   get_order(COMPS_SIZE));
		info->comps_for_echannel[0] = NULL;
		free_pages((unsigned long)(info->iqueue.idescs),
			   get_order(NOTIF_RING_SIZE));
		info->iqueue.idescs = NULL;
	}

	if (small_buffer_stack_va)
		free_pages_exact(small_buffer_stack_va, buffer_stack_size);
	if (large_buffer_stack_va)
		free_pages_exact(large_buffer_stack_va, buffer_stack_size);

	small_buffer_stack_va = NULL;
	large_buffer_stack_va = NULL;
	large_buffer_stack = -1;
	small_buffer_stack = -1;
	first_bucket = -1;
}

/* The first time any tilegx network device is opened, we initialize
 * the global mpipe state.  If this step fails, we fail to open the
 * device, but if it succeeds, we never need to do it again, and since
 * tile_net can't be unloaded, we never undo it.
 *
 * Note that some resources in this path (buffer stack indices,
 * bindings from init_buffer_stack, etc.) are hypervisor resources
 * that are freed implicitly by gxio_mpipe_destroy().
 */
static int tile_net_init_mpipe(struct net_device *dev)
{
	int i, num_buffers, rc;
	int cpu;
	int first_ring, ring;
	int network_cpus_count = cpus_weight(network_cpus_map);

	if (!hash_default) {
		netdev_err(dev, "Networking requires hash_default!\n");
		return -EIO;
	}

	rc = gxio_mpipe_init(&context, 0);
	if (rc != 0) {
		netdev_err(dev, "gxio_mpipe_init failed: %d\n", rc);
		return -EIO;
	}

	/* Set up the buffer stacks. */
	num_buffers =
		network_cpus_count * (IQUEUE_ENTRIES + TILE_NET_BATCH);
	rc = init_buffer_stacks(dev, num_buffers);
	if (rc != 0)
		goto fail;

	/* Provide initial buffers. */
	rc = -ENOMEM;
	for (i = 0; i < num_buffers; i++) {
		if (!tile_net_provide_buffer(true)) {
			netdev_err(dev, "Cannot allocate initial sk_bufs!\n");
			goto fail;
		}
	}
	for (i = 0; i < num_buffers; i++) {
		if (!tile_net_provide_buffer(false)) {
			netdev_err(dev, "Cannot allocate initial sk_bufs!\n");
			goto fail;
		}
	}

	/* Allocate one NotifRing for each network cpu. */
	rc = gxio_mpipe_alloc_notif_rings(&context, network_cpus_count, 0, 0);
	if (rc < 0) {
		netdev_err(dev, "gxio_mpipe_alloc_notif_rings failed %d\n",
			   rc);
		goto fail;
	}

	/* Init NotifRings per-cpu. */
	first_ring = rc;
	ring = first_ring;
	for_each_online_cpu(cpu) {
		rc = alloc_percpu_mpipe_resources(dev, cpu, ring);
		if (rc < 0)
			goto fail;
		ring = rc;
	}

	/* Initialize NotifGroup and buckets.
 */
	rc = init_notif_group_and_buckets(dev, first_ring, network_cpus_count);
	if (rc != 0)
		goto fail;

	/* Create and enable interrupts. */
	rc = tile_net_setup_interrupts(dev);
	if (rc != 0)
		goto fail;

	return 0;

fail:
	tile_net_init_mpipe_fail();
	return rc;
}

/* Create persistent egress info for a given egress channel.
 * Note that this may be shared between, say, "gbe0" and "xgbe0".
 * ISSUE: Defer header allocation until TSO is actually needed?
 */
static int tile_net_init_egress(struct net_device *dev, int echannel)
{
	struct page *headers_page, *edescs_page, *equeue_page;
	gxio_mpipe_edesc_t *edescs;
	gxio_mpipe_equeue_t *equeue;
	unsigned char *headers;
	int headers_order, edescs_order, equeue_order;
	size_t edescs_size;
	int edma;
	int rc = -ENOMEM;

	/* Only initialize once. */
	if (egress_for_echannel[echannel].equeue != NULL)
		return 0;

	/* Allocate memory for the "headers". */
	headers_order = get_order(EQUEUE_ENTRIES * HEADER_BYTES);
	headers_page = alloc_pages(GFP_KERNEL, headers_order);
	if (headers_page == NULL) {
		netdev_warn(dev,
			    "Could not alloc %zd bytes for TSO headers.\n",
			    PAGE_SIZE << headers_order);
		goto fail;
	}
	headers = pfn_to_kaddr(page_to_pfn(headers_page));

	/* Allocate memory for the "edescs".
 */
	edescs_size = EQUEUE_ENTRIES * sizeof(*edescs);
	edescs_order = get_order(edescs_size);
	edescs_page = alloc_pages(GFP_KERNEL, edescs_order);
	if (edescs_page == NULL) {
		netdev_warn(dev,
			    "Could not alloc %zd bytes for eDMA ring.\n",
			    edescs_size);
		goto fail_headers;
	}
	edescs = pfn_to_kaddr(page_to_pfn(edescs_page));

	/* Allocate memory for the "equeue". */
	equeue_order = get_order(sizeof(*equeue));
	equeue_page = alloc_pages(GFP_KERNEL, equeue_order);
	if (equeue_page == NULL) {
		netdev_warn(dev,
			    "Could not alloc %zd bytes for equeue info.\n",
			    PAGE_SIZE << equeue_order);
		goto fail_edescs;
	}
	equeue = pfn_to_kaddr(page_to_pfn(equeue_page));

	/* Allocate an edma ring.  Note that in practice this can't
	 * fail, which is good, because we will leak an edma ring if so.
	 */
	rc = gxio_mpipe_alloc_edma_rings(&context, 1, 0, 0);
	if (rc < 0) {
		netdev_warn(dev, "gxio_mpipe_alloc_edma_rings failed: %d\n",
			    rc);
		goto fail_equeue;
	}
	edma = rc;

	/* Initialize the equeue. */
	rc = gxio_mpipe_equeue_init(equeue, &context, edma, echannel,
				    edescs, edescs_size, 0);
	if (rc != 0) {
		netdev_err(dev, "gxio_mpipe_equeue_init failed: %d\n", rc);
		goto fail_equeue;
	}

	/* Done.
 */
	egress_for_echannel[echannel].equeue = equeue;
	egress_for_echannel[echannel].headers = headers;
	return 0;

fail_equeue:
	__free_pages(equeue_page, equeue_order);

fail_edescs:
	__free_pages(edescs_page, edescs_order);

fail_headers:
	__free_pages(headers_page, headers_order);

fail:
	return rc;
}

/* Return channel number for a newly-opened link. */
static int tile_net_link_open(struct net_device *dev, gxio_mpipe_link_t *link,
			      const char *link_name)
{
	int rc = gxio_mpipe_link_open(link, &context, link_name, 0);
	if (rc < 0) {
		netdev_err(dev, "Failed to open '%s'\n", link_name);
		return rc;
	}
	rc = gxio_mpipe_link_channel(link);
	if (rc < 0 || rc >= TILE_NET_CHANNELS) {
		netdev_err(dev, "gxio_mpipe_link_channel bad value: %d\n", rc);
		gxio_mpipe_link_close(link);
		return -EINVAL;
	}
	return rc;
}

/* Help the kernel activate the given network interface. */
static int tile_net_open(struct net_device *dev)
{
	struct tile_net_priv *priv = netdev_priv(dev);
	int cpu, rc;

	mutex_lock(&tile_net_devs_for_channel_mutex);

	/* Do one-time initialization the first time any device is opened. */
	if (ingress_irq < 0) {
		rc = tile_net_init_mpipe(dev);
		if (rc != 0)
			goto fail;
	}

	/* Determine if this is the "loopify" device.
 */
	if (unlikely((loopify_link_name != NULL) &&
		     !strcmp(dev->name, loopify_link_name))) {
		rc = tile_net_link_open(dev, &priv->link, "loop0");
		if (rc < 0)
			goto fail;
		priv->channel = rc;
		rc = tile_net_link_open(dev, &priv->loopify_link, "loop1");
		if (rc < 0)
			goto fail;
		priv->loopify_channel = rc;
		priv->echannel = rc;
	} else {
		rc = tile_net_link_open(dev, &priv->link, dev->name);
		if (rc < 0)
			goto fail;
		priv->channel = rc;
		priv->echannel = rc;
	}

	/* Initialize egress info (if needed).  Once ever, per echannel. */
	rc = tile_net_init_egress(dev, priv->echannel);
	if (rc != 0)
		goto fail;

	tile_net_devs_for_channel[priv->channel] = dev;

	rc = tile_net_update(dev);
	if (rc != 0)
		goto fail;

	mutex_unlock(&tile_net_devs_for_channel_mutex);

	/* Initialize the transmit wake timer for this device for each cpu.
 */
	for_each_online_cpu(cpu) {
		struct tile_net_info *info = &per_cpu(per_cpu_info, cpu);
		struct tile_net_tx_wake *tx_wake =
			&info->tx_wake[priv->echannel];

		hrtimer_init(&tx_wake->timer, CLOCK_MONOTONIC,
			     HRTIMER_MODE_REL);
		tx_wake->timer.function = tile_net_handle_tx_wake_timer;
		tx_wake->dev = dev;
	}

	for_each_online_cpu(cpu)
		netif_start_subqueue(dev, cpu);
	netif_carrier_on(dev);
	return 0;

fail:
	if (priv->loopify_channel >= 0) {
		if (gxio_mpipe_link_close(&priv->loopify_link) != 0)
			netdev_warn(dev, "Failed to close loopify link!\n");
		priv->loopify_channel = -1;
	}
	if (priv->channel >= 0) {
		/* Clear the channel mapping before forgetting the channel,
		 * so we never index the array with -1.
		 */
		tile_net_devs_for_channel[priv->channel] = NULL;
		if (gxio_mpipe_link_close(&priv->link) != 0)
			netdev_warn(dev, "Failed to close link!\n");
		priv->channel = -1;
	}
	priv->echannel = -1;
	mutex_unlock(&tile_net_devs_for_channel_mutex);

	/* Don't return raw gxio error codes to generic Linux. */
	return (rc > -512) ? rc : -EIO;
}

/* Help the kernel deactivate the given network interface.
 */
static int tile_net_stop(struct net_device *dev)
{
	struct tile_net_priv *priv = netdev_priv(dev);
	int cpu;

	for_each_online_cpu(cpu) {
		struct tile_net_info *info = &per_cpu(per_cpu_info, cpu);
		struct tile_net_tx_wake *tx_wake =
			&info->tx_wake[priv->echannel];

		hrtimer_cancel(&tx_wake->timer);
		netif_stop_subqueue(dev, cpu);
	}

	mutex_lock(&tile_net_devs_for_channel_mutex);
	tile_net_devs_for_channel[priv->channel] = NULL;
	(void)tile_net_update(dev);
	if (priv->loopify_channel >= 0) {
		if (gxio_mpipe_link_close(&priv->loopify_link) != 0)
			netdev_warn(dev, "Failed to close loopify link!\n");
		priv->loopify_channel = -1;
	}
	if (priv->channel >= 0) {
		if (gxio_mpipe_link_close(&priv->link) != 0)
			netdev_warn(dev, "Failed to close link!\n");
		priv->channel = -1;
	}
	priv->echannel = -1;
	mutex_unlock(&tile_net_devs_for_channel_mutex);

	return 0;
}

/* Determine the VA for a fragment. */
static inline void *tile_net_frag_buf(skb_frag_t *f)
{
	unsigned long pfn = page_to_pfn(skb_frag_page(f));
	return pfn_to_kaddr(pfn) + f->page_offset;
}

/* Acquire a completion entry and an egress slot, or if we can't,
 * stop the queue and schedule the tx_wake timer.
 */
static s64 tile_net_equeue_try_reserve(struct net_device *dev,
				       struct tile_net_comps *comps,
				       gxio_mpipe_equeue_t *equeue,
				       int num_edescs)
{
	/* Try to acquire a completion entry. */
	if (comps->comp_next - comps->comp_last < TILE_NET_MAX_COMPS - 1 ||
	    tile_net_free_comps(equeue, comps, 32, false) != 0) {

		/* Try to acquire an egress slot.
 */
		s64 slot = gxio_mpipe_equeue_try_reserve(equeue, num_edescs);
		if (slot >= 0)
			return slot;

		/* Freeing some completions gives the equeue time to drain. */
		tile_net_free_comps(equeue, comps, TILE_NET_MAX_COMPS, false);

		slot = gxio_mpipe_equeue_try_reserve(equeue, num_edescs);
		if (slot >= 0)
			return slot;
	}

	/* Still nothing; give up and stop the queue for a short while. */
	netif_stop_subqueue(dev, smp_processor_id());
	tile_net_schedule_tx_wake_timer(dev);
	return -1;
}

/* Determine how many edesc's are needed for TSO.
 *
 * Sometimes, if "sendfile()" requires copying, we will be called with
 * "data" containing the header and payload, with "frags" being empty.
 * Sometimes, for example when using NFS over TCP, a single segment can
 * span 3 fragments.  This requires special care.
 */
static int tso_count_edescs(struct sk_buff *skb)
{
	struct skb_shared_info *sh = skb_shinfo(skb);
	unsigned int data_len = skb->data_len;
	unsigned int p_len = sh->gso_size;
	long f_id = -1;    /* id of the current fragment */
	long f_size = -1;  /* size of the current fragment */
	long f_used = -1;  /* bytes used from the current fragment */
	long n;            /* size of the current piece of payload */
	int num_edescs = 0;
	int segment;

	for (segment = 0; segment < sh->gso_segs; segment++) {

		unsigned int p_used = 0;

		/* One edesc for header and for each piece of the payload. */
		for (num_edescs++; p_used < p_len; num_edescs++) {

			/* Advance as needed.
 */
			while (f_used >= f_size) {
				f_id++;
				f_size = sh->frags[f_id].size;
				f_used = 0;
			}

			/* Use bytes from the current fragment. */
			n = p_len - p_used;
			if (n > f_size - f_used)
				n = f_size - f_used;
			f_used += n;
			p_used += n;
		}

		/* The last segment may be less than gso_size. */
		data_len -= p_len;
		if (data_len < p_len)
			p_len = data_len;
	}

	return num_edescs;
}

/* Prepare modified copies of the skbuff headers.
 * FIXME: add support for IPv6.
 */
static void tso_headers_prepare(struct sk_buff *skb, unsigned char *headers,
				s64 slot)
{
	struct skb_shared_info *sh = skb_shinfo(skb);
	struct iphdr *ih;
	struct tcphdr *th;
	unsigned int data_len = skb->data_len;
	unsigned char *data = skb->data;
	unsigned int ih_off, th_off, sh_len, p_len;
	unsigned int isum_seed, tsum_seed, id, seq;
	long f_id = -1;    /* id of the current fragment */
	long f_size = -1;  /* size of the current fragment */
	long f_used = -1;  /* bytes used from the current fragment */
	long n;            /* size of the current piece of payload */
	int segment;

	/* Locate original headers and compute various lengths. */
	ih = ip_hdr(skb);
	th = tcp_hdr(skb);
	ih_off = skb_network_offset(skb);
	th_off = skb_transport_offset(skb);
	sh_len = th_off + tcp_hdrlen(skb);
	p_len = sh->gso_size;

	/* Set up seed values for IP and TCP csum and initialize id and seq.
*/13991399+ isum_seed = ((0xFFFF - ih->check) +14001400+ (0xFFFF - ih->tot_len) +14011401+ (0xFFFF - ih->id));14021402+ tsum_seed = th->check + (0xFFFF ^ htons(skb->len));14031403+ id = ntohs(ih->id);14041404+ seq = ntohl(th->seq);14051405+14061406+ /* Prepare all the headers. */14071407+ for (segment = 0; segment < sh->gso_segs; segment++) {14081408+ unsigned char *buf;14091409+ unsigned int p_used = 0;14101410+14111411+ /* Copy to the header memory for this segment. */14121412+ buf = headers + (slot % EQUEUE_ENTRIES) * HEADER_BYTES +14131413+ NET_IP_ALIGN;14141414+ memcpy(buf, data, sh_len);14151415+14161416+ /* Update copied ip header. */14171417+ ih = (struct iphdr *)(buf + ih_off);14181418+ ih->tot_len = htons(sh_len + p_len - ih_off);14191419+ ih->id = htons(id);14201420+ ih->check = csum_long(isum_seed + ih->tot_len +14211421+ ih->id) ^ 0xffff;14221422+14231423+ /* Update copied tcp header. */14241424+ th = (struct tcphdr *)(buf + th_off);14251425+ th->seq = htonl(seq);14261426+ th->check = csum_long(tsum_seed + htons(sh_len + p_len));14271427+ if (segment != sh->gso_segs - 1) {14281428+ th->fin = 0;14291429+ th->psh = 0;14301430+ }14311431+14321432+ /* Skip past the header. */14331433+ slot++;14341434+14351435+ /* Skip past the payload. */14361436+ while (p_used < p_len) {14371437+14381438+ /* Advance as needed. */14391439+ while (f_used >= f_size) {14401440+ f_id++;14411441+ f_size = sh->frags[f_id].size;14421442+ f_used = 0;14431443+ }14441444+14451445+ /* Use bytes from the current fragment. */14461446+ n = p_len - p_used;14471447+ if (n > f_size - f_used)14481448+ n = f_size - f_used;14491449+ f_used += n;14501450+ p_used += n;14511451+14521452+ slot++;14531453+ }14541454+14551455+ id++;14561456+ seq += p_len;14571457+14581458+ /* The last segment may be less than gso_size. */14591459+ data_len -= p_len;14601460+ if (data_len < p_len)14611461+ p_len = data_len;14621462+ }14631463+14641464+ /* Flush the headers so they are ready for hardware DMA. 
*/14651465+ wmb();14661466+}14671467+14681468+/* Pass all the data to mpipe for egress. */14691469+static void tso_egress(struct net_device *dev, gxio_mpipe_equeue_t *equeue,14701470+ struct sk_buff *skb, unsigned char *headers, s64 slot)14711471+{14721472+ struct tile_net_priv *priv = netdev_priv(dev);14731473+ struct skb_shared_info *sh = skb_shinfo(skb);14741474+ unsigned int data_len = skb->data_len;14751475+ unsigned int p_len = sh->gso_size;14761476+ gxio_mpipe_edesc_t edesc_head = { { 0 } };14771477+ gxio_mpipe_edesc_t edesc_body = { { 0 } };14781478+ long f_id = -1; /* id of the current fragment */14791479+ long f_size = -1; /* size of the current fragment */14801480+ long f_used = -1; /* bytes used from the current fragment */14811481+ long n; /* size of the current piece of payload */14821482+ unsigned long tx_packets = 0, tx_bytes = 0;14831483+ unsigned int csum_start, sh_len;14841484+ int segment;14851485+14861486+ /* Prepare to egress the headers: set up header edesc. */14871487+ csum_start = skb_checksum_start_offset(skb);14881488+ sh_len = skb_transport_offset(skb) + tcp_hdrlen(skb);14891489+ edesc_head.csum = 1;14901490+ edesc_head.csum_start = csum_start;14911491+ edesc_head.csum_dest = csum_start + skb->csum_offset;14921492+ edesc_head.xfer_size = sh_len;14931493+14941494+ /* This is only used to specify the TLB. */14951495+ edesc_head.stack_idx = large_buffer_stack;14961496+ edesc_body.stack_idx = large_buffer_stack;14971497+14981498+ /* Egress all the edescs. */14991499+ for (segment = 0; segment < sh->gso_segs; segment++) {15001500+ void *va;15011501+ unsigned char *buf;15021502+ unsigned int p_used = 0;15031503+15041504+ /* Egress the header. */15051505+ buf = headers + (slot % EQUEUE_ENTRIES) * HEADER_BYTES +15061506+ NET_IP_ALIGN;15071507+ edesc_head.va = va_to_tile_io_addr(buf);15081508+ gxio_mpipe_equeue_put_at(equeue, edesc_head, slot);15091509+ slot++;15101510+15111511+ /* Egress the payload. 
*/15121512+ while (p_used < p_len) {15131513+15141514+ /* Advance as needed. */15151515+ while (f_used >= f_size) {15161516+ f_id++;15171517+ f_size = sh->frags[f_id].size;15181518+ f_used = 0;15191519+ }15201520+15211521+ va = tile_net_frag_buf(&sh->frags[f_id]) + f_used;15221522+15231523+ /* Use bytes from the current fragment. */15241524+ n = p_len - p_used;15251525+ if (n > f_size - f_used)15261526+ n = f_size - f_used;15271527+ f_used += n;15281528+ p_used += n;15291529+15301530+ /* Egress a piece of the payload. */15311531+ edesc_body.va = va_to_tile_io_addr(va);15321532+ edesc_body.xfer_size = n;15331533+ edesc_body.bound = !(p_used < p_len);15341534+ gxio_mpipe_equeue_put_at(equeue, edesc_body, slot);15351535+ slot++;15361536+ }15371537+15381538+ tx_packets++;15391539+ tx_bytes += sh_len + p_len;15401540+15411541+ /* The last segment may be less than gso_size. */15421542+ data_len -= p_len;15431543+ if (data_len < p_len)15441544+ p_len = data_len;15451545+ }15461546+15471547+ /* Update stats. */15481548+ tile_net_stats_add(tx_packets, &priv->stats.tx_packets);15491549+ tile_net_stats_add(tx_bytes, &priv->stats.tx_bytes);15501550+}15511551+15521552+/* Do "TSO" handling for egress.15531553+ *15541554+ * Normally drivers set NETIF_F_TSO only to support hardware TSO;15551555+ * otherwise the stack uses scatter-gather to implement GSO in software.15561556+ * On our testing, enabling GSO support (via NETIF_F_SG) drops network15571557+ * performance down to around 7.5 Gbps on the 10G interfaces, although15581558+ * also dropping cpu utilization way down, to under 8%. But15591559+ * implementing "TSO" in the driver brings performance back up to line15601560+ * rate, while dropping cpu usage even further, to less than 4%. 
In15611561+ * practice, profiling of GSO shows that skb_segment() is what causes15621562+ * the performance overheads; we benefit in the driver from using15631563+ * preallocated memory to duplicate the TCP/IP headers.15641564+ */15651565+static int tile_net_tx_tso(struct sk_buff *skb, struct net_device *dev)15661566+{15671567+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);15681568+ struct tile_net_priv *priv = netdev_priv(dev);15691569+ int channel = priv->echannel;15701570+ struct tile_net_egress *egress = &egress_for_echannel[channel];15711571+ struct tile_net_comps *comps = info->comps_for_echannel[channel];15721572+ gxio_mpipe_equeue_t *equeue = egress->equeue;15731573+ unsigned long irqflags;15741574+ int num_edescs;15751575+ s64 slot;15761576+15771577+ /* Determine how many mpipe edesc's are needed. */15781578+ num_edescs = tso_count_edescs(skb);15791579+15801580+ local_irq_save(irqflags);15811581+15821582+ /* Try to acquire a completion entry and an egress slot. */15831583+ slot = tile_net_equeue_try_reserve(dev, comps, equeue, num_edescs);15841584+ if (slot < 0) {15851585+ local_irq_restore(irqflags);15861586+ return NETDEV_TX_BUSY;15871587+ }15881588+15891589+ /* Set up copies of header data properly. */15901590+ tso_headers_prepare(skb, egress->headers, slot);15911591+15921592+ /* Actually pass the data to the network hardware. */15931593+ tso_egress(dev, equeue, skb, egress->headers, slot);15941594+15951595+ /* Add a completion record. */15961596+ add_comp(equeue, comps, slot + num_edescs - 1, skb);15971597+15981598+ local_irq_restore(irqflags);15991599+16001600+ /* Make sure the egress timer is scheduled. */16011601+ tile_net_schedule_egress_timer();16021602+16031603+ return NETDEV_TX_OK;16041604+}16051605+16061606+/* Analyze the body and frags for a transmit request. 
*/16071607+static unsigned int tile_net_tx_frags(struct frag *frags,16081608+ struct sk_buff *skb,16091609+ void *b_data, unsigned int b_len)16101610+{16111611+ unsigned int i, n = 0;16121612+16131613+ struct skb_shared_info *sh = skb_shinfo(skb);16141614+16151615+ if (b_len != 0) {16161616+ frags[n].buf = b_data;16171617+ frags[n++].length = b_len;16181618+ }16191619+16201620+ for (i = 0; i < sh->nr_frags; i++) {16211621+ skb_frag_t *f = &sh->frags[i];16221622+ frags[n].buf = tile_net_frag_buf(f);16231623+ frags[n++].length = skb_frag_size(f);16241624+ }16251625+16261626+ return n;16271627+}16281628+16291629+/* Help the kernel transmit a packet. */16301630+static int tile_net_tx(struct sk_buff *skb, struct net_device *dev)16311631+{16321632+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);16331633+ struct tile_net_priv *priv = netdev_priv(dev);16341634+ struct tile_net_egress *egress = &egress_for_echannel[priv->echannel];16351635+ gxio_mpipe_equeue_t *equeue = egress->equeue;16361636+ struct tile_net_comps *comps =16371637+ info->comps_for_echannel[priv->echannel];16381638+ unsigned int len = skb->len;16391639+ unsigned char *data = skb->data;16401640+ unsigned int num_edescs;16411641+ struct frag frags[MAX_FRAGS];16421642+ gxio_mpipe_edesc_t edescs[MAX_FRAGS];16431643+ unsigned long irqflags;16441644+ gxio_mpipe_edesc_t edesc = { { 0 } };16451645+ unsigned int i;16461646+ s64 slot;16471647+16481648+ if (skb_is_gso(skb))16491649+ return tile_net_tx_tso(skb, dev);16501650+16511651+ num_edescs = tile_net_tx_frags(frags, skb, data, skb_headlen(skb));16521652+16531653+ /* This is only used to specify the TLB. */16541654+ edesc.stack_idx = large_buffer_stack;16551655+16561656+ /* Prepare the edescs. */16571657+ for (i = 0; i < num_edescs; i++) {16581658+ edesc.xfer_size = frags[i].length;16591659+ edesc.va = va_to_tile_io_addr(frags[i].buf);16601660+ edescs[i] = edesc;16611661+ }16621662+16631663+ /* Mark the final edesc. 
*/16641664+ edescs[num_edescs - 1].bound = 1;16651665+16661666+ /* Add checksum info to the initial edesc, if needed. */16671667+ if (skb->ip_summed == CHECKSUM_PARTIAL) {16681668+ unsigned int csum_start = skb_checksum_start_offset(skb);16691669+ edescs[0].csum = 1;16701670+ edescs[0].csum_start = csum_start;16711671+ edescs[0].csum_dest = csum_start + skb->csum_offset;16721672+ }16731673+16741674+ local_irq_save(irqflags);16751675+16761676+ /* Try to acquire a completion entry and an egress slot. */16771677+ slot = tile_net_equeue_try_reserve(dev, comps, equeue, num_edescs);16781678+ if (slot < 0) {16791679+ local_irq_restore(irqflags);16801680+ return NETDEV_TX_BUSY;16811681+ }16821682+16831683+ for (i = 0; i < num_edescs; i++)16841684+ gxio_mpipe_equeue_put_at(equeue, edescs[i], slot++);16851685+16861686+ /* Add a completion record. */16871687+ add_comp(equeue, comps, slot - 1, skb);16881688+16891689+ /* NOTE: Use ETH_ZLEN for short packets (e.g. 42 < 60). */16901690+ tile_net_stats_add(1, &priv->stats.tx_packets);16911691+ tile_net_stats_add(max_t(unsigned int, len, ETH_ZLEN),16921692+ &priv->stats.tx_bytes);16931693+16941694+ local_irq_restore(irqflags);16951695+16961696+ /* Make sure the egress timer is scheduled. */16971697+ tile_net_schedule_egress_timer();16981698+16991699+ return NETDEV_TX_OK;17001700+}17011701+17021702+/* Return subqueue id on this core (one per core). */17031703+static u16 tile_net_select_queue(struct net_device *dev, struct sk_buff *skb)17041704+{17051705+ return smp_processor_id();17061706+}17071707+17081708+/* Deal with a transmit timeout. */17091709+static void tile_net_tx_timeout(struct net_device *dev)17101710+{17111711+ int cpu;17121712+17131713+ for_each_online_cpu(cpu)17141714+ netif_wake_subqueue(dev, cpu);17151715+}17161716+17171717+/* Ioctl commands. 
*/17181718+static int tile_net_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)17191719+{17201720+ return -EOPNOTSUPP;17211721+}17221722+17231723+/* Get system network statistics for device. */17241724+static struct net_device_stats *tile_net_get_stats(struct net_device *dev)17251725+{17261726+ struct tile_net_priv *priv = netdev_priv(dev);17271727+ return &priv->stats;17281728+}17291729+17301730+/* Change the MTU. */17311731+static int tile_net_change_mtu(struct net_device *dev, int new_mtu)17321732+{17331733+ if ((new_mtu < 68) || (new_mtu > 1500))17341734+ return -EINVAL;17351735+ dev->mtu = new_mtu;17361736+ return 0;17371737+}17381738+17391739+/* Change the Ethernet address of the NIC.17401740+ *17411741+ * The hypervisor driver does not support changing MAC address. However,17421742+ * the hardware does not do anything with the MAC address, so the address17431743+ * which gets used on outgoing packets, and which is accepted on incoming17441744+ * packets, is completely up to us.17451745+ *17461746+ * Returns 0 on success, negative on failure.17471747+ */17481748+static int tile_net_set_mac_address(struct net_device *dev, void *p)17491749+{17501750+ struct sockaddr *addr = p;17511751+17521752+ if (!is_valid_ether_addr(addr->sa_data))17531753+ return -EINVAL;17541754+ memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);17551755+ return 0;17561756+}17571757+17581758+#ifdef CONFIG_NET_POLL_CONTROLLER17591759+/* Polling 'interrupt' - used by things like netconsole to send skbs17601760+ * without having to re-enable interrupts. 
It's not called while17611761+ * the interrupt routine is executing.17621762+ */17631763+static void tile_net_netpoll(struct net_device *dev)17641764+{17651765+ disable_percpu_irq(ingress_irq);17661766+ tile_net_handle_ingress_irq(ingress_irq, NULL);17671767+ enable_percpu_irq(ingress_irq, 0);17681768+}17691769+#endif17701770+17711771+static const struct net_device_ops tile_net_ops = {17721772+ .ndo_open = tile_net_open,17731773+ .ndo_stop = tile_net_stop,17741774+ .ndo_start_xmit = tile_net_tx,17751775+ .ndo_select_queue = tile_net_select_queue,17761776+ .ndo_do_ioctl = tile_net_ioctl,17771777+ .ndo_get_stats = tile_net_get_stats,17781778+ .ndo_change_mtu = tile_net_change_mtu,17791779+ .ndo_tx_timeout = tile_net_tx_timeout,17801780+ .ndo_set_mac_address = tile_net_set_mac_address,17811781+#ifdef CONFIG_NET_POLL_CONTROLLER17821782+ .ndo_poll_controller = tile_net_netpoll,17831783+#endif17841784+};17851785+17861786+/* The setup function.17871787+ *17881788+ * This uses ether_setup() to assign various fields in dev, including17891789+ * setting IFF_BROADCAST and IFF_MULTICAST, then sets some extra fields.17901790+ */17911791+static void tile_net_setup(struct net_device *dev)17921792+{17931793+ ether_setup(dev);17941794+ dev->netdev_ops = &tile_net_ops;17951795+ dev->watchdog_timeo = TILE_NET_TIMEOUT;17961796+ dev->features |= NETIF_F_LLTX;17971797+ dev->features |= NETIF_F_HW_CSUM;17981798+ dev->features |= NETIF_F_SG;17991799+ dev->features |= NETIF_F_TSO;18001800+ dev->mtu = 1500;18011801+}18021802+18031803+/* Allocate the device structure, register the device, and obtain the18041804+ * MAC address from the hypervisor.18051805+ */18061806+static void tile_net_dev_init(const char *name, const uint8_t *mac)18071807+{18081808+ int ret;18091809+ int i;18101810+ int nz_addr = 0;18111811+ struct net_device *dev;18121812+ struct tile_net_priv *priv;18131813+18141814+ /* HACK: Ignore "loop" links. 
*/18151815+ if (strncmp(name, "loop", 4) == 0)18161816+ return;18171817+18181818+ /* Allocate the device structure. Normally, "name" is a18191819+ * template, instantiated by register_netdev(), but not for us.18201820+ */18211821+ dev = alloc_netdev_mqs(sizeof(*priv), name, tile_net_setup,18221822+ NR_CPUS, 1);18231823+ if (!dev) {18241824+ pr_err("alloc_netdev_mqs(%s) failed\n", name);18251825+ return;18261826+ }18271827+18281828+ /* Initialize "priv". */18291829+ priv = netdev_priv(dev);18301830+ memset(priv, 0, sizeof(*priv));18311831+ priv->dev = dev;18321832+ priv->channel = -1;18331833+ priv->loopify_channel = -1;18341834+ priv->echannel = -1;18351835+18361836+ /* Get the MAC address and set it in the device struct; this must18371837+ * be done before the device is opened. If the MAC is all zeroes,18381838+ * we use a random address, since we're probably on the simulator.18391839+ */18401840+ for (i = 0; i < 6; i++)18411841+ nz_addr |= mac[i];18421842+18431843+ if (nz_addr) {18441844+ memcpy(dev->dev_addr, mac, 6);18451845+ dev->addr_len = 6;18461846+ } else {18471847+ random_ether_addr(dev->dev_addr);18481848+ }18491849+18501850+ /* Register the network device. */18511851+ ret = register_netdev(dev);18521852+ if (ret) {18531853+ netdev_err(dev, "register_netdev failed %d\n", ret);18541854+ free_netdev(dev);18551855+ return;18561856+ }18571857+}18581858+18591859+/* Per-cpu module initialization. */18601860+static void tile_net_init_module_percpu(void *unused)18611861+{18621862+ struct tile_net_info *info = &__get_cpu_var(per_cpu_info);18631863+ int my_cpu = smp_processor_id();18641864+18651865+ info->has_iqueue = false;18661866+18671867+ info->my_cpu = my_cpu;18681868+18691869+ /* Initialize the egress timer. */18701870+ hrtimer_init(&info->egress_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);18711871+ info->egress_timer.function = tile_net_handle_egress_timer;18721872+}18731873+18741874+/* Module initialization. 
*/18751875+static int __init tile_net_init_module(void)18761876+{18771877+ int i;18781878+ char name[GXIO_MPIPE_LINK_NAME_LEN];18791879+ uint8_t mac[6];18801880+18811881+ pr_info("Tilera Network Driver\n");18821882+18831883+ mutex_init(&tile_net_devs_for_channel_mutex);18841884+18851885+ /* Initialize each CPU. */18861886+ on_each_cpu(tile_net_init_module_percpu, NULL, 1);18871887+18881888+ /* Find out what devices we have, and initialize them. */18891889+ for (i = 0; gxio_mpipe_link_enumerate_mac(i, name, mac) >= 0; i++)18901890+ tile_net_dev_init(name, mac);18911891+18921892+ if (!network_cpus_init())18931893+ network_cpus_map = *cpu_online_mask;18941894+18951895+ return 0;18961896+}18971897+18981898+module_init(tile_net_init_module);
···
 	if (!net_device)
 		return NULL;
 
+	init_waitqueue_head(&net_device->wait_drain);
 	net_device->start_remove = false;
 	net_device->destroy = false;
 	net_device->dev = device;
···
 	spin_unlock_irqrestore(&device->channel->inbound_lock, flags);
 
 	/* Wait for all send completions */
-	while (atomic_read(&net_device->num_outstanding_sends)) {
-		dev_info(&device->device,
-			 "waiting for %d requests to complete...\n",
-			 atomic_read(&net_device->num_outstanding_sends));
-		udelay(100);
-	}
+	wait_event(net_device->wait_drain,
+		   atomic_read(&net_device->num_outstanding_sends) == 0);
 
 	netvsc_disconnect_vsp(net_device);
···
 
 	num_outstanding_sends =
 		atomic_dec_return(&net_device->num_outstanding_sends);
+
+	if (net_device->destroy && num_outstanding_sends == 0)
+		wake_up(&net_device->wait_drain);
 
 	if (netif_queue_stopped(ndev) && !net_device->start_remove &&
 	    (hv_ringbuf_avail_percent(&device->channel->outbound)
+7
drivers/net/phy/icplus.c
···
 #define IP1001_APS_ON		11	/* IP1001 APS Mode bit */
 #define IP101A_G_APS_ON		2	/* IP101A/G APS Mode bit */
 #define IP101A_G_IRQ_CONF_STATUS	0x11	/* Conf Info IRQ & Status Reg */
+#define IP101A_G_IRQ_PIN_USED		(1<<15)	/* INTR pin used */
+#define IP101A_G_IRQ_DEFAULT		IP101A_G_IRQ_PIN_USED
 
 static int ip175c_config_init(struct phy_device *phydev)
 {
···
 		return c;
 	c |= IP1001_APS_ON;
 	c = phy_write(phydev, IP1001_SPEC_CTRL_STATUS_2, c);
+	if (c < 0)
+		return c;
+
+	/* INTR pin used: speed/link/duplex will cause an interrupt */
+	c = phy_write(phydev, IP101A_G_IRQ_CONF_STATUS, IP101A_G_IRQ_DEFAULT);
 	if (c < 0)
 		return c;
 
+1 -1
drivers/net/phy/mdio_bus.c
···
 }
 /**
  * of_mdio_find_bus - Given an mii_bus node, find the mii_bus.
- * @mdio_np: Pointer to the mii_bus.
+ * @mdio_bus_np: Pointer to the mii_bus.
  *
  * Returns a pointer to the mii_bus, or NULL if none found.
 *
···
 	 * from the mac80211 subsystem. */
 	u16 mac80211_initially_registered_queues;
 
+	/* Set this if we call ieee80211_register_hw() and check if we call
+	 * ieee80211_unregister_hw(). */
+	bool hw_registred;
+
 	/* We can only have one operating interface (802.11 core)
 	 * at a time. General information about this interface follows.
 	 */
+12 -7
drivers/net/wireless/b43/main.c
···
 	err = ieee80211_register_hw(wl->hw);
 	if (err)
 		goto err_one_core_detach;
+	wl->hw_registred = true;
 	b43_leds_register(wl->current_dev);
 	goto out;
 
···
 
 	hw->queues = modparam_qos ? B43_QOS_QUEUE_NUM : 1;
 	wl->mac80211_initially_registered_queues = hw->queues;
+	wl->hw_registred = false;
 	hw->max_rates = 2;
 	SET_IEEE80211_DEV(hw, dev->dev);
 	if (is_valid_ether_addr(sprom->et1mac))
···
 	 * as the ieee80211 unreg will destroy the workqueue. */
 	cancel_work_sync(&wldev->restart_work);
 
-	/* Restore the queues count before unregistering, because firmware detect
-	 * might have modified it. Restoring is important, so the networking
-	 * stack can properly free resources. */
-	wl->hw->queues = wl->mac80211_initially_registered_queues;
-	b43_leds_stop(wldev);
-	ieee80211_unregister_hw(wl->hw);
+	B43_WARN_ON(!wl);
+	if (wl->current_dev == wldev && wl->hw_registred) {
+		/* Restore the queues count before unregistering, because firmware detect
+		 * might have modified it. Restoring is important, so the networking
+		 * stack can properly free resources. */
+		wl->hw->queues = wl->mac80211_initially_registered_queues;
+		b43_leds_stop(wldev);
+		ieee80211_unregister_hw(wl->hw);
+	}
 
 	b43_one_core_detach(wldev->dev);
 
···
 	cancel_work_sync(&wldev->restart_work);
 
 	B43_WARN_ON(!wl);
-	if (wl->current_dev == wldev) {
+	if (wl->current_dev == wldev && wl->hw_registred) {
 		/* Restore the queues count before unregistering, because firmware detect
 		 * might have modified it. Restoring is important, so the networking
 		 * stack can properly free resources. */
+2 -2
drivers/net/wireless/brcm80211/brcmfmac/bcmsdh.c
···
 	data |= 1 << SDIO_FUNC_1 | 1 << SDIO_FUNC_2 | 1;
 	brcmf_sdio_regwb(sdiodev, SDIO_CCCR_IENx, data, &ret);
 
-	/* redirect, configure ane enable io for interrupt signal */
+	/* redirect, configure and enable io for interrupt signal */
 	data = SDIO_SEPINT_MASK | SDIO_SEPINT_OE;
-	if (sdiodev->irq_flags | IRQF_TRIGGER_HIGH)
+	if (sdiodev->irq_flags & IRQF_TRIGGER_HIGH)
 		data |= SDIO_SEPINT_ACT_HI;
 	brcmf_sdio_regwb(sdiodev, SDIO_CCCR_BRCM_SEPINT, data, &ret);
 
+5 -15
drivers/net/wireless/ipw2x00/ipw2100.c
···
 	netif_stop_queue(priv->net_dev);
 }
 
-/* Called by register_netdev() */
-static int ipw2100_net_init(struct net_device *dev)
-{
-	struct ipw2100_priv *priv = libipw_priv(dev);
-
-	return ipw2100_up(priv, 1);
-}
-
 static int ipw2100_wdev_init(struct net_device *dev)
 {
 	struct ipw2100_priv *priv = libipw_priv(dev);
···
 	.ndo_stop		= ipw2100_close,
 	.ndo_start_xmit		= libipw_xmit,
 	.ndo_change_mtu		= libipw_change_mtu,
-	.ndo_init		= ipw2100_net_init,
 	.ndo_tx_timeout		= ipw2100_tx_timeout,
 	.ndo_set_mac_address	= ipw2100_set_address,
 	.ndo_validate_addr	= eth_validate_addr,
···
 	printk(KERN_INFO DRV_NAME
 	       ": Detected Intel PRO/Wireless 2100 Network Connection\n");
 
+	err = ipw2100_up(priv, 1);
+	if (err)
+		goto fail;
+
 	err = ipw2100_wdev_init(dev);
 	if (err)
 		goto fail;
···
 	 * network device we would call ipw2100_up.  This introduced a race
 	 * condition with newer hotplug configurations (network was coming
 	 * up and making calls before the device was initialized).
-	 *
-	 * If we called ipw2100_up before we registered the device, then the
-	 * device name wasn't registered.  So, we instead use the net_dev->init
-	 * member to call a function that then just turns and calls ipw2100_up.
-	 * net_dev->init is called after name allocation but before the
-	 * notifier chain is called */
+	 */
 	err = register_netdev(dev);
 	if (err) {
 		printk(KERN_WARNING DRV_NAME
···
 
 	/* We have our copies now, allow OS release its copies */
 	release_firmware(ucode_raw);
-	complete(&drv->request_firmware_complete);
 
 	drv->op_mode = iwl_dvm_ops.start(drv->trans, drv->cfg, &drv->fw);
 
 	if (!drv->op_mode)
-		goto out_free_fw;
+		goto out_unbind;
 
+	/*
+	 * Complete the firmware request last so that
+	 * a driver unbind (stop) doesn't run while we
+	 * are doing the start() above.
+	 */
+	complete(&drv->request_firmware_complete);
 	return;
 
 try_again:
+9 -9
drivers/net/wireless/iwlwifi/iwl-eeprom.c
···
  * iwl_get_max_txpower_avg - get the highest tx power from all chains.
  * find the highest tx power from all chains for the channel
  */
-static s8 iwl_get_max_txpower_avg(const struct iwl_cfg *cfg,
+static s8 iwl_get_max_txpower_avg(struct iwl_priv *priv,
 		struct iwl_eeprom_enhanced_txpwr *enhanced_txpower,
 		int element, s8 *max_txpower_in_half_dbm)
 {
 	s8 max_txpower_avg = 0; /* (dBm) */
 
 	/* Take the highest tx power from any valid chains */
-	if ((cfg->valid_tx_ant & ANT_A) &&
+	if ((priv->hw_params.valid_tx_ant & ANT_A) &&
 	    (enhanced_txpower[element].chain_a_max > max_txpower_avg))
 		max_txpower_avg = enhanced_txpower[element].chain_a_max;
-	if ((cfg->valid_tx_ant & ANT_B) &&
+	if ((priv->hw_params.valid_tx_ant & ANT_B) &&
 	    (enhanced_txpower[element].chain_b_max > max_txpower_avg))
 		max_txpower_avg = enhanced_txpower[element].chain_b_max;
-	if ((cfg->valid_tx_ant & ANT_C) &&
+	if ((priv->hw_params.valid_tx_ant & ANT_C) &&
 	    (enhanced_txpower[element].chain_c_max > max_txpower_avg))
 		max_txpower_avg = enhanced_txpower[element].chain_c_max;
-	if (((cfg->valid_tx_ant == ANT_AB) |
-	     (cfg->valid_tx_ant == ANT_BC) |
-	     (cfg->valid_tx_ant == ANT_AC)) &&
+	if (((priv->hw_params.valid_tx_ant == ANT_AB) |
+	     (priv->hw_params.valid_tx_ant == ANT_BC) |
+	     (priv->hw_params.valid_tx_ant == ANT_AC)) &&
 	    (enhanced_txpower[element].mimo2_max > max_txpower_avg))
 		max_txpower_avg = enhanced_txpower[element].mimo2_max;
-	if ((cfg->valid_tx_ant == ANT_ABC) &&
+	if ((priv->hw_params.valid_tx_ant == ANT_ABC) &&
 	    (enhanced_txpower[element].mimo3_max > max_txpower_avg))
 		max_txpower_avg = enhanced_txpower[element].mimo3_max;
···
 		 ((txp->delta_20_in_40 & 0xf0) >> 4),
 		 (txp->delta_20_in_40 & 0x0f));
 
-	max_txp_avg = iwl_get_max_txpower_avg(priv->cfg, txp_array, idx,
+	max_txp_avg = iwl_get_max_txpower_avg(priv, txp_array, idx,
 					      &max_txp_avg_halfdbm);
 
 	/*
+3
drivers/net/wireless/iwlwifi/iwl-mac80211.c
···
 		WIPHY_FLAG_DISABLE_BEACON_HINTS |
 		WIPHY_FLAG_IBSS_RSN;
 
+#ifdef CONFIG_PM_SLEEP
 	if (priv->fw->img[IWL_UCODE_WOWLAN].sec[0].len &&
 	    priv->trans->ops->wowlan_suspend &&
 	    device_can_wakeup(priv->trans->dev)) {
···
 		hw->wiphy->wowlan.pattern_max_len =
 					IWLAGN_WOWLAN_MAX_PATTERN_LEN;
 	}
+#endif
 
 	if (iwlwifi_mod_params.power_save)
 		hw->wiphy->flags |= WIPHY_FLAG_PS_ON_BY_DEFAULT;
···
 	ret = ieee80211_register_hw(priv->hw);
 	if (ret) {
 		IWL_ERR(priv, "Failed to register hw (error %d)\n", ret);
+		iwl_leds_exit(priv);
 		return ret;
 	}
 	priv->mac80211_registered = 1;
···
 	iwl_write_prph(trans, SCD_DRAM_BASE_ADDR,
 		       trans_pcie->scd_bc_tbls.dma >> 10);
 
+	/* The chain extension of the SCD doesn't work well. This feature is
+	 * enabled by default by the HW, so we need to disable it manually.
+	 */
+	iwl_write_prph(trans, SCD_CHAINEXT_EN, 0);
+
 	/* Enable DMA channel */
 	for (chan = 0; chan < FH_TCSR_CHNL_NUM ; chan++)
 		iwl_write_direct32(trans, FH_TCSR_CHNL_TX_CONFIG_REG(chan),
···
 	if (target_state == PCI_POWER_ERROR)
 		return -EIO;
 
+	/* Some devices mustn't be in D3 during system sleep */
+	if (target_state == PCI_D3hot &&
+	    (dev->dev_flags & PCI_DEV_FLAGS_NO_D3_DURING_SLEEP))
+		return 0;
+
 	pci_enable_wake(dev, target_state, device_may_wakeup(&dev->dev));
 
 	error = pci_set_power_state(dev, target_state);
+26
drivers/pci/quirks.c
···
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq);
 
+/*
+ * The Intel 6 Series/C200 Series chipset's EHCI controllers on many
+ * ASUS motherboards will cause memory corruption or a system crash
+ * if they are in D3 while the system is put into S3 sleep.
+ */
+static void __devinit asus_ehci_no_d3(struct pci_dev *dev)
+{
+	const char *sys_info;
+	static const char good_Asus_board[] = "P8Z68-V";
+
+	if (dev->dev_flags & PCI_DEV_FLAGS_NO_D3_DURING_SLEEP)
+		return;
+	if (dev->subsystem_vendor != PCI_VENDOR_ID_ASUSTEK)
+		return;
+	sys_info = dmi_get_system_info(DMI_BOARD_NAME);
+	if (sys_info && memcmp(sys_info, good_Asus_board,
+			sizeof(good_Asus_board) - 1) == 0)
+		return;
+
+	dev_info(&dev->dev, "broken D3 during system sleep on ASUS\n");
+	dev->dev_flags |= PCI_DEV_FLAGS_NO_D3_DURING_SLEEP;
+	device_set_wakeup_capable(&dev->dev, false);
+}
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1c26, asus_ehci_no_d3);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1c2d, asus_ehci_no_d3);
+
 static void pci_do_fixups(struct pci_dev *dev, struct pci_fixup *f,
 			  struct pci_fixup *end)
 {
+1 -1
drivers/pinctrl/core.c
···
 	list_for_each_entry(_maps_node_, &pinctrl_maps, node) \
 		for (_i_ = 0, _map_ = &_maps_node_->maps[_i_]; \
 			_i_ < _maps_node_->num_maps; \
-			i++, _map_ = &_maps_node_->maps[_i_])
+			_i_++, _map_ = &_maps_node_->maps[_i_])
 
 /**
  * pinctrl_provide_dummies() - indicate if pinctrl provides dummy state support
+16 -18
drivers/pinctrl/pinctrl-imx.c
···
 #include "core.h"
 #include "pinctrl-imx.h"
 
-#define IMX_PMX_DUMP(info, p, m, c, n)		\
-{						\
-	int i, j;				\
-	printk("Format: Pin Mux Config\n");	\
-	for (i = 0; i < n; i++) {		\
-		j = p[i];			\
-		printk("%s %d 0x%lx\n",		\
-			info->pins[j].name,	\
-			m[i], c[i]);		\
-	}					\
+#define IMX_PMX_DUMP(info, p, m, c, n)			\
+{							\
+	int i, j;					\
+	printk(KERN_DEBUG "Format: Pin Mux Config\n");	\
+	for (i = 0; i < n; i++) {			\
+		j = p[i];				\
+		printk(KERN_DEBUG "%s %d 0x%lx\n",	\
+			info->pins[j].name,		\
+			m[i], c[i]);			\
+	}						\
 }
 
 /* The bits in CONFIG cell defined in binding doc*/
···
 
 	/* create mux map */
 	parent = of_get_parent(np);
-	if (!parent)
+	if (!parent) {
+		kfree(new_map);
 		return -EINVAL;
+	}
 	new_map[0].type = PIN_MAP_TYPE_MUX_GROUP;
 	new_map[0].data.mux.function = parent->name;
 	new_map[0].data.mux.group = np->name;
···
 	}
 
 	dev_dbg(pctldev->dev, "maps: function %s group %s num %d\n",
-		new_map->data.mux.function, new_map->data.mux.group, map_num);
+		(*map)->data.mux.function, (*map)->data.mux.group, map_num);
 
 	return 0;
 }
···
 static void imx_dt_free_map(struct pinctrl_dev *pctldev,
 			struct pinctrl_map *map, unsigned num_maps)
 {
-	int i;
-
-	for (i = 0; i < num_maps; i++)
-		kfree(map);
+	kfree(map);
 }
 
 static struct pinctrl_ops imx_pctrl_ops = {
···
 		grp->configs[j] = config & ~IMX_PAD_SION;
 	}
 
-#ifdef DEBUG
 	IMX_PMX_DUMP(info, grp->pins, grp->mux_mode, grp->configs, grp->npins);
-#endif
+
 	return 0;
 }
 
+10 -3
drivers/pinctrl/pinctrl-mxs.c
···
 
 		/* Compose group name */
 		group = kzalloc(length, GFP_KERNEL);
-		if (!group)
-			return -ENOMEM;
+		if (!group) {
+			ret = -ENOMEM;
+			goto free;
+		}
 		snprintf(group, length, "%s.%d", np->name, reg);
 		new_map[i].data.mux.group = group;
 		i++;
···
 		pconfig = kmemdup(&config, sizeof(config), GFP_KERNEL);
 		if (!pconfig) {
 			ret = -ENOMEM;
-			goto free;
+			goto free_group;
 		}
 
 		new_map[i].type = PIN_MAP_TYPE_CONFIGS_GROUP;
···
 
 	return 0;
 
+free_group:
+	if (!purecfg)
+		kfree(group);
 free:
 	kfree(new_map);
 	return ret;
···
 	return 0;
 
 err:
+	platform_set_drvdata(pdev, NULL);
 	iounmap(d->base);
 	return ret;
 }
···
 {
 	struct mxs_pinctrl_data *d = platform_get_drvdata(pdev);
 
+	platform_set_drvdata(pdev, NULL);
 	pinctrl_unregister(d->pctl);
 	iounmap(d->base);
 
drivers/pinctrl/pinctrl-nomadik.c | +2 -1

@@ -673 +673 @@
 	 * wakeup is anyhow controlled by the RIMSC and FIMSC registers.
 	 */
 	if (nmk_chip->sleepmode && on) {
-		__nmk_gpio_set_slpm(nmk_chip, gpio % nmk_chip->chip.base,
+		__nmk_gpio_set_slpm(nmk_chip, gpio % NMK_GPIO_PER_CHIP,
 			NMK_GPIO_SLPM_WAKEUP_ENABLE);
 	}
 

@@ -1246 +1246 @@
 		ret = PTR_ERR(clk);
 		goto out_unmap;
 	}
+	clk_prepare(clk);
 
 	nmk_chip = kzalloc(sizeof(*nmk_chip), GFP_KERNEL);
 	if (!nmk_chip) {
@@ -26 +26 @@
 #include <linux/module.h>
 #include <linux/init.h>
 #include <linux/types.h>
-#include <linux/version.h>
 #include <linux/blkdev.h>
 #include <linux/interrupt.h>
 #include <linux/pci.h>

@@ -2476 +2477 @@
 	}
 
 	cmd = qlt_ctio_to_cmd(vha, handle, ctio);
-	if (cmd == NULL) {
-		if (status != CTIO_SUCCESS)
-			qlt_term_ctio_exchange(vha, ctio, NULL, status);
+	if (cmd == NULL)
 		return;
-	}
+
 	se_cmd = &cmd->se_cmd;
 	tfo = se_cmd->se_tfo;
 

@@ -2724 +2727 @@
 out_term:
 	ql_dbg(ql_dbg_tgt_mgt, vha, 0xf020, "Terminating work cmd %p", cmd);
 	/*
-	 * cmd has not sent to target yet, so pass NULL as the second argument
+	 * cmd has not sent to target yet, so pass NULL as the second
+	 * argument to qlt_send_term_exchange() and free the memory here.
 	 */
 	spin_lock_irqsave(&ha->hardware_lock, flags);
 	qlt_send_term_exchange(vha, NULL, &cmd->atio, 1);
+	kmem_cache_free(qla_tgt_cmd_cachep, cmd);
 	spin_unlock_irqrestore(&ha->hardware_lock, flags);
 	if (sess)
 		ha->tgt.tgt_ops->put_sess(sess);
@@ -137 +137 @@
  */
 static int tcm_qla2xxx_npiv_extract_wwn(const char *ns, u64 *nm)
 {
-	unsigned int i, j, value;
+	unsigned int i, j;
 	u8 wwn[8];
 
 	memset(wwn, 0, sizeof(wwn));
 
 	/* Validate and store the new name */
 	for (i = 0, j = 0; i < 16; i++) {
+		int value;
+
 		value = hex_to_bin(*ns++);
 		if (value >= 0)
 			j = (j << 4) | value;

@@ -654 +652 @@
 /*
  * Called from qla_target.c:qlt_issue_task_mgmt()
  */
-int tcm_qla2xxx_handle_tmr(struct qla_tgt_mgmt_cmd *mcmd, uint32_t lun,
-	uint8_t tmr_func, uint32_t tag)
+static int tcm_qla2xxx_handle_tmr(struct qla_tgt_mgmt_cmd *mcmd, uint32_t lun,
+	uint8_t tmr_func, uint32_t tag)
 {
 	struct qla_tgt_sess *sess = mcmd->sess;
 	struct se_cmd *se_cmd = &mcmd->se_cmd;

@@ -764 +762 @@
 struct target_fabric_configfs *tcm_qla2xxx_fabric_configfs;
 struct target_fabric_configfs *tcm_qla2xxx_npiv_fabric_configfs;
 
-static int tcm_qla2xxx_setup_nacl_from_rport(
-	struct se_portal_group *se_tpg,
-	struct se_node_acl *se_nacl,
-	struct tcm_qla2xxx_lport *lport,
-	struct tcm_qla2xxx_nacl *nacl,
-	u64 rport_wwnn)
-{
-	struct scsi_qla_host *vha = lport->qla_vha;
-	struct Scsi_Host *sh = vha->host;
-	struct fc_host_attrs *fc_host = shost_to_fc_host(sh);
-	struct fc_rport *rport;
-	unsigned long flags;
-	void *node;
-	int rc;
-
-	/*
-	 * Scan the existing rports, and create a session for the
-	 * explict NodeACL is an matching rport->node_name already
-	 * exists.
-	 */
-	spin_lock_irqsave(sh->host_lock, flags);
-	list_for_each_entry(rport, &fc_host->rports, peers) {
-		if (rport_wwnn != rport->node_name)
-			continue;
-
-		pr_debug("Located existing rport_wwpn and rport->node_name: 0x%016LX, port_id: 0x%04x\n",
-			 rport->node_name, rport->port_id);
-		nacl->nport_id = rport->port_id;
-
-		spin_unlock_irqrestore(sh->host_lock, flags);
-
-		spin_lock_irqsave(&vha->hw->hardware_lock, flags);
-		node = btree_lookup32(&lport->lport_fcport_map, rport->port_id);
-		if (node) {
-			rc = btree_update32(&lport->lport_fcport_map,
-					    rport->port_id, se_nacl);
-		} else {
-			rc = btree_insert32(&lport->lport_fcport_map,
-					    rport->port_id, se_nacl,
-					    GFP_ATOMIC);
-		}
-		spin_unlock_irqrestore(&vha->hw->hardware_lock, flags);
-
-		if (rc) {
-			pr_err("Unable to insert se_nacl into fcport_map");
-			WARN_ON(rc > 0);
-			return rc;
-		}
-
-		pr_debug("Inserted into fcport_map: %p for WWNN: 0x%016LX, port_id: 0x%08x\n",
-			 se_nacl, rport_wwnn, nacl->nport_id);
-
-		return 1;
-	}
-	spin_unlock_irqrestore(sh->host_lock, flags);
-
-	return 0;
-}
-
+static void tcm_qla2xxx_clear_sess_lookup(struct tcm_qla2xxx_lport *,
+			struct tcm_qla2xxx_nacl *, struct qla_tgt_sess *);
 /*
  * Expected to be called with struct qla_hw_data->hardware_lock held
  */

@@ -787 +842 @@
 
 	pr_debug("Removed from fcport_map: %p for WWNN: 0x%016LX, port_id: 0x%06x\n",
 	    se_nacl, nacl->nport_wwnn, nacl->nport_id);
+	/*
+	 * Now clear the se_nacl and session pointers from our HW lport lookup
+	 * table mapping for this initiator's fabric S_ID and LOOP_ID entries.
+	 *
+	 * This is done ahead of callbacks into tcm_qla2xxx_free_session() ->
+	 * target_wait_for_sess_cmds() before the session waits for outstanding
+	 * I/O to complete, to avoid a race between session shutdown execution
+	 * and incoming ATIOs or TMRs picking up a stale se_node_act reference.
+	 */
+	tcm_qla2xxx_clear_sess_lookup(lport, nacl, sess);
+}
+
+static void tcm_qla2xxx_release_session(struct kref *kref)
+{
+	struct se_session *se_sess = container_of(kref,
+			struct se_session, sess_kref);
+
+	qlt_unreg_sess(se_sess->fabric_sess_ptr);
+}
+
+static void tcm_qla2xxx_put_session(struct se_session *se_sess)
+{
+	struct qla_tgt_sess *sess = se_sess->fabric_sess_ptr;
+	struct qla_hw_data *ha = sess->vha->hw;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	kref_put(&se_sess->sess_kref, tcm_qla2xxx_release_session);
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
 }
 
 static void tcm_qla2xxx_put_sess(struct qla_tgt_sess *sess)
 {
-	target_put_session(sess->se_sess);
+	tcm_qla2xxx_put_session(sess->se_sess);
 }
 
 static void tcm_qla2xxx_shutdown_sess(struct qla_tgt_sess *sess)

@@ -833 +859 @@
 	struct config_group *group,
 	const char *name)
 {
-	struct se_wwn *se_wwn = se_tpg->se_tpg_wwn;
-	struct tcm_qla2xxx_lport *lport = container_of(se_wwn,
-			struct tcm_qla2xxx_lport, lport_wwn);
 	struct se_node_acl *se_nacl, *se_nacl_new;
 	struct tcm_qla2xxx_nacl *nacl;
 	u64 wwnn;
 	u32 qla2xxx_nexus_depth;
-	int rc;
 
 	if (tcm_qla2xxx_parse_wwn(name, &wwnn, 1) < 0)
 		return ERR_PTR(-EINVAL);

@@ -863 +893 @@
 	nacl = container_of(se_nacl, struct tcm_qla2xxx_nacl, se_node_acl);
 	nacl->nport_wwnn = wwnn;
 	tcm_qla2xxx_format_wwn(&nacl->nport_name[0], TCM_QLA2XXX_NAMELEN, wwnn);
-	/*
-	 * Setup a se_nacl handle based on an a matching struct fc_rport setup
-	 * via drivers/scsi/qla2xxx/qla_init.c:qla2x00_reg_remote_port()
-	 */
-	rc = tcm_qla2xxx_setup_nacl_from_rport(se_tpg, se_nacl, lport,
-					nacl, wwnn);
-	if (rc < 0) {
-		tcm_qla2xxx_release_fabric_acl(se_tpg, se_nacl_new);
-		return ERR_PTR(rc);
-	}
 
 	return se_nacl;
 }

@@ -1350 +1390 @@
 		nacl->qla_tgt_sess, new_se_nacl, new_se_nacl->initiatorname);
 }
 
+/*
+ * Should always be called with qla_hw_data->hardware_lock held.
+ */
+static void tcm_qla2xxx_clear_sess_lookup(struct tcm_qla2xxx_lport *lport,
+		struct tcm_qla2xxx_nacl *nacl, struct qla_tgt_sess *sess)
+{
+	struct se_session *se_sess = sess->se_sess;
+	unsigned char be_sid[3];
+
+	be_sid[0] = sess->s_id.b.domain;
+	be_sid[1] = sess->s_id.b.area;
+	be_sid[2] = sess->s_id.b.al_pa;
+
+	tcm_qla2xxx_set_sess_by_s_id(lport, NULL, nacl, se_sess,
+				sess, be_sid);
+	tcm_qla2xxx_set_sess_by_loop_id(lport, NULL, nacl, se_sess,
+				sess, sess->loop_id);
+}
+
 static void tcm_qla2xxx_free_session(struct qla_tgt_sess *sess)
 {
 	struct qla_tgt *tgt = sess->tgt;

@@ -1377 +1398 @@
 	struct se_node_acl *se_nacl;
 	struct tcm_qla2xxx_lport *lport;
 	struct tcm_qla2xxx_nacl *nacl;
-	unsigned char be_sid[3];
-	unsigned long flags;
 
 	BUG_ON(in_interrupt());
 

@@ -1396 +1419 @@
 		return;
 	}
 	target_wait_for_sess_cmds(se_sess, 0);
-	/*
-	 * And now clear the se_nacl and session pointers from our HW lport
-	 * mappings for fabric S_ID and LOOP_ID.
-	 */
-	memset(&be_sid, 0, 3);
-	be_sid[0] = sess->s_id.b.domain;
-	be_sid[1] = sess->s_id.b.area;
-	be_sid[2] = sess->s_id.b.al_pa;
-
-	spin_lock_irqsave(&ha->hardware_lock, flags);
-	tcm_qla2xxx_set_sess_by_s_id(lport, NULL, nacl, se_sess,
-			sess, be_sid);
-	tcm_qla2xxx_set_sess_by_loop_id(lport, NULL, nacl, se_sess,
-			sess, sess->loop_id);
-	spin_unlock_irqrestore(&ha->hardware_lock, flags);
 
 	transport_deregister_session_configfs(sess->se_sess);
 	transport_deregister_session(sess->se_sess);

@@ -1693 +1731 @@
 	.new_cmd_map			= NULL,
 	.check_stop_free		= tcm_qla2xxx_check_stop_free,
 	.release_cmd			= tcm_qla2xxx_release_cmd,
+	.put_session			= tcm_qla2xxx_put_session,
 	.shutdown_session		= tcm_qla2xxx_shutdown_session,
 	.close_session			= tcm_qla2xxx_close_session,
 	.sess_get_index			= tcm_qla2xxx_sess_get_index,

@@ -1742 +1779 @@
 	.tpg_release_fabric_acl		= tcm_qla2xxx_release_fabric_acl,
 	.tpg_get_inst_index		= tcm_qla2xxx_tpg_get_inst_index,
 	.release_cmd			= tcm_qla2xxx_release_cmd,
+	.put_session			= tcm_qla2xxx_put_session,
 	.shutdown_session		= tcm_qla2xxx_shutdown_session,
 	.close_session			= tcm_qla2xxx_close_session,
 	.sess_get_index			= tcm_qla2xxx_sess_get_index,
drivers/scsi/scsi.c | +1 -3

@@ -90 +90 @@
 EXPORT_SYMBOL(scsi_logging_level);
 #endif
 
-#if IS_ENABLED(CONFIG_PM) || IS_ENABLED(CONFIG_BLK_DEV_SD)
-/* sd and scsi_pm need to coordinate flushing async actions */
+/* sd, scsi core and power management need to coordinate flushing async actions */
 LIST_HEAD(scsi_sd_probe_domain);
 EXPORT_SYMBOL(scsi_sd_probe_domain);
-#endif
 
 /* NB: These are exposed through /proc/scsi/scsi and form part of the ABI.
  * You may not alter any existing entry (although adding new ones is
@@ -214 +214 @@
 	/* already configured */
 	if (info->intf != NULL)
 		return 0;
-
+	/*
+	 * If the toolstack (or the hypervisor) hasn't set these values, the
+	 * default value is 0. Even though mfn = 0 and evtchn = 0 are
+	 * theoretically correct values, in practice they never are and they
+	 * mean that a legacy toolstack hasn't initialized the pv console correctly.
+	 */
 	r = hvm_get_parameter(HVM_PARAM_CONSOLE_EVTCHN, &v);
-	if (r < 0) {
-		kfree(info);
-		return -ENODEV;
-	}
+	if (r < 0 || v == 0)
+		goto err;
 	info->evtchn = v;
-	hvm_get_parameter(HVM_PARAM_CONSOLE_PFN, &v);
-	if (r < 0) {
-		kfree(info);
-		return -ENODEV;
-	}
+	v = 0;
+	r = hvm_get_parameter(HVM_PARAM_CONSOLE_PFN, &v);
+	if (r < 0 || v == 0)
+		goto err;
 	mfn = v;
 	info->intf = ioremap(mfn << PAGE_SHIFT, PAGE_SIZE);
-	if (info->intf == NULL) {
-		kfree(info);
-		return -ENODEV;
-	}
+	if (info->intf == NULL)
+		goto err;
 	info->vtermno = HVC_COOKIE;
 
 	spin_lock(&xencons_lock);

@@ -239 +239 @@
 	spin_unlock(&xencons_lock);
 
 	return 0;
+err:
+	kfree(info);
+	return -ENODEV;
 }
 
 static int xen_pv_console_init(void)
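The hunk above replaces three copies of `kfree(info); return -ENODEV;` with a single `err:` label, the standard kernel pattern for centralizing cleanup. A self-contained illustration of the same control-flow shape (all names and the `probe_param` stub are invented for the sketch; it is not the hvc code):

```c
#include <stdlib.h>

/* Minimal stand-in for a two-parameter probe with one cleanup site. */
struct console_info {
	unsigned long evtchn;
	unsigned long mfn;
};

/* Pretend parameter fetch: fails on request, else hands back 42. */
static int probe_param(int fail, unsigned long *v)
{
	if (fail)
		return -1;
	*v = 42;
	return 0;
}

static int console_setup(int fail_first, int fail_second,
			 struct console_info **out)
{
	struct console_info *info = calloc(1, sizeof(*info));
	unsigned long v = 0;

	if (!info)
		return -1;
	/* Treat both an error return and a zero value as failure,
	 * mirroring the `r < 0 || v == 0` checks in the patch. */
	if (probe_param(fail_first, &v) < 0 || v == 0)
		goto err;
	info->evtchn = v;

	v = 0;
	if (probe_param(fail_second, &v) < 0 || v == 0)
		goto err;
	info->mfn = v;

	*out = info;
	return 0;
err:
	free(info);	/* single cleanup site for every failure path */
	return -1;
}
```

The win is that adding a fourth check later means one new `goto err`, not another hand-copied free-and-return pair that can drift out of sync.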
drivers/tty/serial/sh-sci.c | +24 -14

@@ -2179 +2179 @@
 	return 0;
 }
 
+static void sci_cleanup_single(struct sci_port *port)
+{
+	sci_free_gpios(port);
+
+	clk_put(port->iclk);
+	clk_put(port->fclk);
+
+	pm_runtime_disable(port->port.dev);
+}
+
 #ifdef CONFIG_SERIAL_SH_SCI_CONSOLE
 static void serial_console_putchar(struct uart_port *port, int ch)
 {

@@ -2370 +2360 @@
 	cpufreq_unregister_notifier(&port->freq_transition,
 				    CPUFREQ_TRANSITION_NOTIFIER);
 
-	sci_free_gpios(port);
-
 	uart_remove_one_port(&sci_uart_driver, &port->port);
 
-	clk_put(port->iclk);
-	clk_put(port->fclk);
+	sci_cleanup_single(port);
 
-	pm_runtime_disable(&dev->dev);
 	return 0;
 }

@@ -2391 +2385 @@
 			   index+1, SCI_NPORTS);
 		dev_notice(&dev->dev, "Consider bumping "
 			   "CONFIG_SERIAL_SH_SCI_NR_UARTS!\n");
-		return 0;
+		return -EINVAL;
 	}
 
 	ret = sci_init_single(dev, sciport, index, p);
 	if (ret)
 		return ret;
 
-	return uart_add_one_port(&sci_uart_driver, &sciport->port);
+	ret = uart_add_one_port(&sci_uart_driver, &sciport->port);
+	if (ret) {
+		sci_cleanup_single(sciport);
+		return ret;
+	}
+
+	return 0;
 }
 
 static int __devinit sci_probe(struct platform_device *dev)

@@ -2425 +2413 @@
 
 	ret = sci_probe_single(dev, dev->id, p, sp);
 	if (ret)
-		goto err_unreg;
+		return ret;
 
 	sp->freq_transition.notifier_call = sci_notifier;
 
 	ret = cpufreq_register_notifier(&sp->freq_transition,
 					CPUFREQ_TRANSITION_NOTIFIER);
-	if (unlikely(ret < 0))
-		goto err_unreg;
+	if (unlikely(ret < 0)) {
+		sci_cleanup_single(sp);
+		return ret;
+	}
 
 #ifdef CONFIG_SH_STANDARD_BIOS
 	sh_bios_gdb_detach();
 #endif
 
 	return 0;
-
-err_unreg:
-	sci_remove(dev);
-	return ret;
 }
 
 static int sci_suspend(struct device *dev)
drivers/usb/class/cdc-acm.c | +8

@@ -567 +567 @@
 
 	usb_autopm_put_interface(acm->control);
 
+	/*
+	 * Unthrottle device in case the TTY was closed while throttled.
+	 */
+	spin_lock_irq(&acm->read_lock);
+	acm->throttled = 0;
+	acm->throttle_req = 0;
+	spin_unlock_irq(&acm->read_lock);
+
 	if (acm_submit_read_urbs(acm, GFP_KERNEL))
 		goto error_submit_read_urbs;
 
@@ -493 +493 @@
 
 	pci_save_state(pci_dev);
 
-	/*
-	 * Some systems crash if an EHCI controller is in D3 during
-	 * a sleep transition. We have to leave such controllers in D0.
-	 */
-	if (hcd->broken_pci_sleep) {
-		dev_dbg(dev, "Staying in PCI D0\n");
-		return retval;
-	}
-
 	/* If the root hub is dead rather than suspended, disallow remote
 	 * wakeup. usb_hc_died() should ensure that both hosts are marked as
 	 * dying, so we only need to check the primary roothub.
drivers/usb/core/hub.c | +1 -1

@@ -3379 +3379 @@
 		return 0;
 
 	udev->lpm_disable_count++;
-	if ((udev->u1_params.timeout == 0 && udev->u1_params.timeout == 0))
+	if ((udev->u1_params.timeout == 0 && udev->u2_params.timeout == 0))
 		return 0;
 
 	/* If LPM is enabled, attempt to disable it. */
@@ -1596 +1596 @@
 	ep = container_of(_ep, struct qe_ep, ep);
 
 	/* catch various bogus parameters */
-	if (!_ep || !desc || ep->ep.desc || _ep->name == ep_name[0] ||
+	if (!_ep || !desc || _ep->name == ep_name[0] ||
 			(desc->bDescriptorType != USB_DT_ENDPOINT))
 		return -EINVAL;
 
drivers/usb/gadget/fsl_udc_core.c | +2 -2

@@ -567 +567 @@
 	ep = container_of(_ep, struct fsl_ep, ep);
 
 	/* catch various bogus parameters */
-	if (!_ep || !desc || ep->ep.desc
+	if (!_ep || !desc
 			|| (desc->bDescriptorType != USB_DT_ENDPOINT))
 		return -EINVAL;
 

@@ -2575 +2575 @@
 	/* for ep0: the desc defined here;
 	 * for other eps, gadget layer called ep_enable with defined desc
 	 */
-	udc_controller->eps[0].desc = &fsl_ep0_desc;
+	udc_controller->eps[0].ep.desc = &fsl_ep0_desc;
 	udc_controller->eps[0].ep.maxpacket = USB_MAX_CTRL_PAYLOAD;
 
 	/* setup the udc->eps[] for non-control endpoints and link
@@ -270 +270 @@
  *
  * Properly shutdown the hcd, call driver's shutdown routine.
  */
-static int ehci_hcd_xilinx_of_shutdown(struct platform_device *op)
+static void ehci_hcd_xilinx_of_shutdown(struct platform_device *op)
 {
 	struct usb_hcd *hcd = dev_get_drvdata(&op->dev);
 
 	if (hcd->driver->shutdown)
 		hcd->driver->shutdown(hcd);
-
-	return 0;
 }
 
drivers/usb/host/ohci-hub.c | +1 -1

@@ -317 +317 @@
 }
 
 /* Carry out the final steps of resuming the controller device */
-static void ohci_finish_controller_resume(struct usb_hcd *hcd)
+static void __maybe_unused ohci_finish_controller_resume(struct usb_hcd *hcd)
 {
 	struct ohci_hcd *ohci = hcd_to_ohci(hcd);
 	int port;
drivers/usb/host/xhci-mem.c | +24 -50

@@ -793 +793 @@
 		struct xhci_virt_device *virt_dev,
 		int slot_id)
 {
-	struct list_head *tt;
 	struct list_head *tt_list_head;
-	struct list_head *tt_next;
-	struct xhci_tt_bw_info *tt_info;
+	struct xhci_tt_bw_info *tt_info, *next;
+	bool slot_found = false;
 
 	/* If the device never made it past the Set Address stage,
 	 * it may not have the real_port set correctly.

@@ -807 +808 @@
 	}
 
 	tt_list_head = &(xhci->rh_bw[virt_dev->real_port - 1].tts);
-	if (list_empty(tt_list_head))
-		return;
-
-	list_for_each(tt, tt_list_head) {
-		tt_info = list_entry(tt, struct xhci_tt_bw_info, tt_list);
-		if (tt_info->slot_id == slot_id)
+	list_for_each_entry_safe(tt_info, next, tt_list_head, tt_list) {
+		/* Multi-TT hubs will have more than one entry */
+		if (tt_info->slot_id == slot_id) {
+			slot_found = true;
+			list_del(&tt_info->tt_list);
+			kfree(tt_info);
+		} else if (slot_found) {
 			break;
+		}
 	}
-	/* Cautionary measure in case the hub was disconnected before we
-	 * stored the TT information.
-	 */
-	if (tt_info->slot_id != slot_id)
-		return;
-
-	tt_next = tt->next;
-	tt_info = list_entry(tt, struct xhci_tt_bw_info,
-			tt_list);
-	/* Multi-TT hubs will have more than one entry */
-	do {
-		list_del(tt);
-		kfree(tt_info);
-		tt = tt_next;
-		if (list_empty(tt_list_head))
-			break;
-		tt_next = tt->next;
-		tt_info = list_entry(tt, struct xhci_tt_bw_info,
-				tt_list);
-	} while (tt_info->slot_id == slot_id);
 }
 
 int xhci_alloc_tt_info(struct xhci_hcd *xhci,

@@ -1772 +1791 @@
 {
 	struct pci_dev	*pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
 	struct dev_info	*dev_info, *next;
-	struct list_head *tt_list_head;
-	struct list_head *tt;
-	struct list_head *endpoints;
-	struct list_head *ep, *q;
-	struct xhci_tt_bw_info *tt_info;
-	struct xhci_interval_bw_table *bwt;
-	struct xhci_virt_ep *virt_ep;
-
 	unsigned long	flags;
 	int size;
-	int i;
+	int i, j, num_ports;
 
 	/* Free the Event Ring Segment Table and the actual Event Ring */
 	size = sizeof(struct xhci_erst_entry)*(xhci->erst.num_entries);

@@ -1833 +1860 @@
 	}
 	spin_unlock_irqrestore(&xhci->lock, flags);
 
-	bwt = &xhci->rh_bw->bw_table;
-	for (i = 0; i < XHCI_MAX_INTERVAL; i++) {
-		endpoints = &bwt->interval_bw[i].endpoints;
-		list_for_each_safe(ep, q, endpoints) {
-			virt_ep = list_entry(ep, struct xhci_virt_ep, bw_endpoint_list);
-			list_del(&virt_ep->bw_endpoint_list);
-			kfree(virt_ep);
+	num_ports = HCS_MAX_PORTS(xhci->hcs_params1);
+	for (i = 0; i < num_ports; i++) {
+		struct xhci_interval_bw_table *bwt = &xhci->rh_bw[i].bw_table;
+		for (j = 0; j < XHCI_MAX_INTERVAL; j++) {
+			struct list_head *ep = &bwt->interval_bw[j].endpoints;
+			while (!list_empty(ep))
+				list_del_init(ep->next);
 		}
 	}
 
-	tt_list_head = &xhci->rh_bw->tts;
-	list_for_each_safe(tt, q, tt_list_head) {
-		tt_info = list_entry(tt, struct xhci_tt_bw_info, tt_list);
-		list_del(tt);
-		kfree(tt_info);
+	for (i = 0; i < num_ports; i++) {
+		struct xhci_tt_bw_info *tt, *n;
+		list_for_each_entry_safe(tt, n, &xhci->rh_bw[i].tts, tt_list) {
+			list_del(&tt->tt_list);
+			kfree(tt);
+		}
 	}
 
 	xhci->num_usb2_ports = 0;
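The xhci-mem hunks above replace open-coded `list_for_each` traversal with the `_safe` variant, whose point is that the next pointer is cached before the current node is freed. A stand-alone sketch of the same idea (plain malloc'd singly linked nodes, not the kernel's `list_head`; all names are invented for the illustration):

```c
#include <stdlib.h>

/* Node in a singly linked list keyed by slot_id, echoing the
 * xhci_tt_bw_info entries removed per slot in the patch. */
struct node {
	int slot_id;
	struct node *next;
};

/* Remove and free every node matching slot_id; returns how many were
 * freed. The next pointer is saved *before* free(), which is the whole
 * reason list_for_each_entry_safe exists in the kernel version. */
static int remove_slot(struct node **head, int slot_id)
{
	struct node **pp = head;
	int freed = 0;

	while (*pp) {
		struct node *cur = *pp;
		struct node *next = cur->next;	/* cache before freeing */

		if (cur->slot_id == slot_id) {
			*pp = next;
			free(cur);
			freed++;
		} else {
			pp = &cur->next;
		}
	}
	return freed;
}

/* Push a new node on the front of the list. */
static struct node *push(struct node *head, int slot_id)
{
	struct node *n = malloc(sizeof(*n));

	n->slot_id = slot_id;
	n->next = head;
	return n;
}
```

Reading `cur->next` after `free(cur)` is use-after-free; caching it first makes deletion during traversal safe, and matching entries need not be adjacent for this version (the kernel loop can still `break` early because same-slot TT entries are stored contiguously).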
drivers/usb/host/xhci.c | +5 -5

@@ -795 +795 @@
 	command = xhci_readl(xhci, &xhci->op_regs->command);
 	command |= CMD_CSS;
 	xhci_writel(xhci, command, &xhci->op_regs->command);
-	if (handshake(xhci, &xhci->op_regs->status, STS_SAVE, 0, 10*100)) {
-		xhci_warn(xhci, "WARN: xHC CMD_CSS timeout\n");
+	if (handshake(xhci, &xhci->op_regs->status, STS_SAVE, 0, 10 * 1000)) {
+		xhci_warn(xhci, "WARN: xHC save state timeout\n");
 		spin_unlock_irq(&xhci->lock);
 		return -ETIMEDOUT;
 	}

@@ -848 +848 @@
 		command |= CMD_CRS;
 		xhci_writel(xhci, command, &xhci->op_regs->command);
 		if (handshake(xhci, &xhci->op_regs->status,
-			      STS_RESTORE, 0, 10*100)) {
-			xhci_dbg(xhci, "WARN: xHC CMD_CSS timeout\n");
+			      STS_RESTORE, 0, 10 * 1000)) {
+			xhci_warn(xhci, "WARN: xHC restore state timeout\n");
 			spin_unlock_irq(&xhci->lock);
 			return -ETIMEDOUT;
 		}

@@ -3906 +3906 @@
 	default:
 		dev_warn(&udev->dev, "%s: Can't get timeout for non-U1 or U2 state.\n",
 				__func__);
-		return -EINVAL;
+		return USB3_LPM_DISABLED;
 	}
 
 	if (sel <= max_sel_pel && pel <= max_sel_pel)
@@ -784 +784 @@
 #define RTSYSTEMS_VID		0x2100	/* Vendor ID */
 #define RTSYSTEMS_SERIAL_VX7_PID	0x9e52	/* Serial converter for VX-7 Radios using FT232RL */
 #define RTSYSTEMS_CT29B_PID	0x9e54	/* CT29B Radio Cable */
+#define RTSYSTEMS_RTS01_PID	0x9e57	/* USB-RTS01 Radio Cable */
 
 
 /*
drivers/usb/serial/generic.c | +2 -8

@@ -39 +39 @@
 
 static struct usb_device_id generic_device_ids[2]; /* Initially all zeroes. */
 
-/* we want to look at all devices, as the vendor/product id can change
- * depending on the command line argument */
-static const struct usb_device_id generic_serial_ids[] = {
-	{.driver_info = 42},
-	{}
-};
-
 /* All of the device info needed for the Generic Serial Converter */
 struct usb_serial_driver usb_serial_generic_device = {
 	.driver = {

@@ -72 +79 @@
 		USB_DEVICE_ID_MATCH_VENDOR | USB_DEVICE_ID_MATCH_PRODUCT;
 
 	/* register our generic driver with ourselves */
-	retval = usb_serial_register_drivers(serial_drivers, "usbserial_generic", generic_serial_ids);
+	retval = usb_serial_register_drivers(serial_drivers,
+			"usbserial_generic", generic_device_ids);
 #endif
 	return retval;
 }
@@ -294 +294 @@
 	{ USB_DEVICE(0x1199, 0x68A3), 	/* Sierra Wireless Direct IP modems */
 	  .driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist
 	},
+	/* AT&T Direct IP LTE modems */
+	{ USB_DEVICE_AND_INTERFACE_INFO(0x0F3D, 0x68AA, 0xFF, 0xFF, 0xFF),
+	  .driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist
+	},
 	{ USB_DEVICE(0x0f3d, 0x68A3), 	/* Airprime/Sierra Wireless Direct IP modems */
 	  .driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist
 	},
drivers/usb/serial/usb-serial.c | +7 -5

@@ -659 +659 @@
 static struct usb_serial_driver *search_serial_device(
 					struct usb_interface *iface)
 {
-	const struct usb_device_id *id;
+	const struct usb_device_id *id = NULL;
 	struct usb_serial_driver *drv;
+	struct usb_driver *driver = to_usb_driver(iface->dev.driver);
 
 	/* Check if the usb id matches a known device */
 	list_for_each_entry(drv, &usb_serial_driver_list, driver_list) {
-		id = get_iface_id(drv, iface);
+		if (drv->usb_driver == driver)
+			id = get_iface_id(drv, iface);
 		if (id)
 			return drv;
 	}

@@ -757 +755 @@
 
 	if (retval) {
 		dbg("sub driver rejected device");
-		kfree(serial);
+		usb_serial_put(serial);
 		module_put(type->driver.owner);
 		return retval;
 	}

@@ -829 +827 @@
 	 */
 	if (num_bulk_in == 0 || num_bulk_out == 0) {
 		dev_info(&interface->dev, "PL-2303 hack: descriptors matched but endpoints did not\n");
-		kfree(serial);
+		usb_serial_put(serial);
 		module_put(type->driver.owner);
 		return -ENODEV;
 	}

@@ -843 +841 @@
 		if (num_ports == 0) {
 			dev_err(&interface->dev,
 			    "Generic device with no bulk out, not allowed.\n");
-			kfree(serial);
+			usb_serial_put(serial);
 			module_put(type->driver.owner);
 			return -EIO;
 		}
drivers/usb/storage/unusual_devs.h | +7

@@ -1107 +1107 @@
 		USB_SC_RBC, USB_PR_BULK, NULL,
 		0 ),
 
+/* Feiya QDI U2 DISK, reported by Hans de Goede <hdegoede@redhat.com> */
+UNUSUAL_DEV(  0x090c, 0x1000, 0x0000, 0xffff,
+		"Feiya",
+		"QDI U2 DISK",
+		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+		US_FL_NO_READ_CAPACITY_16 ),
+
 /* aeb */
 UNUSUAL_DEV( 0x090c, 0x1132, 0x0000, 0xffff,
 		"Feiya",
drivers/video/backlight/Kconfig | +1 -1

@@ -88 +88 @@
 
 config LCD_TOSA
 	tristate "Sharp SL-6000 LCD Driver"
-	depends on SPI && MACH_TOSA
+	depends on I2C && SPI && MACH_TOSA
 	help
 	  If you have an Sharp SL-6000 Zaurus say Y to enable a driver
 	  for its LCD.
@@ -224 +224 @@
 	  big letters. It fits between the sun 12x22 and the normal 8x16 font.
 	  If other fonts are too big or too small for you, say Y, otherwise say N.
 
+config FONT_AUTOSELECT
+	def_bool y
+	depends on FRAMEBUFFER_CONSOLE || SGI_NEWPORT_CONSOLE || STI_CONSOLE || USB_SISUSBVGA_CON
+	depends on !FONT_8x8
+	depends on !FONT_6x11
+	depends on !FONT_7x14
+	depends on !FONT_PEARL_8x8
+	depends on !FONT_ACORN_8x8
+	depends on !FONT_MINI_4x6
+	depends on !FONT_SUN8x16
+	depends on !FONT_SUN12x22
+	depends on !FONT_10x18
+	select FONT_8x16
+
 endmenu
@@ -124 +124 @@
 /* Used for drop dead root */
 void btrfs_kill_all_delayed_nodes(struct btrfs_root *root);
 
+/* Used for clean the transaction */
+void btrfs_destroy_delayed_inodes(struct btrfs_root *root);
+
 /* Used for readdir() */
 void btrfs_get_delayed_items(struct inode *inode, struct list_head *ins_list,
 			     struct list_head *del_list);
fs/btrfs/disk-io.c | +48 -28

@@ -44 +44 @@
 #include "free-space-cache.h"
 #include "inode-map.h"
 #include "check-integrity.h"
+#include "rcu-string.h"
 
 static struct extent_io_ops btree_extent_io_ops;
 static void end_workqueue_fn(struct btrfs_work *work);

@@ -2119 +2118 @@
 
 	features = btrfs_super_incompat_flags(disk_super);
 	features |= BTRFS_FEATURE_INCOMPAT_MIXED_BACKREF;
-	if (tree_root->fs_info->compress_type & BTRFS_COMPRESS_LZO)
+	if (tree_root->fs_info->compress_type == BTRFS_COMPRESS_LZO)
 		features |= BTRFS_FEATURE_INCOMPAT_COMPRESS_LZO;
 
 	/*

@@ -2576 +2575 @@
 	struct btrfs_device *device = (struct btrfs_device *)
 		bh->b_private;
 
-	printk_ratelimited(KERN_WARNING "lost page write due to "
-			   "I/O error on %s\n", device->name);
+	printk_ratelimited_in_rcu(KERN_WARNING "lost page write due to "
+				  "I/O error on %s\n",
+				  rcu_str_deref(device->name));
 	/* note, we dont' set_buffer_write_io_error because we have
 	 * our own ways of dealing with the IO errors
 	 */

@@ -2751 +2749 @@
 	wait_for_completion(&device->flush_wait);
 
 	if (bio_flagged(bio, BIO_EOPNOTSUPP)) {
-		printk("btrfs: disabling barriers on dev %s\n",
-		       device->name);
+		printk_in_rcu("btrfs: disabling barriers on dev %s\n",
+			      rcu_str_deref(device->name));
 		device->nobarriers = 1;
 	}
 	if (!bio_flagged(bio, BIO_UPTODATE)) {

@@ -3402 +3400 @@
 
 	delayed_refs = &trans->delayed_refs;
 
-again:
 	spin_lock(&delayed_refs->lock);
 	if (delayed_refs->num_entries == 0) {
 		spin_unlock(&delayed_refs->lock);
 		return ret;
 	}
 
-	node = rb_first(&delayed_refs->root);
-	while (node) {
+	while ((node = rb_first(&delayed_refs->root)) != NULL) {
 		ref = rb_entry(node, struct btrfs_delayed_ref_node, rb_node);
-		node = rb_next(node);
-
-		ref->in_tree = 0;
-		rb_erase(&ref->rb_node, &delayed_refs->root);
-		delayed_refs->num_entries--;
 
 		atomic_set(&ref->refs, 1);
 		if (btrfs_delayed_ref_is_head(ref)) {
 			struct btrfs_delayed_ref_head *head;
 
 			head = btrfs_delayed_node_to_head(ref);
-			spin_unlock(&delayed_refs->lock);
-			mutex_lock(&head->mutex);
+			if (!mutex_trylock(&head->mutex)) {
+				atomic_inc(&ref->refs);
+				spin_unlock(&delayed_refs->lock);
+
+				/* Need to wait for the delayed ref to run */
+				mutex_lock(&head->mutex);
+				mutex_unlock(&head->mutex);
+				btrfs_put_delayed_ref(ref);
+
+				continue;
+			}
+
 			kfree(head->extent_op);
 			delayed_refs->num_heads--;
 			if (list_empty(&head->cluster))
 				delayed_refs->num_heads_ready--;
 			list_del_init(&head->cluster);
-			mutex_unlock(&head->mutex);
-			btrfs_put_delayed_ref(ref);
-			goto again;
 		}
+		ref->in_tree = 0;
+		rb_erase(&ref->rb_node, &delayed_refs->root);
+		delayed_refs->num_entries--;
+
 		spin_unlock(&delayed_refs->lock);
 		btrfs_put_delayed_ref(ref);
 

@@ -3526 +3520 @@
 		       &(&BTRFS_I(page->mapping->host)->io_tree)->buffer,
 		       offset >> PAGE_CACHE_SHIFT);
 		spin_unlock(&dirty_pages->buffer_lock);
-		if (eb) {
+		if (eb)
 			ret = test_and_clear_bit(EXTENT_BUFFER_DIRTY,
 						 &eb->bflags);
-			atomic_set(&eb->refs, 1);
-		}
 		if (PageWriteback(page))
 			end_page_writeback(page);
 

@@ -3542 +3538 @@
 			spin_unlock_irq(&page->mapping->tree_lock);
 		}
 
-		page->mapping->a_ops->invalidatepage(page, 0);
 		unlock_page(page);
+		page_cache_release(page);
 	}
 }
 

@@ -3557 +3553 @@
 	u64 start;
 	u64 end;
 	int ret;
+	bool loop = true;
 
 	unpin = pinned_extents;
+again:
 	while (1) {
 		ret = find_first_extent_bit(unpin, 0, &start, &end,
 					    EXTENT_DIRTY);

@@ -3578 +3572 @@
 		cond_resched();
 	}
 
+	if (loop) {
+		if (unpin == &root->fs_info->freed_extents[0])
+			unpin = &root->fs_info->freed_extents[1];
+		else
+			unpin = &root->fs_info->freed_extents[0];
+		loop = false;
+		goto again;
+	}
+
 	return 0;
 }
 

@@ -3600 +3585 @@
 	/* FIXME: cleanup wait for commit */
 	cur_trans->in_commit = 1;
 	cur_trans->blocked = 1;
-	if (waitqueue_active(&root->fs_info->transaction_blocked_wait))
-		wake_up(&root->fs_info->transaction_blocked_wait);
+	wake_up(&root->fs_info->transaction_blocked_wait);
 
 	cur_trans->blocked = 0;
-	if (waitqueue_active(&root->fs_info->transaction_wait))
-		wake_up(&root->fs_info->transaction_wait);
+	wake_up(&root->fs_info->transaction_wait);
 
 	cur_trans->commit_done = 1;
-	if (waitqueue_active(&cur_trans->commit_wait))
-		wake_up(&cur_trans->commit_wait);
+	wake_up(&cur_trans->commit_wait);
+
+	btrfs_destroy_delayed_inodes(root);
+	btrfs_assert_delayed_root_empty(root);
 
 	btrfs_destroy_pending_snapshots(cur_trans);
 
 	btrfs_destroy_marked_extents(root, &cur_trans->dirty_pages,
 				     EXTENT_DIRTY);
+	btrfs_destroy_pinned_extent(root,
+				    root->fs_info->pinned_extents);
 
 	/*
 	memset(cur_trans, 0, sizeof(*cur_trans));

@@ -3664 +3647 @@
 		t->commit_done = 1;
 		if (waitqueue_active(&t->commit_wait))
 			wake_up(&t->commit_wait);
+
+		btrfs_destroy_delayed_inodes(root);
+		btrfs_assert_delayed_root_empty(root);
 
 		btrfs_destroy_pending_snapshots(t);
 
fs/btrfs/ioctl.c

···
 #include "locking.h"
 #include "inode-map.h"
 #include "backref.h"
+#include "rcu-string.h"

 /* Mask out flags that are inappropriate for the given type of inode. */
 static inline __u32 btrfs_mask_flags(umode_t mode, __u32 flags)
···
     return -ENOENT;
 }

-/*
- * Validaty check of prev em and next em:
- * 1) no prev/next em
- * 2) prev/next em is an hole/inline extent
- */
-static int check_adjacent_extents(struct inode *inode, struct extent_map *em)
+static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start)
 {
     struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
-    struct extent_map *prev = NULL, *next = NULL;
-    int ret = 0;
-
-    read_lock(&em_tree->lock);
-    prev = lookup_extent_mapping(em_tree, em->start - 1, (u64)-1);
-    next = lookup_extent_mapping(em_tree, em->start + em->len, (u64)-1);
-    read_unlock(&em_tree->lock);
-
-    if ((!prev || prev->block_start >= EXTENT_MAP_LAST_BYTE) &&
-        (!next || next->block_start >= EXTENT_MAP_LAST_BYTE))
-        ret = 1;
-    free_extent_map(prev);
-    free_extent_map(next);
-
-    return ret;
-}
-
-static int should_defrag_range(struct inode *inode, u64 start, u64 len,
-                   int thresh, u64 *last_len, u64 *skip,
-                   u64 *defrag_end)
-{
     struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
-    struct extent_map *em = NULL;
-    struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
-    int ret = 1;
-
-    /*
-     * make sure that once we start defragging an extent, we keep on
-     * defragging it
-     */
-    if (start < *defrag_end)
-        return 1;
-
-    *skip = 0;
+    struct extent_map *em;
+    u64 len = PAGE_CACHE_SIZE;

     /*
      * hopefully we have this extent in the tree already, try without
···
         unlock_extent(io_tree, start, start + len - 1);

         if (IS_ERR(em))
-            return 0;
+            return NULL;
     }
+
+    return em;
+}
+
+static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em)
+{
+    struct extent_map *next;
+    bool ret = true;
+
+    /* this is the last extent */
+    if (em->start + em->len >= i_size_read(inode))
+        return false;
+
+    next = defrag_lookup_extent(inode, em->start + em->len);
+    if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
+        ret = false;
+
+    free_extent_map(next);
+    return ret;
+}
+
+static int should_defrag_range(struct inode *inode, u64 start, int thresh,
+                   u64 *last_len, u64 *skip, u64 *defrag_end)
+{
+    struct extent_map *em;
+    int ret = 1;
+    bool next_mergeable = true;
+
+    /*
+     * make sure that once we start defragging an extent, we keep on
+     * defragging it
+     */
+    if (start < *defrag_end)
+        return 1;
+
+    *skip = 0;
+
+    em = defrag_lookup_extent(inode, start);
+    if (!em)
+        return 0;

     /* this will cover holes, and inline extents */
     if (em->block_start >= EXTENT_MAP_LAST_BYTE) {
···
         goto out;
     }

-    /* If we have nothing to merge with us, just skip. */
-    if (check_adjacent_extents(inode, em)) {
-        ret = 0;
-        goto out;
-    }
+    next_mergeable = defrag_check_next_extent(inode, em);

     /*
-     * we hit a real extent, if it is big don't bother defragging it again
+     * we hit a real extent, if it is big or the next extent is not a
+     * real extent, don't bother defragging it
      */
-    if ((*last_len == 0 || *last_len >= thresh) && em->len >= thresh)
+    if ((*last_len == 0 || *last_len >= thresh) &&
+        (em->len >= thresh || !next_mergeable))
         ret = 0;
-
 out:
     /*
      * last_len ends up being a counter of how many bytes we've defragged.
···
             break;

         if (!should_defrag_range(inode, (u64)i << PAGE_CACHE_SHIFT,
-                     PAGE_CACHE_SIZE, extent_thresh,
-                     &last_len, &skip, &defrag_end)) {
+                     extent_thresh, &last_len, &skip,
+                     &defrag_end)) {
             unsigned long next;
             /*
              * the should_defrag function tells us how much to skip
···
         ret = -EINVAL;
         goto out_free;
     }
+    if (device->fs_devices && device->fs_devices->seeding) {
+        printk(KERN_INFO "btrfs: resizer unable to apply on "
+               "seeding device %llu\n",
+               (unsigned long long)devid);
+        ret = -EINVAL;
+        goto out_free;
+    }
+
     if (!strcmp(sizestr, "max"))
         new_size = device->bdev->bd_inode->i_size;
     else {
···
     do_div(new_size, root->sectorsize);
     new_size *= root->sectorsize;

-    printk(KERN_INFO "btrfs: new size for %s is %llu\n",
-        device->name, (unsigned long long)new_size);
+    printk_in_rcu(KERN_INFO "btrfs: new size for %s is %llu\n",
+              rcu_str_deref(device->name),
+              (unsigned long long)new_size);

     if (new_size > old_size) {
         trans = btrfs_start_transaction(root, 0);
···
     di_args->total_bytes = dev->total_bytes;
     memcpy(di_args->uuid, dev->uuid, sizeof(di_args->uuid));
     if (dev->name) {
-        strncpy(di_args->path, dev->name, sizeof(di_args->path));
+        struct rcu_string *name;
+
+        rcu_read_lock();
+        name = rcu_dereference(dev->name);
+        strncpy(di_args->path, name->str, sizeof(di_args->path));
+        rcu_read_unlock();
         di_args->path[sizeof(di_args->path) - 1] = 0;
     } else {
         di_args->path[0] = '\0';
fs/btrfs/ordered-data.c  (+21 -1)

···
     /* start IO across the range first to instantiate any delalloc
      * extents
      */
-    filemap_write_and_wait_range(inode->i_mapping, start, orig_end);
+    filemap_fdatawrite_range(inode->i_mapping, start, orig_end);
+
+    /*
+     * So with compression we will find and lock a dirty page and clear the
+     * first one as dirty, setup an async extent, and immediately return
+     * with the entire range locked but with nobody actually marked with
+     * writeback.  So we can't just filemap_write_and_wait_range() and
+     * expect it to work since it will just kick off a thread to do the
+     * actual work.  So we need to call filemap_fdatawrite_range _again_
+     * since it will wait on the page lock, which won't be unlocked until
+     * after the pages have been marked as writeback and so we're good to go
+     * from there.  We have to do this otherwise we'll miss the ordered
+     * extents and that results in badness.  Please Josef, do not think you
+     * know better and pull this out at some point in the future, it is
+     * right and you are wrong.
+     */
+    if (test_bit(BTRFS_INODE_HAS_ASYNC_EXTENT,
+             &BTRFS_I(inode)->runtime_flags))
+        filemap_fdatawrite_range(inode->i_mapping, start, orig_end);
+
+    filemap_fdatawait_range(inode->i_mapping, start, orig_end);

     end = orig_end;
     found = 0;
fs/btrfs/rcu-string.h  (+56, new file)

+/*
+ * Copyright (C) 2012 Red Hat.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+struct rcu_string {
+    struct rcu_head rcu;
+    char str[0];
+};
+
+static inline struct rcu_string *rcu_string_strdup(const char *src, gfp_t mask)
+{
+    size_t len = strlen(src) + 1;
+    struct rcu_string *ret = kzalloc(sizeof(struct rcu_string) +
+                     (len * sizeof(char)), mask);
+    if (!ret)
+        return ret;
+    strncpy(ret->str, src, len);
+    return ret;
+}
+
+static inline void rcu_string_free(struct rcu_string *str)
+{
+    if (str)
+        kfree_rcu(str, rcu);
+}
+
+#define printk_in_rcu(fmt, ...) do {    \
+    rcu_read_lock();                    \
+    printk(fmt, __VA_ARGS__);           \
+    rcu_read_unlock();                  \
+} while (0)
+
+#define printk_ratelimited_in_rcu(fmt, ...) do {    \
+    rcu_read_lock();                                \
+    printk_ratelimited(fmt, __VA_ARGS__);           \
+    rcu_read_unlock();                              \
+} while (0)
+
+#define rcu_str_deref(rcu_str) ({                           \
+    struct rcu_string *__str = rcu_dereference(rcu_str);    \
+    __str->str;                                             \
+})
fs/btrfs/scrub.c  (+18 -12)

···
 #include "backref.h"
 #include "extent_io.h"
 #include "check-integrity.h"
+#include "rcu-string.h"

 /*
  * This is only the first step towards a full-features scrub. It reads all
···
      * hold all of the paths here
      */
     for (i = 0; i < ipath->fspath->elem_cnt; ++i)
-        printk(KERN_WARNING "btrfs: %s at logical %llu on dev "
+        printk_in_rcu(KERN_WARNING "btrfs: %s at logical %llu on dev "
             "%s, sector %llu, root %llu, inode %llu, offset %llu, "
             "length %llu, links %u (path: %s)\n", swarn->errstr,
-            swarn->logical, swarn->dev->name,
+            swarn->logical, rcu_str_deref(swarn->dev->name),
             (unsigned long long)swarn->sector, root, inum, offset,
             min(isize - offset, (u64)PAGE_SIZE), nlink,
             (char *)(unsigned long)ipath->fspath->val[i]);
···
     return 0;

 err:
-    printk(KERN_WARNING "btrfs: %s at logical %llu on dev "
+    printk_in_rcu(KERN_WARNING "btrfs: %s at logical %llu on dev "
         "%s, sector %llu, root %llu, inode %llu, offset %llu: path "
         "resolving failed with ret=%d\n", swarn->errstr,
-        swarn->logical, swarn->dev->name,
+        swarn->logical, rcu_str_deref(swarn->dev->name),
         (unsigned long long)swarn->sector, root, inum, offset, ret);

     free_ipath(ipath);
···
     do {
         ret = tree_backref_for_extent(&ptr, eb, ei, item_size,
                         &ref_root, &ref_level);
-        printk(KERN_WARNING
+        printk_in_rcu(KERN_WARNING
             "btrfs: %s at logical %llu on dev %s, "
             "sector %llu: metadata %s (level %d) in tree "
-            "%llu\n", errstr, swarn.logical, dev->name,
+            "%llu\n", errstr, swarn.logical,
+            rcu_str_deref(dev->name),
             (unsigned long long)swarn.sector,
             ref_level ? "node" : "leaf",
             ret < 0 ? -1 : ref_level,
···
         spin_lock(&sdev->stat_lock);
         ++sdev->stat.uncorrectable_errors;
         spin_unlock(&sdev->stat_lock);
-        printk_ratelimited(KERN_ERR
+
+        printk_ratelimited_in_rcu(KERN_ERR
             "btrfs: unable to fixup (nodatasum) error at logical %llu on dev %s\n",
-            (unsigned long long)fixup->logical, sdev->dev->name);
+            (unsigned long long)fixup->logical,
+            rcu_str_deref(sdev->dev->name));
     }

     btrfs_free_path(path);
···
             spin_lock(&sdev->stat_lock);
             sdev->stat.corrected_errors++;
             spin_unlock(&sdev->stat_lock);
-            printk_ratelimited(KERN_ERR
+            printk_ratelimited_in_rcu(KERN_ERR
                 "btrfs: fixed up error at logical %llu on dev %s\n",
-                (unsigned long long)logical, sdev->dev->name);
+                (unsigned long long)logical,
+                rcu_str_deref(sdev->dev->name));
         }
     } else {
 did_not_correct_error:
         spin_lock(&sdev->stat_lock);
         sdev->stat.uncorrectable_errors++;
         spin_unlock(&sdev->stat_lock);
-        printk_ratelimited(KERN_ERR
+        printk_ratelimited_in_rcu(KERN_ERR
             "btrfs: unable to fixup (regular) error at logical %llu on dev %s\n",
-            (unsigned long long)logical, sdev->dev->name);
+            (unsigned long long)logical,
+            rcu_str_deref(sdev->dev->name));
     }

 out:
fs/btrfs/volumes.c

···
 #include "volumes.h"
 #include "async-thread.h"
 #include "check-integrity.h"
+#include "rcu-string.h"

 static int init_first_rw_device(struct btrfs_trans_handle *trans,
                 struct btrfs_root *root,
···
         device = list_entry(fs_devices->devices.next,
                     struct btrfs_device, dev_list);
         list_del(&device->dev_list);
-        kfree(device->name);
+        rcu_string_free(device->name);
         kfree(device);
     }
     kfree(fs_devices);
···
 {
     struct btrfs_device *device;
     struct btrfs_fs_devices *fs_devices;
+    struct rcu_string *name;
     u64 found_transid = btrfs_super_generation(disk_super);
-    char *name;

     fs_devices = find_fsid(disk_super->fsid);
     if (!fs_devices) {
···
         memcpy(device->uuid, disk_super->dev_item.uuid,
                BTRFS_UUID_SIZE);
         spin_lock_init(&device->io_lock);
-        device->name = kstrdup(path, GFP_NOFS);
-        if (!device->name) {
+
+        name = rcu_string_strdup(path, GFP_NOFS);
+        if (!name) {
             kfree(device);
             return -ENOMEM;
         }
+        rcu_assign_pointer(device->name, name);
         INIT_LIST_HEAD(&device->dev_alloc_list);

         /* init readahead state */
···

         device->fs_devices = fs_devices;
         fs_devices->num_devices++;
-    } else if (!device->name || strcmp(device->name, path)) {
-        name = kstrdup(path, GFP_NOFS);
+    } else if (!device->name || strcmp(device->name->str, path)) {
+        name = rcu_string_strdup(path, GFP_NOFS);
         if (!name)
             return -ENOMEM;
-        kfree(device->name);
-        device->name = name;
+        rcu_string_free(device->name);
+        rcu_assign_pointer(device->name, name);
         if (device->missing) {
             fs_devices->missing_devices--;
             device->missing = 0;
···

     /* We have held the volume lock, it is safe to get the devices. */
     list_for_each_entry(orig_dev, &orig->devices, dev_list) {
+        struct rcu_string *name;
+
         device = kzalloc(sizeof(*device), GFP_NOFS);
         if (!device)
             goto error;

-        device->name = kstrdup(orig_dev->name, GFP_NOFS);
-        if (!device->name) {
+        /*
+         * This is ok to do without rcu read locked because we hold the
+         * uuid mutex so nothing we touch in here is going to disappear.
+         */
+        name = rcu_string_strdup(orig_dev->name->str, GFP_NOFS);
+        if (!name) {
             kfree(device);
             goto error;
         }
+        rcu_assign_pointer(device->name, name);

         device->devid = orig_dev->devid;
         device->work.func = pending_bios_fn;
···
         }
         list_del_init(&device->dev_list);
         fs_devices->num_devices--;
-        kfree(device->name);
+        rcu_string_free(device->name);
         kfree(device);
     }

···
     if (device->bdev)
         blkdev_put(device->bdev, device->mode);

-    kfree(device->name);
+    rcu_string_free(device->name);
     kfree(device);
 }
···
     mutex_lock(&fs_devices->device_list_mutex);
     list_for_each_entry(device, &fs_devices->devices, dev_list) {
         struct btrfs_device *new_device;
+        struct rcu_string *name;

         if (device->bdev)
             fs_devices->open_devices--;
···
         new_device = kmalloc(sizeof(*new_device), GFP_NOFS);
         BUG_ON(!new_device); /* -ENOMEM */
         memcpy(new_device, device, sizeof(*new_device));
-        new_device->name = kstrdup(device->name, GFP_NOFS);
-        BUG_ON(device->name && !new_device->name); /* -ENOMEM */
+
+        /* Safe because we are under uuid_mutex */
+        name = rcu_string_strdup(device->name->str, GFP_NOFS);
+        BUG_ON(device->name && !name); /* -ENOMEM */
+        rcu_assign_pointer(new_device->name, name);
         new_device->bdev = NULL;
         new_device->writeable = 0;
         new_device->in_fs_metadata = 0;
···
         if (!device->name)
             continue;

-        bdev = blkdev_get_by_path(device->name, flags, holder);
+        bdev = blkdev_get_by_path(device->name->str, flags, holder);
         if (IS_ERR(bdev)) {
-            printk(KERN_INFO "open %s failed\n", device->name);
+            printk(KERN_INFO "open %s failed\n", device->name->str);
             goto error;
         }
         filemap_write_and_wait(bdev->bd_inode->i_mapping);
···
     struct block_device *bdev;
     struct list_head *devices;
     struct super_block *sb = root->fs_info->sb;
+    struct rcu_string *name;
     u64 total_bytes;
     int seeding_dev = 0;
     int ret = 0;
···
         goto error;
     }

-    device->name = kstrdup(device_path, GFP_NOFS);
-    if (!device->name) {
+    name = rcu_string_strdup(device_path, GFP_NOFS);
+    if (!name) {
         kfree(device);
         ret = -ENOMEM;
         goto error;
     }
+    rcu_assign_pointer(device->name, name);

     ret = find_next_devid(root, &device->devid);
     if (ret) {
-        kfree(device->name);
+        rcu_string_free(device->name);
         kfree(device);
         goto error;
     }

     trans = btrfs_start_transaction(root, 0);
     if (IS_ERR(trans)) {
-        kfree(device->name);
+        rcu_string_free(device->name);
         kfree(device);
         ret = PTR_ERR(trans);
         goto error;
···
     unlock_chunks(root);
     btrfs_abort_transaction(trans, root, ret);
     btrfs_end_transaction(trans, root);
-    kfree(device->name);
+    rcu_string_free(device->name);
     kfree(device);
 error:
     blkdev_put(bdev, FMODE_EXCL);
···
         bio->bi_sector = bbio->stripes[dev_nr].physical >> 9;
         dev = bbio->stripes[dev_nr].dev;
         if (dev && dev->bdev && (rw != WRITE || dev->writeable)) {
+#ifdef DEBUG
+            struct rcu_string *name;
+
+            rcu_read_lock();
+            name = rcu_dereference(dev->name);
             pr_debug("btrfs_map_bio: rw %d, secor=%llu, dev=%lu "
                  "(%s id %llu), size=%u\n", rw,
                  (u64)bio->bi_sector, (u_long)dev->bdev->bd_dev,
-                 dev->name, dev->devid, bio->bi_size);
+                 name->str, dev->devid, bio->bi_size);
+            rcu_read_unlock();
+#endif
             bio->bi_bdev = dev->bdev;
             if (async_submit)
                 schedule_bio(root, dev, rw, bio);
···
     key.offset = device->devid;
     ret = btrfs_search_slot(NULL, dev_root, &key, path, 0, 0);
     if (ret) {
-        printk(KERN_WARNING "btrfs: no dev_stats entry found for device %s (devid %llu) (OK on first mount after mkfs)\n",
-               device->name, (unsigned long long)device->devid);
+        printk_in_rcu(KERN_WARNING "btrfs: no dev_stats entry found for device %s (devid %llu) (OK on first mount after mkfs)\n",
+                  rcu_str_deref(device->name),
+                  (unsigned long long)device->devid);
         __btrfs_reset_dev_stats(device);
         device->dev_stats_valid = 1;
         btrfs_release_path(path);
···
     BUG_ON(!path);
     ret = btrfs_search_slot(trans, dev_root, &key, path, -1, 1);
     if (ret < 0) {
-        printk(KERN_WARNING "btrfs: error %d while searching for dev_stats item for device %s!\n",
-               ret, device->name);
+        printk_in_rcu(KERN_WARNING "btrfs: error %d while searching for dev_stats item for device %s!\n",
+                  ret, rcu_str_deref(device->name));
         goto out;
     }

···
         /* need to delete old one and insert a new one */
         ret = btrfs_del_item(trans, dev_root, path);
         if (ret != 0) {
-            printk(KERN_WARNING "btrfs: delete too small dev_stats item for device %s failed %d!\n",
-                   device->name, ret);
+            printk_in_rcu(KERN_WARNING "btrfs: delete too small dev_stats item for device %s failed %d!\n",
+                      rcu_str_deref(device->name), ret);
             goto out;
         }
         ret = 1;
···
         ret = btrfs_insert_empty_item(trans, dev_root, path,
                           &key, sizeof(*ptr));
         if (ret < 0) {
-            printk(KERN_WARNING "btrfs: insert dev_stats item for device %s failed %d!\n",
-                   device->name, ret);
+            printk_in_rcu(KERN_WARNING "btrfs: insert dev_stats item for device %s failed %d!\n",
+                      rcu_str_deref(device->name), ret);
             goto out;
         }
     }
···
 {
     if (!dev->dev_stats_valid)
         return;
-    printk_ratelimited(KERN_ERR
+    printk_ratelimited_in_rcu(KERN_ERR
         "btrfs: bdev %s errs: wr %u, rd %u, flush %u, corrupt %u, gen %u\n",
-        dev->name,
+        rcu_str_deref(dev->name),
         btrfs_dev_stat_read(dev, BTRFS_DEV_STAT_WRITE_ERRS),
         btrfs_dev_stat_read(dev, BTRFS_DEV_STAT_READ_ERRS),
         btrfs_dev_stat_read(dev, BTRFS_DEV_STAT_FLUSH_ERRS),
···

 static void btrfs_dev_stat_print_on_load(struct btrfs_device *dev)
 {
-    printk(KERN_INFO "btrfs: bdev %s errs: wr %u, rd %u, flush %u, corrupt %u, gen %u\n",
-           dev->name,
+    printk_in_rcu(KERN_INFO "btrfs: bdev %s errs: wr %u, rd %u, flush %u, corrupt %u, gen %u\n",
+              rcu_str_deref(dev->name),
         btrfs_dev_stat_read(dev, BTRFS_DEV_STAT_WRITE_ERRS),
         btrfs_dev_stat_read(dev, BTRFS_DEV_STAT_READ_ERRS),
         btrfs_dev_stat_read(dev, BTRFS_DEV_STAT_FLUSH_ERRS),
fs/btrfs/volumes.h  (+1 -1)

···
     /* the mode sent to blkdev_get */
     fmode_t mode;

-    char *name;
+    struct rcu_string *name;

     /* the internal btrfs device id */
     u64 devid;
fs/dcache.c  (+10 -6)

···
 /**
  * d_find_alias - grab a hashed alias of inode
  * @inode: inode in question
+ * @want_discon:  flag, used by d_splice_alias, to request
+ *          that only a DISCONNECTED alias be returned.
  *
  * If inode has a hashed alias, or is a directory and has any alias,
  * acquire the reference to alias and return it. Otherwise return NULL.
···
  * of a filesystem.
  *
  * If the inode has an IS_ROOT, DCACHE_DISCONNECTED alias, then prefer
- * any other hashed alias over that.
+ * any other hashed alias over that one unless @want_discon is set,
+ * in which case only return an IS_ROOT, DCACHE_DISCONNECTED alias.
  */
-static struct dentry *__d_find_alias(struct inode *inode)
+static struct dentry *__d_find_alias(struct inode *inode, int want_discon)
 {
     struct dentry *alias, *discon_alias;

···
         if (IS_ROOT(alias) &&
             (alias->d_flags & DCACHE_DISCONNECTED)) {
             discon_alias = alias;
-        } else {
+        } else if (!want_discon) {
             __dget_dlock(alias);
             spin_unlock(&alias->d_lock);
             return alias;
···

     if (!list_empty(&inode->i_dentry)) {
         spin_lock(&inode->i_lock);
-        de = __d_find_alias(inode);
+        de = __d_find_alias(inode, 0);
         spin_unlock(&inode->i_lock);
     }
     return de;
···

     if (inode && S_ISDIR(inode->i_mode)) {
         spin_lock(&inode->i_lock);
-        new = __d_find_any_alias(inode);
+        new = __d_find_alias(inode, 1);
         if (new) {
+            BUG_ON(!(new->d_flags & DCACHE_DISCONNECTED));
             spin_unlock(&inode->i_lock);
             security_d_instantiate(new, inode);
             d_move(new, dentry);
···
     struct dentry *alias;

     /* Does an aliased dentry already exist? */
-    alias = __d_find_alias(inode);
+    alias = __d_find_alias(inode, 0);
     if (alias) {
         actual = alias;
         write_seqlock(&rename_lock);
fs/fs-writeback.c

···
             /* Wait for I_SYNC.  This function drops i_lock... */
             inode_sleep_on_writeback(inode);
             /* Inode may be gone, start again */
+            spin_lock(&wb->list_lock);
             continue;
         }
         inode->i_state |= I_SYNC;
fs/nfs/callback.c  (+5 -6)

···
 #include <linux/kthread.h>
 #include <linux/sunrpc/svcauth_gss.h>
 #include <linux/sunrpc/bc_xprt.h>
-#include <linux/nsproxy.h>

 #include <net/inet_sock.h>

···
 {
     int ret;

-    ret = svc_create_xprt(serv, "tcp", xprt->xprt_net, PF_INET,
+    ret = svc_create_xprt(serv, "tcp", &init_net, PF_INET,
                 nfs_callback_set_tcpport, SVC_SOCK_ANONYMOUS);
     if (ret <= 0)
         goto out_err;
···
     dprintk("NFS: Callback listener port = %u (af %u)\n",
             nfs_callback_tcpport, PF_INET);

-    ret = svc_create_xprt(serv, "tcp", xprt->xprt_net, PF_INET6,
+    ret = svc_create_xprt(serv, "tcp", &init_net, PF_INET6,
                 nfs_callback_set_tcpport, SVC_SOCK_ANONYMOUS);
     if (ret > 0) {
         nfs_callback_tcpport6 = ret;
···
      * fore channel connection.
      * Returns the input port (0) and sets the svc_serv bc_xprt on success
      */
-    ret = svc_create_xprt(serv, "tcp-bc", xprt->xprt_net, PF_INET, 0,
+    ret = svc_create_xprt(serv, "tcp-bc", &init_net, PF_INET, 0,
                   SVC_SOCK_ANONYMOUS);
     if (ret < 0) {
         rqstp = ERR_PTR(ret);
···
     char svc_name[12];
     int ret = 0;
     int minorversion_setup;
-    struct net *net = current->nsproxy->net_ns;
+    struct net *net = &init_net;

     mutex_lock(&nfs_callback_mutex);
     if (cb_info->users++ || cb_info->task != NULL) {
···
     cb_info->users--;
     if (cb_info->users == 0 && cb_info->task != NULL) {
         kthread_stop(cb_info->task);
-        svc_shutdown_net(cb_info->serv, current->nsproxy->net_ns);
+        svc_shutdown_net(cb_info->serv, &init_net);
         svc_exit_thread(cb_info->rqst);
         cb_info->serv = NULL;
         cb_info->rqst = NULL;
fs/nfs/callback_xdr.c  (+4 -4)

···
     args->csa_nrclists = ntohl(*p++);
     args->csa_rclists = NULL;
     if (args->csa_nrclists) {
-        args->csa_rclists = kmalloc(args->csa_nrclists *
-                        sizeof(*args->csa_rclists),
-                        GFP_KERNEL);
+        args->csa_rclists = kmalloc_array(args->csa_nrclists,
+                          sizeof(*args->csa_rclists),
+                          GFP_KERNEL);
         if (unlikely(args->csa_rclists == NULL))
             goto out;

···
                   const struct cb_sequenceres *res)
 {
     __be32 *p;
-    unsigned status = res->csr_status;
+    __be32 status = res->csr_status;

     if (unlikely(status != 0))
         goto out;
fs/nfs/client.c  (-2)

···

     smp_rmb();

-    BUG_ON(clp->cl_cons_state != NFS_CS_READY);
-
     dprintk("<-- %s found nfs_client %p for %s\n",
         __func__, clp, cl_init->hostname ?: "");
     return clp;
fs/nfs/direct.c  (+4 -4)

···
         nfs_list_remove_request(req);
         if (dreq->flags == NFS_ODIRECT_RESCHED_WRITES) {
             /* Note the rewrite will go through mds */
-            kref_get(&req->wb_kref);
             nfs_mark_request_commit(req, NULL, &cinfo);
-        }
+        } else
+            nfs_release_request(req);
         nfs_unlock_and_release_request(req);
     }

···
         if (dreq->flags == NFS_ODIRECT_RESCHED_WRITES)
             bit = NFS_IOHDR_NEED_RESCHED;
         else if (dreq->flags == 0) {
-            memcpy(&dreq->verf, &req->wb_verf,
+            memcpy(&dreq->verf, hdr->verf,
                    sizeof(dreq->verf));
             bit = NFS_IOHDR_NEED_COMMIT;
             dreq->flags = NFS_ODIRECT_DO_COMMIT;
         } else if (dreq->flags == NFS_ODIRECT_DO_COMMIT) {
-            if (memcmp(&dreq->verf, &req->wb_verf, sizeof(dreq->verf))) {
+            if (memcmp(&dreq->verf, hdr->verf, sizeof(dreq->verf))) {
                 dreq->flags = NFS_ODIRECT_RESCHED_WRITES;
                 bit = NFS_IOHDR_NEED_RESCHED;
             } else
fs/nfs/proc.c

···
         /* Emulate the eof flag, which isn't normally needed in NFSv2
          * as it is guaranteed to always return the file attributes
          */
-        if (data->args.offset + data->args.count >= data->res.fattr->size)
+        if (data->args.offset + data->res.count >= data->res.fattr->size)
             data->res.eof = 1;
     }
     return 0;
include/drm/exynos_drm.h

···
  * A structure for mapping buffer.
  *
  * @handle: a handle to gem object created.
+ * @pad: just padding to be 64-bit aligned.
  * @size: memory size to be mapped.
  * @mapped: having user virtual address mmaped.
  *	- this variable would be filled by exynos gem module
···
  */
 struct drm_exynos_gem_mmap {
     unsigned int handle;
-    unsigned int size;
+    unsigned int pad;
+    uint64_t size;
     uint64_t mapped;
 };

include/linux/i2c-mux-pinctrl.h  (+41, new file)

+/*
+ * i2c-mux-pinctrl platform data
+ *
+ * Copyright (c) 2012, NVIDIA CORPORATION.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _LINUX_I2C_MUX_PINCTRL_H
+#define _LINUX_I2C_MUX_PINCTRL_H
+
+/**
+ * struct i2c_mux_pinctrl_platform_data - Platform data for i2c-mux-pinctrl
+ * @parent_bus_num: Parent I2C bus number
+ * @base_bus_num: Base I2C bus number for the child busses. 0 for dynamic.
+ * @bus_count: Number of child busses. Also the number of elements in
+ *	@pinctrl_states
+ * @pinctrl_states: The names of the pinctrl state to select for each child bus
+ * @pinctrl_state_idle: The pinctrl state to select when no child bus is being
+ *	accessed. If NULL, the most recently used pinctrl state will be left
+ *	selected.
+ */
+struct i2c_mux_pinctrl_platform_data {
+    int parent_bus_num;
+    int base_bus_num;
+    int bus_count;
+    const char **pinctrl_states;
+    const char *pinctrl_state_idle;
+};
+
+#endif
include/linux/moduleparam.h  (+5 -5)

···
  * The ops can have NULL set or get functions.
  */
 #define module_param_cb(name, ops, arg, perm)                       \
-    __module_param_call(MODULE_PARAM_PREFIX, name, ops, arg, perm, 0)
+    __module_param_call(MODULE_PARAM_PREFIX, name, ops, arg, perm, -1)

 /**
  * <level>_param_cb - general callback for a module/cmdline parameter
···
         { (void *)set, (void *)get };                   \
     __module_param_call(MODULE_PARAM_PREFIX,            \
                 name, &__param_ops_##name, arg,         \
-                (perm) + sizeof(__check_old_set_param(set))*0, 0)
+                (perm) + sizeof(__check_old_set_param(set))*0, -1)

 /* We don't get oldget: it's often a new-style param_get_uint, etc. */
 static inline int
···
  */
 #define core_param(name, var, type, perm)               \
     param_check_##type(name, &(var));                   \
-    __module_param_call("", name, &param_ops_##type, &var, perm, 0)
+    __module_param_call("", name, &param_ops_##type, &var, perm, -1)
 #endif /* !MODULE */

 /**
···
         = { len, string };                              \
     __module_param_call(MODULE_PARAM_PREFIX, name,      \
                 &param_ops_string,                      \
-                .str = &__param_string_##name, perm, 0);    \
+                .str = &__param_string_##name, perm, -1);   \
     __MODULE_PARM_TYPE(name, "string")

 /**
···
     __module_param_call(MODULE_PARAM_PREFIX, name,      \
                 &param_array_ops,                       \
                 .arr = &__param_arr_##name,             \
-                perm, 0);                               \
+                perm, -1);                              \
     __MODULE_PARM_TYPE(name, "array of " #type)

 extern struct kernel_param_ops param_array_ops;
include/linux/prctl.h

···
 #define PR_SET_PTRACER 0x59616d61
 # define PR_SET_PTRACER_ANY ((unsigned long)-1)

-#define PR_SET_CHILD_SUBREAPER 36
-#define PR_GET_CHILD_SUBREAPER 37
+#define PR_SET_CHILD_SUBREAPER	36
+#define PR_GET_CHILD_SUBREAPER	37

 /*
  * If no_new_privs is set, then operations that grant new privileges (i.e.
···
  * asking selinux for a specific new context (e.g. with runcon) will result
  * in execve returning -EPERM.
  */
-#define PR_SET_NO_NEW_PRIVS 38
-#define PR_GET_NO_NEW_PRIVS 39
+#define PR_SET_NO_NEW_PRIVS	38
+#define PR_GET_NO_NEW_PRIVS	39
+
+#define PR_GET_TID_ADDRESS	40

 #endif /* _LINUX_PRCTL_H */
include/linux/radix-tree.h  (+4 -1)

···
         iter->index++;
         if (likely(*slot))
             return slot;
-        if (flags & RADIX_TREE_ITER_CONTIG)
+        if (flags & RADIX_TREE_ITER_CONTIG) {
+            /* forbid switching to the next chunk */
+            iter->next_index = 0;
             break;
+        }
     }
 }
 return NULL;
+4-2
include/linux/rcutiny.h
@@ -87,8 +87,9 @@
 
 #ifdef CONFIG_TINY_RCU
 
-static inline int rcu_needs_cpu(int cpu)
+static inline int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies)
 {
+	*delta_jiffies = ULONG_MAX;
 	return 0;
 }
 
@@ -96,8 +97,9 @@
 
 int rcu_preempt_needs_cpu(void);
 
-static inline int rcu_needs_cpu(int cpu)
+static inline int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies)
 {
+	*delta_jiffies = ULONG_MAX;
 	return rcu_preempt_needs_cpu();
 }
 
+1-1
include/linux/rcutree.h
@@ -32,7 +32,7 @@
 
 extern void rcu_init(void);
 extern void rcu_note_context_switch(int cpu);
-extern int rcu_needs_cpu(int cpu);
+extern int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies);
 extern void rcu_cpu_stall_reset(void);
 
 /*
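The new signature turns rcu_needs_cpu() into a combined query: "do you need the CPU right now, and if not, how long until you might?" The caller learns the wakeup delta in the same call, before it programs the per-CPU wakeup time. A user-space sketch of that out-parameter contract (names and delay values are illustrative only, not the kernel implementation):

```c
#include <limits.h>
#include <stdbool.h>

/*
 * Return non-zero if the CPU is still needed now; otherwise store in
 * *delta how many ticks may pass before it is needed again.  Mirrors
 * the shape of the patched rcu_needs_cpu() contract.
 */
int work_needs_cpu(bool have_callbacks, bool in_holdoff, unsigned long *delta)
{
	if (!have_callbacks) {
		*delta = ULONG_MAX;	/* nothing pending: sleep indefinitely */
		return 0;
	}
	if (in_holdoff) {
		*delta = 1;		/* just failed to go idle: retry next tick */
		return 1;
	}
	*delta = 6;			/* cf. RCU_IDLE_GP_DELAY, about one grace period */
	return 0;
}
```

Returning the delta through the call rather than posting a timer afterwards avoids the race described in the rcutree_plugin.h comment below: a timer posted after the wakeup time is set can be delayed until that wakeup, defeating its purpose.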
+12
include/linux/sched.h
@@ -439,5 +439,6 @@
 	/* leave room for more dump flags */
 #define MMF_VM_MERGEABLE	16	/* KSM may merge identical pages */
 #define MMF_VM_HUGEPAGE		17	/* set when VM_HUGEPAGE is set on vma */
+#define MMF_EXE_FILE_CHANGED	18	/* see prctl_set_mm_exe_file() */
 
 #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK)
@@ -876,6 +877,8 @@
 	 * Number of busy cpus in this group.
 	 */
 	atomic_t nr_busy_cpus;
+
+	unsigned long cpumask[0]; /* iteration mask */
 };
 
 struct sched_group {
@@ -898,6 +901,16 @@
 static inline struct cpumask *sched_group_cpus(struct sched_group *sg)
 {
 	return to_cpumask(sg->cpumask);
+}
+
+/*
+ * cpumask masking which cpus in the group are allowed to iterate up the domain
+ * tree.
+ */
+static inline struct cpumask *sched_group_mask(struct sched_group *sg)
+{
+	return to_cpumask(sg->sgp->cpumask);
 }
 
 /**
+5-3
include/linux/swapops.h
@@ -9,13 +9,15 @@
  * get good packing density in that tree, so the index should be dense in
  * the low-order bits.
  *
- * We arrange the `type' and `offset' fields so that `type' is at the five
+ * We arrange the `type' and `offset' fields so that `type' is at the seven
  * high-order bits of the swp_entry_t and `offset' is right-aligned in the
- * remaining bits.
+ * remaining bits.  Although `type' itself needs only five bits, we allow for
+ * shmem/tmpfs to shift it all up a further two bits: see swp_to_radix_entry().
  *
  * swp_entry_t's are *never* stored anywhere in their arch-dependent format.
  */
-#define SWP_TYPE_SHIFT(e)	(sizeof(e.val) * 8 - MAX_SWAPFILES_SHIFT)
+#define SWP_TYPE_SHIFT(e)	((sizeof(e.val) * 8) - \
+			(MAX_SWAPFILES_SHIFT + RADIX_TREE_EXCEPTIONAL_SHIFT))
 #define SWP_OFFSET_MASK(e)	((1UL << SWP_TYPE_SHIFT(e)) - 1)
 
 /*
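The widened shift is plain bit arithmetic and can be checked in user space. The sketch below mirrors the patched macros and the kernel's swp_entry()/swp_type()/swp_offset() helpers (renamed here to avoid claiming the kernel definitions), assuming the 3.5-era constants MAX_SWAPFILES_SHIFT = 5 and RADIX_TREE_EXCEPTIONAL_SHIFT = 2:

```c
#define MAX_SWAPFILES_SHIFT		5
#define RADIX_TREE_EXCEPTIONAL_SHIFT	2

typedef struct { unsigned long val; } swp_entry_t;

/* Same arithmetic as the patched macros: `type' occupies the top seven
 * bits so shmem/tmpfs can later shift the whole value up two more bits. */
#define SWP_TYPE_SHIFT(e)	((sizeof((e).val) * 8) - \
			(MAX_SWAPFILES_SHIFT + RADIX_TREE_EXCEPTIONAL_SHIFT))
#define SWP_OFFSET_MASK(e)	((1UL << SWP_TYPE_SHIFT(e)) - 1)

swp_entry_t mk_swp_entry(unsigned long type, unsigned long offset)
{
	swp_entry_t e;

	e.val = (type << SWP_TYPE_SHIFT(e)) | (offset & SWP_OFFSET_MASK(e));
	return e;
}

unsigned long get_swp_type(swp_entry_t e)   { return e.val >> SWP_TYPE_SHIFT(e); }
unsigned long get_swp_offset(swp_entry_t e) { return e.val & SWP_OFFSET_MASK(e); }
```

On a 64-bit unsigned long the shift comes out to 57, leaving seven high bits for the type and 57 bits of offset; pack/unpack round-trips cleanly.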
@@ -47,6 +47,7 @@
 	 */
 	int (*check_stop_free)(struct se_cmd *);
 	void (*release_cmd)(struct se_cmd *);
+	void (*put_session)(struct se_session *);
 	/*
 	 * Called with spin_lock_bh(struct se_portal_group->session_lock held.
 	 */
+1
include/trace/events/rcu.h
@@ -289,6 +289,7 @@
  * "In holdoff": Nothing to do, holding off after unsuccessful attempt.
  * "Begin holdoff": Attempt failed, don't retry until next jiffy.
  * "Dyntick with callbacks": Entering dyntick-idle despite callbacks.
+ * "Dyntick with lazy callbacks": Entering dyntick-idle w/lazy callbacks.
  * "More callbacks": Still more callbacks, try again to clear them out.
  * "Callbacks drained": All callbacks processed, off to dyntick idle!
  * "Timer": Timer fired to cause CPU to continue processing callbacks.
@@ -393,6 +393,16 @@
 	return sfd->file->f_op->fsync(sfd->file, start, end, datasync);
 }
 
+static long shm_fallocate(struct file *file, int mode, loff_t offset,
+			  loff_t len)
+{
+	struct shm_file_data *sfd = shm_file_data(file);
+
+	if (!sfd->file->f_op->fallocate)
+		return -EOPNOTSUPP;
+	return sfd->file->f_op->fallocate(file, mode, offset, len);
+}
+
 static unsigned long shm_get_unmapped_area(struct file *file,
 	unsigned long addr, unsigned long len, unsigned long pgoff,
 	unsigned long flags)
@@ -410,6 +420,7 @@
 	.get_unmapped_area	= shm_get_unmapped_area,
 #endif
 	.llseek		= noop_llseek,
+	.fallocate	= shm_fallocate,
 };
 
 static const struct file_operations shm_file_operations_huge = {
@@ -418,6 +429,7 @@
 	.release	= shm_release,
 	.get_unmapped_area	= shm_get_unmapped_area,
 	.llseek		= noop_llseek,
+	.fallocate	= shm_fallocate,
 };
 
 int is_file_shm_hugepages(struct file *file)
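shm_fallocate() is a thin forwarder: if the backing file's operations lack .fallocate it reports EOPNOTSUPP, otherwise it delegates. The optional-method pattern in miniature (user-space sketch with hypothetical names, not the kernel types):

```c
#define MY_EOPNOTSUPP 95	/* same numeric value as EOPNOTSUPP on Linux */

struct file_like_ops {
	long (*fallocate)(int mode, long offset, long len);	/* optional */
};

/* Forward to the backing method when present; otherwise report
 * "operation not supported" -- the shape of shm_fallocate() above. */
long wrapped_fallocate(const struct file_like_ops *backing,
		       int mode, long offset, long len)
{
	if (!backing->fallocate)
		return -MY_EOPNOTSUPP;
	return backing->fallocate(mode, offset, len);
}

long fake_fallocate(int mode, long offset, long len)
{
	(void)mode; (void)offset;
	return len;	/* pretend success: report the requested length */
}

const struct file_like_ops no_backing  = { 0 };
const struct file_like_ops has_backing = { fake_fallocate };
```

Checking the function pointer before calling keeps the wrapper safe against backing filesystems that never implemented the operation.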
+14-3
kernel/cgroup.c
@@ -896,10 +896,13 @@
 	mutex_unlock(&cgroup_mutex);
 
 	/*
-	 * Drop the active superblock reference that we took when we
-	 * created the cgroup
+	 * We want to drop the active superblock reference from the
+	 * cgroup creation after all the dentry refs are gone -
+	 * kill_sb gets mighty unhappy otherwise.  Mark
+	 * dentry->d_fsdata with cgroup_diput() to tell
+	 * cgroup_d_release() to call deactivate_super().
 	 */
-	deactivate_super(cgrp->root->sb);
+	dentry->d_fsdata = cgroup_diput;
 
 	/*
 	 * if we're getting rid of the cgroup, refcount should ensure
@@ -923,6 +926,13 @@
 static int cgroup_delete(const struct dentry *d)
 {
 	return 1;
+}
+
+static void cgroup_d_release(struct dentry *dentry)
+{
+	/* did cgroup_diput() tell me to deactivate super? */
+	if (dentry->d_fsdata == cgroup_diput)
+		deactivate_super(dentry->d_sb);
 }
 
 static void remove_dir(struct dentry *d)
@@ -1532,6 +1542,7 @@
 	static const struct dentry_operations cgroup_dops = {
 		.d_iput = cgroup_diput,
 		.d_delete = cgroup_delete,
+		.d_release = cgroup_d_release,
 	};
 
 	struct inode *inode =
@@ -27,7 +27,7 @@
 #define PANIC_TIMER_STEP 100
 #define PANIC_BLINK_SPD 18
 
-int panic_on_oops;
+int panic_on_oops = CONFIG_PANIC_ON_OOPS_VALUE;
 static unsigned long tainted_mask;
 static int pause_on_oops;
 static int pause_on_oops_flag;
@@ -108,14 +108,14 @@
 	 */
 	crash_kexec(NULL);
 
-	kmsg_dump(KMSG_DUMP_PANIC);
-
 	/*
 	 * Note smp_send_stop is the usual smp shutdown function, which
 	 * unfortunately means it may not be hardened to work in a panic
 	 * situation.
 	 */
 	smp_send_stop();
+
+	kmsg_dump(KMSG_DUMP_PANIC);
 
 	atomic_notifier_call_chain(&panic_notifier_list, 0, buf);
@@ -84,6 +84,20 @@
 				    /* Process level is worth LLONG_MAX/2. */
 	int dynticks_nmi_nesting;   /* Track NMI nesting level. */
 	atomic_t dynticks;	    /* Even value for idle, else odd. */
+#ifdef CONFIG_RCU_FAST_NO_HZ
+	int dyntick_drain;	    /* Prepare-for-idle state variable. */
+	unsigned long dyntick_holdoff;
+				    /* No retries for the jiffy of failure. */
+	struct timer_list idle_gp_timer;
+				    /* Wake up CPU sleeping with callbacks. */
+	unsigned long idle_gp_timer_expires;
+				    /* When to wake up CPU (for repost). */
+	bool idle_first_pass;	    /* First pass of attempt to go idle? */
+	unsigned long nonlazy_posted;
+				    /* # times non-lazy CBs posted to CPU. */
+	unsigned long nonlazy_posted_snap;
+				    /* idle-period nonlazy_posted snapshot. */
+#endif /* #ifdef CONFIG_RCU_FAST_NO_HZ */
 };
 
 /* RCU's kthread states for tracing. */
+88-77
kernel/rcutree_plugin.h
@@ -1886,8 +1886,9 @@
  * Because we not have RCU_FAST_NO_HZ, just check whether this CPU needs
  * any flavor of RCU.
  */
-int rcu_needs_cpu(int cpu)
+int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies)
 {
+	*delta_jiffies = ULONG_MAX;
 	return rcu_cpu_has_callbacks(cpu);
 }
 
@@ -1962,41 +1963,6 @@
 #define RCU_IDLE_GP_DELAY 6		/* Roughly one grace period. */
 #define RCU_IDLE_LAZY_GP_DELAY (6 * HZ)	/* Roughly six seconds. */
 
-/* Loop counter for rcu_prepare_for_idle(). */
-static DEFINE_PER_CPU(int, rcu_dyntick_drain);
-/* If rcu_dyntick_holdoff==jiffies, don't try to enter dyntick-idle mode. */
-static DEFINE_PER_CPU(unsigned long, rcu_dyntick_holdoff);
-/* Timer to awaken the CPU if it enters dyntick-idle mode with callbacks. */
-static DEFINE_PER_CPU(struct timer_list, rcu_idle_gp_timer);
-/* Scheduled expiry time for rcu_idle_gp_timer to allow reposting. */
-static DEFINE_PER_CPU(unsigned long, rcu_idle_gp_timer_expires);
-/* Enable special processing on first attempt to enter dyntick-idle mode. */
-static DEFINE_PER_CPU(bool, rcu_idle_first_pass);
-/* Running count of non-lazy callbacks posted, never decremented. */
-static DEFINE_PER_CPU(unsigned long, rcu_nonlazy_posted);
-/* Snapshot of rcu_nonlazy_posted to detect meaningful exits from idle. */
-static DEFINE_PER_CPU(unsigned long, rcu_nonlazy_posted_snap);
-
-/*
- * Allow the CPU to enter dyntick-idle mode if either: (1) There are no
- * callbacks on this CPU, (2) this CPU has not yet attempted to enter
- * dyntick-idle mode, or (3) this CPU is in the process of attempting to
- * enter dyntick-idle mode.  Otherwise, if we have recently tried and failed
- * to enter dyntick-idle mode, we refuse to try to enter it.  After all,
- * it is better to incur scheduling-clock interrupts than to spin
- * continuously for the same time duration!
- */
-int rcu_needs_cpu(int cpu)
-{
-	/* Flag a new idle sojourn to the idle-entry state machine. */
-	per_cpu(rcu_idle_first_pass, cpu) = 1;
-	/* If no callbacks, RCU doesn't need the CPU. */
-	if (!rcu_cpu_has_callbacks(cpu))
-		return 0;
-	/* Otherwise, RCU needs the CPU only if it recently tried and failed. */
-	return per_cpu(rcu_dyntick_holdoff, cpu) == jiffies;
-}
-
 /*
  * Does the specified flavor of RCU have non-lazy callbacks pending on
  * the specified CPU?  Both RCU flavor and CPU are specified by the
@@ -2006,6 +2040,47 @@
 }
 
 /*
+ * Allow the CPU to enter dyntick-idle mode if either: (1) There are no
+ * callbacks on this CPU, (2) this CPU has not yet attempted to enter
+ * dyntick-idle mode, or (3) this CPU is in the process of attempting to
+ * enter dyntick-idle mode.  Otherwise, if we have recently tried and failed
+ * to enter dyntick-idle mode, we refuse to try to enter it.  After all,
+ * it is better to incur scheduling-clock interrupts than to spin
+ * continuously for the same time duration!
+ *
+ * The delta_jiffies argument is used to store the time when RCU is
+ * going to need the CPU again if it still has callbacks.  The reason
+ * for this is that rcu_prepare_for_idle() might need to post a timer,
+ * but if so, it will do so after tick_nohz_stop_sched_tick() has set
+ * the wakeup time for this CPU.  This means that RCU's timer can be
+ * delayed until the wakeup time, which defeats the purpose of posting
+ * a timer.
+ */
+int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies)
+{
+	struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
+
+	/* Flag a new idle sojourn to the idle-entry state machine. */
+	rdtp->idle_first_pass = 1;
+	/* If no callbacks, RCU doesn't need the CPU. */
+	if (!rcu_cpu_has_callbacks(cpu)) {
+		*delta_jiffies = ULONG_MAX;
+		return 0;
+	}
+	if (rdtp->dyntick_holdoff == jiffies) {
+		/* RCU recently tried and failed, so don't try again. */
+		*delta_jiffies = 1;
+		return 1;
+	}
+	/* Set up for the possibility that RCU will post a timer. */
+	if (rcu_cpu_has_nonlazy_callbacks(cpu))
+		*delta_jiffies = RCU_IDLE_GP_DELAY;
+	else
+		*delta_jiffies = RCU_IDLE_LAZY_GP_DELAY;
+	return 0;
+}
+
+/*
  * Handler for smp_call_function_single().  The only point of this
  * handler is to wake the CPU up, so the handler does only tracing.
  */
@@ -2075,20 +2082,23 @@
  */
 static void rcu_prepare_for_idle_init(int cpu)
 {
-	per_cpu(rcu_dyntick_holdoff, cpu) = jiffies - 1;
-	setup_timer(&per_cpu(rcu_idle_gp_timer, cpu),
-		    rcu_idle_gp_timer_func, cpu);
-	per_cpu(rcu_idle_gp_timer_expires, cpu) = jiffies - 1;
-	per_cpu(rcu_idle_first_pass, cpu) = 1;
+	struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
+
+	rdtp->dyntick_holdoff = jiffies - 1;
+	setup_timer(&rdtp->idle_gp_timer, rcu_idle_gp_timer_func, cpu);
+	rdtp->idle_gp_timer_expires = jiffies - 1;
+	rdtp->idle_first_pass = 1;
 }
 
 /*
  * Clean up for exit from idle.  Because we are exiting from idle, there
- * is no longer any point to rcu_idle_gp_timer, so cancel it.  This will
+ * is no longer any point to ->idle_gp_timer, so cancel it.  This will
  * do nothing if this timer is not active, so just cancel it unconditionally.
  */
 static void rcu_cleanup_after_idle(int cpu)
 {
-	del_timer(&per_cpu(rcu_idle_gp_timer, cpu));
+	struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
+
+	del_timer(&rdtp->idle_gp_timer);
 	trace_rcu_prep_idle("Cleanup after idle");
 }
@@ -2108,42 +2118,41 @@
  * Because it is not legal to invoke rcu_process_callbacks() with irqs
  * disabled, we do one pass of force_quiescent_state(), then do a
  * invoke_rcu_core() to cause rcu_process_callbacks() to be invoked
- * later.  The per-cpu rcu_dyntick_drain variable controls the sequencing.
+ * later.  The ->dyntick_drain field controls the sequencing.
 *
  * The caller must have disabled interrupts.
  */
 static void rcu_prepare_for_idle(int cpu)
 {
 	struct timer_list *tp;
+	struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
 
 	/*
	 * If this is an idle re-entry, for example, due to use of
	 * RCU_NONIDLE() or the new idle-loop tracing API within the idle
	 * loop, then don't take any state-machine actions, unless the
	 * momentary exit from idle queued additional non-lazy callbacks.
-	 * Instead, repost the rcu_idle_gp_timer if this CPU has callbacks
+	 * Instead, repost the ->idle_gp_timer if this CPU has callbacks
	 * pending.
	 */
-	if (!per_cpu(rcu_idle_first_pass, cpu) &&
-	    (per_cpu(rcu_nonlazy_posted, cpu) ==
-	     per_cpu(rcu_nonlazy_posted_snap, cpu))) {
+	if (!rdtp->idle_first_pass &&
+	    (rdtp->nonlazy_posted == rdtp->nonlazy_posted_snap)) {
 		if (rcu_cpu_has_callbacks(cpu)) {
-			tp = &per_cpu(rcu_idle_gp_timer, cpu);
-			mod_timer_pinned(tp, per_cpu(rcu_idle_gp_timer_expires, cpu));
+			tp = &rdtp->idle_gp_timer;
+			mod_timer_pinned(tp, rdtp->idle_gp_timer_expires);
 		}
 		return;
 	}
-	per_cpu(rcu_idle_first_pass, cpu) = 0;
-	per_cpu(rcu_nonlazy_posted_snap, cpu) =
-		per_cpu(rcu_nonlazy_posted, cpu) - 1;
+	rdtp->idle_first_pass = 0;
+	rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted - 1;
 
 	/*
	 * If there are no callbacks on this CPU, enter dyntick-idle mode.
	 * Also reset state to avoid prejudicing later attempts.
	 */
 	if (!rcu_cpu_has_callbacks(cpu)) {
-		per_cpu(rcu_dyntick_holdoff, cpu) = jiffies - 1;
-		per_cpu(rcu_dyntick_drain, cpu) = 0;
+		rdtp->dyntick_holdoff = jiffies - 1;
+		rdtp->dyntick_drain = 0;
 		trace_rcu_prep_idle("No callbacks");
 		return;
 	}
@@ -2152,37 +2161,38 @@
 	/*
	 * If in holdoff mode, just return.  We will presumably have
	 * refrained from disabling the scheduling-clock tick.
	 */
-	if (per_cpu(rcu_dyntick_holdoff, cpu) == jiffies) {
+	if (rdtp->dyntick_holdoff == jiffies) {
 		trace_rcu_prep_idle("In holdoff");
 		return;
 	}
 
-	/* Check and update the rcu_dyntick_drain sequencing. */
-	if (per_cpu(rcu_dyntick_drain, cpu) <= 0) {
+	/* Check and update the ->dyntick_drain sequencing. */
+	if (rdtp->dyntick_drain <= 0) {
 		/* First time through, initialize the counter. */
-		per_cpu(rcu_dyntick_drain, cpu) = RCU_IDLE_FLUSHES;
-	} else if (per_cpu(rcu_dyntick_drain, cpu) <= RCU_IDLE_OPT_FLUSHES &&
+		rdtp->dyntick_drain = RCU_IDLE_FLUSHES;
+	} else if (rdtp->dyntick_drain <= RCU_IDLE_OPT_FLUSHES &&
 		   !rcu_pending(cpu) &&
 		   !local_softirq_pending()) {
 		/* Can we go dyntick-idle despite still having callbacks? */
-		trace_rcu_prep_idle("Dyntick with callbacks");
-		per_cpu(rcu_dyntick_drain, cpu) = 0;
-		per_cpu(rcu_dyntick_holdoff, cpu) = jiffies;
-		if (rcu_cpu_has_nonlazy_callbacks(cpu))
-			per_cpu(rcu_idle_gp_timer_expires, cpu) =
+		rdtp->dyntick_drain = 0;
+		rdtp->dyntick_holdoff = jiffies;
+		if (rcu_cpu_has_nonlazy_callbacks(cpu)) {
+			trace_rcu_prep_idle("Dyntick with callbacks");
+			rdtp->idle_gp_timer_expires =
 				jiffies + RCU_IDLE_GP_DELAY;
-		else
-			per_cpu(rcu_idle_gp_timer_expires, cpu) =
+		} else {
+			rdtp->idle_gp_timer_expires =
 				jiffies + RCU_IDLE_LAZY_GP_DELAY;
-		tp = &per_cpu(rcu_idle_gp_timer, cpu);
-		mod_timer_pinned(tp, per_cpu(rcu_idle_gp_timer_expires, cpu));
-		per_cpu(rcu_nonlazy_posted_snap, cpu) =
-			per_cpu(rcu_nonlazy_posted, cpu);
+			trace_rcu_prep_idle("Dyntick with lazy callbacks");
+		}
+		tp = &rdtp->idle_gp_timer;
+		mod_timer_pinned(tp, rdtp->idle_gp_timer_expires);
+		rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted;
 		return; /* Nothing more to do immediately. */
-	} else if (--per_cpu(rcu_dyntick_drain, cpu) <= 0) {
+	} else if (--(rdtp->dyntick_drain) <= 0) {
 		/* We have hit the limit, so time to give up. */
-		per_cpu(rcu_dyntick_holdoff, cpu) = jiffies;
+		rdtp->dyntick_holdoff = jiffies;
 		trace_rcu_prep_idle("Begin holdoff");
 		invoke_rcu_core();  /* Force the CPU out of dyntick-idle. */
 		return;
@@ -2227,7 +2237,7 @@
  */
 static void rcu_idle_count_callbacks_posted(void)
 {
-	__this_cpu_add(rcu_nonlazy_posted, 1);
+	__this_cpu_add(rcu_dynticks.nonlazy_posted, 1);
 }
 
 #endif /* #else #if !defined(CONFIG_RCU_FAST_NO_HZ) */
@@ -2238,11 +2248,13 @@
 
 static void print_cpu_stall_fast_no_hz(char *cp, int cpu)
 {
-	struct timer_list *tltp = &per_cpu(rcu_idle_gp_timer, cpu);
+	struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
+	struct timer_list *tltp = &rdtp->idle_gp_timer;
 
 	sprintf(cp, "drain=%d %c timer=%lu",
-		per_cpu(rcu_dyntick_drain, cpu),
-		per_cpu(rcu_dyntick_holdoff, cpu) == jiffies ? 'H' : '.',
+		rdtp->dyntick_drain,
+		rdtp->dyntick_holdoff == jiffies ? 'H' : '.',
 		timer_pending(tltp) ? tltp->expires - jiffies : -1);
 }
 
+152-35
kernel/sched/core.c
@@ -5556,15 +5556,20 @@
 
 #ifdef CONFIG_SCHED_DEBUG
 
-static __read_mostly int sched_domain_debug_enabled;
+static __read_mostly int sched_debug_enabled;
 
-static int __init sched_domain_debug_setup(char *str)
+static int __init sched_debug_setup(char *str)
 {
-	sched_domain_debug_enabled = 1;
+	sched_debug_enabled = 1;
 
 	return 0;
 }
-early_param("sched_debug", sched_domain_debug_setup);
+early_param("sched_debug", sched_debug_setup);
+
+static inline bool sched_debug(void)
+{
+	return sched_debug_enabled;
+}
 
 static int sched_domain_debug_one(struct sched_domain *sd, int cpu, int level,
 				  struct cpumask *groupmask)
@@ -5604,7 +5609,12 @@
 			break;
 		}
 
-		if (!group->sgp->power) {
+		/*
+		 * Even though we initialize ->power to something semi-sane,
+		 * we leave power_orig unset. This allows us to detect if
+		 * domain iteration is still funny without causing /0 traps.
+		 */
+		if (!group->sgp->power_orig) {
 			printk(KERN_CONT "\n");
 			printk(KERN_ERR "ERROR: domain->cpu_power not "
					"set\n");
@@ -5652,7 +5662,7 @@
 {
 	int level = 0;
 
-	if (!sched_domain_debug_enabled)
+	if (!sched_debug_enabled)
 		return;
 
 	if (!sd) {
@@ -5673,6 +5683,10 @@
 }
 #else /* !CONFIG_SCHED_DEBUG */
 # define sched_domain_debug(sd, cpu) do { } while (0)
+static inline bool sched_debug(void)
+{
+	return false;
+}
 #endif /* CONFIG_SCHED_DEBUG */
 
 static int sd_degenerate(struct sched_domain *sd)
@@ -5994,6 +6008,44 @@
 	struct sd_data data;
 };
 
+/*
+ * Build an iteration mask that can exclude certain CPUs from the upwards
+ * domain traversal.
+ *
+ * Asymmetric node setups can result in situations where the domain tree is of
+ * unequal depth, make sure to skip domains that already cover the entire
+ * range.
+ *
+ * In that case build_sched_domains() will have terminated the iteration early
+ * and our sibling sd spans will be empty. Domains should always include the
+ * cpu they're built on, so check that.
+ *
+ */
+static void build_group_mask(struct sched_domain *sd, struct sched_group *sg)
+{
+	const struct cpumask *span = sched_domain_span(sd);
+	struct sd_data *sdd = sd->private;
+	struct sched_domain *sibling;
+	int i;
+
+	for_each_cpu(i, span) {
+		sibling = *per_cpu_ptr(sdd->sd, i);
+		if (!cpumask_test_cpu(i, sched_domain_span(sibling)))
+			continue;
+
+		cpumask_set_cpu(i, sched_group_mask(sg));
+	}
+}
+
+/*
+ * Return the canonical balance cpu for this group, this is the first cpu
+ * of this group that's also in the iteration mask.
+ */
+int group_balance_cpu(struct sched_group *sg)
+{
+	return cpumask_first_and(sched_group_cpus(sg), sched_group_mask(sg));
+}
+
 static int
 build_overlap_sched_groups(struct sched_domain *sd, int cpu)
 {
@@ -6012,6 +6064,12 @@
 		if (cpumask_test_cpu(i, covered))
 			continue;
 
+		child = *per_cpu_ptr(sdd->sd, i);
+
+		/* See the comment near build_group_mask(). */
+		if (!cpumask_test_cpu(i, sched_domain_span(child)))
+			continue;
+
 		sg = kzalloc_node(sizeof(struct sched_group) + cpumask_size(),
 				GFP_KERNEL, cpu_to_node(cpu));
 
@@ -6019,8 +6077,6 @@
 		if (!sg)
 			goto fail;
 
 		sg_span = sched_group_cpus(sg);
-
-		child = *per_cpu_ptr(sdd->sd, i);
 		if (child->child) {
 			child = child->child;
 			cpumask_copy(sg_span, sched_domain_span(child));
@@ -6030,13 +6086,24 @@
 		cpumask_or(covered, covered, sg_span);
 
 		sg->sgp = *per_cpu_ptr(sdd->sgp, i);
-		atomic_inc(&sg->sgp->ref);
+		if (atomic_inc_return(&sg->sgp->ref) == 1)
+			build_group_mask(sd, sg);
 
+		/*
+		 * Initialize sgp->power such that even if we mess up the
+		 * domains and no possible iteration will get us here, we won't
+		 * die on a /0 trap.
+		 */
+		sg->sgp->power = SCHED_POWER_SCALE * cpumask_weight(sg_span);
+
+		/*
+		 * Make sure the first group of this domain contains the
+		 * canonical balance cpu. Otherwise the sched_domain iteration
+		 * breaks. See update_sg_lb_stats().
+		 */
 		if ((!groups && cpumask_test_cpu(cpu, sg_span)) ||
-		    cpumask_first(sg_span) == cpu) {
-			WARN_ON_ONCE(!cpumask_test_cpu(cpu, sg_span));
+		    group_balance_cpu(sg) == cpu)
 			groups = sg;
-		}
 
 		if (!first)
 			first = sg;
@@ -6109,6 +6176,7 @@
 
 		cpumask_clear(sched_group_cpus(sg));
 		sg->sgp->power = 0;
+		cpumask_setall(sched_group_mask(sg));
 
 		for_each_cpu(j, span) {
 			if (get_group(j, sdd, NULL) != group)
@@ -6150,7 +6218,7 @@
 		sg = sg->next;
 	} while (sg != sd->groups);
 
-	if (cpu != group_first_cpu(sg))
+	if (cpu != group_balance_cpu(sg))
 		return;
 
 	update_group_power(sd, cpu);
@@ -6200,11 +6268,8 @@
 
 static int __init setup_relax_domain_level(char *str)
 {
-	unsigned long val;
-
-	val = simple_strtoul(str, NULL, 0);
-	if (val < sched_domain_level_max)
-		default_relax_domain_level = val;
+	if (kstrtoint(str, 0, &default_relax_domain_level))
+		pr_warn("Unable to set relax_domain_level\n");
 
 	return 1;
 }
@@ -6314,14 +6379,13 @@
 #ifdef CONFIG_NUMA
 
 static int sched_domains_numa_levels;
-static int sched_domains_numa_scale;
 static int *sched_domains_numa_distance;
 static struct cpumask ***sched_domains_numa_masks;
 static int sched_domains_curr_level;
 
 static inline int sd_local_flags(int level)
 {
-	if (sched_domains_numa_distance[level] > REMOTE_DISTANCE)
+	if (sched_domains_numa_distance[level] > RECLAIM_DISTANCE)
 		return 0;
 
 	return SD_BALANCE_EXEC | SD_BALANCE_FORK | SD_WAKE_AFFINE;
@@ -6379,6 +6443,42 @@
 	return sched_domains_numa_masks[sched_domains_curr_level][cpu_to_node(cpu)];
 }
 
+static void sched_numa_warn(const char *str)
+{
+	static int done = false;
+	int i,j;
+
+	if (done)
+		return;
+
+	done = true;
+
+	printk(KERN_WARNING "ERROR: %s\n\n", str);
+
+	for (i = 0; i < nr_node_ids; i++) {
+		printk(KERN_WARNING "  ");
+		for (j = 0; j < nr_node_ids; j++)
+			printk(KERN_CONT "%02d ", node_distance(i,j));
+		printk(KERN_CONT "\n");
+	}
+	printk(KERN_WARNING "\n");
+}
+
+static bool find_numa_distance(int distance)
+{
+	int i;
+
+	if (distance == node_distance(0, 0))
+		return true;
+
+	for (i = 0; i < sched_domains_numa_levels; i++) {
+		if (sched_domains_numa_distance[i] == distance)
+			return true;
+	}
+
+	return false;
+}
+
 static void sched_init_numa(void)
 {
 	int next_distance, curr_distance = node_distance(0, 0);
@@ -6386,7 +6486,6 @@
 	int level = 0;
 	int i, j, k;
 
-	sched_domains_numa_scale = curr_distance;
 	sched_domains_numa_distance = kzalloc(sizeof(int) * nr_node_ids, GFP_KERNEL);
 	if (!sched_domains_numa_distance)
 		return;
@@ -6397,23 +6496,41 @@
	 *
	 * Assumes node_distance(0,j) includes all distances in
	 * node_distance(i,j) in order to avoid cubic time.
-	 *
-	 * XXX: could be optimized to O(n log n) by using sort()
	 */
 	next_distance = curr_distance;
 	for (i = 0; i < nr_node_ids; i++) {
 		for (j = 0; j < nr_node_ids; j++) {
-			int distance = node_distance(0, j);
-			if (distance > curr_distance &&
-			    (distance < next_distance ||
-			     next_distance == curr_distance))
-				next_distance = distance;
+			for (k = 0; k < nr_node_ids; k++) {
+				int distance = node_distance(i, k);
+
+				if (distance > curr_distance &&
+				    (distance < next_distance ||
+				     next_distance == curr_distance))
+					next_distance = distance;
+
+				/*
+				 * While not a strong assumption it would be nice to know
+				 * about cases where if node A is connected to B, B is not
+				 * equally connected to A.
+				 */
+				if (sched_debug() && node_distance(k, i) != distance)
+					sched_numa_warn("Node-distance not symmetric");
+
+				if (sched_debug() && i && !find_numa_distance(distance))
+					sched_numa_warn("Node-0 not representative");
+			}
+			if (next_distance != curr_distance) {
+				sched_domains_numa_distance[level++] = next_distance;
+				sched_domains_numa_levels = level;
+				curr_distance = next_distance;
+			} else break;
 		}
-		if (next_distance != curr_distance) {
-			sched_domains_numa_distance[level++] = next_distance;
-			sched_domains_numa_levels = level;
-			curr_distance = next_distance;
-		} else break;
+
+		/*
+		 * In case of sched_debug() we verify the above assumption.
+		 */
+		if (!sched_debug())
+			break;
 	}
 	/*
	 * 'level' contains the number of unique distances, excluding the
@@ -6525,7 +6642,7 @@
 
 		*per_cpu_ptr(sdd->sg, j) = sg;
 
-		sgp = kzalloc_node(sizeof(struct sched_group_power),
+		sgp = kzalloc_node(sizeof(struct sched_group_power) + cpumask_size(),
 				GFP_KERNEL, cpu_to_node(j));
 		if (!sgp)
 			return -ENOMEM;
@@ -6578,7 +6695,6 @@
 	if (!sd)
 		return child;
 
-	set_domain_attribute(sd, attr);
 	cpumask_and(sched_domain_span(sd), cpu_map, tl->mask(cpu));
 	if (child) {
 		sd->level = child->level + 1;
@@ -6586,6 +6702,7 @@
 		child->parent = sd;
 	}
 	sd->child = child;
+	set_domain_attribute(sd, attr);
 
 	return sd;
 }
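The distance scan in sched_init_numa() above reduces to a small, testable algorithm: starting from the local distance, repeatedly find the smallest strictly larger distance anywhere in the matrix; each step yields one topology level. A stand-alone sketch over a plain matrix (hypothetical helper name, not the kernel code, and without the debug-mode symmetry checks):

```c
#define MAX_LEVELS 16

/*
 * Extract the increasing list of unique distances from an n x n
 * node-distance matrix the way sched_init_numa() walks it: start at the
 * local distance and repeatedly find the smallest strictly larger one.
 */
int numa_levels(const int *dist, int n, int *levels)
{
	int curr = dist[0];	/* node_distance(0, 0) */
	int nlevels = 0;
	int i, k;

	while (nlevels < MAX_LEVELS) {
		int next = curr;

		for (i = 0; i < n; i++) {
			for (k = 0; k < n; k++) {
				int d = dist[i * n + k];

				if (d > curr && (d < next || next == curr))
					next = d;
			}
		}
		if (next == curr)
			break;		/* no larger distance left */
		levels[nlevels++] = next;
		curr = next;
	}
	return nlevels;
}

/* 3-node example: local distance 10, one hop 20, two hops 30. */
const int sample_dist[9] = {
	10, 20, 30,
	20, 10, 20,
	30, 20, 10,
};
```

Scanning all (i, k) pairs instead of only row 0 is what the patch changes in the kernel: it no longer assumes node 0's row contains every distance in the system.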
+10-19
kernel/sched/fair.c
@@ -3602,5 +3602,5 @@
 		} while (group != child->groups);
 	}
 
-	sdg->sgp->power = power;
+	sdg->sgp->power_orig = sdg->sgp->power = power;
 }
@@ -3632,7 +3632,7 @@
 
 /**
  * update_sg_lb_stats - Update sched_group's statistics for load balancing.
- * @sd: The sched_domain whose statistics are to be updated.
+ * @env: The load balancing environment.
  * @group: sched_group whose statistics are to be updated.
  * @load_idx: Load index of sched_domain of this_cpu for load calc.
  * @local_group: Does group contain this_cpu.
@@ -3652,7 +3652,7 @@
 	int i;
 
 	if (local_group)
-		balance_cpu = group_first_cpu(group);
+		balance_cpu = group_balance_cpu(group);
 
 	/* Tally up the load of all CPUs in the group */
 	max_cpu_load = 0;
@@ -3667,7 +3667,8 @@
 
 		/* Bias balancing toward cpus of our domain */
 		if (local_group) {
-			if (idle_cpu(i) && !first_idle_cpu) {
+			if (idle_cpu(i) && !first_idle_cpu &&
+					cpumask_test_cpu(i, sched_group_mask(group))) {
 				first_idle_cpu = 1;
 				balance_cpu = i;
 			}
@@ -3741,10 +3742,9 @@
 /**
  * update_sd_pick_busiest - return 1 on busiest group
- * @sd: sched_domain whose statistics are to be checked
+ * @env: The load balancing environment.
  * @sds: sched_domain statistics
  * @sg: sched_group candidate to be checked for being the busiest
  * @sgs: sched_group statistics
- * @this_cpu: the current cpu
 *
  * Determine if @sg is a busier group than the previously selected
  * busiest group.
@@ -3783,8 +3783,6 @@
 /**
  * update_sd_lb_stats - Update sched_domain's statistics for load balancing.
- * @sd: sched_domain whose statistics are to be updated.
- * @this_cpu: Cpu for which load balance is currently performed.
- * @idle: Idle status of this_cpu
+ * @env: The load balancing environment.
  * @cpus: Set of cpus considered for load balancing.
  * @balance: Should we balance.
  * @sds: variable to hold the statistics for this sched_domain.
@@ -3872,10 +3874,7 @@
  * Returns 1 when packing is required and a task should be moved to
  * this CPU.  The amount of the imbalance is returned in *imbalance.
 *
- * @sd: The sched_domain whose packing is to be checked.
+ * @env: The load balancing environment.
  * @sds: Statistics of the sched_domain which is to be packed
- * @this_cpu: The cpu at whose sched_domain we're performing load-balance.
- * @imbalance: returns amount of imbalanced due to packing.
  */
 static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
 {
@@ -3903,9 +3899,8 @@
  * fix_small_imbalance - Calculate the minor imbalance that exists
  *			amongst the groups of a sched_domain, during
  *			load balancing.
+ * @env: The load balancing environment.
  * @sds: Statistics of the sched_domain whose imbalance is to be calculated.
- * @this_cpu: The cpu at whose sched_domain we're performing load-balance.
- * @imbalance: Variable to store the imbalance.
  */
 static inline
 void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
@@ -4048,11 +4043,7 @@
  * Also calculates the amount of weighted load which should be moved
  * to restore balance.
 *
- * @sd: The sched_domain whose busiest group is to be returned.
- * @this_cpu: The cpu for which load balancing is currently being performed.
- * @imbalance: Variable which stores amount of weighted load which should
- *		be moved to restore balance/put a group to idle.
- * @idle: The idle status of this_cpu.
+ * @env: The load balancing environment.
  * @cpus: The set of CPUs under consideration for load-balancing.
  * @balance: Pointer to a variable indicating if this_cpu
  *	is the appropriate cpu to perform load balancing at this_level.
kernel/trace/trace.c
···
 void tracing_off(void)
 {
 	if (global_trace.buffer)
-		ring_buffer_record_on(global_trace.buffer);
+		ring_buffer_record_off(global_trace.buffer);
 	/*
 	 * This flag is only looked at when buffers haven't been
 	 * allocated yet. We don't really care about the race
+18-1
kernel/watchdog.c
···
 
 
 #ifdef CONFIG_HARDLOCKUP_DETECTOR
+/*
+ * People like the simple clean cpu node info on boot.
+ * Reduce the watchdog noise by only printing messages
+ * that are different from what cpu0 displayed.
+ */
+static unsigned long cpu0_err;
+
 static int watchdog_nmi_enable(int cpu)
 {
 	struct perf_event_attr *wd_attr;
···
 
 	/* Try to register using hardware perf events */
 	event = perf_event_create_kernel_counter(wd_attr, cpu, NULL, watchdog_overflow_callback, NULL);
+
+	/* save cpu0 error for future comparison */
+	if (cpu == 0 && IS_ERR(event))
+		cpu0_err = PTR_ERR(event);
+
 	if (!IS_ERR(event)) {
-		pr_info("enabled, takes one hw-pmu counter.\n");
+		/* only print for cpu0 or different than cpu0 */
+		if (cpu == 0 || cpu0_err)
+			pr_info("enabled on all CPUs, permanently consumes one hw-PMU counter.\n");
 		goto out_save;
 	}
 
+	/* skip displaying the same error again */
+	if (cpu > 0 && (PTR_ERR(event) == cpu0_err))
+		return PTR_ERR(event);
 
 	/* vary the KERN level based on the returned errno */
 	if (PTR_ERR(event) == -EOPNOTSUPP)
+20
lib/Kconfig.debug
···
 	default 0 if !BOOTPARAM_SOFTLOCKUP_PANIC
 	default 1 if BOOTPARAM_SOFTLOCKUP_PANIC
 
+config PANIC_ON_OOPS
+	bool "Panic on Oops" if EXPERT
+	default n
+	help
+	  Say Y here to enable the kernel to panic when it oopses. This
+	  has the same effect as setting oops=panic on the kernel command
+	  line.
+
+	  This feature is useful to ensure that the kernel does not do
+	  anything erroneous after an oops which could result in data
+	  corruption or other issues.
+
+	  Say N if unsure.
+
+config PANIC_ON_OOPS_VALUE
+	int
+	range 0 1
+	default 0 if !PANIC_ON_OOPS
+	default 1 if PANIC_ON_OOPS
+
 config DETECT_HUNG_TASK
 	bool "Detect Hung Tasks"
 	depends on DEBUG_KERNEL
lib/radix-tree.c
···
 	 * during iterating; it can be zero only at the beginning.
 	 * And we cannot overflow iter->next_index in a single step,
 	 * because RADIX_TREE_MAP_SHIFT < BITS_PER_LONG.
+	 *
+	 * This condition is also used by radix_tree_next_slot() to stop
+	 * contiguous iterating, and forbid switching to the next chunk.
 	 */
 	index = iter->next_index;
 	if (!index && iter->index)
+4-3
lib/raid6/recov.c
···
 #include <linux/raid/pq.h>
 
 /* Recover two failed data blocks. */
-void raid6_2data_recov_intx1(int disks, size_t bytes, int faila, int failb,
-		void **ptrs)
+static void raid6_2data_recov_intx1(int disks, size_t bytes, int faila,
+		int failb, void **ptrs)
 {
 	u8 *p, *q, *dp, *dq;
 	u8 px, qx, db;
···
 }
 
 /* Recover failure of one data block plus the P block */
-void raid6_datap_recov_intx1(int disks, size_t bytes, int faila, void **ptrs)
+static void raid6_datap_recov_intx1(int disks, size_t bytes, int faila,
+		void **ptrs)
 {
 	u8 *p, *q, *dq;
 	const u8 *qmul;		/* Q multiplier table */
+4-3
lib/raid6/recov_ssse3.c
···
 		boot_cpu_has(X86_FEATURE_SSSE3);
 }
 
-void raid6_2data_recov_ssse3(int disks, size_t bytes, int faila, int failb,
-		void **ptrs)
+static void raid6_2data_recov_ssse3(int disks, size_t bytes, int faila,
+		int failb, void **ptrs)
 {
 	u8 *p, *q, *dp, *dq;
 	const u8 *pbmul;	/* P multiplier table for B data */
···
 }
 
 
-void raid6_datap_recov_ssse3(int disks, size_t bytes, int faila, void **ptrs)
+static void raid6_datap_recov_ssse3(int disks, size_t bytes, int faila,
+		void **ptrs)
 {
 	u8 *p, *q, *dq;
 	const u8 *qmul;		/* Q multiplier table */
mm/memblock.c
···
 	return memblock_search(&memblock.memory, addr) != -1;
 }
 
+/**
+ * memblock_is_region_memory - check if a region is a subset of memory
+ * @base: base of region to check
+ * @size: size of region to check
+ *
+ * Check if the region [@base, @base+@size) is a subset of a memory block.
+ *
+ * RETURNS:
+ * 0 if false, non-zero if true
+ */
 int __init_memblock memblock_is_region_memory(phys_addr_t base, phys_addr_t size)
 {
 	int idx = memblock_search(&memblock.memory, base);
···
 		 memblock.memory.regions[idx].size) >= end;
 }
 
+/**
+ * memblock_is_region_reserved - check if a region intersects reserved memory
+ * @base: base of region to check
+ * @size: size of region to check
+ *
+ * Check if the region [@base, @base+@size) intersects a reserved memory block.
+ *
+ * RETURNS:
+ * 0 if false, non-zero if true
+ */
 int __init_memblock memblock_is_region_reserved(phys_addr_t base, phys_addr_t size)
 {
 	memblock_cap_size(base, &size);
+2-2
mm/oom_kill.c
···
 unsigned long oom_badness(struct task_struct *p, struct mem_cgroup *memcg,
 			  const nodemask_t *nodemask, unsigned long totalpages)
 {
-	unsigned long points;
+	long points;
 
 	if (oom_unkillable_task(p, memcg, nodemask))
 		return 0;
···
 	 * Never return 0 for an eligible task regardless of the root bonus and
 	 * oom_score_adj (oom_score_adj can't be OOM_SCORE_ADJ_MIN here).
 	 */
-	return points ? points : 1;
+	return points > 0 ? points : 1;
}
 
 /*
+37-20
mm/shmem.c
···
 	mutex_lock(&shmem_swaplist_mutex);
 	/*
 	 * We needed to drop mutex to make that restrictive page
-	 * allocation; but the inode might already be freed by now,
-	 * and we cannot refer to inode or mapping or info to check.
-	 * However, we do hold page lock on the PageSwapCache page,
-	 * so can check if that still has our reference remaining.
+	 * allocation, but the inode might have been freed while we
+	 * dropped it: although a racing shmem_evict_inode() cannot
+	 * complete without emptying the radix_tree, our page lock
+	 * on this swapcache page is not enough to prevent that -
+	 * free_swap_and_cache() of our swap entry will only
+	 * trylock_page(), removing swap from radix_tree whatever.
+	 *
+	 * We must not proceed to shmem_add_to_page_cache() if the
+	 * inode has been freed, but of course we cannot rely on
+	 * inode or mapping or info to check that. However, we can
+	 * safely check if our swap entry is still in use (and here
+	 * it can't have got reused for another page): if it's still
+	 * in use, then the inode cannot have been freed yet, and we
+	 * can safely proceed (if it's no longer in use, that tells
+	 * nothing about the inode, but we don't need to unuse swap).
 	 */
 	if (!page_swapcount(*pagep))
 		error = -ENOENT;
···
 
 	/*
 	 * There's a faint possibility that swap page was replaced before
-	 * caller locked it: it will come back later with the right page.
+	 * caller locked it: caller will come back later with the right page.
 	 */
-	if (unlikely(!PageSwapCache(page)))
+	if (unlikely(!PageSwapCache(page) || page_private(page) != swap.val))
 		goto out;
 
 	/*
···
 	newpage = shmem_alloc_page(gfp, info, index);
 	if (!newpage)
 		return -ENOMEM;
-	VM_BUG_ON(shmem_should_replace_page(newpage, gfp));
 
-	*pagep = newpage;
 	page_cache_get(newpage);
 	copy_highpage(newpage, oldpage);
+	flush_dcache_page(newpage);
 
-	VM_BUG_ON(!PageLocked(oldpage));
 	__set_page_locked(newpage);
-	VM_BUG_ON(!PageUptodate(oldpage));
 	SetPageUptodate(newpage);
-	VM_BUG_ON(!PageSwapBacked(oldpage));
 	SetPageSwapBacked(newpage);
-	VM_BUG_ON(!swap_index);
 	set_page_private(newpage, swap_index);
-	VM_BUG_ON(!PageSwapCache(oldpage));
 	SetPageSwapCache(newpage);
 
 	/*
···
 	spin_lock_irq(&swap_mapping->tree_lock);
 	error = shmem_radix_tree_replace(swap_mapping, swap_index, oldpage,
 								   newpage);
-	__inc_zone_page_state(newpage, NR_FILE_PAGES);
-	__dec_zone_page_state(oldpage, NR_FILE_PAGES);
+	if (!error) {
+		__inc_zone_page_state(newpage, NR_FILE_PAGES);
+		__dec_zone_page_state(oldpage, NR_FILE_PAGES);
+	}
 	spin_unlock_irq(&swap_mapping->tree_lock);
-	BUG_ON(error);
 
-	mem_cgroup_replace_page_cache(oldpage, newpage);
-	lru_cache_add_anon(newpage);
+	if (unlikely(error)) {
+		/*
+		 * Is this possible? I think not, now that our callers check
+		 * both PageSwapCache and page_private after getting page lock;
+		 * but be defensive. Reverse old to newpage for clear and free.
+		 */
+		oldpage = newpage;
+	} else {
+		mem_cgroup_replace_page_cache(oldpage, newpage);
+		lru_cache_add_anon(newpage);
+		*pagep = newpage;
+	}
 
 	ClearPageSwapCache(oldpage);
 	set_page_private(oldpage, 0);
···
 	unlock_page(oldpage);
 	page_cache_release(oldpage);
 	page_cache_release(oldpage);
-	return 0;
+	return error;
}
 
 /*
···
 
 	/* We have to do this with page locked to prevent races */
 	lock_page(page);
-	if (!PageSwapCache(page) || page->mapping) {
+	if (!PageSwapCache(page) || page_private(page) != swap.val ||
+	    page->mapping) {
 		error = -EEXIST;	/* try again */
 		goto failed;
 	}
+4-8
mm/swapfile.c
···
 
 	/*
 	 * Find out how many pages are allowed for a single swap
-	 * device. There are three limiting factors: 1) the number
+	 * device. There are two limiting factors: 1) the number
 	 * of bits for the swap offset in the swp_entry_t type, and
 	 * 2) the number of bits in the swap pte as defined by the
-	 * the different architectures, and 3) the number of free bits
-	 * in an exceptional radix_tree entry. In order to find the
+	 * different architectures. In order to find the
 	 * largest possible bit mask, a swap entry with swap type 0
 	 * and swap offset ~0UL is created, encoded to a swap pte,
 	 * decoded to a swp_entry_t again, and finally the swap
 	 * offset is extracted. This will mask all the bits from
 	 * the initial ~0UL mask that can't be encoded in either
 	 * the swp_entry_t or the architecture definition of a
-	 * swap pte. Then the same is done for a radix_tree entry.
+	 * swap pte.
 	 */
 	maxpages = swp_offset(pte_to_swp_entry(
-			swp_entry_to_pte(swp_entry(0, ~0UL))));
-	maxpages = swp_offset(radix_to_swp_entry(
-			swp_to_radix_entry(swp_entry(0, maxpages)))) + 1;
-
+			swp_entry_to_pte(swp_entry(0, ~0UL)))) + 1;
 	if (maxpages > swap_header->info.last_page) {
 		maxpages = swap_header->info.last_page + 1;
 		/* p->max is an unsigned int: don't overflow it */
+1-3
net/appletalk/ddp.c
···
 	if (addr->sat_addr.s_node == ATADDR_BCAST &&
 	    !sock_flag(sk, SOCK_BROADCAST)) {
 #if 1
-		printk(KERN_WARNING "%s is broken and did not set "
-				    "SO_BROADCAST. It will break when 2.2 is "
-				    "released.\n",
+		pr_warn("atalk_connect: %s is broken and did not set SO_BROADCAST.\n",
 			current->comm);
 #else
 		return -EACCES;
+1-1
net/bluetooth/af_bluetooth.c
···
 	}
 
 	if (sk->sk_state == BT_CONNECTED || !newsock ||
-	    test_bit(BT_DEFER_SETUP, &bt_sk(parent)->flags)) {
+	    test_bit(BT_SK_DEFER_SETUP, &bt_sk(parent)->flags)) {
 		bt_accept_unlink(sk);
 		if (newsock)
 			sock_graft(sk, newsock);
+33-69
net/core/drop_monitor.c
···
 #define TRACE_ON 1
 #define TRACE_OFF 0
 
-static void send_dm_alert(struct work_struct *unused);
-
-
 /*
 * Globals, our netlink socket pointer
 * and the work handle that will send up
···
 static DEFINE_MUTEX(trace_state_mutex);
 
 struct per_cpu_dm_data {
-	struct work_struct dm_alert_work;
-	struct sk_buff __rcu *skb;
-	atomic_t dm_hit_count;
-	struct timer_list send_timer;
-	int cpu;
+	spinlock_t		lock;
+	struct sk_buff		*skb;
+	struct work_struct	dm_alert_work;
+	struct timer_list	send_timer;
 };
 
 struct dm_hw_stat_delta {
···
 static unsigned long dm_hw_check_delta = 2*HZ;
 static LIST_HEAD(hw_stats_list);
 
-static void reset_per_cpu_data(struct per_cpu_dm_data *data)
+static struct sk_buff *reset_per_cpu_data(struct per_cpu_dm_data *data)
 {
 	size_t al;
 	struct net_dm_alert_msg *msg;
 	struct nlattr *nla;
 	struct sk_buff *skb;
-	struct sk_buff *oskb = rcu_dereference_protected(data->skb, 1);
+	unsigned long flags;
 
 	al = sizeof(struct net_dm_alert_msg);
 	al += dm_hit_limit * sizeof(struct net_dm_drop_point);
···
 				  sizeof(struct net_dm_alert_msg));
 		msg = nla_data(nla);
 		memset(msg, 0, al);
-	} else
-		schedule_work_on(data->cpu, &data->dm_alert_work);
-
-	/*
-	 * Don't need to lock this, since we are guaranteed to only
-	 * run this on a single cpu at a time.
-	 * Note also that we only update data->skb if the old and new skb
-	 * pointers don't match. This ensures that we don't continually call
-	 * synchornize_rcu if we repeatedly fail to alloc a new netlink message.
-	 */
-	if (skb != oskb) {
-		rcu_assign_pointer(data->skb, skb);
-
-		synchronize_rcu();
-
-		atomic_set(&data->dm_hit_count, dm_hit_limit);
+	} else {
+		mod_timer(&data->send_timer, jiffies + HZ / 10);
 	}
 
+	spin_lock_irqsave(&data->lock, flags);
+	swap(data->skb, skb);
+	spin_unlock_irqrestore(&data->lock, flags);
+
+	return skb;
}
 
-static void send_dm_alert(struct work_struct *unused)
+static void send_dm_alert(struct work_struct *work)
 {
 	struct sk_buff *skb;
-	struct per_cpu_dm_data *data = &get_cpu_var(dm_cpu_data);
+	struct per_cpu_dm_data *data;
 
-	WARN_ON_ONCE(data->cpu != smp_processor_id());
+	data = container_of(work, struct per_cpu_dm_data, dm_alert_work);
 
-	/*
-	 * Grab the skb we're about to send
-	 */
-	skb = rcu_dereference_protected(data->skb, 1);
+	skb = reset_per_cpu_data(data);
 
-	/*
-	 * Replace it with a new one
-	 */
-	reset_per_cpu_data(data);
-
-	/*
-	 * Ship it!
-	 */
 	if (skb)
 		genlmsg_multicast(skb, 0, NET_DM_GRP_ALERT, GFP_KERNEL);
-
-	put_cpu_var(dm_cpu_data);
}
 
 /*
 * This is the timer function to delay the sending of an alert
 * in the event that more drops will arrive during the
- * hysteresis period. Note that it operates under the timer interrupt
- * so we don't need to disable preemption here
+ * hysteresis period.
 */
-static void sched_send_work(unsigned long unused)
+static void sched_send_work(unsigned long _data)
 {
-	struct per_cpu_dm_data *data = &get_cpu_var(dm_cpu_data);
+	struct per_cpu_dm_data *data = (struct per_cpu_dm_data *)_data;
 
-	schedule_work_on(smp_processor_id(), &data->dm_alert_work);
-
-	put_cpu_var(dm_cpu_data);
+	schedule_work(&data->dm_alert_work);
}
 
 static void trace_drop_common(struct sk_buff *skb, void *location)
···
 	struct nlattr *nla;
 	int i;
 	struct sk_buff *dskb;
-	struct per_cpu_dm_data *data = &get_cpu_var(dm_cpu_data);
+	struct per_cpu_dm_data *data;
+	unsigned long flags;
 
-
-	rcu_read_lock();
-	dskb = rcu_dereference(data->skb);
+	local_irq_save(flags);
+	data = &__get_cpu_var(dm_cpu_data);
+	spin_lock(&data->lock);
+	dskb = data->skb;
 
 	if (!dskb)
 		goto out;
-
-	if (!atomic_add_unless(&data->dm_hit_count, -1, 0)) {
-		/*
-		 * we're already at zero, discard this hit
-		 */
-		goto out;
-	}
 
 	nlh = (struct nlmsghdr *)dskb->data;
 	nla = genlmsg_data(nlmsg_data(nlh));
···
 	for (i = 0; i < msg->entries; i++) {
 		if (!memcmp(&location, msg->points[i].pc, sizeof(void *))) {
 			msg->points[i].count++;
-			atomic_inc(&data->dm_hit_count);
 			goto out;
 		}
 	}
-
+	if (msg->entries == dm_hit_limit)
+		goto out;
 	/*
 	 * We need to create a new entry
 	 */
···
 
 	if (!timer_pending(&data->send_timer)) {
 		data->send_timer.expires = jiffies + dm_delay * HZ;
-		add_timer_on(&data->send_timer, smp_processor_id());
+		add_timer(&data->send_timer);
 	}
 
 out:
-	rcu_read_unlock();
-	put_cpu_var(dm_cpu_data);
-	return;
+	spin_unlock_irqrestore(&data->lock, flags);
}
 
 static void trace_kfree_skb_hit(void *ignore, struct sk_buff *skb, void *location)
···
 
 	for_each_possible_cpu(cpu) {
 		data = &per_cpu(dm_cpu_data, cpu);
-		data->cpu = cpu;
 		INIT_WORK(&data->dm_alert_work, send_dm_alert);
 		init_timer(&data->send_timer);
-		data->send_timer.data = cpu;
+		data->send_timer.data = (unsigned long)data;
 		data->send_timer.function = sched_send_work;
+		spin_lock_init(&data->lock);
 		reset_per_cpu_data(data);
 	}
 
+2-2
net/core/filter.c
···
 /**
 *	sk_unattached_filter_create - create an unattached filter
 *	@fprog: the filter program
- *	@sk: the socket to use
+ *	@pfp: the unattached filter that is created
 *
- * Create a filter independent ofr any socket. We first run some
+ * Create a filter independent of any socket. We first run some
 * sanity checks on it to make sure it does not explode on us later.
 * If an error occurs or there is insufficient memory for the filter
 * a negative errno code is returned. On success the return is zero.
+6-8
net/core/neighbour.c
···
 	rcu_read_lock_bh();
 	nht = rcu_dereference_bh(tbl->nht);
 
-	for (h = 0; h < (1 << nht->hash_shift); h++) {
-		if (h < s_h)
-			continue;
+	for (h = s_h; h < (1 << nht->hash_shift); h++) {
 		if (h > s_h)
 			s_idx = 0;
 		for (n = rcu_dereference_bh(nht->hash_buckets[h]), idx = 0;
···
 
 	read_lock_bh(&tbl->lock);
 
-	for (h = 0; h <= PNEIGH_HASHMASK; h++) {
-		if (h < s_h)
-			continue;
+	for (h = s_h; h <= PNEIGH_HASHMASK; h++) {
 		if (h > s_h)
 			s_idx = 0;
 		for (n = tbl->phash_buckets[h], idx = 0; n; n = n->next) {
···
 	struct neigh_table *tbl;
 	int t, family, s_t;
 	int proxy = 0;
-	int err = 0;
+	int err;
 
 	read_lock(&neigh_tbl_lock);
 	family = ((struct rtgenmsg *) nlmsg_data(cb->nlh))->rtgen_family;
···
 
 	s_t = cb->args[0];
 
-	for (tbl = neigh_tables, t = 0; tbl && (err >= 0);
+	for (tbl = neigh_tables, t = 0; tbl;
 	     tbl = tbl->next, t++) {
 		if (t < s_t || (family && tbl->family != family))
 			continue;
···
 			err = pneigh_dump_table(tbl, skb, cb);
 		else
 			err = neigh_dump_table(tbl, skb, cb);
+		if (err < 0)
+			break;
 	}
 	read_unlock(&neigh_tbl_lock);
 
net/core/skbuff.c
···
 * @to: prior buffer
 * @from: buffer to add
 * @fragstolen: pointer to boolean
- *
+ * @delta_truesize: how much more was allocated than was requested
 */
 bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 		      bool *fragstolen, int *delta_truesize)
net/l2tp/l2tp_eth.c
···
 	if (dev) {
 		unregister_netdev(dev);
 		spriv->dev = NULL;
+		module_put(THIS_MODULE);
 	}
 }
}
···
 	if (rc < 0)
 		goto out_del_dev;
 
+	__module_get(THIS_MODULE);
 	/* Must be done after register_netdev() */
 	strlcpy(session->ifname, dev->name, IFNAMSIZ);
 
+6-3
net/l2tp/l2tp_ip.c
···
 					   sk->sk_bound_dev_if);
 		if (IS_ERR(rt))
 			goto no_route;
-		if (connected)
+		if (connected) {
 			sk_setup_caps(sk, &rt->dst);
-		else
-			dst_release(&rt->dst); /* safe since we hold rcu_read_lock */
+		} else {
+			skb_dst_set(skb, &rt->dst);
+			goto xmit;
+		}
 	}
 
 	/* We dont need to clone dst here, it is guaranteed to not disappear.
···
 	 */
 	skb_dst_set_noref(skb, &rt->dst);
 
+xmit:
 	/* Queue the packet to IP for output */
 	rc = ip_queue_xmit(skb, &inet->cork.fl);
 	rcu_read_unlock();
+6-1
net/mac80211/agg-rx.c
···
 	struct tid_ampdu_rx *tid_rx;
 	unsigned long timeout;
 
+	rcu_read_lock();
 	tid_rx = rcu_dereference(sta->ampdu_mlme.tid_rx[*ptid]);
-	if (!tid_rx)
+	if (!tid_rx) {
+		rcu_read_unlock();
 		return;
+	}
 
 	timeout = tid_rx->last_rx + TU_TO_JIFFIES(tid_rx->timeout);
 	if (time_is_after_jiffies(timeout)) {
 		mod_timer(&tid_rx->session_timer, timeout);
+		rcu_read_unlock();
 		return;
 	}
+	rcu_read_unlock();
 
 #ifdef CONFIG_MAC80211_HT_DEBUG
 	printk(KERN_DEBUG "rx session timer expired on tid %d\n", (u16)*ptid);
+3-3
net/mac80211/cfg.c
···
 		sinfo.filled = 0;
 		sta_set_sinfo(sta, &sinfo);
 
-		if (sinfo.filled | STATION_INFO_TX_BITRATE)
+		if (sinfo.filled & STATION_INFO_TX_BITRATE)
 			data[i] = 100000 *
 				cfg80211_calculate_bitrate(&sinfo.txrate);
 		i++;
-		if (sinfo.filled | STATION_INFO_RX_BITRATE)
+		if (sinfo.filled & STATION_INFO_RX_BITRATE)
 			data[i] = 100000 *
 				cfg80211_calculate_bitrate(&sinfo.rxrate);
 		i++;
 
-		if (sinfo.filled | STATION_INFO_SIGNAL_AVG)
+		if (sinfo.filled & STATION_INFO_SIGNAL_AVG)
 			data[i] = (u8)sinfo.signal_avg;
 		i++;
 	} else {
+12
net/mac80211/iface.c
···
 		ieee80211_configure_filter(local);
 		break;
 	default:
+		mutex_lock(&local->mtx);
+		if (local->hw_roc_dev == sdata->dev &&
+		    local->hw_roc_channel) {
+			/* ignore return value since this is racy */
+			drv_cancel_remain_on_channel(local);
+			ieee80211_queue_work(&local->hw, &local->hw_roc_done);
+		}
+		mutex_unlock(&local->mtx);
+
+		flush_work(&local->hw_roc_start);
+		flush_work(&local->hw_roc_done);
+
 		flush_work(&sdata->work);
 		/*
 		 * When we get here, the interface is marked down.
net/mac80211/offchannel.c
···
 		return;
 	}
 
+	/* was never transmitted */
+	if (local->hw_roc_skb) {
+		u64 cookie;
+
+		cookie = local->hw_roc_cookie ^ 2;
+
+		cfg80211_mgmt_tx_status(local->hw_roc_dev, cookie,
+					local->hw_roc_skb->data,
+					local->hw_roc_skb->len, false,
+					GFP_KERNEL);
+
+		kfree_skb(local->hw_roc_skb);
+		local->hw_roc_skb = NULL;
+		local->hw_roc_skb_for_status = NULL;
+	}
+
 	if (!local->hw_roc_for_tx)
 		cfg80211_remain_on_channel_expired(local->hw_roc_dev,
 						   local->hw_roc_cookie,
+2-2
net/mac80211/sta_info.c
···
 	/* make the station visible */
 	sta_info_hash_add(local, sta);
 
-	list_add(&sta->list, &local->sta_list);
+	list_add_rcu(&sta->list, &local->sta_list);
 
 	set_sta_flag(sta, WLAN_STA_INSERTED);
 
···
 	if (ret)
 		return ret;
 
-	list_del(&sta->list);
+	list_del_rcu(&sta->list);
 
 	mutex_lock(&local->key_mtx);
 	for (i = 0; i < NUM_DEFAULT_KEYS; i++)
+6-3
net/mac80211/tx.c
···
 	__le16 fc;
 	struct ieee80211_hdr hdr;
 	struct ieee80211s_hdr mesh_hdr __maybe_unused;
-	struct mesh_path __maybe_unused *mppath = NULL;
+	struct mesh_path __maybe_unused *mppath = NULL, *mpath = NULL;
 	const u8 *encaps_data;
 	int encaps_len, skip_header_bytes;
 	int nh_pos, h_pos;
···
 			goto fail;
 		}
 		rcu_read_lock();
-		if (!is_multicast_ether_addr(skb->data))
-			mppath = mpp_path_lookup(skb->data, sdata);
+		if (!is_multicast_ether_addr(skb->data)) {
+			mpath = mesh_path_lookup(skb->data, sdata);
+			if (!mpath)
+				mppath = mpp_path_lookup(skb->data, sdata);
+		}
 
 		/*
 		 * Use address extension if it is a packet from
+1-1
net/mac80211/util.c
···
 		enum ieee80211_sta_state state;
 
 		for (state = IEEE80211_STA_NOTEXIST;
-		     state < sta->sta_state - 1; state++)
+		     state < sta->sta_state; state++)
 			WARN_ON(drv_sta_state(local, sta->sdata, sta,
 					      state, state + 1));
 	}
+2-3
net/netfilter/nf_conntrack_h323_main.c
···
 		return 0;
 
 	/* RTP port is even */
-	port &= htons(~1);
-	rtp_port = port;
-	rtcp_port = htons(ntohs(port) + 1);
+	rtp_port = port & ~htons(1);
+	rtcp_port = port | htons(1);
 
 	/* Create expect for RTP */
 	if ((rtp_exp = nf_ct_expect_alloc(ct)) == NULL)
net/wireless/util.c
···
 				 enum nl80211_iftype iftype)
 {
 	struct wireless_dev *wdev_iter;
+	u32 used_iftypes = BIT(iftype);
 	int num[NUM_NL80211_IFTYPES];
 	int total = 1;
 	int i, j;
···
 
 		num[wdev_iter->iftype]++;
 		total++;
+		used_iftypes |= BIT(wdev_iter->iftype);
 	}
 	mutex_unlock(&rdev->devlist_mtx);
 
···
 	for (i = 0; i < rdev->wiphy.n_iface_combinations; i++) {
 		const struct ieee80211_iface_combination *c;
 		struct ieee80211_iface_limit *limits;
+		u32 all_iftypes = 0;
 
 		c = &rdev->wiphy.iface_combinations[i];
 
···
 			if (rdev->wiphy.software_iftypes & BIT(iftype))
 				continue;
 			for (j = 0; j < c->n_limits; j++) {
+				all_iftypes |= limits[j].types;
 				if (!(limits[j].types & BIT(iftype)))
 					continue;
 				if (limits[j].max < num[iftype])
···
 				limits[j].max -= num[iftype];
 			}
 		}
-		/* yay, it fits */
+
+		/*
+		 * Finally check that all iftypes that we're currently
+		 * using are actually part of this combination. If they
+		 * aren't then we can't use this combination and have
+		 * to continue to the next.
+		 */
+		if ((all_iftypes & used_iftypes) != used_iftypes)
+			goto cont;
+
+		/*
+		 * This combination covered all interface types and
+		 * supported the requested numbers, so we're good.
+		 */
 		kfree(limits);
 		return 0;
 cont:
sound/soc/soc-dapm.c
···
 			/* do we need to add this widget to the list ? */
 			if (list) {
 				int err;
-				err = dapm_list_add_widget(list, path->sink);
+				err = dapm_list_add_widget(list, path->source);
 				if (err < 0) {
 					dev_err(widget->dapm->dev, "could not add widget %s\n",
 						widget->name);
···
 	if (stream == SNDRV_PCM_STREAM_PLAYBACK)
 		paths = is_connected_output_ep(dai->playback_widget, list);
 	else
-		paths = is_connected_input_ep(dai->playback_widget, list);
+		paths = is_connected_input_ep(dai->capture_widget, list);
 
 	trace_snd_soc_dapm_connected(paths, stream);
 	dapm_clear_walk(&card->dapm);
+6
sound/soc/soc-pcm.c
···
 	for (i = 0; i < card->num_links; i++) {
 		be = &card->rtd[i];
 
+		if (!be->dai_link->no_pcm)
+			continue;
+
 		if (be->cpu_dai->playback_widget == widget ||
 		    be->codec_dai->playback_widget == widget)
 			return be;
···
 
 	for (i = 0; i < card->num_links; i++) {
 		be = &card->rtd[i];
+
+		if (!be->dai_link->no_pcm)
+			continue;
 
 		if (be->cpu_dai->capture_widget == widget ||
 		    be->codec_dai->capture_widget == widget)
sound/usb/card.h
···
 	unsigned long unlink_mask;	/* bitmask of unlinked urbs */
 
 	/* data and sync endpoints for this stream */
+	unsigned int ep_num;		/* the endpoint number */
 	struct snd_usb_endpoint *data_endpoint;
 	struct snd_usb_endpoint *sync_endpoint;
 	unsigned long flags;
+3-4
sound/usb/stream.c
···
 	subs->formats |= fp->formats;
 	subs->num_formats++;
 	subs->fmt_type = fp->fmt_type;
+	subs->ep_num = fp->endpoint;
}
 
 /*
···
 		if (as->fmt_type != fp->fmt_type)
 			continue;
 		subs = &as->substream[stream];
-		if (!subs->data_endpoint)
-			continue;
-		if (subs->data_endpoint->ep_num == fp->endpoint) {
+		if (subs->ep_num == fp->endpoint) {
 			list_add_tail(&fp->list, &subs->fmt_list);
 			subs->num_formats++;
 			subs->formats |= fp->formats;
···
 		if (as->fmt_type != fp->fmt_type)
 			continue;
 		subs = &as->substream[stream];
-		if (subs->data_endpoint)
+		if (subs->ep_num)
 			continue;
 		err = snd_pcm_new_stream(as->pcm, stream, 1);
 		if (err < 0)
tools/perf/builtin-report.c
···
 
 	if (symbol_conf.use_callchain) {
 		err = callchain_append(he->callchain,
-				       &evsel->hists.callchain_cursor,
+				       &callchain_cursor,
 				       sample->period);
 		if (err)
 			return err;
···
 	 * so we don't allocated the extra space needed because the stdio
 	 * code will not use it.
 	 */
-	if (al->sym != NULL && use_browser > 0) {
+	if (he->ms.sym != NULL && use_browser > 0) {
 		struct annotation *notes = symbol__annotation(he->ms.sym);
 
 		assert(evsel != NULL);
+4-4
tools/perf/builtin-stat.c
···
 		return 0;
 
 	if (!evsel_list->nr_entries) {
-		if (perf_evlist__add_attrs_array(evsel_list, default_attrs) < 0)
+		if (perf_evlist__add_default_attrs(evsel_list, default_attrs) < 0)
 			return -1;
 	}
 
···
 		return 0;
 
 	/* Append detailed run extra attributes: */
-	if (perf_evlist__add_attrs_array(evsel_list, detailed_attrs) < 0)
+	if (perf_evlist__add_default_attrs(evsel_list, detailed_attrs) < 0)
 		return -1;
 
 	if (detailed_run < 2)
 		return 0;
 
 	/* Append very detailed run extra attributes: */
-	if (perf_evlist__add_attrs_array(evsel_list, very_detailed_attrs) < 0)
+	if (perf_evlist__add_default_attrs(evsel_list, very_detailed_attrs) < 0)
 		return -1;
 
 	if (detailed_run < 3)
 		return 0;
 
 	/* Append very, very detailed run extra attributes: */
-	return perf_evlist__add_attrs_array(evsel_list, very_very_detailed_attrs);
+	return perf_evlist__add_default_attrs(evsel_list, very_very_detailed_attrs);
}
 
 int cmd_stat(int argc, const char **argv, const char *prefix __used)
+1-1
tools/perf/builtin-top.c
···
 	}
 
 	if (symbol_conf.use_callchain) {
-		err = callchain_append(he->callchain, &evsel->hists.callchain_cursor,
+		err = callchain_append(he->callchain, &callchain_cursor,
 				       sample->period);
 		if (err)
 			return;
+4-3
tools/perf/design.txt
···
 prctl. When a counter is disabled, it doesn't count or generate
 events but does continue to exist and maintain its count value.
 
-An individual counter or counter group can be enabled with
+An individual counter can be enabled with
 
-	ioctl(fd, PERF_EVENT_IOC_ENABLE);
+	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
 
 or disabled with
 
-	ioctl(fd, PERF_EVENT_IOC_DISABLE);
+	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
 
+For a counter group, pass PERF_IOC_FLAG_GROUP as the third argument.
 Enabling or disabling the leader of a group enables or disables the
 whole group; that is, while the group leader is disabled, none of the
 counters in the group will count. Enabling or disabling a member of a
+1-1
tools/perf/ui/browsers/annotate.c
···
 		"q/ESC/CTRL+C Exit\n\n"
 		"-> Go to target\n"
 		"<- Exit\n"
-		"h Cycle thru hottest instructions\n"
+		"H Cycle thru hottest instructions\n"
 		"j Toggle showing jump to target arrows\n"
 		"J Toggle showing number of jump sources on targets\n"
 		"n Search next string\n"
+1-1
tools/perf/util/PERF-VERSION-GEN
···
 # First check if there is a .git to get the version from git describe
 # otherwise try to get the version from the kernel makefile
 if test -d ../../.git -o -f ../../.git &&
-	VN=$(git describe --abbrev=4 HEAD 2>/dev/null) &&
+	VN=$(git describe --match 'v[0-9].[0-9]*' --abbrev=4 HEAD 2>/dev/null) &&
 	case "$VN" in
 	*$LF*) (exit 1) ;;
 	v[0-9]*)