···
 1.2.0	Handle creation of arrays that contain failed devices.
 1.3.0	Added support for RAID 10
 1.3.1	Allow device replacement/rebuild for RAID 10
+1.3.2	Fix/improve redundancy checking for RAID10
···
 Required properties for pin configuration node:
 - atmel,pins: 4 integers array, represents a group of pins mux and config
   setting. The format is atmel,pins = <PIN_BANK PIN_BANK_NUM PERIPH CONFIG>.
-- The PERIPH 0 means gpio.
+- The PERIPH 0 means gpio, PERIPH 1 is periph A, PERIPH 2 is periph B...
+  PIN_BANK 0 is pioA, PIN_BANK 1 is pioB...
 
 Bits used for CONFIG:
 PULL_UP (1 << 0): indicate this pin need a pull up.
···
 	pinctrl_dbgu: dbgu-0 {
 		atmel,pins =
 			<1 14 0x1 0x0	/* PB14 periph A */
-			 1 15 0x1 0x1>;	/* PB15 periph with pullup */
+			 1 15 0x1 0x1>;	/* PB15 periph A with pullup */
 	};
 };
};
Documentation/hid/hid-sensor.txt
Documentation/kernel-parameters.txt | +1 -1
···
 			real-time workloads.  It can also improve energy
 			efficiency for asymmetric multiprocessors.
 
-	rcu_nocbs_poll	[KNL,BOOT]
+	rcu_nocb_poll	[KNL,BOOT]
 			Rather than requiring that offloaded CPUs
 			(specified by rcu_nocbs= above) explicitly
 			awaken the corresponding "rcuoN" kthreads,
Documentation/x86/boot.txt | +26 -1
···
 Protocol 2.11:	(Kernel 3.6) Added a field for offset of EFI handover
 		protocol entry point.
 
+Protocol 2.12:	(Kernel 3.8) Added the xloadflags field and extension fields
+		to struct boot_params for loading bzImage and ramdisk
+		above 4G in 64bit.
+
 **** MEMORY LAYOUT
 
 The traditional memory map for the kernel loader, used for Image or
···
 0230/4	2.05+	kernel_alignment	Physical addr alignment required for kernel
 0234/1	2.05+	relocatable_kernel	Whether kernel is relocatable or not
 0235/1	2.10+	min_alignment		Minimum alignment, as a power of two
-0236/2	N/A	pad3			Unused
+0236/2	2.12+	xloadflags		Boot protocol option flags
 0238/4	2.06+	cmdline_size		Maximum size of the kernel command line
 023C/4	2.07+	hardware_subarch	Hardware subarchitecture
 0240/8	2.07+	hardware_subarch_data	Subarchitecture-specific data
···
 	There may be a considerable performance cost with an excessively
 	misaligned kernel.  Therefore, a loader should typically try each
 	power-of-two alignment from kernel_alignment down to this alignment.
+
+Field name:	xloadflags
+Type:		read
+Offset/size:	0x236/2
+Protocol:	2.12+
+
+  This field is a bitmask.
+
+  Bit 0 (read):	XLF_KERNEL_64
+	- If 1, this kernel has the legacy 64-bit entry point at 0x200.
+
+  Bit 1 (read): XLF_CAN_BE_LOADED_ABOVE_4G
+	- If 1, kernel/boot_params/cmdline/ramdisk can be above 4G.
+
+  Bit 2 (read):	XLF_EFI_HANDOVER_32
+	- If 1, the kernel supports the 32-bit EFI handoff entry point
+	  given at handover_offset.
+
+  Bit 3 (read): XLF_EFI_HANDOVER_64
+	- If 1, the kernel supports the 64-bit EFI handoff entry point
+	  given at handover_offset + 0x200.
 
 Field name:	cmdline_size
 Type:		read
Documentation/x86/zero-page.txt | +4
···
 090/010	ALL	hd1_info	hd1 disk parameter, OBSOLETE!!
 0A0/010	ALL	sys_desc_table	System description table (struct sys_desc_table)
 0B0/010	ALL	olpc_ofw_header	OLPC's OpenFirmware CIF and friends
+0C0/004	ALL	ext_ramdisk_image ramdisk_image high 32bits
+0C4/004	ALL	ext_ramdisk_size  ramdisk_size high 32bits
+0C8/004	ALL	ext_cmd_line_ptr  cmd_line_ptr high 32bits
 140/080	ALL	edid_info	Video mode setup (struct edid_info)
 1C0/020	ALL	efi_info	EFI 32 information (struct efi_info)
 1E0/004	ALL	alk_mem_k	Alternative mem check, in KB
···
 1E9/001	ALL	eddbuf_entries	Number of entries in eddbuf (below)
 1EA/001	ALL	edd_mbr_sig_buf_entries	Number of entries in edd_mbr_sig_buffer
 				(below)
+1EF/001	ALL	sentinel	Used to detect broken bootloaders
 290/040	ALL	edd_mbr_sig_buffer EDD MBR signatures
 2D0/A00	ALL	e820_map	E820 memory map table
 				(array of struct e820entry)
MAINTAINERS | +5 -5
···
 M:	Haavard Skinnemoen <hskinnemoen@gmail.com>
 M:	Hans-Christian Egtvedt <egtvedt@samfundet.no>
 W:	http://www.atmel.com/products/AVR32/
-W:	http://avr32linux.org/
+W:	http://mirror.egtvedt.no/avr32linux.org/
 W:	http://avrfreaks.net/
 S:	Maintained
 F:	arch/avr32/
···
 F:	drivers/net/ethernet/i825xx/eexpress.*
 
 ETHERNET BRIDGE
-M:	Stephen Hemminger <shemminger@vyatta.com>
+M:	Stephen Hemminger <stephen@networkplumber.org>
 L:	bridge@lists.linux-foundation.org
 L:	netdev@vger.kernel.org
 W:	http://www.linuxfoundation.org/en/Net:Bridge
···
 
 MARVELL GIGABIT ETHERNET DRIVERS (skge/sky2)
 M:	Mirko Lindner <mlindner@marvell.com>
-M:	Stephen Hemminger <shemminger@vyatta.com>
+M:	Stephen Hemminger <stephen@networkplumber.org>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/ethernet/marvell/sk*
···
 F:	drivers/infiniband/hw/nes/
 
 NETEM NETWORK EMULATOR
-M:	Stephen Hemminger <shemminger@vyatta.com>
+M:	Stephen Hemminger <stephen@networkplumber.org>
 L:	netem@lists.linux-foundation.org
 S:	Maintained
 F:	net/sched/sch_netem.c
···
 F:	sound/
 
 SOUND - SOC LAYER / DYNAMIC AUDIO POWER MANAGEMENT (ASoC)
-M:	Liam Girdwood <lrg@ti.com>
+M:	Liam Girdwood <lgirdwood@gmail.com>
 M:	Mark Brown <broonie@opensource.wolfsonmicro.com>
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git
 L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
Makefile | +2 -2
···
 VERSION = 3
 PATCHLEVEL = 8
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
-NAME = Terrified Chipmunk
+EXTRAVERSION = -rc7
+NAME = Unicycling Gorilla
 
 # *DOCUMENTATION*
 # To see a list of typical targets execute "make help"
···
 	irq_set_chained_handler(irq, gic_handle_cascade_irq);
 }
 
+static u8 gic_get_cpumask(struct gic_chip_data *gic)
+{
+	void __iomem *base = gic_data_dist_base(gic);
+	u32 mask, i;
+
+	for (i = mask = 0; i < 32; i += 4) {
+		mask = readl_relaxed(base + GIC_DIST_TARGET + i);
+		mask |= mask >> 16;
+		mask |= mask >> 8;
+		if (mask)
+			break;
+	}
+
+	if (!mask)
+		pr_crit("GIC CPU mask not found - kernel will fail to boot.\n");
+
+	return mask;
+}
+
 static void __init gic_dist_init(struct gic_chip_data *gic)
 {
 	unsigned int i;
···
 	/*
 	 * Set all global interrupts to this CPU only.
 	 */
-	cpumask = readl_relaxed(base + GIC_DIST_TARGET + 0);
+	cpumask = gic_get_cpumask(gic);
+	cpumask |= cpumask << 8;
+	cpumask |= cpumask << 16;
 	for (i = 32; i < gic_irqs; i += 4)
 		writel_relaxed(cpumask, base + GIC_DIST_TARGET + i * 4 / 4);
···
 	 * Get what the GIC says our CPU mask is.
 	 */
 	BUG_ON(cpu >= NR_GIC_CPU_IF);
-	cpu_mask = readl_relaxed(dist_base + GIC_DIST_TARGET + 0);
+	cpu_mask = gic_get_cpumask(gic);
 	gic_cpu_map[cpu] = cpu_mask;
 
 	/*
arch/arm/configs/at91_dt_defconfig | +2 -1
···
 CONFIG_SOC_AT91SAM9263=y
 CONFIG_SOC_AT91SAM9G45=y
 CONFIG_SOC_AT91SAM9X5=y
+CONFIG_SOC_AT91SAM9N12=y
 CONFIG_MACH_AT91SAM_DT=y
 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y
 CONFIG_AT91_TIMER_HZ=128
···
 CONFIG_ZBOOT_ROM_BSS=0x0
 CONFIG_ARM_APPENDED_DTB=y
 CONFIG_ARM_ATAG_DTB_COMPAT=y
-CONFIG_CMDLINE="mem=128M console=ttyS0,115200 initrd=0x21100000,25165824 root=/dev/ram0 rw"
+CONFIG_CMDLINE="console=ttyS0,115200 initrd=0x21100000,25165824 root=/dev/ram0 rw"
CONFIG_KEXEC=y
 CONFIG_AUTO_ZRELADDR=y
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
arch/arm/include/asm/memory.h | +1 -1
···
  */
 #define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
 #define TASK_SIZE		(UL(CONFIG_PAGE_OFFSET) - UL(0x01000000))
-#define TASK_UNMAPPED_BASE	(UL(CONFIG_PAGE_OFFSET) / 3)
+#define TASK_UNMAPPED_BASE	ALIGN(TASK_SIZE / 3, SZ_16M)
 
 /*
  * The maximum size of a 26-bit user space task.
···
 
 	/*
 	 * Then map boot params address in r2 if specified.
+	 * We map 2 sections in case the ATAGs/DTB crosses a section boundary.
 	 */
 	mov	r0, r2, lsr #SECTION_SHIFT
 	movs	r0, r0, lsl #SECTION_SHIFT
···
 	addne	r3, r3, #PAGE_OFFSET
 	addne	r3, r4, r3, lsr #(SECTION_SHIFT - PMD_ORDER)
 	orrne	r6, r7, r0
+	strne	r6, [r3], #1 << PMD_ORDER
+	addne	r6, r6, #1 << SECTION_SHIFT
 	strne	r6, [r3]
 
 #ifdef CONFIG_DEBUG_LL
···
 	 * as it has already been validated by the primary processor.
 	 */
 #ifdef CONFIG_ARM_VIRT_EXT
-	bl	__hyp_stub_install
+	bl	__hyp_stub_install_secondary
 #endif
 	safe_svcmode_maskall r9
arch/arm/kernel/hyp-stub.S | +6 -12
···
 	 * immediately.
 	 */
 	compare_cpu_mode_with_primary	r4, r5, r6, r7
-	bxne	lr
+	movne	pc, lr
 
 	/*
 	 * Once we have given up on one CPU, we do not try to install the
···
 	 */
 
 	cmp	r4, #HYP_MODE
-	bxne	lr			@ give up if the CPU is not in HYP mode
+	movne	pc, lr			@ give up if the CPU is not in HYP mode
 
 /*
  * Configure HSCTLR to set correct exception endianness/instruction set
···
  * Eventually, CPU-specific code might be needed -- assume not for now
  *
  * This code relies on the "eret" instruction to synchronize the
- * various coprocessor accesses.
+ * various coprocessor accesses. This is done when we switch to SVC
+ * (see safe_svcmode_maskall).
  */
 	@ Now install the hypervisor stub:
 	adr	r7, __hyp_stub_vectors
···
 1:
 #endif
 
-	bic	r7, r4, #MODE_MASK
-	orr	r7, r7, #SVC_MODE
-THUMB(	orr	r7, r7, #PSR_T_BIT	)
-	msr	spsr_cxsf, r7		@ This is SPSR_hyp.
-
-	__MSR_ELR_HYP(14)		@ msr elr_hyp, lr
-	__ERET				@ return, switching to SVC mode
-					@ The boot CPU mode is left in r4.
+	bx	lr			@ The boot CPU mode is left in r4.
 ENDPROC(__hyp_stub_install_secondary)
 
 __hyp_stub_do_trap:
···
 	@ fall through
 ENTRY(__hyp_set_vectors)
 	__HVC(0)
-	bx	lr
+	mov	pc, lr
 ENDPROC(__hyp_set_vectors)
 
 #ifndef ZIMAGE
arch/arm/mach-at91/setup.c | +2
···
 	switch (socid) {
 	case ARCH_ID_AT91RM9200:
 		at91_soc_initdata.type = AT91_SOC_RM9200;
+		if (at91_soc_initdata.subtype == AT91_SOC_SUBTYPE_NONE)
+			at91_soc_initdata.subtype = AT91_SOC_RM9200_BGA;
 		at91_boot_soc = at91rm9200_soc;
 		break;
 
arch/arm/mach-exynos/Kconfig | +1 -1
···
 	select CPU_EXYNOS4210
 	select HAVE_SAMSUNG_KEYPAD if INPUT_KEYBOARD
 	select PINCTRL
-	select PINCTRL_EXYNOS4
+	select PINCTRL_EXYNOS
 	select USE_OF
 	help
 	  Machine support for Samsung Exynos4 machine with device tree enabled.
···
 	for (i = 0; i < ARRAY_SIZE(clks_init_on); i++)
 		clk_prepare_enable(clk[clks_init_on[i]]);
 
+	/* Set initial power mode */
+	imx6q_set_lpm(WAIT_CLOCKED);
+
 	np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-gpt");
 	base = of_iomap(np, 0);
 	WARN_ON(!base);
arch/arm/mach-imx/common.h | +1
···
 extern void imx6q_clock_map_io(void);
 
 extern void imx_cpu_die(unsigned int cpu);
+extern int imx_cpu_kill(unsigned int cpu);
 
 #ifdef CONFIG_PM
 extern void imx6q_pm_init(void);
···
 void imx_cpu_die(unsigned int cpu)
 {
 	cpu_enter_lowpower();
-	imx_enable_cpu(cpu, false);
+	cpu_do_idle();
+}
 
-	/* spin here until hardware takes it down */
-	while (1)
-		;
+int imx_cpu_kill(unsigned int cpu)
+{
+	imx_enable_cpu(cpu, false);
+	return 1;
 }
···
 {
 	int ret = 0;
 
+	if (!ap_syscon_base)
+		return -EINVAL;
+
 	if (nr == 0) {
 		sys->mem_offset = PHYS_PCI_MEM_BASE;
 		ret = pci_v3_setup_resources(sys);
-		/* Remap the Integrator system controller */
-		ap_syscon_base = ioremap(INTEGRATOR_SC_BASE, 0x100);
-		if (!ap_syscon_base)
-			return -EINVAL;
 	}
 
 	return ret;
···
 	unsigned long flags;
 	unsigned int temp;
 	int ret;
+
+	/* Remap the Integrator system controller */
+	ap_syscon_base = ioremap(INTEGRATOR_SC_BASE, 0x100);
+	if (!ap_syscon_base) {
+		pr_err("unable to remap the AP syscon for PCIv3\n");
+		return;
+	}
 
 	pcibios_min_mem = 0x00100000;
 
arch/arm/mach-kirkwood/board-ns2.c | -38
···
 #include <linux/gpio.h>
 #include <linux/of.h>
 #include "common.h"
-#include "mpp.h"
 
 static struct mv643xx_eth_platform_data ns2_ge00_data = {
 	.phy_addr	= MV643XX_ETH_PHY_ADDR(8),
-};
-
-static unsigned int ns2_mpp_config[] __initdata = {
-	MPP0_SPI_SCn,
-	MPP1_SPI_MOSI,
-	MPP2_SPI_SCK,
-	MPP3_SPI_MISO,
-	MPP4_NF_IO6,
-	MPP5_NF_IO7,
-	MPP6_SYSRST_OUTn,
-	MPP7_GPO,		/* Fan speed (bit 1) */
-	MPP8_TW0_SDA,
-	MPP9_TW0_SCK,
-	MPP10_UART0_TXD,
-	MPP11_UART0_RXD,
-	MPP12_GPO,		/* Red led */
-	MPP14_GPIO,		/* USB fuse */
-	MPP16_GPIO,		/* SATA 0 power */
-	MPP17_GPIO,		/* SATA 1 power */
-	MPP18_NF_IO0,
-	MPP19_NF_IO1,
-	MPP20_SATA1_ACTn,
-	MPP21_SATA0_ACTn,
-	MPP22_GPIO,		/* Fan speed (bit 0) */
-	MPP23_GPIO,		/* Fan power */
-	MPP24_GPIO,		/* USB mode select */
-	MPP25_GPIO,		/* Fan rotation fail */
-	MPP26_GPIO,		/* USB device vbus */
-	MPP28_GPIO,		/* USB enable host vbus */
-	MPP29_GPIO,		/* Blue led (slow register) */
-	MPP30_GPIO,		/* Blue led (command register) */
-	MPP31_GPIO,		/* Board power off */
-	MPP32_GPIO,		/* Power button (0 = Released, 1 = Pushed) */
-	MPP33_GPO,		/* Fan speed (bit 2) */
-	0
 };
 
 #define NS2_GPIO_POWER_OFF	31
···
 	/*
 	 * Basic setup. Needs to be called early.
 	 */
-	kirkwood_mpp_conf(ns2_mpp_config);
-
 	if (of_machine_is_compatible("lacie,netspace_lite_v2") ||
 	    of_machine_is_compatible("lacie,netspace_mini_v2"))
 		ns2_ge00_data.phy_addr = MV643XX_ETH_PHY_ADDR(0);
···
 	 * On OMAP4460 the ABE DPLL fails to turn on if in idle low-power
 	 * state when turning the ABE clock domain. Workaround this by
 	 * locking the ABE DPLL on boot.
+	 * Lock the ABE DPLL in any case to avoid issues with audio.
 	 */
-	if (cpu_is_omap446x()) {
-		rc = clk_set_parent(&abe_dpll_refclk_mux_ck, &sys_32k_ck);
-		if (!rc)
-			rc = clk_set_rate(&dpll_abe_ck, OMAP4_DPLL_ABE_DEFFREQ);
-		if (rc)
-			pr_err("%s: failed to configure ABE DPLL!\n", __func__);
-	}
+	rc = clk_set_parent(&abe_dpll_refclk_mux_ck, &sys_32k_ck);
+	if (!rc)
+		rc = clk_set_rate(&dpll_abe_ck, OMAP4_DPLL_ABE_DEFFREQ);
+	if (rc)
+		pr_err("%s: failed to configure ABE DPLL!\n", __func__);
 
 	return 0;
 }
···
 	 * currently reset very early during boot, before I2C is
 	 * available, so it doesn't seem that we have any choice in
 	 * the kernel other than to avoid resetting it.
+	 *
+	 * Also, McPDM needs to be configured to NO_IDLE mode when it
+	 * is in use, otherwise vital clocks will be gated, which
+	 * results in 'slow motion' audio playback.
 	 */
-	.flags		= HWMOD_EXT_OPT_MAIN_CLK,
+	.flags		= HWMOD_EXT_OPT_MAIN_CLK | HWMOD_SWSUP_SIDLE,
 	.mpu_irqs	= omap44xx_mcpdm_irqs,
 	.sdma_reqs	= omap44xx_mcpdm_sdma_reqs,
 	.main_clk	= "mcpdm_fck",
arch/arm/mach-omap2/timer.c | +2 -6
···
 	struct device_node *np;
 
 	for_each_matching_node(np, match) {
-		if (!of_device_is_available(np)) {
-			of_node_put(np);
+		if (!of_device_is_available(np))
 			continue;
-		}
 
-		if (property && !of_get_property(np, property, NULL)) {
-			of_node_put(np);
+		if (property && !of_get_property(np, property, NULL))
 			continue;
-		}
 
 		of_add_property(np, &device_disabled);
 		return np;
arch/arm/mach-realview/include/mach/irqs-eb.h | +1 -1
···
 /*
  * Only define NR_IRQS if less than NR_IRQS_EB
  */
-#define NR_IRQS_EB		(IRQ_EB_GIC_START + 96)
+#define NR_IRQS_EB		(IRQ_EB_GIC_START + 128)
 
 #if defined(CONFIG_MACH_REALVIEW_EB) \
 	&& (!defined(NR_IRQS) || (NR_IRQS < NR_IRQS_EB))
···
 	for (i = 0; i < ARRAY_SIZE(s3c64xx_pm_domains); i++)
 		pm_genpd_init(&s3c64xx_pm_domains[i]->pd, NULL, false);
 
+#ifdef CONFIG_S3C_DEV_FB
 	if (dev_get_platdata(&s3c_device_fb.dev))
 		pm_genpd_add_device(&s3c64xx_pm_f.pd, &s3c_device_fb.dev);
+#endif
 
 	return 0;
 }
arch/arm/mm/dma-mapping.c | +11 -9
···
 
 	if (is_coherent || nommu())
 		addr = __alloc_simple_buffer(dev, size, gfp, &page);
-	else if (gfp & GFP_ATOMIC)
+	else if (!(gfp & __GFP_WAIT))
 		addr = __alloc_from_pool(size, &page);
 	else if (!IS_ENABLED(CONFIG_CMA))
 		addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, caller);
···
 	size_t size, enum dma_data_direction dir,
 	void (*op)(const void *, size_t, int))
 {
+	unsigned long pfn;
+	size_t left = size;
+
+	pfn = page_to_pfn(page) + offset / PAGE_SIZE;
+	offset %= PAGE_SIZE;
+
 	/*
 	 * A single sg entry may refer to multiple physically contiguous
 	 * pages.  But we still need to process highmem pages individually.
 	 * If highmem is not configured then the bulk of this loop gets
 	 * optimized out.
 	 */
-	size_t left = size;
 	do {
 		size_t len = left;
 		void *vaddr;
 
+		page = pfn_to_page(pfn);
+
 		if (PageHighMem(page)) {
-			if (len + offset > PAGE_SIZE) {
-				if (offset >= PAGE_SIZE) {
-					page += offset / PAGE_SIZE;
-					offset %= PAGE_SIZE;
-				}
+			if (len + offset > PAGE_SIZE)
 				len = PAGE_SIZE - offset;
-			}
 			vaddr = kmap_high_get(page);
 			if (vaddr) {
 				vaddr += offset;
···
 			op(vaddr, len, dir);
 		}
 		offset = 0;
-		page++;
+		pfn++;
 		left -= len;
 	} while (left);
 }
···
 	select SSB_DRIVER_EXTIF
 	select SSB_EMBEDDED
 	select SSB_B43_PCI_BRIDGE if PCI
+	select SSB_DRIVER_PCICORE if PCI
 	select SSB_PCICORE_HOSTMODE if PCI
 	select SSB_DRIVER_GPIO
+	select GPIOLIB
 	default y
 	help
 	 Add support for old Broadcom BCM47xx boards with Sonics Silicon Backplane support.
···
 	select BCMA_HOST_PCI if PCI
 	select BCMA_DRIVER_PCI_HOSTMODE if PCI
 	select BCMA_DRIVER_GPIO
+	select GPIOLIB
 	default y
 	help
 	 Add support for new Broadcom BCM47xx boards with Broadcom specific Advanced Microcontroller Bus.
arch/mips/cavium-octeon/executive/cvmx-l2c.c | +5 -4
···
  * measurement, and debugging facilities.
  */
 
+#include <linux/compiler.h>
 #include <linux/irqflags.h>
 #include <asm/octeon/cvmx.h>
 #include <asm/octeon/cvmx-l2c.h>
···
  */
 static void fault_in(uint64_t addr, int len)
 {
-	volatile char *ptr;
-	volatile char dummy;
+	char *ptr;
+
 	/*
 	 * Adjust addr and length so we get all cache lines even for
 	 * small ranges spanning two cache lines.
 	 */
 	len += addr & CVMX_CACHE_LINE_MASK;
 	addr &= ~CVMX_CACHE_LINE_MASK;
-	ptr = (volatile char *)cvmx_phys_to_ptr(addr);
+	ptr = cvmx_phys_to_ptr(addr);
 	/*
 	 * Invalidate L1 cache to make sure all loads result in data
 	 * being in L2.
 	 */
 	CVMX_DCACHE_INVALIDATE;
 	while (len > 0) {
-		dummy += *ptr;
+		ACCESS_ONCE(*ptr);
 		len -= CVMX_CACHE_LINE_SIZE;
 		ptr += CVMX_CACHE_LINE_SIZE;
 	}
···
 #define MCOUNT_OFFSET_INSNS 4
 #endif
 
+/* Arch override because MIPS doesn't need to run this from stop_machine() */
+void arch_ftrace_update_code(int command)
+{
+	ftrace_modify_all_code(command);
+}
+
 /*
  * Check if the address is in kernel space
  *
···
 	return 0;
 }
 
+#ifndef CONFIG_64BIT
+static int ftrace_modify_code_2(unsigned long ip, unsigned int new_code1,
+				unsigned int new_code2)
+{
+	int faulted;
+
+	safe_store_code(new_code1, ip, faulted);
+	if (unlikely(faulted))
+		return -EFAULT;
+	ip += 4;
+	safe_store_code(new_code2, ip, faulted);
+	if (unlikely(faulted))
+		return -EFAULT;
+	flush_icache_range(ip, ip + 8); /* original ip + 12 */
+	return 0;
+}
+#endif
+
 /*
  * The details about the calling site of mcount on MIPS
  *
···
 	 * needed.
 	 */
 	new = in_kernel_space(ip) ? INSN_NOP : INSN_B_1F;
-
+#ifdef CONFIG_64BIT
 	return ftrace_modify_code(ip, new);
+#else
+	/*
+	 * On 32 bit MIPS platforms, gcc adds a stack adjust
+	 * instruction in the delay slot after the branch to
+	 * mcount and expects mcount to restore the sp on return.
+	 * This is based on a legacy API and does nothing but
+	 * waste instructions so it's being removed at runtime.
+	 */
+	return ftrace_modify_code_2(ip, new, INSN_NOP);
+#endif
 }
 
 int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
···
 
 			printk(KERN_WARNING
 			       "VPE loader: TC %d is already in use.\n",
-			       t->index);
+			       v->tc->index);
 			return -ENOEXEC;
 		}
 	} else {
arch/mips/lantiq/irq.c | +1 -1
···
 #endif
 
 	/* tell oprofile which irq to use */
-	cp0_perfcount_irq = LTQ_PERF_IRQ;
+	cp0_perfcount_irq = irq_create_mapping(ltq_domain, LTQ_PERF_IRQ);
 
 	/*
 	 * if the timer irq is not one of the mips irqs we need to
···
 
 void __init prom_init(void)
 {
-	int i, *argv, *envp;	/* passed as 32 bit ptrs */
+	int *argv, *envp;	/* passed as 32 bit ptrs */
 	struct psb_info *prom_infop;
+#ifdef CONFIG_SMP
+	int i;
+#endif
 
 	/* truncate to 32 bit and sign extend all args */
 	argv = (int *)(long)(int)fw_arg1;
···
 	ld	r4,TI_FLAGS(r9)
 	andi.	r0,r4,_TIF_NEED_RESCHED
 	bne	1b
+
+	/*
+	 * arch_local_irq_restore() from preempt_schedule_irq above may
+	 * enable hard interrupt but we really should disable interrupts
+	 * when we return from the interrupt, and so that we don't get
+	 * interrupted after loading SRR0/1.
+	 */
+#ifdef CONFIG_PPC_BOOK3E
+	wrteei	0
+#else
+	ld	r10,PACAKMSR(r13) /* Get kernel MSR without EE */
+	mtmsrd	r10,1		  /* Update machine state */
+#endif /* CONFIG_PPC_BOOK3E */
 #endif /* CONFIG_PREEMPT */
 
 	.globl	fast_exc_return_irq
arch/powerpc/kernel/kgdb.c | +3 -2
···
 static int kgdb_singlestep(struct pt_regs *regs)
 {
 	struct thread_info *thread_info, *exception_thread_info;
-	struct thread_info *backup_current_thread_info = \
-		(struct thread_info *)kmalloc(sizeof(struct thread_info), GFP_KERNEL);
+	struct thread_info *backup_current_thread_info;
 
 	if (user_mode(regs))
 		return 0;
 
+	backup_current_thread_info = (struct thread_info *)kmalloc(sizeof(struct thread_info), GFP_KERNEL);
 	/*
 	 * On Book E and perhaps other processors, singlestep is handled on
 	 * the critical exception stack.  This causes current_thread_info()
···
 	/* Restore current_thread_info lastly. */
 	memcpy(exception_thread_info, backup_current_thread_info, sizeof *thread_info);
 
+	kfree(backup_current_thread_info);
 	return 1;
 }
 
arch/powerpc/kernel/time.c | +7 -2
···
 	set_dec(DECREMENTER_MAX);
 
 	/* Some implementations of hotplug will get timer interrupts while
-	 * offline, just ignore these
+	 * offline, just ignore these.  We also need to set
+	 * decrementers_next_tb to MAX so that __check_irq_replay
+	 * doesn't replay the timer interrupt on return, otherwise
+	 * we'll trap here infinitely :(
 	 */
-	if (!cpu_online(smp_processor_id()))
+	if (!cpu_online(smp_processor_id())) {
+		*next_tb = ~(u64)0;
 		return;
+	}
 
 	/* Conditionally hard-enable interrupts now that the DEC has been
 	 * bumped to its maximum value
arch/powerpc/kvm/emulate.c | +2
···
 #define OP_31_XOP_TRAP      4
 #define OP_31_XOP_LWZX      23
 #define OP_31_XOP_TRAP_64   68
+#define OP_31_XOP_DCBF      86
 #define OP_31_XOP_LBZX      87
 #define OP_31_XOP_STWX      151
 #define OP_31_XOP_STBX      215
···
 		emulated = kvmppc_emulate_mtspr(vcpu, sprn, rs);
 		break;
 
+	case OP_31_XOP_DCBF:
 	case OP_31_XOP_DCBI:
 		/* Do nothing. The guest is performing dcbi because
 		 * hardware DMA is not snooped by the dcache, but
arch/powerpc/mm/hash_low_64.S | +35 -27
···
 	sldi	r29,r5,SID_SHIFT - VPN_SHIFT
 	rldicl  r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT - VPN_SHIFT)
 	or	r29,r28,r29
-
-	/* Calculate hash value for primary slot and store it in r28 */
-	rldicl	r5,r5,0,25		/* vsid & 0x0000007fffffffff */
-	rldicl	r0,r3,64-12,48		/* (ea >> 12) & 0xffff */
-	xor	r28,r5,r0
+	/*
+	 * Calculate hash value for primary slot and store it in r28
+	 * r3 = va, r5 = vsid
+	 * r0 = (va >> 12) & ((1ul << (28 - 12)) -1)
+	 */
+	rldicl	r0,r3,64-12,48
+	xor	r28,r5,r0		/* hash */
 	b	4f
 
 3:	/* Calc vpn and put it in r29 */
···
 	/*
 	 * calculate hash value for primary slot and
 	 * store it in r28 for 1T segment
+	 * r3 = va, r5 = vsid
 	 */
-	rldic	r28,r5,25,25		/* (vsid << 25) & 0x7fffffffff */
-	clrldi	r5,r5,40		/* vsid & 0xffffff */
-	rldicl	r0,r3,64-12,36		/* (ea >> 12) & 0xfffffff */
-	xor	r28,r28,r5
+	sldi	r28,r5,25		/* vsid << 25 */
+	/* r0 = (va >> 12) & ((1ul << (40 - 12)) -1) */
+	rldicl	r0,r3,64-12,36
+	xor	r28,r28,r5		/* vsid ^ ( vsid << 25) */
 	xor	r28,r28,r0		/* hash */
 
 	/* Convert linux PTE bits into HW equivalents */
···
 	 */
 	rldicl  r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT - VPN_SHIFT)
 	or	r29,r28,r29
-
-	/* Calculate hash value for primary slot and store it in r28 */
-	rldicl	r5,r5,0,25		/* vsid & 0x0000007fffffffff */
-	rldicl	r0,r3,64-12,48		/* (ea >> 12) & 0xffff */
-	xor	r28,r5,r0
+	/*
+	 * Calculate hash value for primary slot and store it in r28
+	 * r3 = va, r5 = vsid
+	 * r0 = (va >> 12) & ((1ul << (28 - 12)) -1)
+	 */
+	rldicl	r0,r3,64-12,48
+	xor	r28,r5,r0		/* hash */
 	b	4f
 
 3:	/* Calc vpn and put it in r29 */
···
 	/*
 	 * Calculate hash value for primary slot and
 	 * store it in r28 for 1T segment
+	 * r3 = va, r5 = vsid
 	 */
-	rldic	r28,r5,25,25		/* (vsid << 25) & 0x7fffffffff */
-	clrldi	r5,r5,40		/* vsid & 0xffffff */
-	rldicl	r0,r3,64-12,36		/* (ea >> 12) & 0xfffffff */
-	xor	r28,r28,r5
+	sldi	r28,r5,25		/* vsid << 25 */
+	/* r0 = (va >> 12) & ((1ul << (40 - 12)) -1) */
+	rldicl	r0,r3,64-12,36
+	xor	r28,r28,r5		/* vsid ^ ( vsid << 25) */
 	xor	r28,r28,r0		/* hash */
 
 	/* Convert linux PTE bits into HW equivalents */
···
 	rldicl	r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT - VPN_SHIFT)
 	or	r29,r28,r29
 
-	/* Calculate hash value for primary slot and store it in r28 */
-	rldicl	r5,r5,0,25		/* vsid & 0x0000007fffffffff */
-	rldicl	r0,r3,64-16,52		/* (ea >> 16) & 0xfff */
-	xor	r28,r5,r0
+	/* Calculate hash value for primary slot and store it in r28
+	 * r3 = va, r5 = vsid
+	 * r0 = (va >> 16) & ((1ul << (28 - 16)) -1)
+	 */
+	rldicl	r0,r3,64-16,52
+	xor	r28,r5,r0		/* hash */
 	b	4f
 
 3:	/* Calc vpn and put it in r29 */
 	sldi	r29,r5,SID_SHIFT_1T - VPN_SHIFT
 	rldicl	r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT_1T - VPN_SHIFT)
 	or	r29,r28,r29
-
 	/*
 	 * calculate hash value for primary slot and
 	 * store it in r28 for 1T segment
+	 * r3 = va, r5 = vsid
 	 */
-	rldic	r28,r5,25,25		/* (vsid << 25) & 0x7fffffffff */
-	clrldi	r5,r5,40		/* vsid & 0xffffff */
-	rldicl	r0,r3,64-16,40		/* (ea >> 16) & 0xffffff */
-	xor	r28,r28,r5
+	sldi	r28,r5,25		/* vsid << 25 */
+	/* r0 = (va >> 16) & ((1ul << (40 - 16)) -1) */
+	rldicl	r0,r3,64-16,40
+	xor	r28,r28,r5		/* vsid ^ ( vsid << 25) */
 	xor	r28,r28,r0		/* hash */
 
 	/* Convert linux PTE bits into HW equivalents */
···
 
 static int pas_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 {
+	/*
+	 * We don't support CPU hotplug. Don't unmap after the system
+	 * has already made it to a running state.
+	 */
+	if (system_state != SYSTEM_BOOTING)
+		return 0;
+
 	if (sdcasr_mapbase)
 		iounmap(sdcasr_mapbase);
 	if (sdcpwr_mapbase)
···
 	int i;
 	struct setup_data *data;
 
-	data = (struct setup_data *)params->hdr.setup_data;
+	data = (struct setup_data *)(unsigned long)params->hdr.setup_data;
 
 	while (data && data->next)
-		data = (struct setup_data *)data->next;
+		data = (struct setup_data *)(unsigned long)data->next;
 
 	status = efi_call_phys5(sys_table->boottime->locate_handle,
 				EFI_LOCATE_BY_PROTOCOL, &pci_proto,
···
 		if (!pci)
 			continue;
 
+#ifdef CONFIG_X86_64
 		status = efi_call_phys4(pci->attributes, pci,
 					EfiPciIoAttributeOperationGet, 0,
 					&attributes);
-
+#else
+		status = efi_call_phys5(pci->attributes, pci,
+					EfiPciIoAttributeOperationGet, 0, 0,
+					&attributes);
+#endif
 		if (status != EFI_SUCCESS)
-			continue;
-
-		if (!(attributes & EFI_PCI_IO_ATTRIBUTE_EMBEDDED_ROM))
 			continue;
 
 		if (!pci->romimage || !pci->romsize)
···
 		memcpy(rom->romdata, pci->romimage, pci->romsize);
 
 		if (data)
-			data->next = (uint64_t)rom;
+			data->next = (unsigned long)rom;
 		else
-			params->hdr.setup_data = (uint64_t)rom;
+			params->hdr.setup_data = (unsigned long)rom;
 
 		data = (struct setup_data *)rom;
···
 		 * Once we've found a GOP supporting ConOut,
 		 * don't bother looking any further.
 		 */
+		first_gop = gop;
 		if (conout_found)
 			break;
-
-		first_gop = gop;
 	}
 }
arch/x86/boot/compressed/head_32.S | +5 -3
···
 #ifdef CONFIG_EFI_STUB
 	jmp	preferred_addr
 
-	.balign	0x10
 	/*
 	 * We don't need the return address, so set up the stack so
-	 * efi_main() can find its arugments.
+	 * efi_main() can find its arguments.
 	 */
+ENTRY(efi_pe_entry)
 	add	$0x4, %esp
 
 	call	make_boot_params
···
 	pushl	%eax
 	pushl	%esi
 	pushl	%ecx
+	sub	$0x4, %esp
 
-	.org 0x30,0x90
+ENTRY(efi_stub_entry)
+	add	$0x4, %esp
 	call	efi_main
 	cmpl	$0, %eax
 	movl	%eax, %esi
arch/x86/boot/compressed/head_64.S | +4 -4
···
  */
 #ifdef CONFIG_EFI_STUB
 	/*
-	 * The entry point for the PE/COFF executable is 0x210, so only
-	 * legacy boot loaders will execute this jmp.
+	 * The entry point for the PE/COFF executable is efi_pe_entry, so
+	 * only legacy boot loaders will execute this jmp.
 	 */
 	jmp	preferred_addr
 
-	.org 0x210
+ENTRY(efi_pe_entry)
 	mov	%rcx, %rdi
 	mov	%rdx, %rsi
 	pushq	%rdi
···
 	popq	%rsi
 	popq	%rdi
 
-	.org 0x230,0x90
+ENTRY(efi_stub_entry)
 	call	efi_main
 	movq	%rax,%rsi
 	cmpq	$0,%rax
+29 -10
arch/x86/boot/header.S
···2121#include <asm/e820.h>2222#include <asm/page_types.h>2323#include <asm/setup.h>2424+#include <asm/bootparam.h>2425#include "boot.h"2526#include "voffset.h"2627#include "zoffset.h"···256255 # header, from the old boot sector.257256258257 .section ".header", "a"258258+ .globl sentinel259259+sentinel: .byte 0xff, 0xff /* Used to detect broken loaders */260260+259261 .globl hdr260262hdr:261263setup_sects: .byte 0 /* Filled in by build.c */···283279 # Part 2 of the header, from the old setup.S284280285281 .ascii "HdrS" # header signature286286- .word 0x020b # header version number (>= 0x0105)282282+ .word 0x020c # header version number (>= 0x0105)287283 # or else old loadlin-1.5 will fail)288284 .globl realmode_swtch289285realmode_swtch: .word 0, 0 # default_switch, SETUPSEG···301297302298# flags, unused bits must be zero (RFU) bit within loadflags303299loadflags:304304-LOADED_HIGH = 1 # If set, the kernel is loaded high305305-CAN_USE_HEAP = 0x80 # If set, the loader also has set306306- # heap_end_ptr to tell how much307307- # space behind setup.S can be used for308308- # heap purposes.309309- # Only the loader knows what is free310310- .byte LOADED_HIGH300300+ .byte LOADED_HIGH # The kernel is to be loaded high311301312302setup_move_size: .word 0x8000 # size to move, when setup is not313303 # loaded at 0x90000. 
We will move setup···367369relocatable_kernel: .byte 0368370#endif369371min_alignment: .byte MIN_KERNEL_ALIGN_LG2 # minimum alignment370370-pad3: .word 0372372+373373+xloadflags:374374+#ifdef CONFIG_X86_64375375+# define XLF0 XLF_KERNEL_64 /* 64-bit kernel */376376+#else377377+# define XLF0 0378378+#endif379379+#ifdef CONFIG_EFI_STUB380380+# ifdef CONFIG_X86_64381381+# define XLF23 XLF_EFI_HANDOVER_64 /* 64-bit EFI handover ok */382382+# else383383+# define XLF23 XLF_EFI_HANDOVER_32 /* 32-bit EFI handover ok */384384+# endif385385+#else386386+# define XLF23 0387387+#endif388388+ .word XLF0 | XLF23371389372390cmdline_size: .long COMMAND_LINE_SIZE-1 #length of the command line,373391 #added with boot protocol···411397#define INIT_SIZE VO_INIT_SIZE412398#endif413399init_size: .long INIT_SIZE # kernel initialization size414414-handover_offset: .long 0x30 # offset to the handover400400+handover_offset:401401+#ifdef CONFIG_EFI_STUB402402+ .long 0x30 # offset to the handover415403 # protocol entry point404404+#else405405+ .long 0406406+#endif416407417408# End of setup header #####################################################418409
···
 
 #define PECOFF_RELOC_RESERVE 0x20
 
+unsigned long efi_stub_entry;
+unsigned long efi_pe_entry;
+unsigned long startup_64;
+
 /*----------------------------------------------------------------------*/
 
 static const u32 crctab32[] = {
···
 
 static void usage(void)
 {
-	die("Usage: build setup system [> image]");
+	die("Usage: build setup system [zoffset.h] [> image]");
 }
 
 #ifdef CONFIG_EFI_STUB
···
 	 */
 	put_unaligned_le32(file_sz - 512, &buf[pe_header + 0x1c]);
 
-#ifdef CONFIG_X86_32
 	/*
-	 * Address of entry point.
-	 *
-	 * The EFI stub entry point is +16 bytes from the start of
-	 * the .text section.
+	 * Address of entry point for PE/COFF executable
 	 */
-	put_unaligned_le32(text_start + 16, &buf[pe_header + 0x28]);
-#else
-	/*
-	 * Address of entry point. startup_32 is at the beginning and
-	 * the 64-bit entry point (startup_64) is always 512 bytes
-	 * after. The EFI stub entry point is 16 bytes after that, as
-	 * the first instruction allows legacy loaders to jump over
-	 * the EFI stub initialisation
-	 */
-	put_unaligned_le32(text_start + 528, &buf[pe_header + 0x28]);
-#endif /* CONFIG_X86_32 */
+	put_unaligned_le32(text_start + efi_pe_entry, &buf[pe_header + 0x28]);
 
 	update_pecoff_section_header(".text", text_start, text_sz);
 }
 
 #endif /* CONFIG_EFI_STUB */
+
+
+/*
+ * Parse zoffset.h and find the entry points. We could just #include zoffset.h
+ * but that would mean tools/build would have to be rebuilt every time. It's
+ * not as if parsing it is hard...
+ */
+#define PARSE_ZOFS(p, sym) do { \
+	if (!strncmp(p, "#define ZO_" #sym " ", 11+sizeof(#sym))) \
+		sym = strtoul(p + 11 + sizeof(#sym), NULL, 16); \
+} while (0)
+
+static void parse_zoffset(char *fname)
+{
+	FILE *file;
+	char *p;
+	int c;
+
+	file = fopen(fname, "r");
+	if (!file)
+		die("Unable to open `%s': %m", fname);
+	c = fread(buf, 1, sizeof(buf) - 1, file);
+	if (ferror(file))
+		die("read-error on `zoffset.h'");
+	buf[c] = 0;
+
+	p = (char *)buf;
+
+	while (p && *p) {
+		PARSE_ZOFS(p, efi_stub_entry);
+		PARSE_ZOFS(p, efi_pe_entry);
+		PARSE_ZOFS(p, startup_64);
+
+		p = strchr(p, '\n');
+		while (p && (*p == '\r' || *p == '\n'))
+			p++;
+	}
+}
 
 int main(int argc, char ** argv)
 {
···
 	void *kernel;
 	u32 crc = 0xffffffffUL;
 
-	if (argc != 3)
+	/* Defaults for old kernel */
+#ifdef CONFIG_X86_32
+	efi_pe_entry = 0x10;
+	efi_stub_entry = 0x30;
+#else
+	efi_pe_entry = 0x210;
+	efi_stub_entry = 0x230;
+	startup_64 = 0x200;
+#endif
+
+	if (argc == 4)
+		parse_zoffset(argv[3]);
+	else if (argc != 3)
 		usage();
 
 	/* Copy the setup code */
···
 
 #ifdef CONFIG_EFI_STUB
 	update_pecoff_text(setup_sectors * 512, sz + i + ((sys_size * 16) - sz));
+
+#ifdef CONFIG_X86_64 /* Yes, this is really how we defined it :( */
+	efi_stub_entry -= 0x200;
+#endif
+	put_unaligned_le32(efi_stub_entry, &buf[0x264]);
 #endif
 
 	crc = partial_crc32(buf, i, crc);
+2-2
arch/x86/ia32/ia32entry.S
···
 	testl $(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT),TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
 	jnz ia32_ret_from_sys_call
 	TRACE_IRQS_ON
-	sti
+	ENABLE_INTERRUPTS(CLBR_NONE)
 	movl %eax,%esi		/* second arg, syscall return value */
 	cmpl $-MAX_ERRNO,%eax	/* is it an error ? */
 	jbe 1f
···
 	call __audit_syscall_exit
 	movq RAX-ARGOFFSET(%rsp),%rax	/* reload syscall return value */
 	movl $(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT),%edi
-	cli
+	DISABLE_INTERRUPTS(CLBR_NONE)
 	TRACE_IRQS_OFF
 	testl %edi,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
 	jz \exit
+1
arch/x86/include/asm/efi.h
···
 #endif /* CONFIG_X86_32 */
 
 extern int add_efi_memmap;
+extern unsigned long x86_efi_facility;
 extern void efi_set_executable(efi_memory_desc_t *md, bool executable);
 extern int efi_memblock_x86_reserve_range(void);
 extern void efi_call_phys_prelog(void);
+1-1
arch/x86/include/asm/uv/uv.h
···
 extern const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask,
 						 struct mm_struct *mm,
 						 unsigned long start,
-						 unsigned end,
+						 unsigned long end,
 						 unsigned int cpu);
 
 #else	/* X86_UV */
+46-17
arch/x86/include/uapi/asm/bootparam.h
···
 #ifndef _ASM_X86_BOOTPARAM_H
 #define _ASM_X86_BOOTPARAM_H
 
+/* setup_data types */
+#define SETUP_NONE			0
+#define SETUP_E820_EXT			1
+#define SETUP_DTB			2
+#define SETUP_PCI			3
+
+/* ram_size flags */
+#define RAMDISK_IMAGE_START_MASK	0x07FF
+#define RAMDISK_PROMPT_FLAG		0x8000
+#define RAMDISK_LOAD_FLAG		0x4000
+
+/* loadflags */
+#define LOADED_HIGH	(1<<0)
+#define QUIET_FLAG	(1<<5)
+#define KEEP_SEGMENTS	(1<<6)
+#define CAN_USE_HEAP	(1<<7)
+
+/* xloadflags */
+#define XLF_KERNEL_64			(1<<0)
+#define XLF_CAN_BE_LOADED_ABOVE_4G	(1<<1)
+#define XLF_EFI_HANDOVER_32		(1<<2)
+#define XLF_EFI_HANDOVER_64		(1<<3)
+
+#ifndef __ASSEMBLY__
+
 #include <linux/types.h>
 #include <linux/screen_info.h>
 #include <linux/apm_bios.h>
···
 #include <asm/e820.h>
 #include <asm/ist.h>
 #include <video/edid.h>
-
-/* setup data types */
-#define SETUP_NONE			0
-#define SETUP_E820_EXT			1
-#define SETUP_DTB			2
-#define SETUP_PCI			3
 
 /* extensible setup data list node */
 struct setup_data {
···
 	__u16	root_flags;
 	__u32	syssize;
 	__u16	ram_size;
-#define RAMDISK_IMAGE_START_MASK	0x07FF
-#define RAMDISK_PROMPT_FLAG		0x8000
-#define RAMDISK_LOAD_FLAG		0x4000
 	__u16	vid_mode;
 	__u16	root_dev;
 	__u16	boot_flag;
···
 	__u16	kernel_version;
 	__u8	type_of_loader;
 	__u8	loadflags;
-#define LOADED_HIGH	(1<<0)
-#define QUIET_FLAG	(1<<5)
-#define KEEP_SEGMENTS	(1<<6)
-#define CAN_USE_HEAP	(1<<7)
 	__u16	setup_move_size;
 	__u32	code32_start;
 	__u32	ramdisk_image;
···
 	__u32	initrd_addr_max;
 	__u32	kernel_alignment;
 	__u8	relocatable_kernel;
-	__u8	_pad2[3];
+	__u8	min_alignment;
+	__u16	xloadflags;
 	__u32	cmdline_size;
 	__u32	hardware_subarch;
 	__u64	hardware_subarch_data;
···
 	__u8  hd1_info[16];	/* obsolete! */		/* 0x090 */
 	struct sys_desc_table sys_desc_table;	/* 0x0a0 */
 	struct olpc_ofw_header olpc_ofw_header;	/* 0x0b0 */
-	__u8  _pad4[128];				/* 0x0c0 */
+	__u32 ext_ramdisk_image;			/* 0x0c0 */
+	__u32 ext_ramdisk_size;				/* 0x0c4 */
+	__u32 ext_cmd_line_ptr;				/* 0x0c8 */
+	__u8  _pad4[116];				/* 0x0cc */
 	struct edid_info edid_info;			/* 0x140 */
 	struct efi_info efi_info;			/* 0x1c0 */
 	__u32 alt_mem_k;				/* 0x1e0 */
···
 	__u8  eddbuf_entries;				/* 0x1e9 */
 	__u8  edd_mbr_sig_buf_entries;			/* 0x1ea */
 	__u8  kbd_status;				/* 0x1eb */
-	__u8  _pad6[5];					/* 0x1ec */
+	__u8  _pad5[3];					/* 0x1ec */
+	/*
+	 * The sentinel is set to a nonzero value (0xff) in header.S.
+	 *
+	 * A bootloader is supposed to only take setup_header and put
+	 * it into a clean boot_params buffer. If it turns out that
+	 * it is clumsy or too generous with the buffer, it most
+	 * probably will pick up the sentinel variable too. The fact
+	 * that this variable then is still 0xff will let kernel
+	 * know that some variables in boot_params are invalid and
+	 * kernel should zero out certain portions of boot_params.
+	 */
+	__u8  sentinel;					/* 0x1ef */
+	__u8  _pad6[1];					/* 0x1f0 */
 	struct setup_header hdr;    /* setup header */	/* 0x1f1 */
 	__u8  _pad7[0x290-0x1f1-sizeof(struct setup_header)];
 	__u32 edd_mbr_sig_buffer[EDD_MBR_SIG_MAX];	/* 0x290 */
···
 	X86_NR_SUBARCHS,
 };
 
-
+#endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_X86_BOOTPARAM_H */
···
 	 * Leave room for the "copied" frame
 	 */
 	subq $(5*8), %rsp
+	CFI_ADJUST_CFA_OFFSET 5*8
 
 	/* Copy the stack frame to the Saved frame */
 	.rept 5
···
 nmi_swapgs:
 	SWAPGS_UNSAFE_STACK
 nmi_restore:
-	RESTORE_ALL 8
-
-	/* Pop the extra iret frame */
-	addq $(5*8), %rsp
+	/* Pop the extra iret frame at once */
+	RESTORE_ALL 6*8
 
 	/* Clear the NMI executing stack variable */
 	movq $0, 5*8(%rsp)
+7-2
arch/x86/kernel/head_32.S
···
 	leal -__PAGE_OFFSET(%ecx),%esp
 
 default_entry:
+#define CR0_STATE	(X86_CR0_PE | X86_CR0_MP | X86_CR0_ET | \
+			 X86_CR0_NE | X86_CR0_WP | X86_CR0_AM | \
+			 X86_CR0_PG)
+	movl $(CR0_STATE & ~X86_CR0_PG),%eax
+	movl %eax,%cr0
+
 /*
  * New page tables may be in 4Mbyte page mode and may
  * be using the global pages.
···
  */
 	movl $pa(initial_page_table), %eax
 	movl %eax,%cr3		/* set the page table pointer.. */
-	movl %cr0,%eax
-	orl  $X86_CR0_PG,%eax
+	movl $CR0_STATE,%eax
 	movl %eax,%cr0		/* ..and set paging (PG) bit */
 	ljmp $__BOOT_CS,$1f	/* Clear prefetch and normalize %eip */
 1:
+3
arch/x86/kernel/msr.c
···
 	unsigned int cpu;
 	struct cpuinfo_x86 *c;
 
+	if (!capable(CAP_SYS_RAWIO))
+		return -EPERM;
+
 	cpu = iminor(file->f_path.dentry->d_inode);
 	if (cpu >= nr_cpu_ids || !cpu_online(cpu))
 		return -ENXIO;	/* No such CPU */
+1-1
arch/x86/kernel/pci-dma.c
···
 EXPORT_SYMBOL(x86_dma_fallback_dev);
 
 /* Number of entries preallocated for DMA-API debugging */
-#define PREALLOC_DMA_DEBUG_ENTRIES       32768
+#define PREALLOC_DMA_DEBUG_ENTRIES       65536
 
 int dma_set_mask(struct device *dev, u64 mask)
 {
+1-1
arch/x86/kernel/reboot.c
···
 		break;
 
 	case BOOT_EFI:
-		if (efi_enabled)
+		if (efi_enabled(EFI_RUNTIME_SERVICES))
 			efi.reset_system(reboot_mode ?
 					 EFI_RESET_WARM :
 					 EFI_RESET_COLD,
+14-14
arch/x86/kernel/setup.c
···
 #ifdef CONFIG_EFI
 	if (!strncmp((char *)&boot_params.efi_info.efi_loader_signature,
 		     "EL32", 4)) {
-		efi_enabled = 1;
-		efi_64bit = false;
+		set_bit(EFI_BOOT, &x86_efi_facility);
 	} else if (!strncmp((char *)&boot_params.efi_info.efi_loader_signature,
 		     "EL64", 4)) {
-		efi_enabled = 1;
-		efi_64bit = true;
+		set_bit(EFI_BOOT, &x86_efi_facility);
+		set_bit(EFI_64BIT, &x86_efi_facility);
 	}
-	if (efi_enabled && efi_memblock_x86_reserve_range())
-		efi_enabled = 0;
+
+	if (efi_enabled(EFI_BOOT))
+		efi_memblock_x86_reserve_range();
 #endif
 
 	x86_init.oem.arch_setup();
···
 
 	finish_e820_parsing();
 
-	if (efi_enabled)
+	if (efi_enabled(EFI_BOOT))
 		efi_init();
 
 	dmi_scan_machine();
···
 	 * The EFI specification says that boot service code won't be called
 	 * after ExitBootServices(). This is, in fact, a lie.
 	 */
-	if (efi_enabled)
+	if (efi_enabled(EFI_MEMMAP))
 		efi_reserve_boot_services();
 
 	/* preallocate 4k for mptable mpc */
···
 
 #ifdef CONFIG_VT
 #if defined(CONFIG_VGA_CONSOLE)
-	if (!efi_enabled || (efi_mem_type(0xa0000) != EFI_CONVENTIONAL_MEMORY))
+	if (!efi_enabled(EFI_BOOT) || (efi_mem_type(0xa0000) != EFI_CONVENTIONAL_MEMORY))
 		conswitchp = &vga_con;
 #elif defined(CONFIG_DUMMY_CONSOLE)
 	conswitchp = &dummy_con;
···
 	register_refined_jiffies(CLOCK_TICK_RATE);
 
 #ifdef CONFIG_EFI
-	/* Once setup is done above, disable efi_enabled on mismatched
-	 * firmware/kernel archtectures since there is no support for
-	 * runtime services.
+	/* Once setup is done above, unmap the EFI memory map on
+	 * mismatched firmware/kernel architectures since there is no
+	 * support for runtime services.
 	 */
-	if (efi_enabled && IS_ENABLED(CONFIG_X86_64) != efi_64bit) {
+	if (efi_enabled(EFI_BOOT) &&
+	    IS_ENABLED(CONFIG_X86_64) != efi_enabled(EFI_64BIT)) {
 		pr_info("efi: Setup done, disabling due to 32/64-bit mismatch\n");
 		efi_unmap_memmap();
-		efi_enabled = 0;
 	}
 #endif
 }
+35-24
arch/x86/platform/efi/efi.c
···
 
 #define EFI_DEBUG	1
 
-int efi_enabled;
-EXPORT_SYMBOL(efi_enabled);
-
 struct efi __read_mostly efi = {
 	.mps        = EFI_INVALID_TABLE_ADDR,
 	.acpi       = EFI_INVALID_TABLE_ADDR,
···
 
 struct efi_memory_map memmap;
 
-bool efi_64bit;
-
 static struct efi efi_phys __initdata;
 static efi_system_table_t efi_systab __initdata;
 
 static inline bool efi_is_native(void)
 {
-	return IS_ENABLED(CONFIG_X86_64) == efi_64bit;
+	return IS_ENABLED(CONFIG_X86_64) == efi_enabled(EFI_64BIT);
 }
+
+unsigned long x86_efi_facility;
+
+/*
+ * Returns 1 if 'facility' is enabled, 0 otherwise.
+ */
+int efi_enabled(int facility)
+{
+	return test_bit(facility, &x86_efi_facility) != 0;
+}
+EXPORT_SYMBOL(efi_enabled);
 
 static int __init setup_noefi(char *arg)
 {
-	efi_enabled = 0;
+	clear_bit(EFI_BOOT, &x86_efi_facility);
 	return 0;
 }
 early_param("noefi", setup_noefi);
···
 
 void __init efi_unmap_memmap(void)
 {
+	clear_bit(EFI_MEMMAP, &x86_efi_facility);
 	if (memmap.map) {
 		early_iounmap(memmap.map, memmap.nr_map * memmap.desc_size);
 		memmap.map = NULL;
···
 
 static int __init efi_systab_init(void *phys)
 {
-	if (efi_64bit) {
+	if (efi_enabled(EFI_64BIT)) {
 		efi_system_table_64_t *systab64;
 		u64 tmp = 0;
···
 	void *config_tables, *tablep;
 	int i, sz;
 
-	if (efi_64bit)
+	if (efi_enabled(EFI_64BIT))
 		sz = sizeof(efi_config_table_64_t);
 	else
 		sz = sizeof(efi_config_table_32_t);
···
 		efi_guid_t guid;
 		unsigned long table;
 
-		if (efi_64bit) {
+		if (efi_enabled(EFI_64BIT)) {
 			u64 table64;
 			guid = ((efi_config_table_64_t *)tablep)->guid;
 			table64 = ((efi_config_table_64_t *)tablep)->table;
···
 	if (boot_params.efi_info.efi_systab_hi ||
 	    boot_params.efi_info.efi_memmap_hi) {
 		pr_info("Table located above 4GB, disabling EFI.\n");
-		efi_enabled = 0;
 		return;
 	}
 	efi_phys.systab = (efi_system_table_t *)boot_params.efi_info.efi_systab;
···
 			  ((__u64)boot_params.efi_info.efi_systab_hi<<32));
 #endif
 
-	if (efi_systab_init(efi_phys.systab)) {
-		efi_enabled = 0;
+	if (efi_systab_init(efi_phys.systab))
 		return;
-	}
+
+	set_bit(EFI_SYSTEM_TABLES, &x86_efi_facility);
 
 	/*
 	 * Show what we know for posterity
···
 		efi.systab->hdr.revision >> 16,
 		efi.systab->hdr.revision & 0xffff, vendor);
 
-	if (efi_config_init(efi.systab->tables, efi.systab->nr_tables)) {
-		efi_enabled = 0;
+	if (efi_config_init(efi.systab->tables, efi.systab->nr_tables))
 		return;
-	}
+
+	set_bit(EFI_CONFIG_TABLES, &x86_efi_facility);
 
 	/*
 	 * Note: We currently don't support runtime services on an EFI
···
 
 	if (!efi_is_native())
 		pr_info("No EFI runtime due to 32/64-bit mismatch with kernel\n");
-	else if (efi_runtime_init()) {
-		efi_enabled = 0;
-		return;
+	else {
+		if (efi_runtime_init())
+			return;
+		set_bit(EFI_RUNTIME_SERVICES, &x86_efi_facility);
 	}
 
-	if (efi_memmap_init()) {
-		efi_enabled = 0;
+	if (efi_memmap_init())
 		return;
-	}
+
+	set_bit(EFI_MEMMAP, &x86_efi_facility);
+
 #ifdef CONFIG_X86_32
 	if (efi_is_native()) {
 		x86_platform.get_wallclock = efi_get_time;
···
 	 *
 	 * Call EFI services through wrapper functions.
 	 */
-	efi.runtime_version = efi_systab.fw_revision;
+	efi.runtime_version = efi_systab.hdr.revision;
 	efi.get_time = virt_efi_get_time;
 	efi.set_time = virt_efi_set_time;
 	efi.get_wakeup_time = virt_efi_get_wakeup_time;
···
 {
 	efi_memory_desc_t *md;
 	void *p;
+
+	if (!efi_enabled(EFI_MEMMAP))
+		return 0;
 
 	for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
 		md = p;
+17-5
arch/x86/platform/efi/efi_64.c
···
 #include <asm/cacheflush.h>
 #include <asm/fixmap.h>
 
-static pgd_t save_pgd __initdata;
+static pgd_t *save_pgd __initdata;
 static unsigned long efi_flags __initdata;
 
 static void __init early_code_mapping_set_exec(int executable)
···
 void __init efi_call_phys_prelog(void)
 {
 	unsigned long vaddress;
+	int pgd;
+	int n_pgds;
 
 	early_code_mapping_set_exec(1);
 	local_irq_save(efi_flags);
-	vaddress = (unsigned long)__va(0x0UL);
-	save_pgd = *pgd_offset_k(0x0UL);
-	set_pgd(pgd_offset_k(0x0UL), *pgd_offset_k(vaddress));
+
+	n_pgds = DIV_ROUND_UP((max_pfn << PAGE_SHIFT), PGDIR_SIZE);
+	save_pgd = kmalloc(n_pgds * sizeof(pgd_t), GFP_KERNEL);
+
+	for (pgd = 0; pgd < n_pgds; pgd++) {
+		save_pgd[pgd] = *pgd_offset_k(pgd * PGDIR_SIZE);
+		vaddress = (unsigned long)__va(pgd * PGDIR_SIZE);
+		set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress));
+	}
 	__flush_tlb_all();
 }
···
 	/*
 	 * After the lock is released, the original page table is restored.
 	 */
-	set_pgd(pgd_offset_k(0x0UL), save_pgd);
+	int pgd;
+	int n_pgds = DIV_ROUND_UP((max_pfn << PAGE_SHIFT) , PGDIR_SIZE);
+	for (pgd = 0; pgd < n_pgds; pgd++)
+		set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), save_pgd[pgd]);
+	kfree(save_pgd);
 	__flush_tlb_all();
 	local_irq_restore(efi_flags);
 	early_code_mapping_set_exec(0);
+7-3
arch/x86/platform/uv/tlb_uv.c
···
  * globally purge translation cache of a virtual address or all TLB's
  * @cpumask: mask of all cpu's in which the address is to be removed
  * @mm: mm_struct containing virtual address range
- * @va: virtual address to be removed (or TLB_FLUSH_ALL for all TLB's on cpu)
+ * @start: start virtual address to be removed from TLB
+ * @end: end virtual address to be removed from TLB
  * @cpu: the current cpu
  *
  * This is the entry point for initiating any UV global TLB shootdown.
···
  */
 const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask,
 					  struct mm_struct *mm, unsigned long start,
-					  unsigned end, unsigned int cpu)
+					  unsigned long end, unsigned int cpu)
 {
 	int locals = 0;
 	int remotes = 0;
···
 
 	record_send_statistics(stat, locals, hubs, remotes, bau_desc);
 
-	bau_desc->payload.address = start;
+	if (!end || (end - start) <= PAGE_SIZE)
+		bau_desc->payload.address = start;
+	else
+		bau_desc->payload.address = TLB_FLUSH_ALL;
 	bau_desc->payload.sending_cpu = cpu;
 	/*
 	 * uv_flush_send_and_wait returns 0 if all cpu's were messaged,
···
 	consistent_sync(vaddr, size, direction);
 }
 
+/* Not supported for now */
+static inline int dma_mmap_coherent(struct device *dev,
+				    struct vm_area_struct *vma, void *cpu_addr,
+				    dma_addr_t dma_addr, size_t size)
+{
+	return -EINVAL;
+}
+
+static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+				  void *cpu_addr, dma_addr_t dma_addr,
+				  size_t size)
+{
+	return -EINVAL;
+}
+
 #endif	/* _XTENSA_DMA_MAPPING_H */
+32-10
block/genhd.c
···
 
 static struct device_type disk_type;
 
+static void disk_check_events(struct disk_events *ev,
+			      unsigned int *clearing_ptr);
 static void disk_alloc_events(struct gendisk *disk);
 static void disk_add_events(struct gendisk *disk);
 static void disk_del_events(struct gendisk *disk);
···
 	const struct block_device_operations *bdops = disk->fops;
 	struct disk_events *ev = disk->ev;
 	unsigned int pending;
+	unsigned int clearing = mask;
 
 	if (!ev) {
 		/* for drivers still using the old ->media_changed method */
···
 		return 0;
 	}
 
-	/* tell the workfn about the events being cleared */
+	disk_block_events(disk);
+
+	/*
+	 * store the union of mask and ev->clearing on the stack so that the
+	 * race with disk_flush_events does not cause ambiguity (ev->clearing
+	 * can still be modified even if events are blocked).
+	 */
 	spin_lock_irq(&ev->lock);
-	ev->clearing |= mask;
+	clearing |= ev->clearing;
+	ev->clearing = 0;
 	spin_unlock_irq(&ev->lock);
 
-	/* uncondtionally schedule event check and wait for it to finish */
-	disk_block_events(disk);
-	queue_delayed_work(system_freezable_wq, &ev->dwork, 0);
-	flush_delayed_work(&ev->dwork);
-	__disk_unblock_events(disk, false);
+	disk_check_events(ev, &clearing);
+	/*
+	 * if ev->clearing is not 0, the disk_flush_events got called in the
+	 * middle of this function, so we want to run the workfn without delay.
+	 */
+	__disk_unblock_events(disk, ev->clearing ? true : false);
 
 	/* then, fetch and clear pending events */
 	spin_lock_irq(&ev->lock);
-	WARN_ON_ONCE(ev->clearing & mask);	/* cleared by workfn */
 	pending = ev->pending & mask;
 	ev->pending &= ~mask;
 	spin_unlock_irq(&ev->lock);
+	WARN_ON_ONCE(clearing & mask);
 
 	return pending;
 }
 
+/*
+ * Separate this part out so that a different pointer for clearing_ptr can be
+ * passed in for disk_clear_events.
+ */
 static void disk_events_workfn(struct work_struct *work)
 {
 	struct delayed_work *dwork = to_delayed_work(work);
 	struct disk_events *ev = container_of(dwork, struct disk_events, dwork);
+
+	disk_check_events(ev, &ev->clearing);
+}
+
+static void disk_check_events(struct disk_events *ev,
+			      unsigned int *clearing_ptr)
+{
 	struct gendisk *disk = ev->disk;
 	char *envp[ARRAY_SIZE(disk_uevents) + 1] = { };
-	unsigned int clearing = ev->clearing;
+	unsigned int clearing = *clearing_ptr;
 	unsigned int events;
 	unsigned long intv;
 	int nr_events = 0, i;
···
 
 	events &= ~ev->pending;
 	ev->pending |= events;
-	ev->clearing &= ~clearing;
+	*clearing_ptr &= ~clearing;
 
 	intv = disk_events_poll_jiffies(disk);
 	if (!ev->block && intv)
···
 	return acpi_rsdp;
 #endif
 
-	if (efi_enabled) {
+	if (efi_enabled(EFI_CONFIG_TABLES)) {
 		if (efi.acpi20 != EFI_INVALID_TABLE_ADDR)
 			return efi.acpi20;
 		else if (efi.acpi != EFI_INVALID_TABLE_ADDR)
···
  * @val_count: Number of registers to write
  *
  * This function is intended to be used for writing a large block of
- * data to be device either in single transfer or multiple transfer.
+ * data to the device either in single transfer or multiple transfer.
  *
  * A value of zero will be returned on success, a negative errno will
  * be returned in error cases.
···
 void bcma_bus_unregister(struct bcma_bus *bus)
 {
 	struct bcma_device *cores[3];
+	int err;
+
+	err = bcma_gpio_unregister(&bus->drv_cc);
+	if (err == -EBUSY)
+		bcma_err(bus, "Some GPIOs are still in use.\n");
+	else if (err)
+		bcma_err(bus, "Can not unregister GPIO driver: %i\n", err);
 
 	cores[0] = bcma_find_core(bus, BCMA_CORE_MIPS_74K);
 	cores[1] = bcma_find_core(bus, BCMA_CORE_PCIE);
+1-1
drivers/block/drbd/drbd_req.c
···
 }
 
 /* must hold resource->req_lock */
-static void start_new_tl_epoch(struct drbd_tconn *tconn)
+void start_new_tl_epoch(struct drbd_tconn *tconn)
 {
 	/* no point closing an epoch, if it is empty, anyways. */
 	if (tconn->current_tle_writes == 0)
···
 	/* Disable interrupts for vqs */
 	vdev->config->reset(vdev);
 	/* Finish up work that's lined up */
-	cancel_work_sync(&portdev->control_work);
+	if (use_multiport(portdev))
+		cancel_work_sync(&portdev->control_work);
 
 	list_for_each_entry_safe(port, port2, &portdev->ports, list)
 		unplug_port(port);
···
 config X86_POWERNOW_K8
 	tristate "AMD Opteron/Athlon64 PowerNow!"
 	select CPU_FREQ_TABLE
-	depends on ACPI && ACPI_PROCESSOR
+	depends on ACPI && ACPI_PROCESSOR && X86_ACPI_CPUFREQ
 	help
 	  This adds the CPUFreq driver for K8/early Opteron/Athlon64 processors.
 	  Support for K10 and newer processors is now in acpi-cpufreq.
···
 	}
 
 	if (cpu_reg) {
+		rcu_read_lock();
 		opp = opp_find_freq_ceil(cpu_dev, &freq_Hz);
 		if (IS_ERR(opp)) {
+			rcu_read_unlock();
 			pr_err("failed to find OPP for %ld\n", freq_Hz);
 			return PTR_ERR(opp);
 		}
 		volt = opp_get_voltage(opp);
+		rcu_read_unlock();
 		tol = volt * voltage_tolerance / 100;
 		volt_old = regulator_get_voltage(cpu_reg);
 	}
···
 	 */
 	for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++)
 		;
+	rcu_read_lock();
 	opp = opp_find_freq_exact(cpu_dev,
 			freq_table[0].frequency * 1000, true);
 	min_uV = opp_get_voltage(opp);
 	opp = opp_find_freq_exact(cpu_dev,
 			freq_table[i-1].frequency * 1000, true);
 	max_uV = opp_get_voltage(opp);
+	rcu_read_unlock();
 	ret = regulator_set_voltage_time(cpu_reg, min_uV, max_uV);
 	if (ret > 0)
 		transition_latency += ret * 1000;
+3
drivers/cpufreq/omap-cpufreq.c
···
 	freq = ret;
 
 	if (mpu_reg) {
+		rcu_read_lock();
 		opp = opp_find_freq_ceil(mpu_dev, &freq);
 		if (IS_ERR(opp)) {
+			rcu_read_unlock();
 			dev_err(mpu_dev, "%s: unable to find MPU OPP for %d\n",
 				__func__, freqs.new);
 			return -EINVAL;
 		}
 		volt = opp_get_voltage(opp);
+		rcu_read_unlock();
 		tol = volt * OPP_TOLERANCE / 100;
 		volt_old = regulator_get_voltage(mpu_reg);
 	}
+5
drivers/devfreq/devfreq.c
···
  * @freq:	The frequency given to target function
  * @flags:	Flags handed from devfreq framework.
  *
+ * Locking: This function must be called under rcu_read_lock(). opp is a rcu
+ * protected pointer. The reason for the same is that the opp pointer which is
+ * returned will remain valid for use with opp_get_{voltage, freq} only while
+ * under the locked area. The pointer returned must be used prior to unlocking
+ * with rcu_read_unlock() to maintain the integrity of the pointer.
  */
 struct opp *devfreq_recommended_opp(struct device *dev, unsigned long *freq,
 				    u32 flags)
+67-27
drivers/devfreq/exynos4_bus.c
···
 #define EX4210_LV_NUM	(LV_2 + 1)
 #define EX4x12_LV_NUM	(LV_4 + 1)
 
+/**
+ * struct busfreq_opp_info - opp information for bus
+ * @rate:	Frequency in hertz
+ * @volt:	Voltage in microvolts corresponding to this OPP
+ */
+struct busfreq_opp_info {
+	unsigned long rate;
+	unsigned long volt;
+};
+
 struct busfreq_data {
 	enum exynos4_busf_type type;
 	struct device *dev;
···
 	bool disabled;
 	struct regulator *vdd_int;
 	struct regulator *vdd_mif; /* Exynos4412/4212 only */
-	struct opp *curr_opp;
+	struct busfreq_opp_info curr_oppinfo;
 	struct exynos4_ppmu dmc[2];
 
 	struct notifier_block pm_notifier;
···
 };
 
 
-static int exynos4210_set_busclk(struct busfreq_data *data, struct opp *opp)
+static int exynos4210_set_busclk(struct busfreq_data *data,
+				 struct busfreq_opp_info *oppi)
 {
 	unsigned int index;
 	unsigned int tmp;
 
 	for (index = LV_0; index < EX4210_LV_NUM; index++)
-		if (opp_get_freq(opp) == exynos4210_busclk_table[index].clk)
+		if (oppi->rate == exynos4210_busclk_table[index].clk)
 			break;
 
 	if (index == EX4210_LV_NUM)
···
 	return 0;
 }
 
-static int exynos4x12_set_busclk(struct busfreq_data *data, struct opp *opp)
+static int exynos4x12_set_busclk(struct busfreq_data *data,
+				 struct busfreq_opp_info *oppi)
 {
 	unsigned int index;
 	unsigned int tmp;
 
 	for (index = LV_0; index < EX4x12_LV_NUM; index++)
-		if (opp_get_freq(opp) == exynos4x12_mifclk_table[index].clk)
+		if (oppi->rate == exynos4x12_mifclk_table[index].clk)
 			break;
 
 	if (index == EX4x12_LV_NUM)
···
 	return -EINVAL;
 }
 
-static int exynos4_bus_setvolt(struct busfreq_data *data, struct opp *opp,
-			       struct opp *oldopp)
+static int exynos4_bus_setvolt(struct busfreq_data *data,
+			       struct busfreq_opp_info *oppi,
+			       struct busfreq_opp_info *oldoppi)
 {
 	int err = 0, tmp;
-	unsigned long volt = opp_get_voltage(opp);
+	unsigned long volt = oppi->volt;
 
 	switch (data->type) {
 	case TYPE_BUSF_EXYNOS4210:
···
 		if (err)
 			break;
 
-		tmp = exynos4x12_get_intspec(opp_get_freq(opp));
+		tmp = exynos4x12_get_intspec(oppi->rate);
 		if (tmp < 0) {
 			err = tmp;
 			regulator_set_voltage(data->vdd_mif,
-					      opp_get_voltage(oldopp),
+					      oldoppi->volt,
 					      MAX_SAFEVOLT);
 			break;
 		}
···
 		/* Try to recover */
 		if (err)
 			regulator_set_voltage(data->vdd_mif,
-					      opp_get_voltage(oldopp),
+					      oldoppi->volt,
 					      MAX_SAFEVOLT);
 		break;
 	default:
···
 	struct platform_device *pdev = container_of(dev, struct platform_device,
 						    dev);
 	struct busfreq_data *data = platform_get_drvdata(pdev);
-	struct opp *opp = devfreq_recommended_opp(dev, _freq, flags);
-	unsigned long freq = opp_get_freq(opp);
-	unsigned long old_freq = opp_get_freq(data->curr_opp);
+	struct opp *opp;
+	unsigned long freq;
+	unsigned long old_freq = data->curr_oppinfo.rate;
+	struct busfreq_opp_info	new_oppinfo;
 
-	if (IS_ERR(opp))
+	rcu_read_lock();
+	opp = devfreq_recommended_opp(dev, _freq, flags);
+	if (IS_ERR(opp)) {
+		rcu_read_unlock();
 		return PTR_ERR(opp);
+	}
+	new_oppinfo.rate = opp_get_freq(opp);
+	new_oppinfo.volt = opp_get_voltage(opp);
+	rcu_read_unlock();
+	freq = new_oppinfo.rate;
 
 	if (old_freq == freq)
 		return 0;
 
-	dev_dbg(dev, "targetting %lukHz %luuV\n", freq, opp_get_voltage(opp));
+	dev_dbg(dev, "targetting %lukHz %luuV\n", freq, new_oppinfo.volt);
 
 	mutex_lock(&data->lock);
···
 		goto out;
 
 	if (old_freq < freq)
-		err = exynos4_bus_setvolt(data, opp, data->curr_opp);
+		err = exynos4_bus_setvolt(data, &new_oppinfo,
+					  &data->curr_oppinfo);
 	if (err)
 		goto out;
 
 	if (old_freq != freq) {
 		switch (data->type) {
 		case TYPE_BUSF_EXYNOS4210:
-			err = exynos4210_set_busclk(data, opp);
+			err = exynos4210_set_busclk(data, &new_oppinfo);
 			break;
 		case TYPE_BUSF_EXYNOS4x12:
-			err = exynos4x12_set_busclk(data, opp);
+			err = exynos4x12_set_busclk(data, &new_oppinfo);
 			break;
 		default:
 			err = -EINVAL;
···
 		goto out;
 
 	if (old_freq > freq)
-		err = exynos4_bus_setvolt(data, opp, data->curr_opp);
+		err = exynos4_bus_setvolt(data, &new_oppinfo,
+					  &data->curr_oppinfo);
 	if (err)
 		goto out;
 
-	data->curr_opp = opp;
+	data->curr_oppinfo = new_oppinfo;
 out:
 	mutex_unlock(&data->lock);
 	return err;
···
 
 	exynos4_read_ppmu(data);
 	busier_dmc = exynos4_get_busier_dmc(data);
-	stat->current_frequency = opp_get_freq(data->curr_opp);
+	stat->current_frequency = data->curr_oppinfo.rate;
 
 	if (busier_dmc)
 		addr = S5P_VA_DMC1;
···
 	struct busfreq_data *data = container_of(this, struct busfreq_data,
 						 pm_notifier);
 	struct opp *opp;
+	struct busfreq_opp_info	new_oppinfo;
 	unsigned long maxfreq = ULONG_MAX;
 	int err = 0;
···
 
 	data->disabled = true;
 
+	rcu_read_lock();
 	opp = opp_find_freq_floor(data->dev, &maxfreq);
+	if (IS_ERR(opp)) {
+		rcu_read_unlock();
+		dev_err(data->dev, "%s: unable to find a min freq\n",
+			__func__);
+		return PTR_ERR(opp);
+	}
+	new_oppinfo.rate = opp_get_freq(opp);
+	new_oppinfo.volt = opp_get_voltage(opp);
+	rcu_read_unlock();
 
-	err = exynos4_bus_setvolt(data, opp, data->curr_opp);
+	err = exynos4_bus_setvolt(data, &new_oppinfo,
+				  &data->curr_oppinfo);
 	if (err)
 		goto unlock;
 
 	switch (data->type) {
 	case TYPE_BUSF_EXYNOS4210:
-		err = exynos4210_set_busclk(data, opp);
+		err = exynos4210_set_busclk(data, &new_oppinfo);
 		break;
 	case TYPE_BUSF_EXYNOS4x12:
-		err = exynos4x12_set_busclk(data, opp);
+		err = exynos4x12_set_busclk(data, &new_oppinfo);
 		break;
 	default:
 		err = -EINVAL;
···
 	if (err)
 		goto unlock;
 
-	data->curr_opp = opp;
+	data->curr_oppinfo = new_oppinfo;
 unlock:
 	mutex_unlock(&data->lock);
 	if (err)
···
 		}
 	}
 
+	rcu_read_lock();
 	opp = opp_find_freq_floor(dev, &exynos4_devfreq_profile.initial_freq);
 	if (IS_ERR(opp)) {
+		rcu_read_unlock();
 		dev_err(dev, "Invalid initial frequency %lu kHz.\n",
 			exynos4_devfreq_profile.initial_freq);
 		return PTR_ERR(opp);
 	}
-	data->curr_opp = opp;
+	data->curr_oppinfo.rate = opp_get_freq(opp);
+	data->curr_oppinfo.volt = opp_get_voltage(opp);
+	rcu_read_unlock();
 
 	platform_set_drvdata(pdev, data);
···
 			goto free_resources;
 		}
 	}
-	dma_sync_single_for_device(dev, dest_dma, PAGE_SIZE, DMA_TO_DEVICE);
+	dma_sync_single_for_device(dev, dest_dma, PAGE_SIZE, DMA_FROM_DEVICE);

 	/* skip validate if the capability is not present */
 	if (!dma_has_cap(DMA_XOR_VAL, dma_chan->device->cap_mask))
+6-2
drivers/dma/tegra20-apb-dma.c
···
 		if (async_tx_test_ack(&dma_desc->txd)) {
 			list_del(&dma_desc->node);
 			spin_unlock_irqrestore(&tdc->lock, flags);
+			dma_desc->txd.flags = 0;
 			return dma_desc;
 		}
 	}
···
 			TEGRA_APBDMA_AHBSEQ_WRAP_SHIFT;
 	ahb_seq |= TEGRA_APBDMA_AHBSEQ_BUS_WIDTH_32;

-	csr |= TEGRA_APBDMA_CSR_FLOW | TEGRA_APBDMA_CSR_IE_EOC;
+	csr |= TEGRA_APBDMA_CSR_FLOW;
+	if (flags & DMA_PREP_INTERRUPT)
+		csr |= TEGRA_APBDMA_CSR_IE_EOC;
 	csr |= tdc->dma_sconfig.slave_id << TEGRA_APBDMA_CSR_REQ_SEL_SHIFT;

 	apb_seq |= TEGRA_APBDMA_APBSEQ_WRAP_WORD_1;
···
 		mem += len;
 	}
 	sg_req->last_sg = true;
-	dma_desc->txd.flags = 0;
+	if (flags & DMA_CTRL_ACK)
+		dma_desc->txd.flags = DMA_CTRL_ACK;

 	/*
 	 * Make sure that mode should not be conflicting with currently
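The change above stops unconditionally enabling the end-of-chain interrupt and instead derives the channel/descriptor flags from what the dmaengine client requested. A minimal userspace sketch of that flag-gating logic (the register bit values and flag constants here are illustrative stand-ins, not the real Tegra/dmaengine values):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the CSR bits and dmaengine prep flags. */
#define CSR_FLOW           (1u << 21)
#define CSR_IE_EOC         (1u << 30)	/* interrupt on end of chain */
#define PREP_INTERRUPT     (1u << 0)	/* client wants a completion irq */

/* Mirrors the fix: IE_EOC is set only when the client asked for an
 * interrupt, rather than on every prepared descriptor. */
static uint32_t build_csr(unsigned long flags)
{
	uint32_t csr = CSR_FLOW;

	if (flags & PREP_INTERRUPT)
		csr |= CSR_IE_EOC;
	return csr;
}
```

With this shape, a descriptor prepared without `PREP_INTERRUPT` completes silently instead of raising a spurious EOC interrupt.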
+3-3
drivers/edac/edac_mc.c
···
 	/*
 	 * Alocate and fill the csrow/channels structs
 	 */
-	mci->csrows = kcalloc(sizeof(*mci->csrows), tot_csrows, GFP_KERNEL);
+	mci->csrows = kcalloc(tot_csrows, sizeof(*mci->csrows), GFP_KERNEL);
 	if (!mci->csrows)
 		goto error;
 	for (row = 0; row < tot_csrows; row++) {
···
 		csr->csrow_idx = row;
 		csr->mci = mci;
 		csr->nr_channels = tot_channels;
-		csr->channels = kcalloc(sizeof(*csr->channels), tot_channels,
+		csr->channels = kcalloc(tot_channels, sizeof(*csr->channels),
 				GFP_KERNEL);
 		if (!csr->channels)
 			goto error;
···
 	/*
 	 * Allocate and fill the dimm structs
 	 */
-	mci->dimms = kcalloc(sizeof(*mci->dimms), tot_dimms, GFP_KERNEL);
+	mci->dimms = kcalloc(tot_dimms, sizeof(*mci->dimms), GFP_KERNEL);
 	if (!mci->dimms)
 		goto error;
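The edac_mc change swaps the arguments of `kcalloc()` into their documented `(n, size)` order. The call compiles either way because both parameters are `size_t`, but the overflow check inside `kcalloc()` assumes the first argument is the element count. A userspace sketch of that contract, assuming nothing beyond standard C:

```c
#include <assert.h>
#include <stdlib.h>
#include <stdint.h>

/* Sketch of kcalloc's (n, size) contract: refuse allocations whose
 * n * size product would overflow, then zero-allocate.  Passing the
 * arguments in the wrong order still compiles silently, which is how
 * the bug above slipped in. */
static void *kcalloc_like(size_t n, size_t size)
{
	if (size != 0 && n > SIZE_MAX / size)
		return NULL;	/* n * size would overflow */
	return calloc(n, size);
}
```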
+1-1
drivers/edac/edac_pci_sysfs.c
···
 	struct edac_pci_dev_attribute *edac_pci_dev;
 	edac_pci_dev = (struct edac_pci_dev_attribute *)attr;

-	if (edac_pci_dev->show)
+	if (edac_pci_dev->store)
 		return edac_pci_dev->store(edac_pci_dev->value, buffer, count);
 	return -EIO;
 }
+1-1
drivers/firmware/dmi_scan.c
···
 	char __iomem *p, *q;
 	int rc;

-	if (efi_enabled) {
+	if (efi_enabled(EFI_CONFIG_TABLES)) {
 		if (efi.smbios == EFI_INVALID_TABLE_ADDR)
 			goto error;
+5-4
drivers/firmware/efivars.c
···
 		err = -EACCES;
 		break;
 	case EFI_NOT_FOUND:
-		err = -ENOENT;
+		err = -EIO;
 		break;
 	default:
 		err = -EINVAL;
···
 		spin_unlock(&efivars->lock);
 		efivar_unregister(var);
 		drop_nlink(inode);
+		d_delete(file->f_dentry);
 		dput(file->f_dentry);

 	} else {
···
 		list_del(&var->list);
 		spin_unlock(&efivars->lock);
 		efivar_unregister(var);
-		drop_nlink(dir);
+		drop_nlink(dentry->d_inode);
 		dput(dentry);
 		return 0;
 	}
···
 	printk(KERN_INFO "EFI Variables Facility v%s %s\n", EFIVARS_VERSION,
 	       EFIVARS_DATE);

-	if (!efi_enabled)
+	if (!efi_enabled(EFI_RUNTIME_SERVICES))
 		return 0;

 	/* For now we'll register the efi directory at /sys/firmware/efi */
···
 static void __exit
 efivars_exit(void)
 {
-	if (efi_enabled) {
+	if (efi_enabled(EFI_RUNTIME_SERVICES)) {
 		unregister_efivars(&__efivars);
 		kobject_put(efi_kobj);
 	}
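The first efivars hunk changes how an `EFI_NOT_FOUND` firmware status is reported to callers: a variable that disappears mid-operation now surfaces as `-EIO` rather than `-ENOENT`, so it is not mistaken for an ordinarily missing file. A simplified sketch of such a status-to-errno mapping (the enum values here are illustrative stand-ins, not the real EFI status codes):

```c
#include <assert.h>
#include <errno.h>

/* Illustrative stand-ins for the EFI status values. */
enum efi_status {
	EFI_SUCCESS,
	EFI_ACCESS_DENIED,
	EFI_NOT_FOUND,
	EFI_OUT_OF_RESOURCES,
};

/* Sketch of the mapping after the change: NOT_FOUND is an I/O-level
 * failure from the caller's point of view, not a missing entry. */
static int efi_status_to_err(enum efi_status status)
{
	switch (status) {
	case EFI_SUCCESS:
		return 0;
	case EFI_ACCESS_DENIED:
		return -EACCES;
	case EFI_NOT_FOUND:
		return -EIO;
	default:
		return -EINVAL;
	}
}
```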
+1-1
drivers/firmware/iscsi_ibft_find.c
···
 	/* iBFT 1.03 section 1.4.3.1 mandates that UEFI machines will
	 * only use ACPI for this */

-	if (!efi_enabled)
+	if (!efi_enabled(EFI_BOOT))
 		find_ibft_in_mem();

 	if (ibft_addr) {
+2-2
drivers/gpu/drm/exynos/Kconfig
···
 config DRM_EXYNOS_FIMD
 	bool "Exynos DRM FIMD"
-	depends on DRM_EXYNOS && !FB_S3C
+	depends on DRM_EXYNOS && !FB_S3C && !ARCH_MULTIPLATFORM
 	help
 	  Choose this option if you want to use Exynos FIMD for DRM.
···
 config DRM_EXYNOS_IPP
 	bool "Exynos DRM IPP"
-	depends on DRM_EXYNOS
+	depends on DRM_EXYNOS && !ARCH_MULTIPLATFORM
 	help
 	  Choose this option if you want to use IPP feature for DRM.
+15-18
drivers/gpu/drm/exynos/exynos_drm_connector.c
···
 #include "exynos_drm_drv.h"
 #include "exynos_drm_encoder.h"

-#define MAX_EDID 256
 #define to_exynos_connector(x)	container_of(x, struct exynos_drm_connector,\
 				drm_connector)
···
 		to_exynos_connector(connector);
 	struct exynos_drm_manager *manager = exynos_connector->manager;
 	struct exynos_drm_display_ops *display_ops = manager->display_ops;
-	unsigned int count;
+	struct edid *edid = NULL;
+	unsigned int count = 0;
+	int ret;

 	DRM_DEBUG_KMS("%s\n", __FILE__);
···
 	 * because lcd panel has only one mode.
 	 */
 	if (display_ops->get_edid) {
-		int ret;
-		void *edid;
-
-		edid = kzalloc(MAX_EDID, GFP_KERNEL);
-		if (!edid) {
-			DRM_ERROR("failed to allocate edid\n");
-			return 0;
+		edid = display_ops->get_edid(manager->dev, connector);
+		if (IS_ERR_OR_NULL(edid)) {
+			ret = PTR_ERR(edid);
+			edid = NULL;
+			DRM_ERROR("Panel operation get_edid failed %d\n", ret);
+			goto out;
 		}

-		ret = display_ops->get_edid(manager->dev, connector,
-				edid, MAX_EDID);
-		if (ret < 0) {
-			DRM_ERROR("failed to get edid data.\n");
-			kfree(edid);
-			edid = NULL;
-			return 0;
+		count = drm_add_edid_modes(connector, edid);
+		if (count < 0) {
+			DRM_ERROR("Add edid modes failed %d\n", count);
+			goto out;
 		}

 		drm_mode_connector_update_edid_property(connector, edid);
-		count = drm_add_edid_modes(connector, edid);
-		kfree(edid);
 	} else {
 		struct exynos_drm_panel_info *panel;
 		struct drm_display_mode *mode = drm_mode_create(connector->dev);
···
 		count = 1;
 	}

+out:
+	kfree(edid);
 	return count;
 }
+11-13
drivers/gpu/drm/exynos/exynos_drm_dmabuf.c
···
 struct exynos_drm_dmabuf_attachment {
 	struct sg_table sgt;
 	enum dma_data_direction dir;
+	bool is_mapped;
 };

 static int exynos_gem_attach_dma_buf(struct dma_buf *dmabuf,
···
 	DRM_DEBUG_PRIME("%s\n", __FILE__);

-	if (WARN_ON(dir == DMA_NONE))
-		return ERR_PTR(-EINVAL);
-
 	/* just return current sgt if already requested. */
-	if (exynos_attach->dir == dir)
+	if (exynos_attach->dir == dir && exynos_attach->is_mapped)
 		return &exynos_attach->sgt;
-
-	/* reattaching is not allowed. */
-	if (WARN_ON(exynos_attach->dir != DMA_NONE))
-		return ERR_PTR(-EBUSY);

 	buf = gem_obj->buffer;
 	if (!buf) {
···
 		wr = sg_next(wr);
 	}

-	nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir);
-	if (!nents) {
-		DRM_ERROR("failed to map sgl with iommu.\n");
-		sgt = ERR_PTR(-EIO);
-		goto err_unlock;
+	if (dir != DMA_NONE) {
+		nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir);
+		if (!nents) {
+			DRM_ERROR("failed to map sgl with iommu.\n");
+			sg_free_table(sgt);
+			sgt = ERR_PTR(-EIO);
+			goto err_unlock;
+		}
 	}

+	exynos_attach->is_mapped = true;
 	exynos_attach->dir = dir;
 	attach->priv = exynos_attach;
+2-2
drivers/gpu/drm/exynos/exynos_drm_drv.h
···
 struct exynos_drm_display_ops {
 	enum exynos_drm_output_type type;
 	bool (*is_connected)(struct device *dev);
-	int (*get_edid)(struct device *dev, struct drm_connector *connector,
-				u8 *edid, int len);
+	struct edid *(*get_edid)(struct device *dev,
+			struct drm_connector *connector);
 	void *(*get_panel)(struct device *dev);
 	int (*check_timing)(struct device *dev, void *timing);
 	int (*power_on)(struct device *dev, int mode);
+1-1
drivers/gpu/drm/exynos/exynos_drm_g2d.c
···
 	g2d_userptr = NULL;
 }

-dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev,
+static dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev,
 					unsigned long userptr,
 					unsigned long size,
 					struct drm_file *filp,
···
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	int ret = init_ring_common(ring);

-	if (INTEL_INFO(dev)->gen > 3) {
+	if (INTEL_INFO(dev)->gen > 3)
 		I915_WRITE(MI_MODE, _MASKED_BIT_ENABLE(VS_TIMER_DISPATCH));
-		if (IS_GEN7(dev))
-			I915_WRITE(GFX_MODE_GEN7,
-				   _MASKED_BIT_DISABLE(GFX_TLB_INVALIDATE_ALWAYS) |
-				   _MASKED_BIT_ENABLE(GFX_REPLAY_MODE));
-	}
+
+	/* We need to disable the AsyncFlip performance optimisations in order
+	 * to use MI_WAIT_FOR_EVENT within the CS. It should already be
+	 * programmed to '1' on all products.
+	 */
+	if (INTEL_INFO(dev)->gen >= 6)
+		I915_WRITE(MI_MODE, _MASKED_BIT_ENABLE(ASYNC_FLIP_PERF_DISABLE));
+
+	/* Required for the hardware to program scanline values for waiting */
+	if (INTEL_INFO(dev)->gen == 6)
+		I915_WRITE(GFX_MODE,
+			   _MASKED_BIT_ENABLE(GFX_TLB_INVALIDATE_ALWAYS));
+
+	if (IS_GEN7(dev))
+		I915_WRITE(GFX_MODE_GEN7,
+			   _MASKED_BIT_DISABLE(GFX_TLB_INVALIDATE_ALWAYS) |
+			   _MASKED_BIT_ENABLE(GFX_REPLAY_MODE));

 	if (INTEL_INFO(dev)->gen >= 5) {
 		ret = init_pipe_control(ring);
···
 		y = 0;
 	}

-	if (ASIC_IS_AVIVO(rdev)) {
+	/* fixed on DCE6 and newer */
+	if (ASIC_IS_AVIVO(rdev) && !ASIC_IS_DCE6(rdev)) {
 		int i = 0;
 		struct drm_crtc *crtc_p;
+2-1
drivers/gpu/drm/radeon/radeon_device.c
···
 {
 	uint32_t reg;

-	if (efi_enabled && rdev->pdev->subsystem_vendor == PCI_VENDOR_ID_APPLE)
+	if (efi_enabled(EFI_BOOT) &&
+	    rdev->pdev->subsystem_vendor == PCI_VENDOR_ID_APPLE)
 		return false;

 	/* first check CRTCs */
+4-2
drivers/gpu/drm/radeon/radeon_display.c
···
 	}

 	radeon_fb = kzalloc(sizeof(*radeon_fb), GFP_KERNEL);
-	if (radeon_fb == NULL)
+	if (radeon_fb == NULL) {
+		drm_gem_object_unreference_unlocked(obj);
 		return ERR_PTR(-ENOMEM);
+	}

 	ret = radeon_framebuffer_init(dev, radeon_fb, mode_cmd, obj);
 	if (ret) {
 		kfree(radeon_fb);
 		drm_gem_object_unreference_unlocked(obj);
-		return NULL;
+		return ERR_PTR(ret);
 	}

 	return &radeon_fb->base;
+3
drivers/gpu/drm/radeon/radeon_ring.c
···
 {
 	int r;

+	/* make sure we aren't trying to allocate more space than there is on the ring */
+	if (ndw > (ring->ring_size / 4))
+		return -ENOMEM;
 	/* Align requested size with padding so unlock_commit can
	 * pad safely */
 	ndw = (ndw + ring->align_mask) & ~ring->align_mask;
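The ring code above rounds the requested dword count up to the ring's alignment with the standard power-of-two mask trick. A small self-contained sketch of that rounding (valid only when the alignment is a power of two):

```c
#include <assert.h>
#include <stdint.h>

/* Round ndw up to the next multiple of align, where align is a power
 * of two.  This is the (ndw + mask) & ~mask idiom used by the ring
 * allocator so unlock_commit can pad safely. */
static uint32_t align_up(uint32_t ndw, uint32_t align)
{
	uint32_t mask = align - 1;	/* e.g. align 16 -> mask 0xf */

	return (ndw + mask) & ~mask;
}
```

Note the new bounds check runs before this rounding, so a request larger than the whole ring fails with `-ENOMEM` instead of spinning forever waiting for space that can never exist.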
···
 			WREG32(R600_CITF_CNTL, blackout);
 		}
 	}
+	/* wait for the MC to settle */
+	udelay(100);
 }

 void rv515_mc_resume(struct radeon_device *rdev, struct rv515_mc_save *save)
···
 }

 /*
+ * Family15h Model 10h-1fh erratum 746 (IOMMU Logging May Stall Translations)
+ * Workaround:
+ *     BIOS should disable L2B micellaneous clock gating by setting
+ *     L2_L2B_CK_GATE_CONTROL[CKGateL2BMiscDisable](D0F2xF4_x90[2]) = 1b
+ */
+static void __init amd_iommu_erratum_746_workaround(struct amd_iommu *iommu)
+{
+	u32 value;
+
+	if ((boot_cpu_data.x86 != 0x15) ||
+	    (boot_cpu_data.x86_model < 0x10) ||
+	    (boot_cpu_data.x86_model > 0x1f))
+		return;
+
+	pci_write_config_dword(iommu->dev, 0xf0, 0x90);
+	pci_read_config_dword(iommu->dev, 0xf4, &value);
+
+	if (value & BIT(2))
+		return;
+
+	/* Select NB indirect register 0x90 and enable writing */
+	pci_write_config_dword(iommu->dev, 0xf0, 0x90 | (1 << 8));
+
+	pci_write_config_dword(iommu->dev, 0xf4, value | 0x4);
+	pr_info("AMD-Vi: Applying erratum 746 workaround for IOMMU at %s\n",
+		dev_name(&iommu->dev->dev));
+
+	/* Clear the enable writing bit */
+	pci_write_config_dword(iommu->dev, 0xf0, 0x90);
+}
+
+/*
  * This function clues the initialization function for one IOMMU
  * together and also allocates the command buffer and programs the
  * hardware. It does NOT enable the IOMMU. This is done afterwards.
···
 		for (i = 0; i < 0x83; i++)
 			iommu->stored_l2[i] = iommu_read_l2(iommu, i);
 	}
+
+	amd_iommu_erratum_746_workaround(iommu);

 	return pci_enable_device(iommu->dev);
 }
+15-6
drivers/iommu/intel-iommu.c
···
 	.pgsize_bitmap	= INTEL_IOMMU_PGSIZES,
 };

+static void quirk_iommu_g4x_gfx(struct pci_dev *dev)
+{
+	/* G4x/GM45 integrated gfx dmar support is totally busted. */
+	printk(KERN_INFO "DMAR: Disabling IOMMU for graphics on this chipset\n");
+	dmar_map_gfx = 0;
+}
+
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2a40, quirk_iommu_g4x_gfx);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e00, quirk_iommu_g4x_gfx);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e10, quirk_iommu_g4x_gfx);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e20, quirk_iommu_g4x_gfx);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e30, quirk_iommu_g4x_gfx);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e40, quirk_iommu_g4x_gfx);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e90, quirk_iommu_g4x_gfx);
+
 static void quirk_iommu_rwbf(struct pci_dev *dev)
 {
 	/*
···
 	 */
 	printk(KERN_INFO "DMAR: Forcing write-buffer flush capability\n");
 	rwbf_quirk = 1;
-
-	/* https://bugzilla.redhat.com/show_bug.cgi?id=538163 */
-	if (dev->revision == 0x07) {
-		printk(KERN_INFO "DMAR: Disabling IOMMU for graphics on this chipset\n");
-		dmar_map_gfx = 0;
-	}
 }

 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2a40, quirk_iommu_rwbf);
+2
drivers/isdn/gigaset/capi.c
···
 		  CAPIMSG_APPID(data), CAPIMSG_MSGID(data), l,
 		  CAPIMSG_CONTROL(data));
 	l -= 12;
+	if (l <= 0)
+		return;
 	dbgline = kmalloc(3 * l, GFP_ATOMIC);
 	if (!dbgline)
 		return;
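The gigaset fix guards against a CAPI message shorter than its 12-byte header, which would make `l` zero or negative and the `kmalloc(3 * l, ...)` size bogus. A userspace sketch of the debug-line formatting that motivates the `3 * l` buffer (two hex digits plus one separator per payload byte; the function name is a hypothetical stand-in):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Format l payload bytes as "xx xx ... xx".  Each byte consumes three
 * chars: two hex digits and a space (the last slot holds the NUL), so
 * the buffer is exactly 3 * l bytes.  The l <= 0 guard mirrors the
 * kernel fix: without it we would request a zero or wrapped size. */
static char *format_hex_line(const unsigned char *data, int l)
{
	char *line;
	int i;

	if (l <= 0)
		return NULL;
	line = malloc(3 * l);
	if (!line)
		return NULL;
	for (i = 0; i < l; i++)
		sprintf(line + 3 * i, "%02x%c", data[i],
			i == l - 1 ? '\0' : ' ');
	return line;
}
```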
+37-64
drivers/md/dm-raid.c
···
 }

 /*
- * validate_rebuild_devices
+ * validate_raid_redundancy
  * @rs
  *
- * Determine if the devices specified for rebuild can result in a valid
- * usable array that is capable of rebuilding the given devices.
+ * Determine if there are enough devices in the array that haven't
+ * failed (or are being rebuilt) to form a usable array.
  *
  * Returns: 0 on success, -EINVAL on failure.
  */
-static int validate_rebuild_devices(struct raid_set *rs)
+static int validate_raid_redundancy(struct raid_set *rs)
 {
 	unsigned i, rebuild_cnt = 0;
 	unsigned rebuilds_per_group, copies, d;

-	if (!(rs->print_flags & DMPF_REBUILD))
-		return 0;
-
 	for (i = 0; i < rs->md.raid_disks; i++)
-		if (!test_bit(In_sync, &rs->dev[i].rdev.flags))
+		if (!test_bit(In_sync, &rs->dev[i].rdev.flags) ||
+		    !rs->dev[i].rdev.sb_page)
 			rebuild_cnt++;

 	switch (rs->raid_type->level) {
···
 		 *          A    A    B    B    C
 		 *          C    D    D    E    E
 		 */
-		rebuilds_per_group = 0;
 		for (i = 0; i < rs->md.raid_disks * copies; i++) {
+			if (!(i % copies))
+				rebuilds_per_group = 0;
 			d = i % rs->md.raid_disks;
-			if (!test_bit(In_sync, &rs->dev[d].rdev.flags) &&
+			if ((!rs->dev[d].rdev.sb_page ||
+			     !test_bit(In_sync, &rs->dev[d].rdev.flags)) &&
 			    (++rebuilds_per_group >= copies))
 				goto too_many;
-			if (!((i + 1) % copies))
-				rebuilds_per_group = 0;
 		}
 		break;
 	default:
-		DMERR("The rebuild parameter is not supported for %s",
-		      rs->raid_type->name);
-		rs->ti->error = "Rebuild not supported for this RAID type";
-		return -EINVAL;
+		if (rebuild_cnt)
+			return -EINVAL;
 	}

 	return 0;

too_many:
-	rs->ti->error = "Too many rebuild devices specified";
 	return -EINVAL;
 }
···
 		return -EINVAL;
 	}
 	rs->md.dev_sectors = sectors_per_dev;
-
-	if (validate_rebuild_devices(rs))
-		return -EINVAL;

 	/* Assume there are no metadata devices until the drives are parsed */
 	rs->md.persistent = 0;
···
 static int analyse_superblocks(struct dm_target *ti, struct raid_set *rs)
 {
 	int ret;
-	unsigned redundancy = 0;
 	struct raid_dev *dev;
 	struct md_rdev *rdev, *tmp, *freshest;
 	struct mddev *mddev = &rs->md;
-
-	switch (rs->raid_type->level) {
-	case 1:
-		redundancy = rs->md.raid_disks - 1;
-		break;
-	case 4:
-	case 5:
-	case 6:
-		redundancy = rs->raid_type->parity_devs;
-		break;
-	case 10:
-		redundancy = raid10_md_layout_to_copies(mddev->layout) - 1;
-		break;
-	default:
-		ti->error = "Unknown RAID type";
-		return -EINVAL;
-	}

 	freshest = NULL;
 	rdev_for_each_safe(rdev, tmp, mddev) {
···
 			break;
 		default:
 			dev = container_of(rdev, struct raid_dev, rdev);
-			if (redundancy--) {
-				if (dev->meta_dev)
-					dm_put_device(ti, dev->meta_dev);
+			if (dev->meta_dev)
+				dm_put_device(ti, dev->meta_dev);

-				dev->meta_dev = NULL;
-				rdev->meta_bdev = NULL;
+			dev->meta_dev = NULL;
+			rdev->meta_bdev = NULL;

-				if (rdev->sb_page)
-					put_page(rdev->sb_page);
+			if (rdev->sb_page)
+				put_page(rdev->sb_page);

-				rdev->sb_page = NULL;
+			rdev->sb_page = NULL;

-				rdev->sb_loaded = 0;
+			rdev->sb_loaded = 0;

-				/*
-				 * We might be able to salvage the data device
-				 * even though the meta device has failed.  For
-				 * now, we behave as though '- -' had been
-				 * set for this device in the table.
-				 */
-				if (dev->data_dev)
-					dm_put_device(ti, dev->data_dev);
+			/*
+			 * We might be able to salvage the data device
+			 * even though the meta device has failed.  For
+			 * now, we behave as though '- -' had been
+			 * set for this device in the table.
+			 */
+			if (dev->data_dev)
+				dm_put_device(ti, dev->data_dev);

-				dev->data_dev = NULL;
-				rdev->bdev = NULL;
+			dev->data_dev = NULL;
+			rdev->bdev = NULL;

-				list_del(&rdev->same_set);
-
-				continue;
-			}
-			ti->error = "Failed to load superblock";
-			return ret;
+			list_del(&rdev->same_set);
 		}
 	}

 	if (!freshest)
 		return 0;
+
+	if (validate_raid_redundancy(rs)) {
+		rs->ti->error = "Insufficient redundancy to activate array";
+		return -EINVAL;
+	}

 	/*
	 * Validation of the freshest device provides the source of
···
 static struct target_type raid_target = {
 	.name = "raid",
-	.version = {1, 4, 0},
+	.version = {1, 4, 1},
 	.module = THIS_MODULE,
 	.ctr = raid_ctr,
 	.dtr = raid_dtr,
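The corrected RAID10 loop above resets the per-group counter at the *start* of each copy group rather than after it, so the last group is checked too. A userspace sketch of that near-copy redundancy walk, assuming a plain `failed[]` array in place of the rdev `In_sync`/`sb_page` checks:

```c
#include <assert.h>
#include <stdbool.h>

/* Walk raid_disks * copies slots.  Each consecutive run of 'copies'
 * slots forms one group; the array survives as long as no group has
 * all of its copies on failed (or superblock-less) devices. */
static bool raid10_redundant(const bool *failed, int raid_disks, int copies)
{
	int i, d, rebuilds_per_group = 0;

	for (i = 0; i < raid_disks * copies; i++) {
		if (!(i % copies))
			rebuilds_per_group = 0;	/* new group starts here */
		d = i % raid_disks;
		if (failed[d] && (++rebuilds_per_group >= copies))
			return false;	/* a whole group is gone */
	}
	return true;
}
```

For 5 disks with 2 near copies the groups are (0,1) (2,3) (4,0) (1,2) (3,4), matching the A A B B C / C D D E E diagram in the comment: losing disks 0 and 2 is survivable, losing 0 and 1 is not.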
···
 	.ioctl_ops = &fm_drv_ioctl_ops,
 	.name = FM_DRV_NAME,
 	.release = video_device_release,
+	/*
+	 * To ensure both the tuner and modulator ioctls are accessible we
+	 * set the vfl_dir to M2M to indicate this.
+	 *
+	 * It is not really a mem2mem device of course, but it can both receive
+	 * and transmit using the same radio device. It's the only radio driver
+	 * that does this and it should really be split in two radio devices,
+	 * but that would affect applications using this driver.
+	 */
+	.vfl_dir = VFL_DIR_M2M,
 };

 int fm_v4l2_init_video_device(struct fmdev *fmdev, int radio_nr)
+1
drivers/mfd/Kconfig
···
 	depends on I2C=y && GPIOLIB
 	select MFD_CORE
 	select REGMAP_I2C
+	select REGMAP_IRQ
 	select IRQ_DOMAIN
 	help
 	  if you say yes here you get support for the TPS65910 series of
···
 #include <linux/of_device.h>
 #endif

+/* I2C safe register check */
+static inline bool i2c_safe_reg(unsigned char reg)
+{
+	switch (reg) {
+	case DA9052_STATUS_A_REG:
+	case DA9052_STATUS_B_REG:
+	case DA9052_STATUS_C_REG:
+	case DA9052_STATUS_D_REG:
+	case DA9052_ADC_RES_L_REG:
+	case DA9052_ADC_RES_H_REG:
+	case DA9052_VDD_RES_REG:
+	case DA9052_ICHG_AV_REG:
+	case DA9052_TBAT_RES_REG:
+	case DA9052_ADCIN4_RES_REG:
+	case DA9052_ADCIN5_RES_REG:
+	case DA9052_ADCIN6_RES_REG:
+	case DA9052_TJUNC_RES_REG:
+	case DA9052_TSI_X_MSB_REG:
+	case DA9052_TSI_Y_MSB_REG:
+	case DA9052_TSI_LSB_REG:
+	case DA9052_TSI_Z_MSB_REG:
+		return true;
+	default:
+		return false;
+	}
+}
+
+/*
+ * There is an issue with DA9052 and DA9053_AA/BA/BB PMIC where the PMIC
+ * gets lockup up or fails to respond following a system reset.
+ * This fix is to follow any read or write with a dummy read to a safe
+ * register.
+ */
+int da9052_i2c_fix(struct da9052 *da9052, unsigned char reg)
+{
+	int val;
+
+	switch (da9052->chip_id) {
+	case DA9052:
+	case DA9053_AA:
+	case DA9053_BA:
+	case DA9053_BB:
+		/* A dummy read to a safe register address. */
+		if (!i2c_safe_reg(reg))
+			return regmap_read(da9052->regmap,
+					   DA9052_PARK_REGISTER,
+					   &val);
+		break;
+	default:
+		/*
+		 * For other chips parking of I2C register
+		 * to a safe place is not required.
+		 */
+		break;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(da9052_i2c_fix);
+
 static int da9052_i2c_enable_multiwrite(struct da9052 *da9052)
 {
 	int reg_val, ret;
···
 	da9052->dev = &client->dev;
 	da9052->chip_irq = client->irq;
+	da9052->fix_io = da9052_i2c_fix;

 	i2c_set_clientdata(client, da9052);
+9-4
drivers/mfd/db8500-prcmu.c
···
 	for (n = 0; n < NUM_PRCMU_WAKEUPS; n++) {
 		if (ev & prcmu_irq_bit[n])
-			generic_handle_irq(IRQ_PRCMU_BASE + n);
+			generic_handle_irq(irq_find_mapping(db8500_irq_domain, n));
 	}
 	r = true;
 	break;
···
 }

 static struct irq_domain_ops db8500_irq_ops = {
-	.map    = db8500_irq_map,
-	.xlate  = irq_domain_xlate_twocell,
+	.map = db8500_irq_map,
+	.xlate = irq_domain_xlate_twocell,
 };

 static int db8500_irq_init(struct device_node *np)
 {
-	int irq_base = -1;
+	int irq_base = 0;
+	int i;

 	/* In the device tree case, just take some IRQs */
 	if (!np)
···
 		pr_err("Failed to create irqdomain\n");
 		return -ENOSYS;
 	}
+
+	/* All wakeups will be used, so create mappings for all */
+	for (i = 0; i < NUM_PRCMU_WAKEUPS; i++)
+		irq_create_mapping(db8500_irq_domain, i);

 	return 0;
 }
+9-9
drivers/mfd/max77686.c
···
 	if (max77686 == NULL)
 		return -ENOMEM;

-	max77686->regmap = regmap_init_i2c(i2c, &max77686_regmap_config);
-	if (IS_ERR(max77686->regmap)) {
-		ret = PTR_ERR(max77686->regmap);
-		dev_err(max77686->dev, "Failed to allocate register map: %d\n",
-				ret);
-		kfree(max77686);
-		return ret;
-	}
-
 	i2c_set_clientdata(i2c, max77686);
 	max77686->dev = &i2c->dev;
 	max77686->i2c = i2c;
···
 	max77686->wakeup = pdata->wakeup;
 	max77686->irq_gpio = pdata->irq_gpio;
 	max77686->irq = i2c->irq;
+
+	max77686->regmap = regmap_init_i2c(i2c, &max77686_regmap_config);
+	if (IS_ERR(max77686->regmap)) {
+		ret = PTR_ERR(max77686->regmap);
+		dev_err(max77686->dev, "Failed to allocate register map: %d\n",
+				ret);
+		kfree(max77686);
+		return ret;
+	}

 	if (regmap_read(max77686->regmap,
 			MAX77686_REG_DEVICE_ID, &data) < 0) {
+18-16
drivers/mfd/max77693.c
···
 	u8 reg_data;
 	int ret = 0;

+	if (!pdata) {
+		dev_err(&i2c->dev, "No platform data found.\n");
+		return -EINVAL;
+	}
+
 	max77693 = devm_kzalloc(&i2c->dev,
 			sizeof(struct max77693_dev), GFP_KERNEL);
 	if (max77693 == NULL)
 		return -ENOMEM;
-
-	max77693->regmap = devm_regmap_init_i2c(i2c, &max77693_regmap_config);
-	if (IS_ERR(max77693->regmap)) {
-		ret = PTR_ERR(max77693->regmap);
-		dev_err(max77693->dev,"failed to allocate register map: %d\n",
-				ret);
-		goto err_regmap;
-	}

 	i2c_set_clientdata(i2c, max77693);
 	max77693->dev = &i2c->dev;
···
 	max77693->irq = i2c->irq;
 	max77693->type = id->driver_data;

-	if (!pdata)
-		goto err_regmap;
+	max77693->regmap = devm_regmap_init_i2c(i2c, &max77693_regmap_config);
+	if (IS_ERR(max77693->regmap)) {
+		ret = PTR_ERR(max77693->regmap);
+		dev_err(max77693->dev, "failed to allocate register map: %d\n",
+				ret);
+		return ret;
+	}

 	max77693->wakeup = pdata->wakeup;

-	if (max77693_read_reg(max77693->regmap,
-			MAX77693_PMIC_REG_PMIC_ID2, &reg_data) < 0) {
+	ret = max77693_read_reg(max77693->regmap, MAX77693_PMIC_REG_PMIC_ID2,
+				&reg_data);
+	if (ret < 0) {
 		dev_err(max77693->dev, "device not found on this channel\n");
-		ret = -ENODEV;
-		goto err_regmap;
+		return ret;
 	} else
 		dev_info(max77693->dev, "device ID: 0x%x\n", reg_data);
···
 		ret = PTR_ERR(max77693->regmap_muic);
 		dev_err(max77693->dev,
 			"failed to allocate register map: %d\n", ret);
-		goto err_regmap;
+		goto err_regmap_muic;
 	}

 	ret = max77693_irq_init(max77693);
···
 err_mfd:
 	max77693_irq_exit(max77693);
 err_irq:
+err_regmap_muic:
 	i2c_unregister_device(max77693->muic);
 	i2c_unregister_device(max77693->haptic);
-err_regmap:
 	return ret;
 }
···
 static int twl4030_write_script(u8 address, struct twl4030_ins *script,
 				int len)
 {
-	int err;
+	int err = -EINVAL;

 	for (; len; len--, address++, script++) {
 		if (len == 1) {
···
 #include "bcm47xxnflash.h"

 /* Broadcom uses 1'000'000 but it seems to be too many. Tests on WNDR4500 has
- * shown 164 retries as maxiumum. */
-#define NFLASH_READY_RETRIES		1000
+ * shown ~1000 retries as maxiumum. */
+#define NFLASH_READY_RETRIES		10000

 #define NFLASH_SECTOR_SIZE		512
···
 	int i;
 	int val;

-	/* ONFI need to be probed in 8 bits mode */
-	WARN_ON(chip->options & NAND_BUSWIDTH_16);
+	/* ONFI need to be probed in 8 bits mode, and 16 bits should be selected with NAND_BUSWIDTH_AUTO */
+	if (chip->options & NAND_BUSWIDTH_16) {
+		pr_err("Trying ONFI probe in 16 bits mode, aborting !\n");
+		return 0;
+	}
 	/* Try ONFI for unknown chip or LP */
 	chip->cmdfunc(mtd, NAND_CMD_READID, 0x20, -1);
 	if (chip->read_byte(mtd) != 'O' || chip->read_byte(mtd) != 'N' ||
···
 	return tg3_writephy(tp, MII_TG3_AUX_CTRL, set | reg);
 }

-#define TG3_PHY_AUXCTL_SMDSP_ENABLE(tp) \
-	tg3_phy_auxctl_write((tp), MII_TG3_AUXCTL_SHDWSEL_AUXCTL, \
-			     MII_TG3_AUXCTL_ACTL_SMDSP_ENA | \
-			     MII_TG3_AUXCTL_ACTL_TX_6DB)
+static int tg3_phy_toggle_auxctl_smdsp(struct tg3 *tp, bool enable)
+{
+	u32 val;
+	int err;

-#define TG3_PHY_AUXCTL_SMDSP_DISABLE(tp) \
-	tg3_phy_auxctl_write((tp), MII_TG3_AUXCTL_SHDWSEL_AUXCTL, \
-			     MII_TG3_AUXCTL_ACTL_TX_6DB);
+	err = tg3_phy_auxctl_read(tp, MII_TG3_AUXCTL_SHDWSEL_AUXCTL, &val);
+
+	if (err)
+		return err;
+
+	if (enable)
+		val |= MII_TG3_AUXCTL_ACTL_SMDSP_ENA;
+	else
+		val &= ~MII_TG3_AUXCTL_ACTL_SMDSP_ENA;
+
+	err = tg3_phy_auxctl_write((tp), MII_TG3_AUXCTL_SHDWSEL_AUXCTL,
+				   val | MII_TG3_AUXCTL_ACTL_TX_6DB);
+
+	return err;
+}

 static int tg3_bmcr_reset(struct tg3 *tp)
 {
···
 	otp = tp->phy_otp;

-	if (TG3_PHY_AUXCTL_SMDSP_ENABLE(tp))
+	if (tg3_phy_toggle_auxctl_smdsp(tp, true))
 		return;

 	phy = ((otp & TG3_OTP_AGCTGT_MASK) >> TG3_OTP_AGCTGT_SHIFT);
···
 	      ((otp & TG3_OTP_RCOFF_MASK) >> TG3_OTP_RCOFF_SHIFT);
 	tg3_phydsp_write(tp, MII_TG3_DSP_EXP97, phy);

-	TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+	tg3_phy_toggle_auxctl_smdsp(tp, false);
 }

 static void tg3_phy_eee_adjust(struct tg3 *tp, u32 current_link_up)
···

 	if (!tp->setlpicnt) {
 		if (current_link_up == 1 &&
-		    !TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) {
+		    !tg3_phy_toggle_auxctl_smdsp(tp, true)) {
 			tg3_phydsp_write(tp, MII_TG3_DSP_TAP26, 0x0000);
-			TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+			tg3_phy_toggle_auxctl_smdsp(tp, false);
 		}

 		val = tr32(TG3_CPMU_EEE_MODE);
···
 	    (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5717 ||
 	     GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719 ||
 	     tg3_flag(tp, 57765_CLASS)) &&
-	    !TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) {
+	    !tg3_phy_toggle_auxctl_smdsp(tp, true)) {
 		val = MII_TG3_DSP_TAP26_ALNOKO |
 		      MII_TG3_DSP_TAP26_RMRXSTO;
 		tg3_phydsp_write(tp, MII_TG3_DSP_TAP26, val);
-		TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+		tg3_phy_toggle_auxctl_smdsp(tp, false);
 	}

 	val = tr32(TG3_CPMU_EEE_MODE);
···
 	tg3_writephy(tp, MII_CTRL1000,
 		     CTL1000_AS_MASTER | CTL1000_ENABLE_MASTER);

-	err = TG3_PHY_AUXCTL_SMDSP_ENABLE(tp);
+	err = tg3_phy_toggle_auxctl_smdsp(tp, true);
 	if (err)
 		return err;
···
 	tg3_writephy(tp, MII_TG3_DSP_ADDRESS, 0x8200);
 	tg3_writephy(tp, MII_TG3_DSP_CONTROL, 0x0000);

-	TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+	tg3_phy_toggle_auxctl_smdsp(tp, false);

 	tg3_writephy(tp, MII_CTRL1000, phy9_orig);
···
out:
 	if ((tp->phy_flags & TG3_PHYFLG_ADC_BUG) &&
-	    !TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) {
+	    !tg3_phy_toggle_auxctl_smdsp(tp, true)) {
 		tg3_phydsp_write(tp, 0x201f, 0x2aaa);
 		tg3_phydsp_write(tp, 0x000a, 0x0323);
-		TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+		tg3_phy_toggle_auxctl_smdsp(tp, false);
 	}

 	if (tp->phy_flags & TG3_PHYFLG_5704_A0_BUG) {
···
 	}

 	if (tp->phy_flags & TG3_PHYFLG_BER_BUG) {
-		if (!TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) {
+		if (!tg3_phy_toggle_auxctl_smdsp(tp, true)) {
 			tg3_phydsp_write(tp, 0x000a, 0x310b);
 			tg3_phydsp_write(tp, 0x201f, 0x9506);
 			tg3_phydsp_write(tp, 0x401f, 0x14e2);
-			TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+			tg3_phy_toggle_auxctl_smdsp(tp, false);
 		}
 	} else if (tp->phy_flags & TG3_PHYFLG_JITTER_BUG) {
-		if (!TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) {
+		if (!tg3_phy_toggle_auxctl_smdsp(tp, true)) {
 			tg3_writephy(tp, MII_TG3_DSP_ADDRESS, 0x000a);
 			if (tp->phy_flags & TG3_PHYFLG_ADJUST_TRIM) {
 				tg3_writephy(tp, MII_TG3_DSP_RW_PORT, 0x110b);
···
 			} else
 				tg3_writephy(tp, MII_TG3_DSP_RW_PORT, 0x010b);

-			TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+			tg3_phy_toggle_auxctl_smdsp(tp, false);
 		}
 	}
···
 	tw32(TG3_CPMU_EEE_MODE,
 	     tr32(TG3_CPMU_EEE_MODE) & ~TG3_CPMU_EEEMD_LPI_ENABLE);

-	err = TG3_PHY_AUXCTL_SMDSP_ENABLE(tp);
+	err = tg3_phy_toggle_auxctl_smdsp(tp, true);
 	if (!err) {
 		u32 err2;
···
 				    MII_TG3_DSP_CH34TP2_HIBW01);
 	}

-	err2 = TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+	err2 = tg3_phy_toggle_auxctl_smdsp(tp, false);
 	if (!err)
 		err = err2;
 }
···
{
 	int i;
 	struct tg3 *tp = netdev_priv(dev);
+
+	if (tg3_irq_sync(tp))
+		return;

 	for (i = 0; i < tp->irq_cnt; i++)
 		tg3_interrupt(tp->napi[i].irq_vec, &tp->napi[i]);
···
 	tp->pm_cap = pm_cap;
 	tp->rx_mode = TG3_DEF_RX_MODE;
 	tp->tx_mode = TG3_DEF_TX_MODE;
+	tp->irq_sync = 1;

 	if (tg3_debug > 0)
 		tp->msg_enable = tg3_debug;
+4
drivers/net/ethernet/calxeda/xgmac.c
···
 		return -1;
 	}

+	/* All frames should fit into a single buffer */
+	if (!(status & RXDESC_FIRST_SEG) || !(status & RXDESC_LAST_SEG))
+		return -1;
+
 	/* Check if packet has checksum already */
 	if ((status & RXDESC_FRAME_TYPE) && (status & RXDESC_EXT_STATUS) &&
 	    !(ext_status & RXDESC_IP_PAYLOAD_MASK))
+13-2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
···
{
 	const struct port_info *pi = netdev_priv(dev);
 	struct adapter *adap = pi->adapter;
+	struct sge_rspq *q;
+	int i;
+	int r = 0;

-	return set_rxq_intr_params(adap, &adap->sge.ethrxq[pi->first_qset].rspq,
-			c->rx_coalesce_usecs, c->rx_max_coalesced_frames);
+	for (i = pi->first_qset; i < pi->first_qset + pi->nqsets; i++) {
+		q = &adap->sge.ethrxq[i].rspq;
+		r = set_rxq_intr_params(adap, q, c->rx_coalesce_usecs,
+					c->rx_max_coalesced_frames);
+		if (r) {
+			dev_err(&dev->dev, "failed to set coalesce %d\n", r);
+			break;
+		}
+	}
+	return r;
}

static int get_coalesce(struct net_device *dev, struct ethtool_coalesce *c)
···
 	skb_orphan(skb);

+	/* Before queueing this packet to netif_rx(),
+	 * make sure dst is refcounted.
+	 */
+	skb_dst_force(skb);
+
 	skb->protocol = eth_type_trans(skb, dev);

 	/* it's OK to use per_cpu_ptr() because BHs are off */
···
 	unsigned long	lockflags;
 	size_t		size = dev->rx_urb_size;

+	/* prevent rx skb allocation when error ratio is high */
+	if (test_bit(EVENT_RX_KILL, &dev->flags)) {
+		usb_free_urb(urb);
+		return -ENOLINK;
+	}
+
 	skb = __netdev_alloc_skb_ip_align(dev->net, size, flags);
 	if (!skb) {
 		netif_dbg(dev, rx_err, dev->net, "no rx skb\n");
···
 		dev->net->stats.rx_errors++;
 		netif_dbg(dev, rx_err, dev->net, "rx status %d\n", urb_status);
 		break;
+	}
+
+	/* stop rx if packet error rate is high */
+	if (++dev->pkt_cnt > 30) {
+		dev->pkt_cnt = 0;
+		dev->pkt_err = 0;
+	} else {
+		if (state == rx_cleanup)
+			dev->pkt_err++;
+		if (dev->pkt_err > 20)
+			set_bit(EVENT_RX_KILL, &dev->flags);
 	}

 	state = defer_bh(dev, skb, &dev->rxq, state);
···
 		   (dev->driver_info->flags & FLAG_FRAMING_RN) ? "RNDIS" :
 		   (dev->driver_info->flags & FLAG_FRAMING_AX) ? "ASIX" :
 		   "simple");
+
+	/* reset rx error state */
+	dev->pkt_cnt = 0;
+	dev->pkt_err = 0;
+	clear_bit(EVENT_RX_KILL, &dev->flags);

 	// delay posting reads until we're fully open
 	tasklet_schedule (&dev->bh);
···
 	if (info->tx_fixup) {
 		skb = info->tx_fixup (dev, skb, GFP_ATOMIC);
 		if (!skb) {
-			if (netif_msg_tx_err(dev)) {
-				netif_dbg(dev, tx_err, dev->net, "can't tx_fixup skb\n");
-				goto drop;
-			} else {
-				/* cdc_ncm collected packet; waits for more */
+			/* packet collected; minidriver waiting for more */
+			if (info->flags & FLAG_MULTI_PACKET)
 				goto not_drop;
-			}
+			netif_dbg(dev, tx_err, dev->net, "can't tx_fixup skb\n");
+			goto drop;
 		}
 	}
 	length = skb->len;
···
 			netdev_dbg(dev->net, "bogus skb state %d\n", entry->state);
 		}
 	}
+
+	/* restart RX again after disabling due to high error rate */
+	clear_bit(EVENT_RX_KILL, &dev->flags);

 	// waiting for all pending urbs to complete?
 	if (dev->wait) {
···
 	/* WWAN devices should always be named "wwan%d" */
 	if ((dev->driver_info->flags & FLAG_WWAN) != 0)
 		strcpy(net->name, "wwan%d");
+
+	/* devices that cannot do ARP */
+	if ((dev->driver_info->flags & FLAG_NOARP) != 0)
+		net->flags |= IFF_NOARP;

 	/* maybe the remote can't receive an Ethernet MTU */
 	if (net->mtu > (dev->hard_mtu - net->hard_header_len))
+98-20
drivers/net/virtio_net.c
···
 #include <linux/scatterlist.h>
 #include <linux/if_vlan.h>
 #include <linux/slab.h>
+#include <linux/cpu.h>

 static int napi_weight = 128;
 module_param(napi_weight, int, 0444);
···
 	/* Does the affinity hint is set for virtqueues? */
 	bool affinity_hint_set;
+
+	/* Per-cpu variable to show the mapping from CPU to virtqueue */
+	int __percpu *vq_index;
+
+	/* CPU hot plug notifier */
+	struct notifier_block nb;
 };

 struct skb_vnet_hdr {
···
 	return 0;
 }

-static void virtnet_set_affinity(struct virtnet_info *vi, bool set)
+static void virtnet_clean_affinity(struct virtnet_info *vi, long hcpu)
 {
 	int i;
+	int cpu;
+
+	if (vi->affinity_hint_set) {
+		for (i = 0; i < vi->max_queue_pairs; i++) {
+			virtqueue_set_affinity(vi->rq[i].vq, -1);
+			virtqueue_set_affinity(vi->sq[i].vq, -1);
+		}
+
+		vi->affinity_hint_set = false;
+	}
+
+	i = 0;
+	for_each_online_cpu(cpu) {
+		if (cpu == hcpu) {
+			*per_cpu_ptr(vi->vq_index, cpu) = -1;
+		} else {
+			*per_cpu_ptr(vi->vq_index, cpu) =
+				++i % vi->curr_queue_pairs;
+		}
+	}
+}
+
+static void virtnet_set_affinity(struct virtnet_info *vi)
+{
+	int i;
+	int cpu;

 	/* In multiqueue mode, when the number of cpu is equal to the number of
 	 * queue pairs, we let the queue pairs to be private to one cpu by
 	 * setting the affinity hint to eliminate the contention.
 	 */
-	if ((vi->curr_queue_pairs == 1 ||
-	     vi->max_queue_pairs != num_online_cpus()) && set) {
-		if (vi->affinity_hint_set)
-			set = false;
-		else
-			return;
+	if (vi->curr_queue_pairs == 1 ||
+	    vi->max_queue_pairs != num_online_cpus()) {
+		virtnet_clean_affinity(vi, -1);
+		return;
 	}

-	for (i = 0; i < vi->max_queue_pairs; i++) {
-		int cpu = set ? i : -1;
+	i = 0;
+	for_each_online_cpu(cpu) {
 		virtqueue_set_affinity(vi->rq[i].vq, cpu);
 		virtqueue_set_affinity(vi->sq[i].vq, cpu);
+		*per_cpu_ptr(vi->vq_index, cpu) = i;
+		i++;
 	}

-	if (set)
-		vi->affinity_hint_set = true;
-	else
-		vi->affinity_hint_set = false;
+	vi->affinity_hint_set = true;
+}
+
+static int virtnet_cpu_callback(struct notifier_block *nfb,
+			        unsigned long action, void *hcpu)
+{
+	struct virtnet_info *vi = container_of(nfb, struct virtnet_info, nb);
+
+	switch(action & ~CPU_TASKS_FROZEN) {
+	case CPU_ONLINE:
+	case CPU_DOWN_FAILED:
+	case CPU_DEAD:
+		virtnet_set_affinity(vi);
+		break;
+	case CPU_DOWN_PREPARE:
+		virtnet_clean_affinity(vi, (long)hcpu);
+		break;
+	default:
+		break;
+	}
+	return NOTIFY_OK;
 }

 static void virtnet_get_ringparam(struct net_device *dev,
···
 	if (queue_pairs > vi->max_queue_pairs)
 		return -EINVAL;

+	get_online_cpus();
 	err = virtnet_set_queues(vi, queue_pairs);
 	if (!err) {
 		netif_set_real_num_tx_queues(dev, queue_pairs);
 		netif_set_real_num_rx_queues(dev, queue_pairs);

-		virtnet_set_affinity(vi, true);
+		virtnet_set_affinity(vi);
 	}
+	put_online_cpus();

 	return err;
 }
···
 /* To avoid contending a lock hold by a vcpu who would exit to host, select the
  * txq based on the processor id.
- * TODO: handle cpu hotplug.
  */
 static u16 virtnet_select_queue(struct net_device *dev, struct sk_buff *skb)
 {
-	int txq = skb_rx_queue_recorded(skb) ? skb_get_rx_queue(skb) :
-		  smp_processor_id();
+	int txq;
+	struct virtnet_info *vi = netdev_priv(dev);
+
+	if (skb_rx_queue_recorded(skb)) {
+		txq = skb_get_rx_queue(skb);
+	} else {
+		txq = *__this_cpu_ptr(vi->vq_index);
+		if (txq == -1)
+			txq = 0;
+	}

 	while (unlikely(txq >= dev->real_num_tx_queues))
 		txq -= dev->real_num_tx_queues;
···
{
 	struct virtio_device *vdev = vi->vdev;

-	virtnet_set_affinity(vi, false);
+	virtnet_clean_affinity(vi, -1);

 	vdev->config->del_vqs(vdev);
···
 	if (ret)
 		goto err_free;

-	virtnet_set_affinity(vi, true);
+	get_online_cpus();
+	virtnet_set_affinity(vi);
+	put_online_cpus();
+
 	return 0;

err_free:
···
 	if (vi->stats == NULL)
 		goto free;

+	vi->vq_index = alloc_percpu(int);
+	if (vi->vq_index == NULL)
+		goto free_stats;
+
 	mutex_init(&vi->config_lock);
 	vi->config_enable = true;
 	INIT_WORK(&vi->config_work, virtnet_config_changed_work);
···
 	/* Allocate/initialize the rx/tx queues, and invoke find_vqs */
 	err = init_vqs(vi);
 	if (err)
-		goto free_stats;
+		goto free_index;

 	netif_set_real_num_tx_queues(dev, 1);
 	netif_set_real_num_rx_queues(dev, 1);
···
 		err = -ENOMEM;
 		goto free_recv_bufs;
 	}
+	}
+
+	vi->nb.notifier_call = &virtnet_cpu_callback;
+	err = register_hotcpu_notifier(&vi->nb);
+	if (err) {
+		pr_debug("virtio_net: registering cpu notifier failed\n");
+		goto free_recv_bufs;
 	}

 	/* Assume link up if device can't report link status,
···
free_vqs:
 	cancel_delayed_work_sync(&vi->refill);
 	virtnet_del_vqs(vi);
+free_index:
+	free_percpu(vi->vq_index);
free_stats:
 	free_percpu(vi->stats);
free:
···
{
 	struct virtnet_info *vi = vdev->priv;

+	unregister_hotcpu_notifier(&vi->nb);
+
 	/* Prevent config work handler from accessing the device. */
 	mutex_lock(&vi->config_lock);
 	vi->config_enable = false;
···

 	flush_work(&vi->config_work);

+	free_percpu(vi->vq_index);
 	free_percpu(vi->stats);
 	free_netdev(vi->dev);
 }
+3-4
drivers/net/vmxnet3/vmxnet3_drv.c
···
 	if (ret & 1) { /* Link is up. */
 		printk(KERN_INFO "%s: NIC Link is Up %d Mbps\n",
 		       adapter->netdev->name, adapter->link_speed);
-		if (!netif_carrier_ok(adapter->netdev))
-			netif_carrier_on(adapter->netdev);
+		netif_carrier_on(adapter->netdev);

 		if (affectTxQueue) {
 			for (i = 0; i < adapter->num_tx_queues; i++)
···
 	} else {
 		printk(KERN_INFO "%s: NIC Link is Down\n",
 		       adapter->netdev->name);
-		if (netif_carrier_ok(adapter->netdev))
-			netif_carrier_off(adapter->netdev);
+		netif_carrier_off(adapter->netdev);

 		if (affectTxQueue) {
 			for (i = 0; i < adapter->num_tx_queues; i++)
···
 	netif_set_real_num_tx_queues(adapter->netdev, adapter->num_tx_queues);
 	netif_set_real_num_rx_queues(adapter->netdev, adapter->num_rx_queues);

+	netif_carrier_off(netdev);
 	err = register_netdev(netdev);

 	if (err) {
+2
drivers/net/wireless/ath/ath9k/ar9003_calib.c
···
 					  AR_PHY_CL_TAB_1,
 					  AR_PHY_CL_TAB_2 };

+	ar9003_hw_set_chain_masks(ah, ah->caps.rx_chainmask, ah->caps.tx_chainmask);
+
 	if (rtt) {
 		if (!ar9003_hw_rtt_restore(ah, chan))
 			run_rtt_cal = true;
···
 * @rx_oom_err:  No. of frames dropped due to OOM issues.
 * @rx_rate_err:  No. of frames dropped due to rate errors.
 * @rx_too_many_frags_err:  Frames dropped due to too-many-frags received.
- * @rx_drop_rxflush: No. of frames dropped due to RX-FLUSH.
 * @rx_beacons:  No. of beacons received.
 * @rx_frags:  No. of rx-fragements received.
 */
···
 	u32 rx_oom_err;
 	u32 rx_rate_err;
 	u32 rx_too_many_frags_err;
-	u32 rx_drop_rxflush;
 	u32 rx_beacons;
 	u32 rx_frags;
};
···

 	memset(&il->staging, 0, sizeof(il->staging));

-	if (!il->vif) {
+	switch (il->iw_mode) {
+	case NL80211_IFTYPE_UNSPECIFIED:
 		il->staging.dev_type = RXON_DEV_TYPE_ESS;
-	} else if (il->vif->type == NL80211_IFTYPE_STATION) {
+		break;
+	case NL80211_IFTYPE_STATION:
 		il->staging.dev_type = RXON_DEV_TYPE_ESS;
 		il->staging.filter_flags = RXON_FILTER_ACCEPT_GRP_MSK;
-	} else if (il->vif->type == NL80211_IFTYPE_ADHOC) {
+		break;
+	case NL80211_IFTYPE_ADHOC:
 		il->staging.dev_type = RXON_DEV_TYPE_IBSS;
 		il->staging.flags = RXON_FLG_SHORT_PREAMBLE_MSK;
 		il->staging.filter_flags =
 		    RXON_FILTER_BCON_AWARE_MSK | RXON_FILTER_ACCEPT_GRP_MSK;
-	} else {
+		break;
+	default:
 		IL_ERR("Unsupported interface type %d\n", il->vif->type);
 		return;
 	}
···
EXPORT_SYMBOL(il_mac_add_interface);

static void
-il_teardown_interface(struct il_priv *il, struct ieee80211_vif *vif,
-		      bool mode_change)
+il_teardown_interface(struct il_priv *il, struct ieee80211_vif *vif)
{
 	lockdep_assert_held(&il->mutex);
···
 		il_force_scan_end(il);
 	}

-	if (!mode_change)
-		il_set_mode(il);
-
+	il_set_mode(il);
}

void
···
 	WARN_ON(il->vif != vif);
 	il->vif = NULL;
-
-	il_teardown_interface(il, vif, false);
+	il->iw_mode = NL80211_IFTYPE_UNSPECIFIED;
+	il_teardown_interface(il, vif);
 	memset(il->bssid, 0, ETH_ALEN);

 	D_MAC80211("leave\n");
···
 	}

 	/* success */
-	il_teardown_interface(il, vif, true);
 	vif->type = newtype;
 	vif->p2p = false;
-	err = il_set_mode(il);
-	WARN_ON(err);
-	/*
-	 * We've switched internally, but submitting to the
-	 * device may have failed for some reason. Mask this
-	 * error, because otherwise mac80211 will not switch
-	 * (and set the interface type back) and we'll be
-	 * out of sync with it.
-	 */
+	il->iw_mode = newtype;
+	il_teardown_interface(il, vif);
 	err = 0;

out:
+9-17
drivers/net/wireless/iwlwifi/dvm/tx.c
···
{
 	u16 status = le16_to_cpu(tx_resp->status.status);

+	info->flags &= ~IEEE80211_TX_CTL_AMPDU;
+
 	info->status.rates[0].count = tx_resp->failure_frame + 1;
 	info->flags |= iwl_tx_status_to_mac80211(status);
 	iwlagn_hwrate_to_tx_control(priv, le32_to_cpu(tx_resp->rate_n_flags),
···
 			next_reclaimed = ssn;
 		}

+		if (tid != IWL_TID_NON_QOS) {
+			priv->tid_data[sta_id][tid].next_reclaimed =
+				next_reclaimed;
+			IWL_DEBUG_TX_REPLY(priv, "Next reclaimed packet:%d\n",
+					   next_reclaimed);
+		}
+
 		iwl_trans_reclaim(priv->trans, txq_id, ssn, &skbs);

 		iwlagn_check_ratid_empty(priv, sta_id, tid);
···
 			if (!is_agg)
 				iwlagn_non_agg_tx_status(priv, ctx, hdr->addr1);

-			/*
-			 * W/A for FW bug - the seq_ctl isn't updated when the
-			 * queues are flushed. Fetch it from the packet itself
-			 */
-			if (!is_agg && status == TX_STATUS_FAIL_FIFO_FLUSHED) {
-				next_reclaimed = le16_to_cpu(hdr->seq_ctrl);
-				next_reclaimed =
-					SEQ_TO_SN(next_reclaimed + 0x10);
-			}
-
 			is_offchannel_skb =
 				(info->flags & IEEE80211_TX_CTL_TX_OFFCHAN);
 			freed++;
-		}
-
-		if (tid != IWL_TID_NON_QOS) {
-			priv->tid_data[sta_id][tid].next_reclaimed =
-				next_reclaimed;
-			IWL_DEBUG_TX_REPLY(priv, "Next reclaimed packet:%d\n",
-					   next_reclaimed);
 		}

 		WARN_ON(!is_agg && freed != 1);
+2-15
drivers/net/wireless/mwifiex/cfg80211.c
···
 	struct cfg80211_ssid req_ssid;
 	int ret, auth_type = 0;
 	struct cfg80211_bss *bss = NULL;
-	u8 is_scanning_required = 0, config_bands = 0;
+	u8 is_scanning_required = 0;

 	memset(&req_ssid, 0, sizeof(struct cfg80211_ssid));
···

 	/* disconnect before try to associate */
 	mwifiex_deauthenticate(priv, NULL);
-
-	if (channel) {
-		if (mode == NL80211_IFTYPE_STATION) {
-			if (channel->band == IEEE80211_BAND_2GHZ)
-				config_bands = BAND_B | BAND_G | BAND_GN;
-			else
-				config_bands = BAND_A | BAND_AN;
-
-			if (!((config_bands | priv->adapter->fw_bands) &
-			      ~priv->adapter->fw_bands))
-				priv->adapter->config_bands = config_bands;
-		}
-	}

 	/* As this is new association, clear locally stored
 	 * keys and security related flags */
···

 		if (cfg80211_get_chandef_type(&params->chandef) !=
 		    NL80211_CHAN_NO_HT)
-			config_bands |= BAND_GN;
+			config_bands |= BAND_G | BAND_GN;
 	} else {
 		if (cfg80211_get_chandef_type(&params->chandef) ==
 		    NL80211_CHAN_NO_HT)
+1-1
drivers/net/wireless/mwifiex/pcie.c
···

 	if (pdev) {
 		card = (struct pcie_service_card *) pci_get_drvdata(pdev);
-		if (!card || card->adapter) {
+		if (!card || !card->adapter) {
 			pr_err("Card or adapter structure is not valid\n");
 			return 0;
 		}
+5-4
drivers/net/wireless/mwifiex/scan.c
···
 		dev_err(adapter->dev, "SCAN_RESP: too many AP returned (%d)\n",
 			scan_rsp->number_of_sets);
 		ret = -1;
-		goto done;
+		goto check_next_scan;
 	}

 	bytes_left = le16_to_cpu(scan_rsp->bss_descript_size);
···
 		if (!beacon_size || beacon_size > bytes_left) {
 			bss_info += bytes_left;
 			bytes_left = 0;
-			return -1;
+			ret = -1;
+			goto check_next_scan;
 		}

 		/* Initialize the current working beacon pointer for this BSS
···
 				dev_err(priv->adapter->dev,
 					"%s: bytes left < IE length\n",
 					__func__);
-				goto done;
+				goto check_next_scan;
 			}
 			if (element_id == WLAN_EID_DS_PARAMS) {
 				channel = *(current_ptr + sizeof(struct ieee_types_header));
···
 		}
 	}

+check_next_scan:
 	spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
 	if (list_empty(&adapter->scan_pending_q)) {
 		spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
···
 		}
 	}

-done:
 	return ret;
}
+14
drivers/net/wireless/mwifiex/sta_ioctl.c
···
 	if (ret)
 		goto done;

+	if (bss_desc) {
+		u8 config_bands = 0;
+
+		if (mwifiex_band_to_radio_type((u8) bss_desc->bss_band)
+		    == HostCmd_SCAN_RADIO_TYPE_BG)
+			config_bands = BAND_B | BAND_G | BAND_GN;
+		else
+			config_bands = BAND_A | BAND_AN;
+
+		if (!((config_bands | adapter->fw_bands) &
+		      ~adapter->fw_bands))
+			adapter->config_bands = config_bands;
+	}
+
 	ret = mwifiex_check_network_compatibility(priv, bss_desc);
 	if (ret)
 		goto done;
···
/* Notify xenvif that ring now has space to send an skb to the frontend */
void xenvif_notify_tx_completion(struct xenvif *vif);

+/* Prevent the device from generating any further traffic. */
+void xenvif_carrier_off(struct xenvif *vif);
+
/* Returns number of ring slots required to send an skb to the frontend */
unsigned int xen_netbk_count_skb_slots(struct xenvif *vif, struct sk_buff *skb);
···
 	atomic_dec(&netbk->netfront_count);
}

-static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx);
+static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx,
+				  u8 status);
static void make_tx_response(struct xenvif *vif,
			     struct xen_netif_tx_request *txp,
			     s8       st);
···
 	do {
 		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
-		if (cons >= end)
+		if (cons == end)
 			break;
 		txp = RING_GET_REQUEST(&vif->tx, cons++);
 	} while (1);
 	vif->tx.req_cons = cons;
 	xen_netbk_check_rx_xenvif(vif);
+	xenvif_put(vif);
+}
+
+static void netbk_fatal_tx_err(struct xenvif *vif)
+{
+	netdev_err(vif->dev, "fatal error; disabling device\n");
+	xenvif_carrier_off(vif);
 	xenvif_put(vif);
}
···
 	do {
 		if (frags >= work_to_do) {
-			netdev_dbg(vif->dev, "Need more frags\n");
+			netdev_err(vif->dev, "Need more frags\n");
+			netbk_fatal_tx_err(vif);
 			return -frags;
 		}

 		if (unlikely(frags >= MAX_SKB_FRAGS)) {
-			netdev_dbg(vif->dev, "Too many frags\n");
+			netdev_err(vif->dev, "Too many frags\n");
+			netbk_fatal_tx_err(vif);
 			return -frags;
 		}

 		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + frags),
 		       sizeof(*txp));
 		if (txp->size > first->size) {
-			netdev_dbg(vif->dev, "Frags galore\n");
+			netdev_err(vif->dev, "Frag is bigger than frame.\n");
+			netbk_fatal_tx_err(vif);
 			return -frags;
 		}
···
 		frags++;

 		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
-			netdev_dbg(vif->dev, "txp->offset: %x, size: %u\n",
+			netdev_err(vif->dev, "txp->offset: %x, size: %u\n",
 				   txp->offset, txp->size);
+			netbk_fatal_tx_err(vif);
 			return -frags;
 		}
 	} while ((txp++)->flags & XEN_NETTXF_more_data);
···
 		pending_idx = netbk->pending_ring[index];
 		page = xen_netbk_alloc_page(netbk, skb, pending_idx);
 		if (!page)
-			return NULL;
+			goto err;

 		gop->source.u.ref = txp->gref;
 		gop->source.domid = vif->domid;
···
 	}

 	return gop;
+err:
+	/* Unwind, freeing all pages and sending error responses. */
+	while (i-- > start) {
+		xen_netbk_idx_release(netbk, frag_get_pending_idx(&frags[i]),
+				      XEN_NETIF_RSP_ERROR);
+	}
+	/* The head too, if necessary. */
+	if (start)
+		xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_ERROR);
+
+	return NULL;
}

static int xen_netbk_tx_check_gop(struct xen_netbk *netbk,
···
{
 	struct gnttab_copy *gop = *gopp;
 	u16 pending_idx = *((u16 *)skb->data);
-	struct pending_tx_info *pending_tx_info = netbk->pending_tx_info;
-	struct xenvif *vif = pending_tx_info[pending_idx].vif;
-	struct xen_netif_tx_request *txp;
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
 	int i, err, start;

 	/* Check status of header. */
 	err = gop->status;
-	if (unlikely(err)) {
-		pending_ring_idx_t index;
-		index = pending_index(netbk->pending_prod++);
-		txp = &pending_tx_info[pending_idx].req;
-		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
-		netbk->pending_ring[index] = pending_idx;
-		xenvif_put(vif);
-	}
+	if (unlikely(err))
+		xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_ERROR);

 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);

 	for (i = start; i < nr_frags; i++) {
 		int j, newerr;
-		pending_ring_idx_t index;

 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
···
 		if (likely(!newerr)) {
 			/* Had a previous error? Invalidate this fragment. */
 			if (unlikely(err))
-				xen_netbk_idx_release(netbk, pending_idx);
+				xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY);
 			continue;
 		}

 		/* Error on this fragment: respond to client with an error. */
-		txp = &netbk->pending_tx_info[pending_idx].req;
-		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
-		index = pending_index(netbk->pending_prod++);
-		netbk->pending_ring[index] = pending_idx;
-		xenvif_put(vif);
+		xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_ERROR);

 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
···

 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
-		xen_netbk_idx_release(netbk, pending_idx);
+		xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
-			xen_netbk_idx_release(netbk, pending_idx);
+			xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY);
 		}

 		/* Remember the error: invalidate all subsequent fragments. */
···
 		/* Take an extra reference to offset xen_netbk_idx_release */
 		get_page(netbk->mmap_pages[pending_idx]);
-		xen_netbk_idx_release(netbk, pending_idx);
+		xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
}
···
 	do {
 		if (unlikely(work_to_do-- <= 0)) {
-			netdev_dbg(vif->dev, "Missing extra info\n");
+			netdev_err(vif->dev, "Missing extra info\n");
+			netbk_fatal_tx_err(vif);
 			return -EBADR;
 		}
···
 		if (unlikely(!extra.type ||
 			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
 			vif->tx.req_cons = ++cons;
-			netdev_dbg(vif->dev,
+			netdev_err(vif->dev,
 				   "Invalid extra type: %d\n", extra.type);
+			netbk_fatal_tx_err(vif);
 			return -EINVAL;
 		}
···
			struct xen_netif_extra_info *gso)
{
 	if (!gso->u.gso.size) {
-		netdev_dbg(vif->dev, "GSO size must not be zero.\n");
+		netdev_err(vif->dev, "GSO size must not be zero.\n");
+		netbk_fatal_tx_err(vif);
 		return -EINVAL;
 	}

 	/* Currently only TCPv4 S.O. is supported. */
 	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4) {
-		netdev_dbg(vif->dev, "Bad GSO type %d.\n", gso->u.gso.type);
+		netdev_err(vif->dev, "Bad GSO type %d.\n", gso->u.gso.type);
+		netbk_fatal_tx_err(vif);
 		return -EINVAL;
 	}
···

 		/* Get a netif from the list with work to do. */
 		vif = poll_net_schedule_list(netbk);
+		/* This can sometimes happen because the test of
+		 * list_empty(net_schedule_list) at the top of the
+		 * loop is unlocked. Just go back and have another
+		 * look.
+		 */
 		if (!vif)
 			continue;
+
+		if (vif->tx.sring->req_prod - vif->tx.req_cons >
+		    XEN_NETIF_TX_RING_SIZE) {
+			netdev_err(vif->dev,
+				   "Impossible number of requests. "
+				   "req_prod %d, req_cons %d, size %ld\n",
+				   vif->tx.sring->req_prod, vif->tx.req_cons,
+				   XEN_NETIF_TX_RING_SIZE);
+			netbk_fatal_tx_err(vif);
+			continue;
+		}

 		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, work_to_do);
 		if (!work_to_do) {
···
 			work_to_do = xen_netbk_get_extras(vif, extras,
 							  work_to_do);
 			idx = vif->tx.req_cons;
-			if (unlikely(work_to_do < 0)) {
-				netbk_tx_err(vif, &txreq, idx);
+			if (unlikely(work_to_do < 0))
 				continue;
-			}
 		}

 		ret = netbk_count_requests(vif, &txreq, txfrags, work_to_do);
-		if (unlikely(ret < 0)) {
-			netbk_tx_err(vif, &txreq, idx - ret);
+		if (unlikely(ret < 0))
 			continue;
-		}
+
 		idx += ret;

 		if (unlikely(txreq.size < ETH_HLEN)) {
···

 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
-			netdev_dbg(vif->dev,
+			netdev_err(vif->dev,
 				   "txreq.offset: %x, size: %u, end: %lu\n",
 				   txreq.offset, txreq.size,
 				   (txreq.offset&~PAGE_MASK) + txreq.size);
-			netbk_tx_err(vif, &txreq, idx);
+			netbk_fatal_tx_err(vif);
 			continue;
 		}
···
 			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];

 			if (netbk_set_skb_gso(vif, skb, gso)) {
+				/* Failure in netbk_set_skb_gso is fatal. */
 				kfree_skb(skb);
-				netbk_tx_err(vif, &txreq, idx);
 				continue;
 			}
 		}
···
 			txp->size -= data_len;
 		} else {
 			/* Schedule a response immediately. */
-			xen_netbk_idx_release(netbk, pending_idx);
+			xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY);
 		}

 		if (txp->flags & XEN_NETTXF_csum_blank)
···
 	xen_netbk_tx_submit(netbk);
}

-static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx)
+static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx,
+				  u8 status)
{
 	struct xenvif *vif;
 	struct pending_tx_info *pending_tx_info;
···
 	vif = pending_tx_info->vif;

-	make_tx_response(vif, &pending_tx_info->req, XEN_NETIF_RSP_OKAY);
+	make_tx_response(vif, &pending_tx_info->req, status);

 	index = pending_index(netbk->pending_prod++);
 	netbk->pending_ring[index] = pending_idx;
+2-3
drivers/pinctrl/Kconfig
···181181182182config PINCTRL_SAMSUNG183183 bool184184- depends on OF && GPIOLIB185184 select PINMUX186185 select PINCONF187186188188-config PINCTRL_EXYNOS4189189- bool "Pinctrl driver data for Exynos4 SoC"187187+config PINCTRL_EXYNOS188188+ bool "Pinctrl driver data for Samsung EXYNOS SoCs"190189 depends on OF && GPIOLIB191190 select PINCTRL_SAMSUNG192191
···599599}600600601601/* parse the pin numbers listed in the 'samsung,exynos5440-pins' property */602602-static int __init exynos5440_pinctrl_parse_dt_pins(struct platform_device *pdev,602602+static int exynos5440_pinctrl_parse_dt_pins(struct platform_device *pdev,603603 struct device_node *cfg_np, unsigned int **pin_list,604604 unsigned int *npins)605605{···630630 * Parse the information about all the available pin groups and pin functions631631 * from device node of the pin-controller.632632 */633633-static int __init exynos5440_pinctrl_parse_dt(struct platform_device *pdev,633633+static int exynos5440_pinctrl_parse_dt(struct platform_device *pdev,634634 struct exynos5440_pinctrl_priv_data *priv)635635{636636 struct device *dev = &pdev->dev;···723723}724724725725/* register the pinctrl interface with the pinctrl subsystem */726726-static int __init exynos5440_pinctrl_register(struct platform_device *pdev,726726+static int exynos5440_pinctrl_register(struct platform_device *pdev,727727 struct exynos5440_pinctrl_priv_data *priv)728728{729729 struct device *dev = &pdev->dev;···798798}799799800800/* register the gpiolib interface with the gpiolib subsystem */801801-static int __init exynos5440_gpiolib_register(struct platform_device *pdev,801801+static int exynos5440_gpiolib_register(struct platform_device *pdev,802802 struct exynos5440_pinctrl_priv_data *priv)803803{804804 struct gpio_chip *gc;···831831}832832833833/* unregister the gpiolib interface with the gpiolib subsystem */834834-static int __init exynos5440_gpiolib_unregister(struct platform_device *pdev,834834+static int exynos5440_gpiolib_unregister(struct platform_device *pdev,835835 struct exynos5440_pinctrl_priv_data *priv)836836{837837 int ret = gpiochip_remove(priv->gc);
+4-5
drivers/pinctrl/pinctrl-mxs.c
···146146static void mxs_dt_free_map(struct pinctrl_dev *pctldev,147147 struct pinctrl_map *map, unsigned num_maps)148148{149149- int i;149149+ u32 i;150150151151 for (i = 0; i < num_maps; i++) {152152 if (map[i].type == PIN_MAP_TYPE_MUX_GROUP)···203203 void __iomem *reg;204204 u8 bank, shift;205205 u16 pin;206206- int i;206206+ u32 i;207207208208 for (i = 0; i < g->npins; i++) {209209 bank = PINID_TO_BANK(g->pins[i]);···256256 void __iomem *reg;257257 u8 ma, vol, pull, bank, shift;258258 u16 pin;259259- int i;259259+ u32 i;260260261261 ma = CONFIG_TO_MA(config);262262 vol = CONFIG_TO_VOL(config);···345345 const char *propname = "fsl,pinmux-ids";346346 char *group;347347 int length = strlen(np->name) + SUFFIX_LEN;348348- int i;349349- u32 val;348348+ u32 val, i;350349351350 group = devm_kzalloc(&pdev->dev, length, GFP_KERNEL);352351 if (!group)
+1-1
drivers/pinctrl/pinctrl-nomadik.c
···676676}677677EXPORT_SYMBOL(nmk_gpio_set_mode);678678679679-static int nmk_prcm_gpiocr_get_mode(struct pinctrl_dev *pctldev, int gpio)679679+static int __maybe_unused nmk_prcm_gpiocr_get_mode(struct pinctrl_dev *pctldev, int gpio)680680{681681 int i;682682 u16 reg;
+2-77
drivers/pinctrl/pinctrl-single.c
···3030#define PCS_MUX_BITS_NAME "pinctrl-single,bits"3131#define PCS_REG_NAME_LEN ((sizeof(unsigned long) * 2) + 1)3232#define PCS_OFF_DISABLED ~0U3333-#define PCS_MAX_GPIO_VALUES 234333534/**3635 * struct pcs_pingroup - pingroups for a function···7475 const char **pgnames;7576 int npgnames;7677 struct list_head node;7777-};7878-7979-/**8080- * struct pcs_gpio_range - pinctrl gpio range8181- * @range: subrange of the GPIO number space8282- * @gpio_func: gpio function value in the pinmux register8383- */8484-struct pcs_gpio_range {8585- struct pinctrl_gpio_range range;8686- int gpio_func;8778};88798980/**···403414}404415405416static int pcs_request_gpio(struct pinctrl_dev *pctldev,406406- struct pinctrl_gpio_range *range, unsigned pin)417417+ struct pinctrl_gpio_range *range, unsigned offset)407418{408408- struct pcs_device *pcs = pinctrl_dev_get_drvdata(pctldev);409409- struct pcs_gpio_range *gpio = NULL;410410- int end, mux_bytes;411411- unsigned data;412412-413413- gpio = container_of(range, struct pcs_gpio_range, range);414414- end = range->pin_base + range->npins - 1;415415- if (pin < range->pin_base || pin > end) {416416- dev_err(pctldev->dev,417417- "pin %d isn't in the range of %d to %d\n",418418- pin, range->pin_base, end);419419- return -EINVAL;420420- }421421- mux_bytes = pcs->width / BITS_PER_BYTE;422422- data = pcs->read(pcs->base + pin * mux_bytes) & ~pcs->fmask;423423- data |= gpio->gpio_func;424424- pcs->write(data, pcs->base + pin * mux_bytes);425425- return 0;419419+ return -ENOTSUPP;426420}427421428422static struct pinmux_ops pcs_pinmux_ops = {···879907880908static struct of_device_id pcs_of_match[];881909882882-static int pcs_add_gpio_range(struct device_node *node, struct pcs_device *pcs)883883-{884884- struct pcs_gpio_range *gpio;885885- struct device_node *child;886886- struct resource r;887887- const char name[] = "pinctrl-single";888888- u32 gpiores[PCS_MAX_GPIO_VALUES];889889- int ret, i = 0, mux_bytes = 0;890890-891891- 
for_each_child_of_node(node, child) {892892- ret = of_address_to_resource(child, 0, &r);893893- if (ret < 0)894894- continue;895895- memset(gpiores, 0, sizeof(u32) * PCS_MAX_GPIO_VALUES);896896- ret = of_property_read_u32_array(child, "pinctrl-single,gpio",897897- gpiores, PCS_MAX_GPIO_VALUES);898898- if (ret < 0)899899- continue;900900- gpio = devm_kzalloc(pcs->dev, sizeof(*gpio), GFP_KERNEL);901901- if (!gpio) {902902- dev_err(pcs->dev, "failed to allocate pcs gpio\n");903903- return -ENOMEM;904904- }905905- gpio->range.name = devm_kzalloc(pcs->dev, sizeof(name),906906- GFP_KERNEL);907907- if (!gpio->range.name) {908908- dev_err(pcs->dev, "failed to allocate range name\n");909909- return -ENOMEM;910910- }911911- memcpy((char *)gpio->range.name, name, sizeof(name));912912-913913- gpio->range.id = i++;914914- gpio->range.base = gpiores[0];915915- gpio->gpio_func = gpiores[1];916916- mux_bytes = pcs->width / BITS_PER_BYTE;917917- gpio->range.pin_base = (r.start - pcs->res->start) / mux_bytes;918918- gpio->range.npins = (r.end - r.start) / mux_bytes + 1;919919-920920- pinctrl_add_gpio_range(pcs->pctl, &gpio->range);921921- }922922- return 0;923923-}924924-925910static int pcs_probe(struct platform_device *pdev)926911{927912 struct device_node *np = pdev->dev.of_node;···9741045 ret = -EINVAL;9751046 goto free;9761047 }977977-978978- ret = pcs_add_gpio_range(np, pcs);979979- if (ret < 0)980980- goto free;98110489821049 dev_info(pcs->dev, "%i pins at pa %p size %u\n",9831050 pcs->desc.npins, pcs->base, pcs->size);
···244244 if (force)245245 pr_warn("module loaded by force\n");246246 /* first ensure that we are running on IBM HW */247247- else if (efi_enabled || !dmi_check_system(ibm_rtl_dmi_table))247247+ else if (efi_enabled(EFI_BOOT) || !dmi_check_system(ibm_rtl_dmi_table))248248 return -ENODEV;249249250250 /* Get the address for the Extended BIOS Data Area */
+4
drivers/platform/x86/samsung-laptop.c
···2626#include <linux/seq_file.h>2727#include <linux/debugfs.h>2828#include <linux/ctype.h>2929+#include <linux/efi.h>2930#include <acpi/video.h>30313132/*···15441543{15451544 struct samsung_laptop *samsung;15461545 int ret;15461546+15471547+ if (efi_enabled(EFI_BOOT))15481548+ return -ENODEV;1547154915481550 quirks = &samsung_unknown;15491551 if (!force && !dmi_check_system(samsung_dmi_table))
···998998 return NULL;999999 }1000100010011001- ret = of_regulator_match(pdev->dev.parent, regulators, matches, count);10011001+ ret = of_regulator_match(&pdev->dev, regulators, matches, count);10021002 if (ret < 0) {10031003 dev_err(&pdev->dev, "Error parsing regulator init data: %d\n",10041004 ret);
+1-1
drivers/regulator/tps80031-regulator.c
···728728 }729729 }730730 rdev = regulator_register(&ri->rinfo->desc, &config);731731- if (IS_ERR_OR_NULL(rdev)) {731731+ if (IS_ERR(rdev)) {732732 dev_err(&pdev->dev,733733 "register regulator failed %s\n",734734 ri->rinfo->desc.name);
+3
drivers/rtc/rtc-isl1208.c
···506506{507507 unsigned long timeout = jiffies + msecs_to_jiffies(1000);508508 struct i2c_client *client = data;509509+ struct rtc_device *rtc = i2c_get_clientdata(client);509510 int handled = 0, sr, err;510511511512 /*···528527529528 if (sr & ISL1208_REG_SR_ALM) {530529 dev_dbg(&client->dev, "alarm!\n");530530+531531+ rtc_update_irq(rtc, 1, RTC_IRQF | RTC_AF);531532532533 /* Clear the alarm */533534 sr &= ~ISL1208_REG_SR_ALM;
+5-3
drivers/rtc/rtc-pl031.c
···4444#define RTC_YMR 0x34 /* Year match register */4545#define RTC_YLR 0x38 /* Year data load register */46464747+#define RTC_CR_EN (1 << 0) /* counter enable bit */4748#define RTC_CR_CWEN (1 << 26) /* Clockwatch enable bit */48494950#define RTC_TCR_EN (1 << 1) /* Periodic timer enable bit */···321320 struct pl031_local *ldata;322321 struct pl031_vendor_data *vendor = id->data;323322 struct rtc_class_ops *ops = &vendor->ops;324324- unsigned long time;323323+ unsigned long time, data;325324326325 ret = amba_request_regions(adev, NULL);327326 if (ret)···346345 dev_dbg(&adev->dev, "designer ID = 0x%02x\n", amba_manf(adev));347346 dev_dbg(&adev->dev, "revision = 0x%01x\n", amba_rev(adev));348347348348+ data = readl(ldata->base + RTC_CR);349349 /* Enable the clockwatch on ST Variants */350350 if (vendor->clockwatch)351351- writel(readl(ldata->base + RTC_CR) | RTC_CR_CWEN,352352- ldata->base + RTC_CR);351351+ data |= RTC_CR_CWEN;352352+ writel(data | RTC_CR_EN, ldata->base + RTC_CR);353353354354 /*355355 * On ST PL031 variants, the RTC reset value does not provide correct
···633633 return -ENOMEM;634634 pci_set_drvdata(pdev, pci_info);635635636636- if (efi_enabled)636636+ if (efi_enabled(EFI_RUNTIME_SERVICES))637637 orom = isci_get_efi_var(pdev);638638639639 if (!orom)
···443443444444void ssb_bus_unregister(struct ssb_bus *bus)445445{446446+ int err;447447+448448+ err = ssb_gpio_unregister(bus);449449+ if (err == -EBUSY)450450+ ssb_dprintk(KERN_ERR PFX "Some GPIOs are still in use.\n");451451+ else if (err)452452+ ssb_dprintk(KERN_ERR PFX453453+ "Can not unregister GPIO driver: %i\n", err);454454+446455 ssb_buses_lock();447456 ssb_devices_unregister(bus);448457 list_del(&bus->list);
+5
drivers/ssb/ssb_private.h
···252252253253#ifdef CONFIG_SSB_DRIVER_GPIO254254extern int ssb_gpio_init(struct ssb_bus *bus);255255+extern int ssb_gpio_unregister(struct ssb_bus *bus);255256#else /* CONFIG_SSB_DRIVER_GPIO */256257static inline int ssb_gpio_init(struct ssb_bus *bus)257258{258259 return -ENOTSUPP;260260+}261261+static inline int ssb_gpio_unregister(struct ssb_bus *bus)262262+{263263+ return 0;259264}260265#endif /* CONFIG_SSB_DRIVER_GPIO */261266
+7-1
drivers/target/target_core_device.c
···941941942942int se_dev_set_fabric_max_sectors(struct se_device *dev, u32 fabric_max_sectors)943943{944944+ int block_size = dev->dev_attrib.block_size;945945+944946 if (dev->export_count) {945947 pr_err("dev[%p]: Unable to change SE Device"946948 " fabric_max_sectors while export_count is %d\n",···980978 /*981979 * Align max_sectors down to PAGE_SIZE to follow transport_allocate_data_tasks()982980 */981981+ if (!block_size) {982982+ block_size = 512;983983+ pr_warn("Defaulting to 512 for zero block_size\n");984984+ }983985 fabric_max_sectors = se_dev_align_max_sectors(fabric_max_sectors,984984- dev->dev_attrib.block_size);986986+ block_size);985987986988 dev->dev_attrib.fabric_max_sectors = fabric_max_sectors;987989 pr_debug("dev[%p]: SE Device max_sectors changed to %u\n",
+5
drivers/target/target_core_fabric_configfs.c
···754754 return -EFAULT;755755 }756756757757+ if (!(dev->dev_flags & DF_CONFIGURED)) {758758+ pr_err("se_device not configured yet, cannot port link\n");759759+ return -ENODEV;760760+ }761761+757762 tpg_ci = &lun_ci->ci_parent->ci_group->cg_item;758763 se_tpg = container_of(to_config_group(tpg_ci),759764 struct se_portal_group, tpg_group);
···641641642642out:643643 rbuf = transport_kmap_data_sg(cmd);644644- if (!rbuf)645645- return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;646646-647647- memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length));648648- transport_kunmap_data_sg(cmd);644644+ if (rbuf) {645645+ memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length));646646+ transport_kunmap_data_sg(cmd);647647+ }649648650649 if (!ret)651650 target_complete_cmd(cmd, GOOD);···850851{851852 struct se_device *dev = cmd->se_dev;852853 char *cdb = cmd->t_task_cdb;853853- unsigned char *buf, *map_buf;854854+ unsigned char buf[SE_MODE_PAGE_BUF], *rbuf;854855 int type = dev->transport->get_device_type(dev);855856 int ten = (cmd->t_task_cdb[0] == MODE_SENSE_10);856857 bool dbd = !!(cdb[1] & 0x08);···862863 int ret;863864 int i;864865865865- map_buf = transport_kmap_data_sg(cmd);866866- if (!map_buf)867867- return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;868868- /*869869- * If SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC is not set, then we870870- * know we actually allocated a full page. 
Otherwise, if the871871- * data buffer is too small, allocate a temporary buffer so we872872- * don't have to worry about overruns in all our INQUIRY873873- * emulation handling.874874- */875875- if (cmd->data_length < SE_MODE_PAGE_BUF &&876876- (cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC)) {877877- buf = kzalloc(SE_MODE_PAGE_BUF, GFP_KERNEL);878878- if (!buf) {879879- transport_kunmap_data_sg(cmd);880880- return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;881881- }882882- } else {883883- buf = map_buf;884884- }866866+ memset(buf, 0, SE_MODE_PAGE_BUF);867867+885868 /*886869 * Skip over MODE DATA LENGTH + MEDIUM TYPE fields to byte 3 for887870 * MODE_SENSE_10 and byte 2 for MODE_SENSE (6).···915934 if (page == 0x3f) {916935 if (subpage != 0x00 && subpage != 0xff) {917936 pr_warn("MODE_SENSE: Invalid subpage code: 0x%02x\n", subpage);918918- kfree(buf);919919- transport_kunmap_data_sg(cmd);920937 return TCM_INVALID_CDB_FIELD;921938 }922939···951972 pr_err("MODE SENSE: unimplemented page/subpage: 0x%02x/0x%02x\n",952973 page, subpage);953974954954- transport_kunmap_data_sg(cmd);955975 return TCM_UNKNOWN_MODE_PAGE;956976957977set_length:···959981 else960982 buf[0] = length - 1;961983962962- if (buf != map_buf) {963963- memcpy(map_buf, buf, cmd->data_length);964964- kfree(buf);984984+ rbuf = transport_kmap_data_sg(cmd);985985+ if (rbuf) {986986+ memcpy(rbuf, buf, min_t(u32, SE_MODE_PAGE_BUF, cmd->data_length));987987+ transport_kunmap_data_sg(cmd);965988 }966989967967- transport_kunmap_data_sg(cmd);968990 target_complete_cmd(cmd, GOOD);969991 return 0;970992}
+44
drivers/usb/core/hcd.c
···3939#include <asm/unaligned.h>4040#include <linux/platform_device.h>4141#include <linux/workqueue.h>4242+#include <linux/pm_runtime.h>42434344#include <linux/usb.h>4445#include <linux/usb/hcd.h>···10261025 return retval;10271026}1028102710281028+/*10291029+ * usb_hcd_start_port_resume - a root-hub port is sending a resume signal10301030+ * @bus: the bus which the root hub belongs to10311031+ * @portnum: the port which is being resumed10321032+ *10331033+ * HCDs should call this function when they know that a resume signal is10341034+ * being sent to a root-hub port. The root hub will be prevented from10351035+ * going into autosuspend until usb_hcd_end_port_resume() is called.10361036+ *10371037+ * The bus's private lock must be held by the caller.10381038+ */10391039+void usb_hcd_start_port_resume(struct usb_bus *bus, int portnum)10401040+{10411041+ unsigned bit = 1 << portnum;10421042+10431043+ if (!(bus->resuming_ports & bit)) {10441044+ bus->resuming_ports |= bit;10451045+ pm_runtime_get_noresume(&bus->root_hub->dev);10461046+ }10471047+}10481048+EXPORT_SYMBOL_GPL(usb_hcd_start_port_resume);10491049+10501050+/*10511051+ * usb_hcd_end_port_resume - a root-hub port has stopped sending a resume signal10521052+ * @bus: the bus which the root hub belongs to10531053+ * @portnum: the port which is being resumed10541054+ *10551055+ * HCDs should call this function when they know that a resume signal has10561056+ * stopped being sent to a root-hub port. 
The root hub will be allowed to10571057+ * autosuspend again.10581058+ *10591059+ * The bus's private lock must be held by the caller.10601060+ */10611061+void usb_hcd_end_port_resume(struct usb_bus *bus, int portnum)10621062+{10631063+ unsigned bit = 1 << portnum;10641064+10651065+ if (bus->resuming_ports & bit) {10661066+ bus->resuming_ports &= ~bit;10671067+ pm_runtime_put_noidle(&bus->root_hub->dev);10681068+ }10691069+}10701070+EXPORT_SYMBOL_GPL(usb_hcd_end_port_resume);1029107110301072/*-------------------------------------------------------------------------*/10311073
+52-18
drivers/usb/core/hub.c
···28382838EXPORT_SYMBOL_GPL(usb_enable_ltm);2839283928402840#ifdef CONFIG_USB_SUSPEND28412841+/*28422842+ * usb_disable_function_remotewakeup - disable usb3.028432843+ * device's function remote wakeup28442844+ * @udev: target device28452845+ *28462846+ * Assume there's only one function on the USB 3.028472847+ * device and disable remote wake for the first28482848+ * interface. FIXME if the interface association28492849+ * descriptor shows there's more than one function.28502850+ */28512851+static int usb_disable_function_remotewakeup(struct usb_device *udev)28522852+{28532853+ return usb_control_msg(udev, usb_sndctrlpipe(udev, 0),28542854+ USB_REQ_CLEAR_FEATURE, USB_RECIP_INTERFACE,28552855+ USB_INTRF_FUNC_SUSPEND, 0, NULL, 0,28562856+ USB_CTRL_SET_TIMEOUT);28572857+}2841285828422859/*28432860 * usb_port_suspend - suspend a usb device's upstream port···29722955 dev_dbg(hub->intfdev, "can't suspend port %d, status %d\n",29732956 port1, status);29742957 /* paranoia: "should not happen" */29752975- if (udev->do_remote_wakeup)29762976- (void) usb_control_msg(udev, usb_sndctrlpipe(udev, 0),29772977- USB_REQ_CLEAR_FEATURE, USB_RECIP_DEVICE,29782978- USB_DEVICE_REMOTE_WAKEUP, 0,29792979- NULL, 0,29802980- USB_CTRL_SET_TIMEOUT);29582958+ if (udev->do_remote_wakeup) {29592959+ if (!hub_is_superspeed(hub->hdev)) {29602960+ (void) usb_control_msg(udev,29612961+ usb_sndctrlpipe(udev, 0),29622962+ USB_REQ_CLEAR_FEATURE,29632963+ USB_RECIP_DEVICE,29642964+ USB_DEVICE_REMOTE_WAKEUP, 0,29652965+ NULL, 0,29662966+ USB_CTRL_SET_TIMEOUT);29672967+ } else29682968+ (void) usb_disable_function_remotewakeup(udev);29692969+29702970+ }2981297129822972 /* Try to enable USB2 hardware LPM again */29832973 if (udev->usb2_hw_lpm_capable == 1)···30763052 * udev->reset_resume30773053 */30783054 } else if (udev->actconfig && !udev->reset_resume) {30793079- le16_to_cpus(&devstatus);30803080- if (devstatus & (1 << USB_DEVICE_REMOTE_WAKEUP)) {30813081- status = usb_control_msg(udev,30823082- 
usb_sndctrlpipe(udev, 0),30833083- USB_REQ_CLEAR_FEATURE,30553055+ if (!hub_is_superspeed(udev->parent)) {30563056+ le16_to_cpus(&devstatus);30573057+ if (devstatus & (1 << USB_DEVICE_REMOTE_WAKEUP))30583058+ status = usb_control_msg(udev,30593059+ usb_sndctrlpipe(udev, 0),30603060+ USB_REQ_CLEAR_FEATURE,30843061 USB_RECIP_DEVICE,30853085- USB_DEVICE_REMOTE_WAKEUP, 0,30863086- NULL, 0,30873087- USB_CTRL_SET_TIMEOUT);30883088- if (status)30893089- dev_dbg(&udev->dev,30903090- "disable remote wakeup, status %d\n",30913091- status);30623062+ USB_DEVICE_REMOTE_WAKEUP, 0,30633063+ NULL, 0,30643064+ USB_CTRL_SET_TIMEOUT);30653065+ } else {30663066+ status = usb_get_status(udev, USB_RECIP_INTERFACE, 0,30673067+ &devstatus);30683068+ le16_to_cpus(&devstatus);30693069+ if (!status && devstatus & (USB_INTRF_STAT_FUNC_RW_CAP30703070+ | USB_INTRF_STAT_FUNC_RW))30713071+ status =30723072+ usb_disable_function_remotewakeup(udev);30923073 }30743074+30753075+ if (status)30763076+ dev_dbg(&udev->dev,30773077+ "disable remote wakeup, status %d\n",30783078+ status);30933079 status = 0;30943080 }30953081 return status;
···649649 status = STS_PCD;650650 }651651 }652652- /* FIXME autosuspend idle root hubs */652652+653653+ /* If a resume is in progress, make sure it can finish */654654+ if (ehci->resuming_ports)655655+ mod_timer(&hcd->rh_timer, jiffies + msecs_to_jiffies(25));656656+653657 spin_unlock_irqrestore (&ehci->lock, flags);654658 return status ? retval : 0;655659}···855851 /* resume signaling for 20 msec */856852 ehci->reset_done[wIndex] = jiffies857853 + msecs_to_jiffies(20);854854+ usb_hcd_start_port_resume(&hcd->self, wIndex);858855 /* check the port again */859856 mod_timer(&ehci_to_hcd(ehci)->rh_timer,860857 ehci->reset_done[wIndex]);···867862 clear_bit(wIndex, &ehci->suspended_ports);868863 set_bit(wIndex, &ehci->port_c_suspend);869864 ehci->reset_done[wIndex] = 0;865865+ usb_hcd_end_port_resume(&hcd->self, wIndex);870866871867 /* stop resume signaling */872868 temp = ehci_readl(ehci, status_reg);···956950 ehci->reset_done[wIndex] = 0;957951 if (temp & PORT_PE)958952 set_bit(wIndex, &ehci->port_c_suspend);953953+ usb_hcd_end_port_resume(&hcd->self, wIndex);959954 }960955961956 if (temp & PORT_OC)
+30-20
drivers/usb/host/ehci-q.c
···11971197 if (ehci->async_iaa || ehci->async_unlinking)11981198 return;1199119912001200- /* Do all the waiting QHs at once */12011201- ehci->async_iaa = ehci->async_unlink;12021202- ehci->async_unlink = NULL;12031203-12041200 /* If the controller isn't running, we don't have to wait for it */12051201 if (unlikely(ehci->rh_state < EHCI_RH_RUNNING)) {12021202+12031203+ /* Do all the waiting QHs */12041204+ ehci->async_iaa = ehci->async_unlink;12051205+ ehci->async_unlink = NULL;12061206+12061207 if (!nested) /* Avoid recursion */12071208 end_unlink_async(ehci);1208120912091210 /* Otherwise start a new IAA cycle */12101211 } else if (likely(ehci->rh_state == EHCI_RH_RUNNING)) {12121212+ struct ehci_qh *qh;12131213+12141214+ /* Do only the first waiting QH (nVidia bug?) */12151215+ qh = ehci->async_unlink;12161216+ ehci->async_iaa = qh;12171217+ ehci->async_unlink = qh->unlink_next;12181218+ qh->unlink_next = NULL;12191219+12111220 /* Make sure the unlinks are all visible to the hardware */12121221 wmb();12131222···12641255 }12651256}1266125712581258+static void start_unlink_async(struct ehci_hcd *ehci, struct ehci_qh *qh);12591259+12671260static void unlink_empty_async(struct ehci_hcd *ehci)12681261{12691269- struct ehci_qh *qh, *next;12701270- bool stopped = (ehci->rh_state < EHCI_RH_RUNNING);12621262+ struct ehci_qh *qh;12631263+ struct ehci_qh *qh_to_unlink = NULL;12711264 bool check_unlinks_later = false;12651265+ int count = 0;1272126612731273- /* Unlink all the async QHs that have been empty for a timer cycle */12741274- next = ehci->async->qh_next.qh;12751275- while (next) {12761276- qh = next;12771277- next = qh->qh_next.qh;12781278-12671267+ /* Find the last async QH which has been empty for a timer cycle */12681268+ for (qh = ehci->async->qh_next.qh; qh; qh = qh->qh_next.qh) {12791269 if (list_empty(&qh->qtd_list) &&12801270 qh->qh_state == QH_STATE_LINKED) {12811281- if (!stopped && qh->unlink_cycle ==12821282- ehci->async_unlink_cycle)12711271+ 
++count;12721272+ if (qh->unlink_cycle == ehci->async_unlink_cycle)12831273 check_unlinks_later = true;12841274 else12851285- single_unlink_async(ehci, qh);12751275+ qh_to_unlink = qh;12861276 }12871277 }1288127812891289- /* Start a new IAA cycle if any QHs are waiting for it */12901290- if (ehci->async_unlink)12911291- start_iaa_cycle(ehci, false);12791279+ /* If nothing else is being unlinked, unlink the last empty QH */12801280+ if (!ehci->async_iaa && !ehci->async_unlink && qh_to_unlink) {12811281+ start_unlink_async(ehci, qh_to_unlink);12821282+ --count;12831283+ }1292128412931293- /* QHs that haven't been empty for long enough will be handled later */12941294- if (check_unlinks_later) {12851285+ /* Other QHs will be handled later */12861286+ if (count > 0) {12951287 ehci_enable_event(ehci, EHCI_HRTIMER_ASYNC_UNLINKS, true);12961288 ++ehci->async_unlink_cycle;12971289 }
+6-3
drivers/usb/host/ehci-sched.c
···213213}214214215215static const unsigned char216216-max_tt_usecs[] = { 125, 125, 125, 125, 125, 125, 30, 0 };216216+max_tt_usecs[] = { 125, 125, 125, 125, 125, 125, 125, 25 };217217218218/* carryover low/fullspeed bandwidth that crosses uframe boundries */219219static inline void carryover_tt_bandwidth(unsigned short tt_usecs[8])···22122212 }22132213 ehci->now_frame = now_frame;2214221422152215+ frame = ehci->last_iso_frame;22152216 for (;;) {22162217 union ehci_shadow q, *q_p;22172218 __hc32 type, *hw_p;2218221922192219- frame = ehci->last_iso_frame;22202220restart:22212221 /* scan each element in frame's queue for completions */22222222 q_p = &ehci->pshadow [frame];···23212321 /* Stop when we have reached the current frame */23222322 if (frame == now_frame)23232323 break;23242324- ehci->last_iso_frame = (frame + 1) & fmask;23242324+23252325+ /* The last frame may still have active siTDs */23262326+ ehci->last_iso_frame = frame;23272327+ frame = (frame + 1) & fmask;23252328 }23262329}
+15-14
drivers/usb/host/ehci-timer.c
···113113114114 if (want != actual) {115115116116- /* Poll again later, but give up after about 20 ms */117117- if (ehci->ASS_poll_count++ < 20) {118118- ehci_enable_event(ehci, EHCI_HRTIMER_POLL_ASS, true);119119- return;120120- }121121- ehci_dbg(ehci, "Waited too long for the async schedule status (%x/%x), giving up\n",122122- want, actual);116116+ /* Poll again later */117117+ ehci_enable_event(ehci, EHCI_HRTIMER_POLL_ASS, true);118118+ ++ehci->ASS_poll_count;119119+ return;123120 }121121+122122+ if (ehci->ASS_poll_count > 20)123123+ ehci_dbg(ehci, "ASS poll count reached %d\n",124124+ ehci->ASS_poll_count);124125 ehci->ASS_poll_count = 0;125126126127 /* The status is up-to-date; restart or stop the schedule as needed */···160159161160 if (want != actual) {162161163163- /* Poll again later, but give up after about 20 ms */164164- if (ehci->PSS_poll_count++ < 20) {165165- ehci_enable_event(ehci, EHCI_HRTIMER_POLL_PSS, true);166166- return;167167- }168168- ehci_dbg(ehci, "Waited too long for the periodic schedule status (%x/%x), giving up\n",169169- want, actual);162162+ /* Poll again later */163163+ ehci_enable_event(ehci, EHCI_HRTIMER_POLL_PSS, true);164164+ return;170165 }166166+167167+ if (ehci->PSS_poll_count > 20)168168+ ehci_dbg(ehci, "PSS poll count reached %d\n",169169+ ehci->PSS_poll_count);171170 ehci->PSS_poll_count = 0;172171173172 /* The status is up-to-date; restart or stop the schedule as needed */
+1
drivers/usb/host/pci-quirks.c
···780780 "defaulting to EHCI.\n");781781 dev_warn(&xhci_pdev->dev,782782 "USB 3.0 devices will work at USB 2.0 speeds.\n");783783+ usb_disable_xhci_ports(xhci_pdev);783784 return;784785 }785786
+3
drivers/usb/host/uhci-hub.c
···116116 }117117 }118118 clear_bit(port, &uhci->resuming_ports);119119+ usb_hcd_end_port_resume(&uhci_to_hcd(uhci)->self, port);119120}120121121122/* Wait for the UHCI controller in HP's iLO2 server management chip.···168167 set_bit(port, &uhci->resuming_ports);169168 uhci->ports_timeout = jiffies +170169 msecs_to_jiffies(25);170170+ usb_hcd_start_port_resume(171171+ &uhci_to_hcd(uhci)->self, port);171172172173 /* Make sure we see the port again173174 * after the resuming period is over. */
+9-4
drivers/usb/host/xhci-ring.c
···16981698 faked_port_index + 1);16991699 if (slot_id && xhci->devs[slot_id])17001700 xhci_ring_device(xhci, slot_id);17011701- if (bus_state->port_remote_wakeup && (1 << faked_port_index)) {17011701+ if (bus_state->port_remote_wakeup & (1 << faked_port_index)) {17021702 bus_state->port_remote_wakeup &=17031703 ~(1 << faked_port_index);17041704 xhci_test_and_clear_bit(xhci, port_array,···25892589 (trb_comp_code != COMP_STALL &&25902590 trb_comp_code != COMP_BABBLE))25912591 xhci_urb_free_priv(xhci, urb_priv);25922592+ else25932593+ kfree(urb_priv);2592259425932595 usb_hcd_unlink_urb_from_ep(bus_to_hcd(urb->dev->bus), urb);25942596 if ((urb->actual_length != urb->transfer_buffer_length &&···31103108 * running_total.31113109 */31123110 packets_transferred = (running_total + trb_buff_len) /31133113- usb_endpoint_maxp(&urb->ep->desc);31113111+ GET_MAX_PACKET(usb_endpoint_maxp(&urb->ep->desc));3114311231153113 if ((total_packet_count - packets_transferred) > 31)31163114 return 31 << 17;···36443642 td_len = urb->iso_frame_desc[i].length;36453643 td_remain_len = td_len;36463644 total_packet_count = DIV_ROUND_UP(td_len,36473647- usb_endpoint_maxp(&urb->ep->desc));36453645+ GET_MAX_PACKET(36463646+ usb_endpoint_maxp(&urb->ep->desc)));36483647 /* A zero-length transfer still involves at least one packet. */36493648 if (total_packet_count == 0)36503649 total_packet_count++;···36673664 td = urb_priv->td[i];36683665 for (j = 0; j < trbs_per_td; j++) {36693666 u32 remainder = 0;36703670- field = TRB_TBC(burst_count) | TRB_TLBPC(residue);36673667+ field = 0;3671366836723669 if (first_trb) {36703670+ field = TRB_TBC(burst_count) |36713671+ TRB_TLBPC(residue);36733672 /* Queue the isoc TRB */36743673 field |= TRB_TYPE(TRB_ISOC);36753674 /* Assume URB_ISO_ASAP is set */
···147147#define XSENS_CONVERTER_6_PID 0xD38E148148#define XSENS_CONVERTER_7_PID 0xD38F149149150150+/**151151+ * Zolix (www.zolix.com.cb) product ids152152+ */153153+#define FTDI_OMNI1509 0xD491 /* Omni1509 embedded USB-serial */154154+150155/*151156 * NDI (www.ndigital.com) product ids152157 */···209204210205/*211206 * ELV USB devices submitted by Christian Abt of ELV (www.elv.de).212212- * All of these devices use FTDI's vendor ID (0x0403).207207+ * Almost all of these devices use FTDI's vendor ID (0x0403).213208 * Further IDs taken from ELV Windows .inf file.214209 *215210 * The previously included PID for the UO 100 module was incorrect.···217212 *218213 * Armin Laeuger originally sent the PID for the UM 100 module.219214 */215215+#define FTDI_ELV_VID 0x1B1F /* ELV AG */216216+#define FTDI_ELV_WS300_PID 0xC006 /* eQ3 WS 300 PC II */220217#define FTDI_ELV_USR_PID 0xE000 /* ELV Universal-Sound-Recorder */221218#define FTDI_ELV_MSM1_PID 0xE001 /* ELV Mini-Sound-Modul */222219#define FTDI_ELV_KL100_PID 0xE002 /* ELV Kfz-Leistungsmesser KL 100 */
···9292 return 0;9393}94949595-/* This places the HUAWEI E220 devices in multi-port mode */9696-int usb_stor_huawei_e220_init(struct us_data *us)9595+/* This places the HUAWEI usb dongles in multi-port mode */9696+static int usb_stor_huawei_feature_init(struct us_data *us)9797{9898 int result;9999···103103 0x01, 0x0, NULL, 0x0, 1000);104104 US_DEBUGP("Huawei mode set result is %d\n", result);105105 return 0;106106+}107107+108108+/*109109+ * It will send a scsi switch command called rewind' to huawei dongle.110110+ * When the dongle receives this command at the first time,111111+ * it will reboot immediately. After rebooted, it will ignore this command.112112+ * So it is unnecessary to read its response.113113+ */114114+static int usb_stor_huawei_scsi_init(struct us_data *us)115115+{116116+ int result = 0;117117+ int act_len = 0;118118+ struct bulk_cb_wrap *bcbw = (struct bulk_cb_wrap *) us->iobuf;119119+ char rewind_cmd[] = {0x11, 0x06, 0x20, 0x00, 0x00, 0x01, 0x01, 0x00,120120+ 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};121121+122122+ bcbw->Signature = cpu_to_le32(US_BULK_CB_SIGN);123123+ bcbw->Tag = 0;124124+ bcbw->DataTransferLength = 0;125125+ bcbw->Flags = bcbw->Lun = 0;126126+ bcbw->Length = sizeof(rewind_cmd);127127+ memset(bcbw->CDB, 0, sizeof(bcbw->CDB));128128+ memcpy(bcbw->CDB, rewind_cmd, sizeof(rewind_cmd));129129+130130+ result = usb_stor_bulk_transfer_buf(us, us->send_bulk_pipe, bcbw,131131+ US_BULK_CB_WRAP_LEN, &act_len);132132+ US_DEBUGP("transfer actual length=%d, result=%d\n", act_len, result);133133+ return result;134134+}135135+136136+/*137137+ * It tries to find the supported Huawei USB dongles.138138+ * In Huawei, they assign the following product IDs139139+ * for all of their mobile broadband dongles,140140+ * including the new dongles in the future.141141+ * So if the product ID is not included in this list,142142+ * it means it is not Huawei's mobile broadband dongles.143143+ */144144+static int usb_stor_huawei_dongles_pid(struct 
us_data *us)145145+{146146+ struct usb_interface_descriptor *idesc;147147+ int idProduct;148148+149149+ idesc = &us->pusb_intf->cur_altsetting->desc;150150+ idProduct = us->pusb_dev->descriptor.idProduct;151151+ /* If the first interface is the CDROM,152152+ * the dongle is in single-port mode153153+ * and a switch command must be sent. */154154+ if (idesc && idesc->bInterfaceNumber == 0) {155155+ if ((idProduct == 0x1001)156156+ || (idProduct == 0x1003)157157+ || (idProduct == 0x1004)158158+ || (idProduct >= 0x1401 && idProduct <= 0x1500)159159+ || (idProduct >= 0x1505 && idProduct <= 0x1600)160160+ || (idProduct >= 0x1c02 && idProduct <= 0x2202)) {161161+ return 1;162162+ }163163+ }164164+ return 0;165165+}166166+167167+int usb_stor_huawei_init(struct us_data *us)168168+{169169+ int result = 0;170170+171171+ if (usb_stor_huawei_dongles_pid(us)) {172172+ if (us->pusb_dev->descriptor.idProduct >= 0x1446)173173+ result = usb_stor_huawei_scsi_init(us);174174+ else175175+ result = usb_stor_huawei_feature_init(us);176176+ }177177+ return result;106178}
+2-2
drivers/usb/storage/initializers.h
···4646 * flash reader */4747int usb_stor_ucr61s2b_init(struct us_data *us);48484949-/* This places the HUAWEI E220 devices in multi-port mode */5050-int usb_stor_huawei_e220_init(struct us_data *us);4949+/* This places the HUAWEI usb dongles in multi-port mode */5050+int usb_stor_huawei_init(struct us_data *us);
···4141#define USUAL_DEV(useProto, useTrans) \4242{ USB_INTERFACE_INFO(USB_CLASS_MASS_STORAGE, useProto, useTrans) }43434444+/* Define the device is matched with Vendor ID and interface descriptors */4545+#define UNUSUAL_VENDOR_INTF(id_vendor, cl, sc, pr, \4646+ vendorName, productName, useProtocol, useTransport, \4747+ initFunction, flags) \4848+{ \4949+ .match_flags = USB_DEVICE_ID_MATCH_INT_INFO \5050+ | USB_DEVICE_ID_MATCH_VENDOR, \5151+ .idVendor = (id_vendor), \5252+ .bInterfaceClass = (cl), \5353+ .bInterfaceSubClass = (sc), \5454+ .bInterfaceProtocol = (pr), \5555+ .driver_info = (flags) \5656+}5757+4458struct usb_device_id usb_storage_usb_ids[] = {4559# include "unusual_devs.h"4660 { } /* Terminating entry */···6450#undef UNUSUAL_DEV6551#undef COMPLIANT_DEV6652#undef USUAL_DEV5353+#undef UNUSUAL_VENDOR_INTF67546855/*6956 * The table of devices to ignore
+28-13
drivers/vhost/net.c
···165165}166166167167/* Caller must have TX VQ lock */168168-static void tx_poll_start(struct vhost_net *net, struct socket *sock)168168+static int tx_poll_start(struct vhost_net *net, struct socket *sock)169169{170170+ int ret;171171+170172 if (unlikely(net->tx_poll_state != VHOST_NET_POLL_STOPPED))171171- return;172172- vhost_poll_start(net->poll + VHOST_NET_VQ_TX, sock->file);173173- net->tx_poll_state = VHOST_NET_POLL_STARTED;173173+ return 0;174174+ ret = vhost_poll_start(net->poll + VHOST_NET_VQ_TX, sock->file);175175+ if (!ret)176176+ net->tx_poll_state = VHOST_NET_POLL_STARTED;177177+ return ret;174178}175179176180/* In case of DMA done not in order in lower device driver for some reason.···646642 vhost_poll_stop(n->poll + VHOST_NET_VQ_RX);647643}648644649649-static void vhost_net_enable_vq(struct vhost_net *n,645645+static int vhost_net_enable_vq(struct vhost_net *n,650646 struct vhost_virtqueue *vq)651647{652648 struct socket *sock;649649+ int ret;653650654651 sock = rcu_dereference_protected(vq->private_data,655652 lockdep_is_held(&vq->mutex));656653 if (!sock)657657- return;654654+ return 0;658655 if (vq == n->vqs + VHOST_NET_VQ_TX) {659656 n->tx_poll_state = VHOST_NET_POLL_STOPPED;660660- tx_poll_start(n, sock);657657+ ret = tx_poll_start(n, sock);661658 } else662662- vhost_poll_start(n->poll + VHOST_NET_VQ_RX, sock->file);659659+ ret = vhost_poll_start(n->poll + VHOST_NET_VQ_RX, sock->file);660660+661661+ return ret;663662}664663665664static struct socket *vhost_net_stop_vq(struct vhost_net *n,···834827 r = PTR_ERR(ubufs);835828 goto err_ubufs;836829 }837837- oldubufs = vq->ubufs;838838- vq->ubufs = ubufs;830830+839831 vhost_net_disable_vq(n, vq);840832 rcu_assign_pointer(vq->private_data, sock);841841- vhost_net_enable_vq(n, vq);842842-843833 r = vhost_init_used(vq);844834 if (r)845845- goto err_vq;835835+ goto err_used;836836+ r = vhost_net_enable_vq(n, vq);837837+ if (r)838838+ goto err_used;839839+840840+ oldubufs = vq->ubufs;841841+ vq->ubufs = 
ubufs;846842847843 n->tx_packets = 0;848844 n->tx_zcopy_err = 0;···869859 mutex_unlock(&n->dev.mutex);870860 return 0;871861862862+err_used:863863+ rcu_assign_pointer(vq->private_data, oldsock);864864+ vhost_net_enable_vq(n, vq);865865+ if (ubufs)866866+ vhost_ubuf_put_and_wait(ubufs);872867err_ubufs:873868 fput(sock->file);874869err_vq:
+1-3
drivers/vhost/tcm_vhost.c
···575575576576 /* Must use ioctl VHOST_SCSI_SET_ENDPOINT */577577 tv_tpg = vs->vs_tpg;578578- if (unlikely(!tv_tpg)) {579579- pr_err("%s endpoint not set\n", __func__);578578+ if (unlikely(!tv_tpg))580579 return;581581- }582580583581 mutex_lock(&vq->mutex);584582 vhost_disable_notify(&vs->dev, vq);
+15-3
drivers/vhost/vhost.c
···7777 init_poll_funcptr(&poll->table, vhost_poll_func);7878 poll->mask = mask;7979 poll->dev = dev;8080+ poll->wqh = NULL;80818182 vhost_work_init(&poll->work, fn);8283}83848485/* Start polling a file. We add ourselves to file's wait queue. The caller must8586 * keep a reference to a file until after vhost_poll_stop is called. */8686-void vhost_poll_start(struct vhost_poll *poll, struct file *file)8787+int vhost_poll_start(struct vhost_poll *poll, struct file *file)8788{8889 unsigned long mask;9090+ int ret = 0;89919092 mask = file->f_op->poll(file, &poll->table);9193 if (mask)9294 vhost_poll_wakeup(&poll->wait, 0, 0, (void *)mask);9595+ if (mask & POLLERR) {9696+ if (poll->wqh)9797+ remove_wait_queue(poll->wqh, &poll->wait);9898+ ret = -EINVAL;9999+ }100100+101101+ return ret;93102}9410395104/* Stop polling a file. After this function returns, it becomes safe to drop the96105 * file reference. You must also flush afterwards. */97106void vhost_poll_stop(struct vhost_poll *poll)98107{9999- remove_wait_queue(poll->wqh, &poll->wait);108108+ if (poll->wqh) {109109+ remove_wait_queue(poll->wqh, &poll->wait);110110+ poll->wqh = NULL;111111+ }100112}101113102114static bool vhost_work_seq_done(struct vhost_dev *dev, struct vhost_work *work,···804792 fput(filep);805793806794 if (pollstart && vq->handle_kick)807807- vhost_poll_start(&vq->poll, vq->kick);795795+ r = vhost_poll_start(&vq->poll, vq->kick);808796809797 mutex_unlock(&vq->mutex);810798
···840840841841 if (irq == -1) {842842 irq = xen_allocate_irq_dynamic();843843- if (irq == -1)843843+ if (irq < 0)844844 goto out;845845846846 irq_set_chip_and_handler_name(irq, &xen_dynamic_chip,···944944945945 if (irq == -1) {946946 irq = xen_allocate_irq_dynamic();947947- if (irq == -1)947947+ if (irq < 0)948948 goto out;949949950950 irq_set_chip_and_handler_name(irq, &xen_percpu_chip,
+7-7
drivers/xen/xen-pciback/pciback_ops.c
···135135 struct pci_dev *dev, struct xen_pci_op *op)136136{137137 struct xen_pcibk_dev_data *dev_data;138138- int otherend = pdev->xdev->otherend_id;139138 int status;140139141140 if (unlikely(verbose_request))···143144 status = pci_enable_msi(dev);144145145146 if (status) {146146- printk(KERN_ERR "error enable msi for guest %x status %x\n",147147- otherend, status);147147+ pr_warn_ratelimited(DRV_NAME ": %s: error enabling MSI for guest %u: err %d\n",148148+ pci_name(dev), pdev->xdev->otherend_id,149149+ status);148150 op->value = 0;149151 return XEN_PCI_ERR_op_failed;150152 }···223223 pci_name(dev), i,224224 op->msix_entries[i].vector);225225 }226226- } else {227227- printk(KERN_WARNING DRV_NAME ": %s: failed to enable MSI-X: err %d!\n",228228- pci_name(dev), result);229229- }226226+ } else227227+ pr_warn_ratelimited(DRV_NAME ": %s: error enabling MSI-X for guest %u: err %d!\n",228228+ pci_name(dev), pdev->xdev->otherend_id,229229+ result);230230 kfree(entries);231231232232 op->value = result;
+14-14
fs/btrfs/extent-tree.c
···39973997 * We make the other tasks wait for the flush only when we can flush39983998 * all things.39993999 */40004000- if (ret && flush == BTRFS_RESERVE_FLUSH_ALL) {40004000+ if (ret && flush != BTRFS_RESERVE_NO_FLUSH) {40014001 flushing = true;40024002 space_info->flush = 1;40034003 }···45344534 unsigned nr_extents = 0;45354535 int extra_reserve = 0;45364536 enum btrfs_reserve_flush_enum flush = BTRFS_RESERVE_FLUSH_ALL;45374537- int ret;45374537+ int ret = 0;45384538 bool delalloc_lock = true;4539453945404540 /* If we are a free space inode we need to not flush since we will be in···45794579 csum_bytes = BTRFS_I(inode)->csum_bytes;45804580 spin_unlock(&BTRFS_I(inode)->lock);4581458145824582- if (root->fs_info->quota_enabled) {45824582+ if (root->fs_info->quota_enabled)45834583 ret = btrfs_qgroup_reserve(root, num_bytes +45844584 nr_extents * root->leafsize);45854585- if (ret) {45864586- spin_lock(&BTRFS_I(inode)->lock);45874587- calc_csum_metadata_size(inode, num_bytes, 0);45884588- spin_unlock(&BTRFS_I(inode)->lock);45894589- if (delalloc_lock)45904590- mutex_unlock(&BTRFS_I(inode)->delalloc_mutex);45914591- return ret;45924592- }45934593- }4594458545954595- ret = reserve_metadata_bytes(root, block_rsv, to_reserve, flush);45864586+ /*45874587+ * ret != 0 here means the qgroup reservation failed, we go straight to45884588+ * the shared error handling then.45894589+ */45904590+ if (ret == 0)45914591+ ret = reserve_metadata_bytes(root, block_rsv,45924592+ to_reserve, flush);45934593+45964594 if (ret) {45974595 u64 to_free = 0;45984596 unsigned dropped;···55585560 int empty_cluster = 2 * 1024 * 1024;55595561 struct btrfs_space_info *space_info;55605562 int loop = 0;55615561- int index = 0;55635563+ int index = __get_raid_index(data);55625564 int alloc_type = (data & BTRFS_BLOCK_GROUP_DATA) ?55635565 RESERVE_ALLOC_NO_ACCOUNT : RESERVE_ALLOC;55645566 bool found_uncached_bg = false;···67866788 &wc->flags[level]);67876789 if (ret < 0) {67886790 
btrfs_tree_unlock_rw(eb, path->locks[level]);67916791+ path->locks[level] = 0;67896792 return ret;67906793 }67916794 BUG_ON(wc->refs[level] == 0);67926795 if (wc->refs[level] == 1) {67936796 btrfs_tree_unlock_rw(eb, path->locks[level]);67976797+ path->locks[level] = 0;67946798 return 1;67956799 }67966800 }
···293293 struct btrfs_key key;294294 struct btrfs_ioctl_defrag_range_args range;295295 int num_defrag;296296+ int index;297297+ int ret;296298297299 /* get the inode */298300 key.objectid = defrag->root;299301 btrfs_set_key_type(&key, BTRFS_ROOT_ITEM_KEY);300302 key.offset = (u64)-1;303303+304304+ index = srcu_read_lock(&fs_info->subvol_srcu);305305+301306 inode_root = btrfs_read_fs_root_no_name(fs_info, &key);302307 if (IS_ERR(inode_root)) {303303- kmem_cache_free(btrfs_inode_defrag_cachep, defrag);304304- return PTR_ERR(inode_root);308308+ ret = PTR_ERR(inode_root);309309+ goto cleanup;310310+ }311311+ if (btrfs_root_refs(&inode_root->root_item) == 0) {312312+ ret = -ENOENT;313313+ goto cleanup;305314 }306315307316 key.objectid = defrag->ino;···318309 key.offset = 0;319310 inode = btrfs_iget(fs_info->sb, &key, inode_root, NULL);320311 if (IS_ERR(inode)) {321321- kmem_cache_free(btrfs_inode_defrag_cachep, defrag);322322- return PTR_ERR(inode);312312+ ret = PTR_ERR(inode);313313+ goto cleanup;323314 }315315+ srcu_read_unlock(&fs_info->subvol_srcu, index);324316325317 /* do a chunk of defrag */326318 clear_bit(BTRFS_INODE_IN_DEFRAG, &BTRFS_I(inode)->runtime_flags);···356346357347 iput(inode);358348 return 0;349349+cleanup:350350+ srcu_read_unlock(&fs_info->subvol_srcu, index);351351+ kmem_cache_free(btrfs_inode_defrag_cachep, defrag);352352+ return ret;359353}360354361355/*···16081594 if (err < 0 && num_written > 0)16091595 num_written = err;16101596 }16111611-out:15971597+16121598 if (sync)16131599 atomic_dec(&BTRFS_I(inode)->sync_writers);16001600+out:16141601 sb_end_write(inode->i_sb);16151602 current->backing_dev_info = NULL;16161603 return num_written ? 
num_written : err;···22562241 if (lockend <= lockstart)22572242 lockend = lockstart + root->sectorsize;2258224322442244+ lockend--;22592245 len = lockend - lockstart + 1;2260224622612247 len = max_t(u64, len, root->sectorsize);···23232307 }23242308 }2325230923262326- *offset = start;23272327- free_extent_map(em);23282328- break;23102310+ if (!test_bit(EXTENT_FLAG_PREALLOC,23112311+ &em->flags)) {23122312+ *offset = start;23132313+ free_extent_map(em);23142314+ break;23152315+ }23292316 }23302317 }23312318
+12-8
fs/btrfs/free-space-cache.c
···18621862{18631863 struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl;18641864 struct btrfs_free_space *info;18651865- int ret = 0;18651865+ int ret;18661866+ bool re_search = false;1866186718671868 spin_lock(&ctl->tree_lock);1868186918691870again:18711871+ ret = 0;18701872 if (!bytes)18711873 goto out_lock;18721874···18811879 info = tree_search_offset(ctl, offset_to_bitmap(ctl, offset),18821880 1, 0);18831881 if (!info) {18841884- /* the tree logging code might be calling us before we18851885- * have fully loaded the free space rbtree for this18861886- * block group. So it is possible the entry won't18871887- * be in the rbtree yet at all. The caching code18881888- * will make sure not to put it in the rbtree if18891889- * the logging code has pinned it.18821882+ /*18831883+ * If we found a partial bit of our free space in a18841884+ * bitmap but then couldn't find the other part this may18851885+ * be a problem, so WARN about it.18901886 */18871887+ WARN_ON(re_search);18911888 goto out_lock;18921889 }18931890 }1894189118921892+ re_search = false;18951893 if (!info->bitmap) {18961894 unlink_free_space(ctl, info);18971895 if (offset == info->offset) {···19371935 }1938193619391937 ret = remove_from_bitmap(ctl, info, &offset, &bytes);19401940- if (ret == -EAGAIN)19381938+ if (ret == -EAGAIN) {19391939+ re_search = true;19411940 goto again;19411941+ }19421942 BUG_ON(ret); /* logic error */19431943out_lock:19441944 spin_unlock(&ctl->tree_lock);
+102-35
fs/btrfs/inode.c
···8888 [S_IFLNK >> S_SHIFT] = BTRFS_FT_SYMLINK,8989};90909191-static int btrfs_setsize(struct inode *inode, loff_t newsize);9191+static int btrfs_setsize(struct inode *inode, struct iattr *attr);9292static int btrfs_truncate(struct inode *inode);9393static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent);9494static noinline int cow_file_range(struct inode *inode,···24782478 continue;24792479 }24802480 nr_truncate++;24812481+24822482+ /* 1 for the orphan item deletion. */24832483+ trans = btrfs_start_transaction(root, 1);24842484+ if (IS_ERR(trans)) {24852485+ ret = PTR_ERR(trans);24862486+ goto out;24872487+ }24882488+ ret = btrfs_orphan_add(trans, inode);24892489+ btrfs_end_transaction(trans, root);24902490+ if (ret)24912491+ goto out;24922492+24812493 ret = btrfs_truncate(inode);24822494 } else {24832495 nr_unlink++;···36773665 block_end - cur_offset, 0);36783666 if (IS_ERR(em)) {36793667 err = PTR_ERR(em);36683668+ em = NULL;36803669 break;36813670 }36823671 last_byte = min(extent_map_end(em), block_end);···37613748 return err;37623749}3763375037643764-static int btrfs_setsize(struct inode *inode, loff_t newsize)37513751+static int btrfs_setsize(struct inode *inode, struct iattr *attr)37653752{37663753 struct btrfs_root *root = BTRFS_I(inode)->root;37673754 struct btrfs_trans_handle *trans;37683755 loff_t oldsize = i_size_read(inode);37563756+ loff_t newsize = attr->ia_size;37573757+ int mask = attr->ia_valid;37693758 int ret;3770375937713760 if (newsize == oldsize)37723761 return 0;37623762+37633763+ /*37643764+ * The regular truncate() case without ATTR_CTIME and ATTR_MTIME is a37653765+ * special case where we need to update the times despite not having37663766+ * these flags set. 
For all other operations the VFS sets these flags37673767+ * explicitly if it wants a timestamp update.37683768+ */37693769+ if (newsize != oldsize && (!(mask & (ATTR_CTIME | ATTR_MTIME))))37703770+ inode->i_ctime = inode->i_mtime = current_fs_time(inode->i_sb);3773377137743772 if (newsize > oldsize) {37753773 truncate_pagecache(inode, oldsize, newsize);···38073783 set_bit(BTRFS_INODE_ORDERED_DATA_CLOSE,38083784 &BTRFS_I(inode)->runtime_flags);3809378537863786+ /*37873787+ * 1 for the orphan item we're going to add37883788+ * 1 for the orphan item deletion.37893789+ */37903790+ trans = btrfs_start_transaction(root, 2);37913791+ if (IS_ERR(trans))37923792+ return PTR_ERR(trans);37933793+37943794+ /*37953795+ * We need to do this in case we fail at _any_ point during the37963796+ * actual truncate. Once we do the truncate_setsize we could37973797+ * invalidate pages which forces any outstanding ordered io to37983798+ * be instantly completed which will give us extents that need37993799+ * to be truncated. 
If we fail to get an orphan inode down we38003800+ * could have left over extents that were never meant to live,38013801+ * so we need to guarantee from this point on that everything38023802+ * will be consistent.38033803+ */38043804+ ret = btrfs_orphan_add(trans, inode);38053805+ btrfs_end_transaction(trans, root);38063806+ if (ret)38073807+ return ret;38083808+38103809 /* we don't support swapfiles, so vmtruncate shouldn't fail */38113810 truncate_setsize(inode, newsize);38123811 ret = btrfs_truncate(inode);38123812+ if (ret && inode->i_nlink)38133813+ btrfs_orphan_del(NULL, inode);38133814 }3814381538153816 return ret;···38543805 return err;3855380638563807 if (S_ISREG(inode->i_mode) && (attr->ia_valid & ATTR_SIZE)) {38573857- err = btrfs_setsize(inode, attr->ia_size);38083808+ err = btrfs_setsize(inode, attr);38583809 if (err)38593810 return err;38603811 }···56215572 return em;56225573 if (em) {56235574 /*56245624- * if our em maps to a hole, there might56255625- * actually be delalloc bytes behind it55755575+ * if our em maps to55765576+ * - a hole or55775577+ * - a pre-alloc extent,55785578+ * there might actually be delalloc bytes behind it.56265579 */56275627- if (em->block_start != EXTENT_MAP_HOLE)55805580+ if (em->block_start != EXTENT_MAP_HOLE &&55815581+ !test_bit(EXTENT_FLAG_PREALLOC, &em->flags))56285582 return em;56295583 else56305584 hole_em = em;···
(IS_ERR(trans)) {69786922 err = PTR_ERR(trans);69796923 goto out;···69816929 ret = btrfs_block_rsv_migrate(&root->fs_info->trans_block_rsv, rsv,69826930 min_size);69836931 BUG_ON(ret);69846984-69856985- ret = btrfs_orphan_add(trans, inode);69866986- if (ret) {69876987- btrfs_end_transaction(trans, root);69886988- goto out;69896989- }6990693269916933 /*69926934 * setattr is responsible for setting the ordered_data_close flag,···70507004 ret = btrfs_orphan_del(trans, inode);70517005 if (ret)70527006 err = ret;70537053- } else if (ret && inode->i_nlink > 0) {70547054- /*70557055- * Failed to do the truncate, remove us from the in memory70567056- * orphan list.70577057- */70587058- ret = btrfs_orphan_del(NULL, inode);70597007 }7060700870617009 if (trans) {···75717531 */75727532int btrfs_start_delalloc_inodes(struct btrfs_root *root, int delay_iput)75737533{75747574- struct list_head *head = &root->fs_info->delalloc_inodes;75757534 struct btrfs_inode *binode;75767535 struct inode *inode;75777536 struct btrfs_delalloc_work *work, *next;75787537 struct list_head works;75387538+ struct list_head splice;75797539 int ret = 0;7580754075817541 if (root->fs_info->sb->s_flags & MS_RDONLY)75827542 return -EROFS;7583754375847544 INIT_LIST_HEAD(&works);75857585-75457545+ INIT_LIST_HEAD(&splice);75467546+again:75867547 spin_lock(&root->fs_info->delalloc_lock);75877587- while (!list_empty(head)) {75887588- binode = list_entry(head->next, struct btrfs_inode,75487548+ list_splice_init(&root->fs_info->delalloc_inodes, &splice);75497549+ while (!list_empty(&splice)) {75507550+ binode = list_entry(splice.next, struct btrfs_inode,75897551 delalloc_inodes);75527552+75537553+ list_del_init(&binode->delalloc_inodes);75547554+75907555 inode = igrab(&binode->vfs_inode);75917556 if (!inode)75927592- list_del_init(&binode->delalloc_inodes);75577557+ continue;75587558+75597559+ list_add_tail(&binode->delalloc_inodes,75607560+ &root->fs_info->delalloc_inodes);75937561 
spin_unlock(&root->fs_info->delalloc_lock);75947594- if (inode) {75957595- work = btrfs_alloc_delalloc_work(inode, 0, delay_iput);75967596- if (!work) {75977597- ret = -ENOMEM;75987598- goto out;75997599- }76007600- list_add_tail(&work->list, &works);76017601- btrfs_queue_worker(&root->fs_info->flush_workers,76027602- &work->work);75627562+75637563+ work = btrfs_alloc_delalloc_work(inode, 0, delay_iput);75647564+ if (unlikely(!work)) {75657565+ ret = -ENOMEM;75667566+ goto out;76037567 }75687568+ list_add_tail(&work->list, &works);75697569+ btrfs_queue_worker(&root->fs_info->flush_workers,75707570+ &work->work);75717571+76047572 cond_resched();76057573 spin_lock(&root->fs_info->delalloc_lock);75747574+ }75757575+ spin_unlock(&root->fs_info->delalloc_lock);75767576+75777577+ list_for_each_entry_safe(work, next, &works, list) {75787578+ list_del_init(&work->list);75797579+ btrfs_wait_and_free_delalloc_work(work);75807580+ }75817581+75827582+ spin_lock(&root->fs_info->delalloc_lock);75837583+ if (!list_empty(&root->fs_info->delalloc_inodes)) {75847584+ spin_unlock(&root->fs_info->delalloc_lock);75857585+ goto again;76067586 }76077587 spin_unlock(&root->fs_info->delalloc_lock);76087588···76387578 atomic_read(&root->fs_info->async_delalloc_pages) == 0));76397579 }76407580 atomic_dec(&root->fs_info->async_submit_draining);75817581+ return 0;76417582out:76427583 list_for_each_entry_safe(work, next, &works, list) {76437584 list_del_init(&work->list);76447585 btrfs_wait_and_free_delalloc_work(work);75867586+ }75877587+75887588+ if (!list_empty_careful(&splice)) {75897589+ spin_lock(&root->fs_info->delalloc_lock);75907590+ list_splice_tail(&splice, &root->fs_info->delalloc_inodes);75917591+ spin_unlock(&root->fs_info->delalloc_lock);76457592 }76467593 return ret;76477594}
+98-36
fs/btrfs/ioctl.c
···515515516516 BUG_ON(ret);517517518518- d_instantiate(dentry, btrfs_lookup_dentry(dir, dentry));519518fail:520519 if (async_transid) {521520 *async_transid = trans->transid;···524525 }525526 if (err && !ret)526527 ret = err;528528+529529+ if (!ret)530530+ d_instantiate(dentry, btrfs_lookup_dentry(dir, dentry));531531+527532 return ret;528533}529534···13421339 if (atomic_xchg(&root->fs_info->mutually_exclusive_operation_running,13431340 1)) {13441341 pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n");13451345- return -EINPROGRESS;13421342+ mnt_drop_write_file(file);13431343+ return -EINVAL;13461344 }1347134513481346 mutex_lock(&root->fs_info->volume_mutex);···13661362 printk(KERN_INFO "btrfs: resizing devid %llu\n",13671363 (unsigned long long)devid);13681364 }13651365+13691366 device = btrfs_find_device(root->fs_info, devid, NULL, NULL);13701367 if (!device) {13711368 printk(KERN_INFO "btrfs: resizer unable to find device %llu\n",···13741369 ret = -EINVAL;13751370 goto out_free;13761371 }13771377- if (device->fs_devices && device->fs_devices->seeding) {13721372+13731373+ if (!device->writeable) {13781374 printk(KERN_INFO "btrfs: resizer unable to apply on "13791379- "seeding device %llu\n",13751375+ "readonly device %llu\n",13801376 (unsigned long long)devid);13811377 ret = -EINVAL;13821378 goto out_free;···14491443 kfree(vol_args);14501444out:14511445 mutex_unlock(&root->fs_info->volume_mutex);14521452- mnt_drop_write_file(file);14531446 atomic_set(&root->fs_info->mutually_exclusive_operation_running, 0);14471447+ mnt_drop_write_file(file);14541448 return ret;14551449}14561450···21012095 err = inode_permission(inode, MAY_WRITE | MAY_EXEC);21022096 if (err)21032097 goto out_dput;21042104-21052105- /* check if subvolume may be deleted by a non-root user */21062106- err = btrfs_may_delete(dir, dentry, 1);21072107- if (err)21082108- goto out_dput;21092098 }20992099+21002100+ /* check if subvolume may be deleted by a user */21012101+ err 
= btrfs_may_delete(dir, dentry, 1);21022102+ if (err)21032103+ goto out_dput;2110210421112105 if (btrfs_ino(inode) != BTRFS_FIRST_FREE_OBJECTID) {21122106 err = -EINVAL;···21892183 struct btrfs_ioctl_defrag_range_args *range;21902184 int ret;2191218521922192- if (btrfs_root_readonly(root))21932193- return -EROFS;21862186+ ret = mnt_want_write_file(file);21872187+ if (ret)21882188+ return ret;2194218921952190 if (atomic_xchg(&root->fs_info->mutually_exclusive_operation_running,21962191 1)) {21972192 pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n");21982198- return -EINPROGRESS;21932193+ mnt_drop_write_file(file);21942194+ return -EINVAL;21992195 }22002200- ret = mnt_want_write_file(file);22012201- if (ret) {22022202- atomic_set(&root->fs_info->mutually_exclusive_operation_running,22032203- 0);22042204- return ret;21962196+21972197+ if (btrfs_root_readonly(root)) {21982198+ ret = -EROFS;21992199+ goto out;22052200 }2206220122072202 switch (inode->i_mode & S_IFMT) {···22542247 ret = -EINVAL;22552248 }22562249out:22572257- mnt_drop_write_file(file);22582250 atomic_set(&root->fs_info->mutually_exclusive_operation_running, 0);22512251+ mnt_drop_write_file(file);22592252 return ret;22602253}22612254···22702263 if (atomic_xchg(&root->fs_info->mutually_exclusive_operation_running,22712264 1)) {22722265 pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n");22732273- return -EINPROGRESS;22662266+ return -EINVAL;22742267 }2275226822762269 mutex_lock(&root->fs_info->volume_mutex);···23072300 1)) {23082301 pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n");23092302 mnt_drop_write_file(file);23102310- return -EINPROGRESS;23032303+ return -EINVAL;23112304 }2312230523132306 mutex_lock(&root->fs_info->volume_mutex);···23232316 kfree(vol_args);23242317out:23252318 mutex_unlock(&root->fs_info->volume_mutex);23262326- mnt_drop_write_file(file);23272319 
atomic_set(&root->fs_info->mutually_exclusive_operation_running, 0);23202320+ mnt_drop_write_file(file);23282321 return ret;23292322}23302323···34443437 struct btrfs_fs_info *fs_info = root->fs_info;34453438 struct btrfs_ioctl_balance_args *bargs;34463439 struct btrfs_balance_control *bctl;34403440+ bool need_unlock; /* for mut. excl. ops lock */34473441 int ret;34483448- int need_to_clear_lock = 0;3449344234503443 if (!capable(CAP_SYS_ADMIN))34513444 return -EPERM;···34543447 if (ret)34553448 return ret;3456344934573457- mutex_lock(&fs_info->volume_mutex);34503450+again:34513451+ if (!atomic_xchg(&fs_info->mutually_exclusive_operation_running, 1)) {34523452+ mutex_lock(&fs_info->volume_mutex);34533453+ mutex_lock(&fs_info->balance_mutex);34543454+ need_unlock = true;34553455+ goto locked;34563456+ }34573457+34583458+ /*34593459+ * mut. excl. ops lock is locked. Three possibilities:34603460+ * (1) some other op is running34613461+ * (2) balance is running34623462+ * (3) balance is paused -- special case (think resume)34633463+ */34583464 mutex_lock(&fs_info->balance_mutex);34653465+ if (fs_info->balance_ctl) {34663466+ /* this is either (2) or (3) */34673467+ if (!atomic_read(&fs_info->balance_running)) {34683468+ mutex_unlock(&fs_info->balance_mutex);34693469+ if (!mutex_trylock(&fs_info->volume_mutex))34703470+ goto again;34713471+ mutex_lock(&fs_info->balance_mutex);34723472+34733473+ if (fs_info->balance_ctl &&34743474+ !atomic_read(&fs_info->balance_running)) {34753475+ /* this is (3) */34763476+ need_unlock = false;34773477+ goto locked;34783478+ }34793479+34803480+ mutex_unlock(&fs_info->balance_mutex);34813481+ mutex_unlock(&fs_info->volume_mutex);34823482+ goto again;34833483+ } else {34843484+ /* this is (2) */34853485+ mutex_unlock(&fs_info->balance_mutex);34863486+ ret = -EINPROGRESS;34873487+ goto out;34883488+ }34893489+ } else {34903490+ /* this is (1) */34913491+ mutex_unlock(&fs_info->balance_mutex);34923492+ pr_info("btrfs: dev 
add/delete/balance/replace/resize operation in progress\n");34933493+ ret = -EINVAL;34943494+ goto out;34953495+ }34963496+34973497+locked:34983498+ BUG_ON(!atomic_read(&fs_info->mutually_exclusive_operation_running));3459349934603500 if (arg) {34613501 bargs = memdup_user(arg, sizeof(*bargs));34623502 if (IS_ERR(bargs)) {34633503 ret = PTR_ERR(bargs);34643464- goto out;35043504+ goto out_unlock;34653505 }3466350634673507 if (bargs->flags & BTRFS_BALANCE_RESUME) {···35283474 bargs = NULL;35293475 }3530347635313531- if (atomic_xchg(&root->fs_info->mutually_exclusive_operation_running,35323532- 1)) {35333533- pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n");34773477+ if (fs_info->balance_ctl) {35343478 ret = -EINPROGRESS;35353479 goto out_bargs;35363480 }35373537- need_to_clear_lock = 1;3538348135393482 bctl = kzalloc(sizeof(*bctl), GFP_NOFS);35403483 if (!bctl) {···35523501 }3553350235543503do_balance:35553555- ret = btrfs_balance(bctl, bargs);35563504 /*35573557- * bctl is freed in __cancel_balance or in free_fs_info if35583558- * restriper was paused all the way until unmount35053505+ * Ownership of bctl and mutually_exclusive_operation_running35063506+ * goes to btrfs_balance. bctl is freed in __cancel_balance,35073507+ * or, if restriper was paused all the way until unmount, in35083508+ * free_fs_info. 
mutually_exclusive_operation_running is35093509+ * cleared in __cancel_balance.35593510 */35113511+ need_unlock = false;35123512+35133513+ ret = btrfs_balance(bctl, bargs);35143514+35603515 if (arg) {35613516 if (copy_to_user(arg, bargs, sizeof(*bargs)))35623517 ret = -EFAULT;···3570351335713514out_bargs:35723515 kfree(bargs);35733573-out:35743574- if (need_to_clear_lock)35753575- atomic_set(&root->fs_info->mutually_exclusive_operation_running,35763576- 0);35163516+out_unlock:35773517 mutex_unlock(&fs_info->balance_mutex);35783518 mutex_unlock(&fs_info->volume_mutex);35193519+ if (need_unlock)35203520+ atomic_set(&fs_info->mutually_exclusive_operation_running, 0);35213521+out:35793522 mnt_drop_write_file(file);35803523 return ret;35813524}···37533696 if (IS_ERR(sa)) {37543697 ret = PTR_ERR(sa);37553698 goto drop_write;36993699+ }37003700+37013701+ if (!sa->qgroupid) {37023702+ ret = -EINVAL;37033703+ goto out;37563704 }3757370537583706 trans = btrfs_join_transaction(root);
+10-3
fs/btrfs/ordered-data.c
···836836 * if the disk i_size is already at the inode->i_size, or837837 * this ordered extent is inside the disk i_size, we're done838838 */839839- if (disk_i_size == i_size || offset <= disk_i_size) {839839+ if (disk_i_size == i_size)840840 goto out;841841- }841841+842842+ /*843843+ * We still need to update disk_i_size if outstanding_isize is greater844844+ * than disk_i_size.845845+ */846846+ if (offset <= disk_i_size &&847847+ (!ordered || ordered->outstanding_isize <= disk_i_size))848848+ goto out;842849843850 /*844851 * walk backward from this ordered extent to disk_i_size.···877870 break;878871 if (test->file_offset >= i_size)879872 break;880880- if (test->file_offset >= disk_i_size) {873873+ if (entry_end(test) > disk_i_size) {881874 /*882875 * we don't update disk_i_size now, so record this883876 * undealt i_size. Or we will not know the real
+19-1
fs/btrfs/qgroup.c
···379379380380 ret = add_relation_rb(fs_info, found_key.objectid,381381 found_key.offset);382382+ if (ret == -ENOENT) {383383+ printk(KERN_WARNING384384+ "btrfs: orphan qgroup relation 0x%llx->0x%llx\n",385385+ (unsigned long long)found_key.objectid,386386+ (unsigned long long)found_key.offset);387387+ ret = 0; /* ignore the error */388388+ }382389 if (ret)383390 goto out;384391next2:···963956 struct btrfs_fs_info *fs_info, u64 qgroupid)964957{965958 struct btrfs_root *quota_root;959959+ struct btrfs_qgroup *qgroup;966960 int ret = 0;967961968962 quota_root = fs_info->quota_root;969963 if (!quota_root)970964 return -EINVAL;971965966966+ /* check if there are no relations to this qgroup */967967+ spin_lock(&fs_info->qgroup_lock);968968+ qgroup = find_qgroup_rb(fs_info, qgroupid);969969+ if (qgroup) {970970+ if (!list_empty(&qgroup->groups) || !list_empty(&qgroup->members)) {971971+ spin_unlock(&fs_info->qgroup_lock);972972+ return -EBUSY;973973+ }974974+ }975975+ spin_unlock(&fs_info->qgroup_lock);976976+972977 ret = del_qgroup_item(trans, quota_root, qgroupid);973978974979 spin_lock(&fs_info->qgroup_lock);975980 del_qgroup_rb(quota_root->fs_info, qgroupid);976976-977981 spin_unlock(&fs_info->qgroup_lock);978982979983 return ret;
+20-5
fs/btrfs/scrub.c
@@ -580 +580 @@
	int corrected = 0;
	struct btrfs_key key;
	struct inode *inode = NULL;
+	struct btrfs_fs_info *fs_info;
	u64 end = offset + PAGE_SIZE - 1;
	struct btrfs_root *local_root;
+	int srcu_index;

	key.objectid = root;
	key.type = BTRFS_ROOT_ITEM_KEY;
	key.offset = (u64)-1;
-	local_root = btrfs_read_fs_root_no_name(fixup->root->fs_info, &key);
-	if (IS_ERR(local_root))
+
+	fs_info = fixup->root->fs_info;
+	srcu_index = srcu_read_lock(&fs_info->subvol_srcu);
+
+	local_root = btrfs_read_fs_root_no_name(fs_info, &key);
+	if (IS_ERR(local_root)) {
+		srcu_read_unlock(&fs_info->subvol_srcu, srcu_index);
		return PTR_ERR(local_root);
+	}

	key.type = BTRFS_INODE_ITEM_KEY;
	key.objectid = inum;
	key.offset = 0;
-	inode = btrfs_iget(fixup->root->fs_info->sb, &key, local_root, NULL);
+	inode = btrfs_iget(fs_info->sb, &key, local_root, NULL);
+	srcu_read_unlock(&fs_info->subvol_srcu, srcu_index);
	if (IS_ERR(inode))
		return PTR_ERR(inode);
@@ -615 +606 @@
	}

	if (PageUptodate(page)) {
-		struct btrfs_fs_info *fs_info;
		if (PageDirty(page)) {
			/*
			 * we need to write the data to the defect sector. the
@@ -3188 +3180 @@
	u64 physical_for_dev_replace;
	u64 len;
	struct btrfs_fs_info *fs_info = nocow_ctx->sctx->dev_root->fs_info;
+	int srcu_index;

	key.objectid = root;
	key.type = BTRFS_ROOT_ITEM_KEY;
	key.offset = (u64)-1;
+
+	srcu_index = srcu_read_lock(&fs_info->subvol_srcu);
+
	local_root = btrfs_read_fs_root_no_name(fs_info, &key);
-	if (IS_ERR(local_root))
+	if (IS_ERR(local_root)) {
+		srcu_read_unlock(&fs_info->subvol_srcu, srcu_index);
		return PTR_ERR(local_root);
+	}

	key.type = BTRFS_INODE_ITEM_KEY;
	key.objectid = inum;
	key.offset = 0;
	inode = btrfs_iget(fs_info->sb, &key, local_root, NULL);
+	srcu_read_unlock(&fs_info->subvol_srcu, srcu_index);
	if (IS_ERR(inode))
		return PTR_ERR(inode);

+3-1
fs/btrfs/send.c
@@ -1814 +1814 @@
			(unsigned long)nce->ino);
	if (!nce_head) {
		nce_head = kmalloc(sizeof(*nce_head), GFP_NOFS);
-		if (!nce_head)
+		if (!nce_head) {
+			kfree(nce);
			return -ENOMEM;
+		}
		INIT_LIST_HEAD(nce_head);

		ret = radix_tree_insert(&sctx->name_cache, nce->ino, nce_head);
fs/btrfs/transaction.c
@@ -333 +333 @@
					  &root->fs_info->trans_block_rsv,
					  num_bytes, flush);
		if (ret)
-			return ERR_PTR(ret);
+			goto reserve_fail;
	}
again:
	h = kmem_cache_alloc(btrfs_trans_handle_cachep, GFP_NOFS);
-	if (!h)
-		return ERR_PTR(-ENOMEM);
+	if (!h) {
+		ret = -ENOMEM;
+		goto alloc_fail;
+	}

	/*
	 * If we are JOIN_NOLOCK we're already committing a transaction and
@@ -367 +365 @@
	if (ret < 0) {
		/* We must get the transaction if we are JOIN_NOLOCK. */
		BUG_ON(type == TRANS_JOIN_NOLOCK);
-
-		if (type < TRANS_JOIN_NOLOCK)
-			sb_end_intwrite(root->fs_info->sb);
-		kmem_cache_free(btrfs_trans_handle_cachep, h);
-		return ERR_PTR(ret);
+		goto join_fail;
	}

	cur_trans = root->fs_info->running_transaction;
@@ -408 +410 @@
	if (!current->journal_info && type != TRANS_USERSPACE)
		current->journal_info = h;
	return h;
+
+join_fail:
+	if (type < TRANS_JOIN_NOLOCK)
+		sb_end_intwrite(root->fs_info->sb);
+	kmem_cache_free(btrfs_trans_handle_cachep, h);
+alloc_fail:
+	if (num_bytes)
+		btrfs_block_rsv_release(root, &root->fs_info->trans_block_rsv,
+					num_bytes);
+reserve_fail:
+	if (qgroup_reserved)
+		btrfs_qgroup_free(root, qgroup_reserved);
+	return ERR_PTR(ret);
}

struct btrfs_trans_handle *btrfs_start_transaction(struct btrfs_root *root,
@@ -1479 +1468 @@
		goto cleanup_transaction;
	}

-	if (cur_trans->aborted) {
+	/* Stop the commit early if ->aborted is set */
+	if (unlikely(ACCESS_ONCE(cur_trans->aborted))) {
		ret = cur_trans->aborted;
		goto cleanup_transaction;
	}
@@ -1586 +1574 @@
	wait_event(cur_trans->writer_wait,
		   atomic_read(&cur_trans->num_writers) == 1);

+	/* ->aborted might be set after the previous check, so check it */
+	if (unlikely(ACCESS_ONCE(cur_trans->aborted))) {
+		ret = cur_trans->aborted;
+		goto cleanup_transaction;
+	}
	/*
	 * the reloc mutex makes sure that we stop
	 * the balancing code from coming in and moving
@@ -1669 +1652 @@

	ret = commit_cowonly_roots(trans, root);
	if (ret) {
+		mutex_unlock(&root->fs_info->tree_log_mutex);
+		mutex_unlock(&root->fs_info->reloc_mutex);
+		goto cleanup_transaction;
+	}
+
+	/*
+	 * The tasks which save the space cache and inode cache may also
+	 * update ->aborted, check it.
+	 */
+	if (unlikely(ACCESS_ONCE(cur_trans->aborted))) {
+		ret = cur_trans->aborted;
		mutex_unlock(&root->fs_info->tree_log_mutex);
		mutex_unlock(&root->fs_info->reloc_mutex);
		goto cleanup_transaction;
+8-2
fs/btrfs/tree-log.c
@@ -3357 +3357 @@
	if (skip_csum)
		return 0;

+	if (em->compress_type) {
+		csum_offset = 0;
+		csum_len = block_len;
+	}
+
	/* block start is already adjusted for the file extent offset. */
	ret = btrfs_lookup_csums_range(log->fs_info->csum_root,
				       em->block_start + csum_offset,
@@ -3415 +3410 @@
		em = list_entry(extents.next, struct extent_map, list);

		list_del_init(&em->list);
-		clear_bit(EXTENT_FLAG_LOGGING, &em->flags);

		/*
		 * If we had an error we just need to delete everybody from our
		 * private list.
		 */
		if (ret) {
+			clear_em_logging(tree, em);
			free_extent_map(em);
			continue;
		}
@@ -3429 +3424 @@
		write_unlock(&tree->lock);

		ret = log_one_extent(trans, inode, root, em, path);
-		free_extent_map(em);
		write_lock(&tree->lock);
+		clear_em_logging(tree, em);
+		free_extent_map(em);
	}
	WARN_ON(!list_empty(&extents));
	write_unlock(&tree->lock);
fs/nfs/nfs4client.c
@@ -236 +236 @@
	error = nfs4_discover_server_trunking(clp, &old);
	if (error < 0)
		goto error;
+	nfs_put_client(clp);
	if (clp != old) {
		clp->cl_preserve_clid = true;
-		nfs_put_client(clp);
		clp = old;
-		atomic_inc(&clp->cl_count);
	}

	return clp;
@@ -305 +306 @@
		.clientid	= new->cl_clientid,
		.confirm	= new->cl_confirm,
	};
-	int status;
+	int status = -NFS4ERR_STALE_CLIENTID;

	spin_lock(&nn->nfs_client_lock);
	list_for_each_entry_safe(pos, n, &nn->nfs_client_list, cl_share_link) {
@@ -331 +332 @@

		if (prev)
			nfs_put_client(prev);
+		prev = pos;

		status = nfs4_proc_setclientid_confirm(pos, &clid, cred);
-		if (status == 0) {
+		switch (status) {
+		case -NFS4ERR_STALE_CLIENTID:
+			break;
+		case 0:
			nfs4_swap_callback_idents(pos, new);

-			nfs_put_client(pos);
+			prev = NULL;
			*result = pos;
			dprintk("NFS: <-- %s using nfs_client = %p ({%d})\n",
				__func__, pos, atomic_read(&pos->cl_count));
-			return 0;
-		}
-		if (status != -NFS4ERR_STALE_CLIENTID) {
-			nfs_put_client(pos);
-			dprintk("NFS: <-- %s status = %d, no result\n",
-				__func__, status);
-			return status;
+		default:
+			goto out;
		}

		spin_lock(&nn->nfs_client_lock);
-		prev = pos;
	}
+	spin_unlock(&nn->nfs_client_lock);

-	/*
-	 * No matching nfs_client found. This should be impossible,
-	 * because the new nfs_client has already been added to
-	 * nfs_client_list by nfs_get_client().
-	 *
-	 * Don't BUG(), since the caller is holding a mutex.
-	 */
+	/* No match found. The server lost our clientid */
+out:
	if (prev)
		nfs_put_client(prev);
-	spin_unlock(&nn->nfs_client_lock);
-	pr_err("NFS: %s Error: no matching nfs_client found\n", __func__);
-	return -NFS4ERR_STALE_CLIENTID;
+	dprintk("NFS: <-- %s status = %d\n", __func__, status);
+	return status;
}

#ifdef CONFIG_NFS_V4_1
@@ -424 +432 @@
{
	struct nfs_net *nn = net_generic(new->cl_net, nfs_net_id);
	struct nfs_client *pos, *n, *prev = NULL;
-	int error;
+	int status = -NFS4ERR_STALE_CLIENTID;

	spin_lock(&nn->nfs_client_lock);
	list_for_each_entry_safe(pos, n, &nn->nfs_client_list, cl_share_link) {
@@ -440 +448 @@
			nfs_put_client(prev);
			prev = pos;

-			error = nfs_wait_client_init_complete(pos);
-			if (error < 0) {
+			nfs4_schedule_lease_recovery(pos);
+			status = nfs_wait_client_init_complete(pos);
+			if (status < 0) {
				nfs_put_client(pos);
				spin_lock(&nn->nfs_client_lock);
				continue;
			}
-
+			status = pos->cl_cons_state;
			spin_lock(&nn->nfs_client_lock);
+			if (status < 0)
+				continue;
		}

		if (pos->rpc_ops != new->rpc_ops)
@@ -468 +473 @@
		if (!nfs4_match_serverowners(pos, new))
			continue;

+		atomic_inc(&pos->cl_count);
		spin_unlock(&nn->nfs_client_lock);
		dprintk("NFS: <-- %s using nfs_client = %p ({%d})\n",
			__func__, pos, atomic_read(&pos->cl_count));
@@ -477 +481 @@
		return 0;
	}

-	/*
-	 * No matching nfs_client found. This should be impossible,
-	 * because the new nfs_client has already been added to
-	 * nfs_client_list by nfs_get_client().
-	 *
-	 * Don't BUG(), since the caller is holding a mutex.
-	 */
+	/* No matching nfs_client found. */
	spin_unlock(&nn->nfs_client_lock);
-	pr_err("NFS: %s Error: no matching nfs_client found\n", __func__);
-	return -NFS4ERR_STALE_CLIENTID;
+	dprintk("NFS: <-- %s status = %d\n", __func__, status);
+	return status;
}
#endif	/* CONFIG_NFS_V4_1 */

+14-8
fs/nfs/nfs4state.c
@@ -136 +136 @@
	clp->cl_confirm = clid.confirm;

	status = nfs40_walk_client_list(clp, result, cred);
-	switch (status) {
-	case -NFS4ERR_STALE_CLIENTID:
-		set_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
-	case 0:
+	if (status == 0) {
		/* Sustain the lease, even if it's empty. If the clientid4
		 * goes stale it's of no use for trunking discovery. */
		nfs4_schedule_state_renewal(*result);
-		break;
	}
-
out:
	return status;
}
@@ -1858 +1863 @@
	case -ETIMEDOUT:
	case -EAGAIN:
		ssleep(1);
+	case -NFS4ERR_STALE_CLIENTID:
		dprintk("NFS: %s after status %d, retrying\n",
			__func__, status);
		goto again;
@@ -2018 +2022 @@
	nfs4_begin_drain_session(clp);
	cred = nfs4_get_exchange_id_cred(clp);
	status = nfs4_proc_destroy_session(clp->cl_session, cred);
-	if (status && status != -NFS4ERR_BADSESSION &&
-	    status != -NFS4ERR_DEADSESSION) {
+	switch (status) {
+	case 0:
+	case -NFS4ERR_BADSESSION:
+	case -NFS4ERR_DEADSESSION:
+		break;
+	case -NFS4ERR_BACK_CHAN_BUSY:
+	case -NFS4ERR_DELAY:
+		set_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state);
+		status = 0;
+		ssleep(1);
+		goto out;
+	default:
		status = nfs4_recovery_handle_error(clp, status);
		goto out;
	}
fs/xfs/xfs_bmap.c
@@ -4680 +4680 @@
		return error;
	}

-	if (bma->flags & XFS_BMAPI_STACK_SWITCH)
-		bma->stack_switch = 1;
-
	error = xfs_bmap_alloc(bma);
	if (error)
		return error;
@@ -4952 +4955 @@
	bma.userdata = 0;
	bma.flist = flist;
	bma.firstblock = firstblock;
+
+	if (flags & XFS_BMAPI_STACK_SWITCH)
+		bma.stack_switch = 1;

	while (bno < end && n < *nmap) {
		inhole = eof || bma.got.br_startoff > bno;
+20
fs/xfs/xfs_buf.c
@@ -487 +487 @@
	struct rb_node		*parent;
	xfs_buf_t		*bp;
	xfs_daddr_t		blkno = map[0].bm_bn;
+	xfs_daddr_t		eofs;
	int			numblks = 0;
	int			i;

@@ -498 +497 @@
	/* Check for IOs smaller than the sector size / not sector aligned */
	ASSERT(!(numbytes < (1 << btp->bt_sshift)));
	ASSERT(!(BBTOB(blkno) & (xfs_off_t)btp->bt_smask));
+
+	/*
+	 * Corrupted block numbers can get through to here, unfortunately, so we
+	 * have to check that the buffer falls within the filesystem bounds.
+	 */
+	eofs = XFS_FSB_TO_BB(btp->bt_mount, btp->bt_mount->m_sb.sb_dblocks);
+	if (blkno >= eofs) {
+		/*
+		 * XXX (dgc): we should really be returning EFSCORRUPTED here,
+		 * but none of the higher level infrastructure supports
+		 * returning a specific error on buffer lookup failures.
+		 */
+		xfs_alert(btp->bt_mount,
+			  "%s: Block out of range: block 0x%llx, EOFS 0x%llx ",
+			  __func__, blkno, eofs);
+		return NULL;
+	}

	/* get tree root */
	pag = xfs_perag_get(btp->bt_mount,
@@ -1505 +1487 @@
	while (!list_empty(&btp->bt_lru)) {
		bp = list_first_entry(&btp->bt_lru, struct xfs_buf, b_lru);
		if (atomic_read(&bp->b_hold) > 1) {
+			trace_xfs_buf_wait_buftarg(bp, _RET_IP_);
+			list_move_tail(&bp->b_lru, &btp->bt_lru);
			spin_unlock(&btp->bt_lru_lock);
			delay(100);
			goto restart;
+10-2
fs/xfs/xfs_buf_item.c
@@ -652 +652 @@

	/*
	 * If the buf item isn't tracking any data, free it, otherwise drop the
-	 * reference we hold to it.
+	 * reference we hold to it. If we are aborting the transaction, this may
+	 * be the only reference to the buf item, so we free it anyway
+	 * regardless of whether it is dirty or not. A dirty abort implies a
+	 * shutdown, anyway.
	 */
	clean = 1;
	for (i = 0; i < bip->bli_format_count; i++) {
@@ -667 +664 @@
	}
	if (clean)
		xfs_buf_item_relse(bp);
-	else
+	else if (aborted) {
+		if (atomic_dec_and_test(&bip->bli_refcount)) {
+			ASSERT(XFS_FORCED_SHUTDOWN(lip->li_mountp));
+			xfs_buf_item_relse(bp);
+		}
+	} else
		atomic_dec(&bip->bli_refcount);

	if (!hold)
fs/xfs/xfs_iomap.c
@@ -351 +351 @@
		}
		if (shift)
			alloc_blocks >>= shift;
+
+		/*
+		 * If we are still trying to allocate more space than is
+		 * available, squash the prealloc hard. This can happen if we
+		 * have a large file on a small filesystem and the above
+		 * lowspace thresholds are smaller than MAXEXTLEN.
+		 */
+		while (alloc_blocks >= freesp)
+			alloc_blocks >>= 4;
	}

	if (alloc_blocks < mp->m_writeio_blocks)
include/linux/efi.h
@@ -618 +618 @@
#endif

/*
- * We play games with efi_enabled so that the compiler will, if possible, remove
- * EFI-related code altogether.
+ * We play games with efi_enabled so that the compiler will, if
+ * possible, remove EFI-related code altogether.
 */
+#define EFI_BOOT		0	/* Were we booted from EFI? */
+#define EFI_SYSTEM_TABLES	1	/* Can we use EFI system tables? */
+#define EFI_CONFIG_TABLES	2	/* Can we use EFI config tables? */
+#define EFI_RUNTIME_SERVICES	3	/* Can we use runtime services? */
+#define EFI_MEMMAP		4	/* Can we use EFI memory map? */
+#define EFI_64BIT		5	/* Is the firmware 64-bit? */
+
#ifdef CONFIG_EFI
# ifdef CONFIG_X86
-extern int efi_enabled;
-extern bool efi_64bit;
+extern int efi_enabled(int facility);
# else
-# define efi_enabled 1
+static inline int efi_enabled(int facility)
+{
+	return 1;
+}
# endif
#else
-# define efi_enabled 0
+static inline int efi_enabled(int facility)
+{
+	return 0;
+}
#endif

/*
+25
include/linux/llist.h
@@ -125 +125 @@
	     (pos) = llist_entry((pos)->member.next, typeof(*(pos)), member))

/**
+ * llist_for_each_entry_safe - iterate safely against remove over some entries
+ * of lock-less list of given type.
+ * @pos:	the type * to use as a loop cursor.
+ * @n:		another type * to use as a temporary storage.
+ * @node:	the fist entry of deleted list entries.
+ * @member:	the name of the llist_node with the struct.
+ *
+ * In general, some entries of the lock-less list can be traversed
+ * safely only after being removed from list, so start with an entry
+ * instead of list head. This variant allows removal of entries
+ * as we iterate.
+ *
+ * If being used on entries deleted from lock-less list directly, the
+ * traverse order is from the newest to the oldest added entry. If
+ * you want to traverse from the oldest to the newest, you must
+ * reverse the order by yourself before traversing.
+ */
+#define llist_for_each_entry_safe(pos, n, node, member)			\
+	for ((pos) = llist_entry((node), typeof(*(pos)), member),	\
+	     (n) = (pos)->member.next;					\
+	     &(pos)->member != NULL;					\
+	     (pos) = llist_entry(n, typeof(*(pos)), member),		\
+	     (n) = (&(pos)->member != NULL) ? (pos)->member.next : NULL)
+
+/**
 * llist_empty - tests whether a lock-less list is empty
 * @head:	the list to test
 *
+1-1
include/linux/memcontrol.h
@@ -429 +429 @@
 * the slab_mutex must be held when looping through those caches
 */
#define for_each_memcg_cache_index(_idx)	\
-	for ((_idx) = 0; i < memcg_limited_groups_array_size; (_idx)++)
+	for ((_idx) = 0; (_idx) < memcg_limited_groups_array_size; (_idx)++)

static inline bool memcg_kmem_enabled(void)
{
include/linux/mfd/rtsx_pci.h
@@ -158 +158 @@
#define   SG_TRANS_DATA		(0x02 << 4)
#define   SG_LINK_DESC		(0x03 << 4)

-/* SD bank voltage */
-#define SD_IO_3V3		0
-#define SD_IO_1V8		1
-
+/* Output voltage */
+#define OUTPUT_3V3		0
+#define OUTPUT_1V8		1

/* Card Clock Enable Register */
#define SD_CLK_EN		0x04
@@ -200 +201 @@
#define CHANGE_CLK		0x01

/* LDO_CTL */
+#define BPP_ASIC_1V7		0x00
+#define BPP_ASIC_1V8		0x01
+#define BPP_ASIC_1V9		0x02
+#define BPP_ASIC_2V0		0x03
+#define BPP_ASIC_2V7		0x04
+#define BPP_ASIC_2V8		0x05
+#define BPP_ASIC_3V2		0x06
+#define BPP_ASIC_3V3		0x07
+#define BPP_REG_TUNED18		0x07
+#define BPP_TUNED18_SHIFT_8402	5
+#define BPP_TUNED18_SHIFT_8411	4
+#define BPP_PAD_MASK		0x04
+#define BPP_PAD_3V3		0x04
+#define BPP_PAD_1V8		0x00
#define BPP_LDO_POWB		0x03
#define BPP_LDO_ON		0x00
#define BPP_LDO_SUSPEND		0x02
@@ -701 +688 @@
	int		(*disable_auto_blink)(struct rtsx_pcr *pcr);
	int		(*card_power_on)(struct rtsx_pcr *pcr, int card);
	int		(*card_power_off)(struct rtsx_pcr *pcr, int card);
+	int		(*switch_output_voltage)(struct rtsx_pcr *pcr,
+						u8 voltage);
	unsigned int	(*cd_deglitch)(struct rtsx_pcr *pcr);
+	int		(*conv_clk_and_div_n)(int clk, int dir);
};

enum PDEV_STAT  {PDEV_STAT_IDLE, PDEV_STAT_RUN};
@@ -799 +783 @@
		u8 ssc_depth, bool initial_mode, bool double_clk, bool vpclk);
int rtsx_pci_card_power_on(struct rtsx_pcr *pcr, int card);
int rtsx_pci_card_power_off(struct rtsx_pcr *pcr, int card);
+int rtsx_pci_switch_output_voltage(struct rtsx_pcr *pcr, u8 voltage);
unsigned int rtsx_pci_card_exist(struct rtsx_pcr *pcr);
void rtsx_pci_complete_unfinished_transfer(struct rtsx_pcr *pcr);
+1-1
include/linux/mmu_notifier.h
@@ -151 +151 @@
 * Therefore notifier chains can only be traversed when either
 *
 * 1. mmap_sem is held.
- * 2. One of the reverse map locks is held (i_mmap_mutex or anon_vma->mutex).
+ * 2. One of the reverse map locks is held (i_mmap_mutex or anon_vma->rwsem).
 * 3. No other concurrent thread can access the list (release)
 */
struct mmu_notifier {
+46-13
include/linux/security.h
@@ -989 +989 @@
 *	tells the LSM to decrement the number of secmark labeling rules loaded
 * @req_classify_flow:
 *	Sets the flow's sid to the openreq sid.
+ * @tun_dev_alloc_security:
+ *	This hook allows a module to allocate a security structure for a TUN
+ *	device.
+ *	@security pointer to a security structure pointer.
+ *	Returns a zero on success, negative values on failure.
+ * @tun_dev_free_security:
+ *	This hook allows a module to free the security structure for a TUN
+ *	device.
+ *	@security pointer to the TUN device's security structure
 * @tun_dev_create:
 *	Check permissions prior to creating a new TUN device.
- * @tun_dev_post_create:
- *	This hook allows a module to update or allocate a per-socket security
- *	structure.
- *	@sk contains the newly created sock structure.
+ * @tun_dev_attach_queue:
+ *	Check permissions prior to attaching to a TUN device queue.
+ *	@security pointer to the TUN device's security structure.
 * @tun_dev_attach:
- *	Check permissions prior to attaching to a persistent TUN device. This
- *	hook can also be used by the module to update any security state
+ *	This hook can be used by the module to update any security state
 *	associated with the TUN device's sock structure.
 *	@sk contains the existing sock structure.
+ *	@security pointer to the TUN device's security structure.
+ * @tun_dev_open:
+ *	This hook can be used by the module to update any security state
+ *	associated with the TUN device's security structure.
+ *	@security pointer to the TUN devices's security structure.
 *
 * Security hooks for XFRM operations.
 *
@@ -1632 +1620 @@
	void (*secmark_refcount_inc) (void);
	void (*secmark_refcount_dec) (void);
	void (*req_classify_flow) (const struct request_sock *req, struct flowi *fl);
-	int (*tun_dev_create)(void);
-	void (*tun_dev_post_create)(struct sock *sk);
-	int (*tun_dev_attach)(struct sock *sk);
+	int (*tun_dev_alloc_security) (void **security);
+	void (*tun_dev_free_security) (void *security);
+	int (*tun_dev_create) (void);
+	int (*tun_dev_attach_queue) (void *security);
+	int (*tun_dev_attach) (struct sock *sk, void *security);
+	int (*tun_dev_open) (void *security);
#endif	/* CONFIG_SECURITY_NETWORK */

#ifdef CONFIG_SECURITY_NETWORK_XFRM
@@ -2581 +2566 @@
int security_secmark_relabel_packet(u32 secid);
void security_secmark_refcount_inc(void);
void security_secmark_refcount_dec(void);
+int security_tun_dev_alloc_security(void **security);
+void security_tun_dev_free_security(void *security);
int security_tun_dev_create(void);
-void security_tun_dev_post_create(struct sock *sk);
-int security_tun_dev_attach(struct sock *sk);
+int security_tun_dev_attach_queue(void *security);
+int security_tun_dev_attach(struct sock *sk, void *security);
+int security_tun_dev_open(void *security);

#else	/* CONFIG_SECURITY_NETWORK */
static inline int security_unix_stream_connect(struct sock *sock,
@@ -2751 +2733 @@
{
}

+static inline int security_tun_dev_alloc_security(void **security)
+{
+	return 0;
+}
+
+static inline void security_tun_dev_free_security(void *security)
+{
+}
+
static inline int security_tun_dev_create(void)
{
	return 0;
}

-static inline void security_tun_dev_post_create(struct sock *sk)
+static inline int security_tun_dev_attach_queue(void *security)
{
+	return 0;
}

-static inline int security_tun_dev_attach(struct sock *sk)
+static inline int security_tun_dev_attach(struct sock *sk, void *security)
+{
+	return 0;
+}
+
+static inline int security_tun_dev_open(void *security)
{
	return 0;
}
+2
include/linux/usb.h
@@ -357 +357 @@
	int bandwidth_int_reqs;		/* number of Interrupt requests */
	int bandwidth_isoc_reqs;	/* number of Isoc. requests */

+	unsigned resuming_ports;	/* bit array: resuming root-hub ports */
+
#if defined(CONFIG_USB_MON) || defined(CONFIG_USB_MON_MODULE)
	struct mon_bus *mon_bus;	/* non-null when associated */
	int monitored;			/* non-zero when monitored */
+3
include/linux/usb/hcd.h
@@ -430 +430 @@
extern void usb_wakeup_notification(struct usb_device *hdev,
		unsigned int portnum);

+extern void usb_hcd_start_port_resume(struct usb_bus *bus, int portnum);
+extern void usb_hcd_end_port_resume(struct usb_bus *bus, int portnum);
+
/* The D0/D1 toggle bits ... USE WITH CAUTION (they're almost hcd-internal) */
#define usb_gettoggle(dev, ep, out) (((dev)->toggle[out] >> (ep)) & 1)
#define usb_dotoggle(dev, ep, out)  ((dev)->toggle[out] ^= (1 << (ep)))
include/net/ip.h
@@ -143 +143 @@
extern int	ip4_datagram_connect(struct sock *sk,
				     struct sockaddr *uaddr, int addr_len);

+extern void ip4_datagram_release_cb(struct sock *sk);
+
struct ip_reply_arg {
	struct kvec iov[1];
	int	    flags;
+2
include/net/netfilter/nf_conntrack_core.h
@@ -31 +31 @@
extern int nf_conntrack_proto_init(struct net *net);
extern void nf_conntrack_proto_fini(struct net *net);

+extern void nf_conntrack_cleanup_end(void);
+
extern bool
nf_ct_get_tuple(const struct sk_buff *skb,
		unsigned int nhoff,
+10-10
include/net/transp_v6.h
@@ -34 +34 @@
					 struct sockaddr *uaddr,
					 int addr_len);

-extern int			datagram_recv_ctl(struct sock *sk,
-						  struct msghdr *msg,
-						  struct sk_buff *skb);
+extern int			ip6_datagram_recv_ctl(struct sock *sk,
+						      struct msghdr *msg,
+						      struct sk_buff *skb);

-extern int			datagram_send_ctl(struct net *net,
-						  struct sock *sk,
-						  struct msghdr *msg,
-						  struct flowi6 *fl6,
-						  struct ipv6_txoptions *opt,
-						  int *hlimit, int *tclass,
-						  int *dontfrag);
+extern int			ip6_datagram_send_ctl(struct net *net,
+						      struct sock *sk,
+						      struct msghdr *msg,
+						      struct flowi6 *fl6,
+						      struct ipv6_txoptions *opt,
+						      int *hlimit, int *tclass,
+						      int *dontfrag);

#define		LOOPBACK4_IPV6		cpu_to_be32(0x7f000006)

+6
include/uapi/linux/usb/ch9.h
@@ -152 +152 @@
#define USB_INTRF_FUNC_SUSPEND_LP	(1 << (8 + 0))
#define USB_INTRF_FUNC_SUSPEND_RW	(1 << (8 + 1))

+/*
+ * Interface status, Figure 9-5 USB 3.0 spec
+ */
+#define USB_INTRF_STAT_FUNC_RW_CAP	1
+#define USB_INTRF_STAT_FUNC_RW		2
+
#define USB_ENDPOINT_HALT		0	/* IN/OUT will STALL */

/* Bit array elements as returned by the USB_REQ_GET_STATUS request. */
+2-2
init/main.c
@@ -604 +604 @@
	pidmap_init();
	anon_vma_init();
#ifdef CONFIG_X86
-	if (efi_enabled)
+	if (efi_enabled(EFI_RUNTIME_SERVICES))
		efi_enter_virtual_mode();
#endif
	thread_info_cache_init();
@@ -632 +632 @@
	acpi_early_init(); /* before LAPIC and SMP init */
	sfi_init_late();

-	if (efi_enabled) {
+	if (efi_enabled(EFI_RUNTIME_SERVICES)) {
		efi_late_init();
		efi_free_boot_services();
	}
+18-2
kernel/events/core.c
@@ -908 +908 @@
}

/*
+ * Initialize event state based on the perf_event_attr::disabled.
+ */
+static inline void perf_event__state_init(struct perf_event *event)
+{
+	event->state = event->attr.disabled ? PERF_EVENT_STATE_OFF :
+					      PERF_EVENT_STATE_INACTIVE;
+}
+
+/*
 * Called at perf_event creation and when events are attached/detached from a
 * group.
 */
@@ -6188 +6179 @@
	event->overflow_handler	= overflow_handler;
	event->overflow_handler_context = context;

-	if (attr->disabled)
-		event->state = PERF_EVENT_STATE_OFF;
+	perf_event__state_init(event);

	pmu = NULL;

@@ -6617 +6609 @@
		mutex_lock(&gctx->mutex);
		perf_remove_from_context(group_leader);
+
+		/*
+		 * Removing from the context ends up with disabled
+		 * event. What we want here is event in the initial
+		 * startup state, ready to be add into new context.
+		 */
+		perf_event__state_init(group_leader);
		list_for_each_entry(sibling, &group_leader->sibling_list,
				    group_entry) {
			perf_remove_from_context(sibling);
+			perf_event__state_init(sibling);
			put_ctx(gctx);
		}
		mutex_unlock(&gctx->mutex);
-9
kernel/printk.c
@@ -87 +87 @@
struct console *console_drivers;
EXPORT_SYMBOL_GPL(console_drivers);

-#ifdef CONFIG_LOCKDEP
-static struct lockdep_map console_lock_dep_map = {
-	.name = "console_lock"
-};
-#endif
-
/*
 * This is used for debugging the mess that is the VT code by
 * keeping track if we have the console semaphore held. It's
@@ -1918 +1924 @@
		return;
	console_locked = 1;
	console_may_schedule = 1;
-	mutex_acquire(&console_lock_dep_map, 0, 0, _RET_IP_);
}
EXPORT_SYMBOL(console_lock);

@@ -1939 +1946 @@
	}
	console_locked = 1;
	console_may_schedule = 0;
-	mutex_acquire(&console_lock_dep_map, 0, 1, _RET_IP_);
	return 1;
}
EXPORT_SYMBOL(console_trylock);
@@ -2099 +2107 @@
		local_irq_restore(flags);
	}
	console_locked = 0;
-	mutex_release(&console_lock_dep_map, 1, _RET_IP_);

	/* Release the exclusive_console once it is used */
	if (unlikely(exclusive_console))
+10-3
kernel/rcutree_plugin.h
@@ -40 +40 @@
#ifdef CONFIG_RCU_NOCB_CPU
static cpumask_var_t rcu_nocb_mask; /* CPUs to have callbacks offloaded. */
static bool have_rcu_nocb_mask;	    /* Was rcu_nocb_mask allocated? */
-static bool rcu_nocb_poll;	    /* Offload kthread are to poll. */
-module_param(rcu_nocb_poll, bool, 0444);
+static bool __read_mostly rcu_nocb_poll;    /* Offload kthread are to poll. */
static char __initdata nocb_buf[NR_CPUS * 5];
#endif /* #ifdef CONFIG_RCU_NOCB_CPU */

@@ -2158 +2159 @@
}
__setup("rcu_nocbs=", rcu_nocb_setup);

+static int __init parse_rcu_nocb_poll(char *arg)
+{
+	rcu_nocb_poll = 1;
+	return 0;
+}
+early_param("rcu_nocb_poll", parse_rcu_nocb_poll);
+
/* Is the specified CPU a no-CPUs CPU? */
static bool is_nocb_cpu(int cpu)
{
@@ -2372 +2366 @@
	for (;;) {
		/* If not polling, wait for next batch of callbacks. */
		if (!rcu_nocb_poll)
-			wait_event(rdp->nocb_wq, rdp->nocb_head);
+			wait_event_interruptible(rdp->nocb_wq, rdp->nocb_head);
		list = ACCESS_ONCE(rdp->nocb_head);
		if (!list) {
			schedule_timeout_interruptible(1);
+			flush_signals(current);
			continue;
		}

kernel/sched/rt.c
@@ -566 +566 @@
static int do_balance_runtime(struct rt_rq *rt_rq)
{
	struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);
-	struct root_domain *rd = cpu_rq(smp_processor_id())->rd;
+	struct root_domain *rd = rq_of_rt_rq(rt_rq)->rd;
	int i, weight, more = 0;
	u64 rt_period;

+12-1
kernel/smp.c
@@ -33 +33 @@
	struct call_single_data	csd;
	atomic_t		refs;
	cpumask_var_t		cpumask;
+	cpumask_var_t		cpumask_ipi;
};

static DEFINE_PER_CPU_SHARED_ALIGNED(struct call_function_data, cfd_data);
@@ -57 +56 @@
		if (!zalloc_cpumask_var_node(&cfd->cpumask, GFP_KERNEL,
				cpu_to_node(cpu)))
			return notifier_from_errno(-ENOMEM);
+		if (!zalloc_cpumask_var_node(&cfd->cpumask_ipi, GFP_KERNEL,
+				cpu_to_node(cpu)))
+			return notifier_from_errno(-ENOMEM);
		break;

#ifdef CONFIG_HOTPLUG_CPU
@@ -69 +65 @@
	case CPU_DEAD:
	case CPU_DEAD_FROZEN:
		free_cpumask_var(cfd->cpumask);
+		free_cpumask_var(cfd->cpumask_ipi);
		break;
#endif
	};
@@ -531 +526 @@
		return;
	}

+	/*
+	 * After we put an entry into the list, data->cpumask
+	 * may be cleared again when another CPU sends another IPI for
+	 * a SMP function call, so data->cpumask will be zero.
+	 */
+	cpumask_copy(data->cpumask_ipi, data->cpumask);
	raw_spin_lock_irqsave(&call_function.lock, flags);
	/*
	 * Place entry at the _HEAD_ of the list, so that any cpu still
@@ -560 +549 @@
	smp_mb();

	/* Send a message to all CPUs in the map */
-	arch_send_call_function_ipi_mask(data->cpumask);
+	arch_send_call_function_ipi_mask(data->cpumask_ipi);

	/* Optionally wait for the CPUs to complete */
	if (wait)
···160160 if (is_write_migration_entry(entry))161161 pte = pte_mkwrite(pte);162162#ifdef CONFIG_HUGETLB_PAGE163163- if (PageHuge(new))163163+ if (PageHuge(new)) {164164 pte = pte_mkhuge(pte);165165+ pte = arch_make_huge_pte(pte, vma, new, 0);166166+ }165167#endif166168 flush_cache_page(vma, addr, pte_pfn(pte));167169 set_pte_at(mm, addr, ptep, pte);
+1-1
mm/mmap.c
···29432943 * vma in this mm is backed by the same anon_vma or address_space.29442944 *29452945 * We can take all the locks in random order because the VM code29462946- * taking i_mmap_mutex or anon_vma->mutex outside the mmap_sem never29462946+ * taking i_mmap_mutex or anon_vma->rwsem outside the mmap_sem never29472947 * takes more than one of them in a row. Secondly we're protected29482948 * against a concurrent mm_take_all_locks() by the mm_all_locks_mutex.29492949 *
+18-1
net/batman-adv/distributed-arp-table.c
···738738 struct arphdr *arphdr;739739 struct ethhdr *ethhdr;740740 __be32 ip_src, ip_dst;741741+ uint8_t *hw_src, *hw_dst;741742 uint16_t type = 0;742743743744 /* pull the ethernet header */···778777 ip_src = batadv_arp_ip_src(skb, hdr_size);779778 ip_dst = batadv_arp_ip_dst(skb, hdr_size);780779 if (ipv4_is_loopback(ip_src) || ipv4_is_multicast(ip_src) ||781781- ipv4_is_loopback(ip_dst) || ipv4_is_multicast(ip_dst))780780+ ipv4_is_loopback(ip_dst) || ipv4_is_multicast(ip_dst) ||781781+ ipv4_is_zeronet(ip_src) || ipv4_is_lbcast(ip_src) ||782782+ ipv4_is_zeronet(ip_dst) || ipv4_is_lbcast(ip_dst))782783 goto out;784784+785785+ hw_src = batadv_arp_hw_src(skb, hdr_size);786786+ if (is_zero_ether_addr(hw_src) || is_multicast_ether_addr(hw_src))787787+ goto out;788788+789789+ /* we don't care about the destination MAC address in ARP requests */790790+ if (arphdr->ar_op != htons(ARPOP_REQUEST)) {791791+ hw_dst = batadv_arp_hw_dst(skb, hdr_size);792792+ if (is_zero_ether_addr(hw_dst) ||793793+ is_multicast_ether_addr(hw_dst))794794+ goto out;795795+ }783796784797 type = ntohs(arphdr->ar_op);785798out:···10271012 */10281013 ret = !batadv_is_my_client(bat_priv, hw_dst);10291014out:10151015+ if (ret)10161016+ kfree_skb(skb);10301017 /* if ret == false -> packet has to be delivered to the interface */10311018 return ret;10321019}
···352352353353 case BT_CONNECTED:354354 case BT_CONFIG:355355- if (sco_pi(sk)->conn) {355355+ if (sco_pi(sk)->conn->hcon) {356356 sk->sk_state = BT_DISCONN;357357 sco_sock_set_timer(sk, SCO_DISCONN_TIMEOUT);358358 hci_conn_put(sco_pi(sk)->conn->hcon);
+13
net/bluetooth/smp.c
···859859860860 skb_pull(skb, sizeof(code));861861862862+ /*863863+ * The SMP context must be initialized for all other PDUs except864864+ * pairing and security requests. If we get any other PDU when865865+ * not initialized simply disconnect (done if this function866866+ * returns an error).867867+ */868868+ if (code != SMP_CMD_PAIRING_REQ && code != SMP_CMD_SECURITY_REQ &&869869+ !conn->smp_chan) {870870+ BT_ERR("Unexpected SMP command 0x%02x. Disconnecting.", code);871871+ kfree_skb(skb);872872+ return -ENOTSUPP;873873+ }874874+862875 switch (code) {863876 case SMP_CMD_PAIRING_REQ:864877 reason = smp_cmd_pairing_req(conn, skb);
+6-3
net/core/pktgen.c
···17811781 return -EFAULT;17821782 i += len;17831783 mutex_lock(&pktgen_thread_lock);17841784- pktgen_add_device(t, f);17841784+ ret = pktgen_add_device(t, f);17851785 mutex_unlock(&pktgen_thread_lock);17861786- ret = count;17871787- sprintf(pg_result, "OK: add_device=%s", f);17861786+ if (!ret) {17871787+ ret = count;17881788+ sprintf(pg_result, "OK: add_device=%s", f);17891789+ } else17901790+ sprintf(pg_result, "ERROR: can not add device %s", f);17881791 goto out;17891792 }17901793
···310310{311311 int cnt; /* increase in packets */312312 unsigned int delta = 0;313313+ u32 snd_cwnd = tp->snd_cwnd;314314+315315+ if (unlikely(!snd_cwnd)) {316316+ pr_err_once("snd_cwnd is zero, please report this bug.\n");317317+ snd_cwnd = 1U;318318+ }313319314320 /* RFC3465: ABC Slow start315321 * Increase only after a full MSS of bytes is acked···330324 if (sysctl_tcp_max_ssthresh > 0 && tp->snd_cwnd > sysctl_tcp_max_ssthresh)331325 cnt = sysctl_tcp_max_ssthresh >> 1; /* limited slow start */332326 else333333- cnt = tp->snd_cwnd; /* exponential increase */327327+ cnt = snd_cwnd; /* exponential increase */334328335329 /* RFC3465: ABC336330 * We MAY increase by 2 if discovered delayed ack···340334 tp->bytes_acked = 0;341335342336 tp->snd_cwnd_cnt += cnt;343343- while (tp->snd_cwnd_cnt >= tp->snd_cwnd) {344344- tp->snd_cwnd_cnt -= tp->snd_cwnd;337337+ while (tp->snd_cwnd_cnt >= snd_cwnd) {338338+ tp->snd_cwnd_cnt -= snd_cwnd;345339 delta++;346340 }347347- tp->snd_cwnd = min(tp->snd_cwnd + delta, tp->snd_cwnd_clamp);341341+ tp->snd_cwnd = min(snd_cwnd + delta, tp->snd_cwnd_clamp);348342}349343EXPORT_SYMBOL_GPL(tcp_slow_start);350344
+6-2
net/ipv4/tcp_input.c
···35043504 }35053505 } else {35063506 if (!(flag & FLAG_DATA_ACKED) && (tp->frto_counter == 1)) {35073507+ if (!tcp_packets_in_flight(tp)) {35083508+ tcp_enter_frto_loss(sk, 2, flag);35093509+ return true;35103510+ }35113511+35073512 /* Prevent sending of new data. */35083513 tp->snd_cwnd = min(tp->snd_cwnd,35093514 tcp_packets_in_flight(tp));···56545649 * the remote receives only the retransmitted (regular) SYNs: either56555650 * the original SYN-data or the corresponding SYN-ACK is lost.56565651 */56575657- syn_drop = (cookie->len <= 0 && data &&56585658- inet_csk(sk)->icsk_retransmits);56525652+ syn_drop = (cookie->len <= 0 && data && tp->total_retrans);5659565356605654 tcp_fastopen_cache_set(sk, mss, cookie, syn_drop);56615655
+9-6
net/ipv4/tcp_ipv4.c
···369369 * We do take care of PMTU discovery (RFC1191) special case :370370 * we can receive locally generated ICMP messages while socket is held.371371 */372372- if (sock_owned_by_user(sk) &&373373- type != ICMP_DEST_UNREACH &&374374- code != ICMP_FRAG_NEEDED)375375- NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS);376376-372372+ if (sock_owned_by_user(sk)) {373373+ if (!(type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED))374374+ NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS);375375+ }377376 if (sk->sk_state == TCP_CLOSE)378377 goto out;379378···496497 * errors returned from accept().497498 */498499 inet_csk_reqsk_queue_drop(sk, req, prev);500500+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);499501 goto out;500502501503 case TCP_SYN_SENT:···15011501 * clogging syn queue with openreqs with exponentially increasing15021502 * timeout.15031503 */15041504- if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1)15041504+ if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1) {15051505+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);15051506 goto drop;15071507+ }1506150815071509 req = inet_reqsk_alloc(&tcp_request_sock_ops);15081510 if (!req)···16691667drop_and_free:16701668 reqsk_free(req);16711669drop:16701670+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);16721671 return 0;16731672}16741673EXPORT_SYMBOL(tcp_v4_conn_request);
···17101710 return -EINVAL;17111711 if (get_user(v, (u32 __user *)optval))17121712 return -EFAULT;17131713+ /* "pim6reg%u" should not exceed 16 bytes (IFNAMSIZ) */17141714+ if (v != RT_TABLE_DEFAULT && v >= 100000000)17151715+ return -EINVAL;17131716 if (sk == mrt->mroute6_sk)17141717 return -EBUSY;17151718
···168168169169}170170171171+/* Lookup the tunnel socket, possibly involving the fs code if the socket is172172+ * owned by userspace. A struct sock returned from this function must be173173+ * released using l2tp_tunnel_sock_put once you're done with it.174174+ */175175+struct sock *l2tp_tunnel_sock_lookup(struct l2tp_tunnel *tunnel)176176+{177177+ int err = 0;178178+ struct socket *sock = NULL;179179+ struct sock *sk = NULL;180180+181181+ if (!tunnel)182182+ goto out;183183+184184+ if (tunnel->fd >= 0) {185185+ /* Socket is owned by userspace, who might be in the process186186+ * of closing it. Look the socket up using the fd to ensure187187+ * consistency.188188+ */189189+ sock = sockfd_lookup(tunnel->fd, &err);190190+ if (sock)191191+ sk = sock->sk;192192+ } else {193193+ /* Socket is owned by kernelspace */194194+ sk = tunnel->sock;195195+ }196196+197197+out:198198+ return sk;199199+}200200+EXPORT_SYMBOL_GPL(l2tp_tunnel_sock_lookup);201201+202202+/* Drop a reference to a tunnel socket obtained via l2tp_tunnel_sock_lookup */203203+void l2tp_tunnel_sock_put(struct sock *sk)204204+{205205+ struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);206206+ if (tunnel) {207207+ if (tunnel->fd >= 0) {208208+ /* Socket is owned by userspace */209209+ sockfd_put(sk->sk_socket);210210+ }211211+ sock_put(sk);212212+ }213213+}214214+EXPORT_SYMBOL_GPL(l2tp_tunnel_sock_put);215215+171216/* Lookup a session by id in the global session list172217 */173218static struct l2tp_session *l2tp_session_find_2(struct net *net, u32 session_id)···11681123 struct udphdr *uh;11691124 struct inet_sock *inet;11701125 __wsum csum;11711171- int old_headroom;11721172- int new_headroom;11731126 int headroom;11741127 int uhlen = (tunnel->encap == L2TP_ENCAPTYPE_UDP) ? sizeof(struct udphdr) : 0;11751128 int udp_len;···11791136 */11801137 headroom = NET_SKB_PAD + sizeof(struct iphdr) +11811138 uhlen + hdr_len;11821182- old_headroom = skb_headroom(skb);11831139 if (skb_cow_head(skb, headroom)) {11841140 kfree_skb(skb);11851141 return NET_XMIT_DROP;11861142 }1187114311881188- new_headroom = skb_headroom(skb);11891144 skb_orphan(skb);11901190- skb->truesize += new_headroom - old_headroom;11911191-11921145 /* Setup L2TP header */11931146 session->build_header(session, __skb_push(skb, hdr_len));11941147···16461607 tunnel->old_sk_destruct = sk->sk_destruct;16471608 sk->sk_destruct = &l2tp_tunnel_destruct;16481609 tunnel->sock = sk;16101610+ tunnel->fd = fd;16491611 lockdep_set_class_and_name(&sk->sk_lock.slock, &l2tp_socket_class, "l2tp_sock");1650161216511613 sk->sk_allocation = GFP_ATOMIC;···16821642 */16831643int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel)16841644{16851685- int err = 0;16861686- struct socket *sock = tunnel->sock ? tunnel->sock->sk_socket : NULL;16451645+ int err = -EBADF;16461646+ struct socket *sock = NULL;16471647+ struct sock *sk = NULL;16481648+16491649+ sk = l2tp_tunnel_sock_lookup(tunnel);16501650+ if (!sk)16511651+ goto out;16521652+16531653+ sock = sk->sk_socket;16541654+ BUG_ON(!sock);1687165516881656 /* Force the tunnel socket to close. This will eventually16891657 * cause the tunnel to be deleted via the normal socket close16901658 * mechanisms when userspace closes the tunnel socket.16911659 */16921692- if (sock != NULL) {16931693- err = inet_shutdown(sock, 2);16601660+ err = inet_shutdown(sock, 2);1694166116951695- /* If the tunnel's socket was created by the kernel,16961696- * close the socket here since the socket was not16971697- * created by userspace.16981698- */16991699- if (sock->file == NULL)17001700- err = inet_release(sock);17011701- }16621662+ /* If the tunnel's socket was created by the kernel,16631663+ * close the socket here since the socket was not16641664+ * created by userspace.16651665+ */16661666+ if (sock->file == NULL)16671667+ err = inet_release(sock);1702166816691669+ l2tp_tunnel_sock_put(sk);16701670+out:17031671 return err;17041672}17051673EXPORT_SYMBOL_GPL(l2tp_tunnel_delete);
+4-1
net/l2tp/l2tp_core.h
···188188 int (*recv_payload_hook)(struct sk_buff *skb);189189 void (*old_sk_destruct)(struct sock *);190190 struct sock *sock; /* Parent socket */191191- int fd;191191+ int fd; /* Parent fd, if tunnel socket192192+ * was created by userspace */192193193194 uint8_t priv[0]; /* private data */194195};···229228 return tunnel;230229}231230231231+extern struct sock *l2tp_tunnel_sock_lookup(struct l2tp_tunnel *tunnel);232232+extern void l2tp_tunnel_sock_put(struct sock *sk);232233extern struct l2tp_session *l2tp_session_find(struct net *net, struct l2tp_tunnel *tunnel, u32 session_id);233234extern struct l2tp_session *l2tp_session_find_nth(struct l2tp_tunnel *tunnel, int nth);234235extern struct l2tp_session *l2tp_session_find_by_ifname(struct net *net, char *ifname);
···164164 sta = sta_info_get(sdata, mac_addr);165165 else166166 sta = sta_info_get_bss(sdata, mac_addr);167167- if (!sta) {167167+ /*168168+ * The ASSOC test makes sure the driver is ready to169169+ * receive the key. When wpa_supplicant has roamed170170+ * using FT, it attempts to set the key before171171+ * association has completed; this rejects that attempt172172+ * so it will set the key again after association.173173+ *174174+ * TODO: accept the key if we have a station entry and175175+ * add it to the device after the station.176176+ */177177+ if (!sta || !test_sta_flag(sta, WLAN_STA_ASSOC)) {168178 ieee80211_key_free(sdata->local, key);169179 err = -ENOENT;170180 goto out_unlock;
···102102 ieee80211_sta_reset_conn_monitor(sdata);103103}104104105105-void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local,106106- bool offchannel_ps_enable)105105+void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local)107106{108107 struct ieee80211_sub_if_data *sdata;109108···133134134135 if (sdata->vif.type != NL80211_IFTYPE_MONITOR) {135136 netif_tx_stop_all_queues(sdata->dev);136136- if (offchannel_ps_enable &&137137- (sdata->vif.type == NL80211_IFTYPE_STATION) &&137137+ if (sdata->vif.type == NL80211_IFTYPE_STATION &&138138 sdata->u.mgd.associated)139139 ieee80211_offchannel_ps_enable(sdata);140140 }···141143 mutex_unlock(&local->iflist_mtx);142144}143145144144-void ieee80211_offchannel_return(struct ieee80211_local *local,145145- bool offchannel_ps_disable)146146+void ieee80211_offchannel_return(struct ieee80211_local *local)146147{147148 struct ieee80211_sub_if_data *sdata;148149···160163 continue;161164162165 /* Tell AP we're back */163163- if (offchannel_ps_disable &&164164- sdata->vif.type == NL80211_IFTYPE_STATION) {165165- if (sdata->u.mgd.associated)166166- ieee80211_offchannel_ps_disable(sdata);167167- }166166+ if (sdata->vif.type == NL80211_IFTYPE_STATION &&167167+ sdata->u.mgd.associated)168168+ ieee80211_offchannel_ps_disable(sdata);168169169170 if (sdata->vif.type != NL80211_IFTYPE_MONITOR) {170171 /*···380385 local->tmp_channel = NULL;381386 ieee80211_hw_config(local, 0);382387383383- ieee80211_offchannel_return(local, true);388388+ ieee80211_offchannel_return(local);384389 }385390386391 ieee80211_recalc_idle(local);
+5-10
net/mac80211/scan.c
···292292 if (!was_hw_scan) {293293 ieee80211_configure_filter(local);294294 drv_sw_scan_complete(local);295295- ieee80211_offchannel_return(local, true);295295+ ieee80211_offchannel_return(local);296296 }297297298298 ieee80211_recalc_idle(local);···341341 local->next_scan_state = SCAN_DECISION;342342 local->scan_channel_idx = 0;343343344344- ieee80211_offchannel_stop_vifs(local, true);344344+ ieee80211_offchannel_stop_vifs(local);345345346346 ieee80211_configure_filter(local);347347···678678 local->scan_channel = NULL;679679 ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_CHANNEL);680680681681- /*682682- * Re-enable vifs and beaconing. Leave PS683683- * in off-channel state..will put that back684684- * on-channel at the end of scanning.685685- */686686- ieee80211_offchannel_return(local, false);681681+ /* disable PS */682682+ ieee80211_offchannel_return(local);687683688684 *next_delay = HZ / 5;689685 /* afterwards, resume scan & go to next channel */···689693static void ieee80211_scan_state_resume(struct ieee80211_local *local,690694 unsigned long *next_delay)691695{692692- /* PS already is in off-channel mode */693693- ieee80211_offchannel_stop_vifs(local, false);696696+ ieee80211_offchannel_stop_vifs(local);694697695698 if (local->ops->flush) {696699 drv_flush(local, false);
+6-3
net/mac80211/tx.c
···16731673 chanctx_conf =16741674 rcu_dereference(tmp_sdata->vif.chanctx_conf);16751675 }16761676- if (!chanctx_conf)16771677- goto fail_rcu;1678167616791679- chan = chanctx_conf->def.chan;16771677+ if (chanctx_conf)16781678+ chan = chanctx_conf->def.chan;16791679+ else if (!local->use_chanctx)16801680+ chan = local->_oper_channel;16811681+ else16821682+ goto fail_rcu;1680168316811684 /*16821685 * Frame injection is not allowed if beaconing is not allowed
+5-4
net/netfilter/nf_conntrack_core.c
···13761376 synchronize_net();13771377 nf_conntrack_proto_fini(net);13781378 nf_conntrack_cleanup_net(net);13791379+}1379138013801380- if (net_eq(net, &init_net)) {13811381- RCU_INIT_POINTER(nf_ct_destroy, NULL);13821382- nf_conntrack_cleanup_init_net();13831383- }13811381+void nf_conntrack_cleanup_end(void)13821382+{13831383+ RCU_INIT_POINTER(nf_ct_destroy, NULL);13841384+ nf_conntrack_cleanup_init_net();13841385}1385138613861387void *nf_ct_alloc_hashtable(unsigned int *sizep, int nulls)
···345345}346346EXPORT_SYMBOL_GPL(xt_find_revision);347347348348-static char *textify_hooks(char *buf, size_t size, unsigned int mask)348348+static char *349349+textify_hooks(char *buf, size_t size, unsigned int mask, uint8_t nfproto)349350{350350- static const char *const names[] = {351351+ static const char *const inetbr_names[] = {351352 "PREROUTING", "INPUT", "FORWARD",352353 "OUTPUT", "POSTROUTING", "BROUTING",353354 };354354- unsigned int i;355355+ static const char *const arp_names[] = {356356+ "INPUT", "FORWARD", "OUTPUT",357357+ };358358+ const char *const *names;359359+ unsigned int i, max;355360 char *p = buf;356361 bool np = false;357362 int res;358363364364+ names = (nfproto == NFPROTO_ARP) ? arp_names : inetbr_names;365365+ max = (nfproto == NFPROTO_ARP) ? ARRAY_SIZE(arp_names) :366366+ ARRAY_SIZE(inetbr_names);359367 *p = '\0';360360- for (i = 0; i < ARRAY_SIZE(names); ++i) {368368+ for (i = 0; i < max; ++i) {361369 if (!(mask & (1 << i)))362370 continue;363371 res = snprintf(p, size, "%s%s", np ? "/" : "", names[i]);···410402 pr_err("%s_tables: %s match: used from hooks %s, but only "411403 "valid from %s\n",412404 xt_prefix[par->family], par->match->name,413413- textify_hooks(used, sizeof(used), par->hook_mask),414414- textify_hooks(allow, sizeof(allow), par->match->hooks));405405+ textify_hooks(used, sizeof(used), par->hook_mask,406406+ par->family),407407+ textify_hooks(allow, sizeof(allow), par->match->hooks,408408+ par->family));415409 return -EINVAL;416410 }417411 if (par->match->proto && (par->match->proto != proto || inv_proto)) {···585575 pr_err("%s_tables: %s target: used from hooks %s, but only "586576 "usable from %s\n",587577 xt_prefix[par->family], par->target->name,588588- textify_hooks(used, sizeof(used), par->hook_mask),589589- textify_hooks(allow, sizeof(allow), par->target->hooks));578578+ textify_hooks(used, sizeof(used), par->hook_mask,579579+ par->family),580580+ textify_hooks(allow, sizeof(allow), par->target->hooks,581581+ par->family));590582 return -EINVAL;591583 }592584 if (par->target->proto && (par->target->proto != proto || inv_proto)) {
+2-2
net/netfilter/xt_CT.c
···109109 struct xt_ct_target_info *info = par->targinfo;110110 struct nf_conntrack_tuple t;111111 struct nf_conn *ct;112112- int ret;112112+ int ret = -EOPNOTSUPP;113113114114 if (info->flags & ~XT_CT_NOTRACK)115115 return -EINVAL;···247247 struct xt_ct_target_info_v1 *info = par->targinfo;248248 struct nf_conntrack_tuple t;249249 struct nf_conn *ct;250250- int ret;250250+ int ret = -EOPNOTSUPP;251251252252 if (info->flags & ~XT_CT_NOTRACK)253253 return -EINVAL;
+9-7
net/openvswitch/vport-netdev.c
···3535/* Must be called with rcu_read_lock. */3636static void netdev_port_receive(struct vport *vport, struct sk_buff *skb)3737{3838- if (unlikely(!vport)) {3939- kfree_skb(skb);4040- return;4141- }3838+ if (unlikely(!vport))3939+ goto error;4040+4141+ if (unlikely(skb_warn_if_lro(skb)))4242+ goto error;42434344 /* Make our own copy of the packet. Otherwise we will mangle the4445 * packet for anyone who came before us (e.g. tcpdump via AF_PACKET).···51505251 skb_push(skb, ETH_HLEN);5352 ovs_vport_receive(vport, skb);5353+ return;5454+5555+error:5656+ kfree_skb(skb);5457}55585659/* Called with rcu_read_lock and bottom-halves disabled. */···173168 packet_length(skb), mtu);174169 goto error;175170 }176176-177177- if (unlikely(skb_warn_if_lro(skb)))178178- goto error;179171180172 skb->dev = netdev_vport->dev;181173 len = skb->len;
+6-4
net/packet/af_packet.c
···2361236123622362 packet_flush_mclist(sk);2363236323642364- memset(&req_u, 0, sizeof(req_u));23652365-23662366- if (po->rx_ring.pg_vec)23642364+ if (po->rx_ring.pg_vec) {23652365+ memset(&req_u, 0, sizeof(req_u));23672366 packet_set_ring(sk, &req_u, 1, 0);23672367+ }2368236823692369- if (po->tx_ring.pg_vec)23692369+ if (po->tx_ring.pg_vec) {23702370+ memset(&req_u, 0, sizeof(req_u));23702371 packet_set_ring(sk, &req_u, 1, 1);23722372+ }2371237323722374 fanout_release(sk);23732375
+6-6
net/sched/sch_netem.c
···438438 if (q->rate) {439439 struct sk_buff_head *list = &sch->q;440440441441- delay += packet_len_2_sched_time(skb->len, q);442442-443441 if (!skb_queue_empty(list)) {444442 /*445445- * Last packet in queue is reference point (now).446446- * First packet in queue is already in flight,447447- * calculate this time bonus and substract443443+ * Last packet in queue is reference point (now),444444+ * calculate this time bonus and subtract448445 * from delay.449446 */450450- delay -= now - netem_skb_cb(skb_peek(list))->time_to_send;447447+ delay -= netem_skb_cb(skb_peek_tail(list))->time_to_send - now;448448+ delay = max_t(psched_tdiff_t, 0, delay);451449 now = netem_skb_cb(skb_peek_tail(list))->time_to_send;452450 }451451+452452+ delay += packet_len_2_sched_time(skb->len, q);453453 }454454455455 cb->time_to_send = now + delay;
+1-1
net/sctp/auth.c
···7171 return;72727373 if (atomic_dec_and_test(&key->refcnt)) {7474- kfree(key);7474+ kzfree(key);7575 SCTP_DBG_OBJCNT_DEC(keys);7676 }7777}
+5
net/sctp/endpointola.c
···249249/* Final destructor for endpoint. */250250static void sctp_endpoint_destroy(struct sctp_endpoint *ep)251251{252252+ int i;253253+252254 SCTP_ASSERT(ep->base.dead, "Endpoint is not dead", return);253255254256 /* Free up the HMAC transform. */···272270 /* Cleanup. */273271 sctp_inq_free(&ep->base.inqueue);274272 sctp_bind_addr_free(&ep->base.bind_addr);273273+274274+ for (i = 0; i < SCTP_HOW_MANY_SECRETS; ++i)275275+ memset(&ep->secret_key[i], 0, SCTP_SECRET_SIZE);275276276277 /* Remove and free the port */277278 if (sctp_sk(ep->base.sk)->bind_hash)
+8-4
net/sctp/outqueue.c
···224224225225/* Free the outqueue structure and any related pending chunks.226226 */227227-void sctp_outq_teardown(struct sctp_outq *q)227227+static void __sctp_outq_teardown(struct sctp_outq *q)228228{229229 struct sctp_transport *transport;230230 struct list_head *lchunk, *temp;···277277 sctp_chunk_free(chunk);278278 }279279280280- q->error = 0;281281-282280 /* Throw away any leftover control chunks. */283281 list_for_each_entry_safe(chunk, tmp, &q->control_chunk_list, list) {284282 list_del_init(&chunk->list);···284286 }285287}286288289289+void sctp_outq_teardown(struct sctp_outq *q)290290+{291291+ __sctp_outq_teardown(q);292292+ sctp_outq_init(q->asoc, q);293293+}294294+287295/* Free the outqueue structure and any related pending chunks. */288296void sctp_outq_free(struct sctp_outq *q)289297{290298 /* Throw away leftover chunks. */291291- sctp_outq_teardown(q);299299+ __sctp_outq_teardown(q);292300293301 /* If we were kmalloc()'d, free the memory. */294302 if (q->malloced)
+3-1
net/sctp/sm_statefuns.c
···1779177917801780 /* Update the content of current association. */17811781 sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc));17821782- sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl));17831782 sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev));17831783+ sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE,17841784+ SCTP_STATE(SCTP_STATE_ESTABLISHED));17851785+ sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl));17841786 return SCTP_DISPOSITION_CONSUME;1785178717861788nomem_ev:
···19192020# Try to match the kernel target.2121ifndef CONFIG_64BIT2222+ifndef CROSS_COMPILE22232324# s390 has -m31 flag to build 31 bit binaries2425ifndef CONFIG_S390···3534HOSTLOADLIBES_bpf-direct += $(MFLAG)3635HOSTLOADLIBES_bpf-fancy += $(MFLAG)3736HOSTLOADLIBES_dropper += $(MFLAG)3737+endif3838endif39394040# Tell kbuild to always build the programs