···
 D: Co-author of German book ``Linux-Kernel-Programmierung''
 D: Co-founder of Berlin Linux User Group
 
+N: Andrew Victor
+E: linux@maxim.org.za
+W: http://maxim.org.za/at91_26.html
+D: First maintainer of Atmel ARM-based SoC, aka AT91
+D: Introduced support for at91rm9200, the first chip of AT91 family
+S: South Africa
+
 N: Riku Voipio
 E: riku.voipio@iki.fi
 D: Author of PCA9532 LED and Fintek f75375s hwmon driver
···
 Required properties:
 - compatible : Should be "ti,omap3-l3-smx" for OMAP3 family
 	       Should be "ti,omap4-l3-noc" for OMAP4 family
+	       Should be "ti,omap5-l3-noc" for OMAP5 family
 	       Should be "ti,dra7-l3-noc" for DRA7 family
 	       Should be "ti,am4372-l3-noc" for AM43 family
 - reg: Contains L3 register address range for each noc domain.
···
 TTY_OTHER_CLOSED	Device is a pty and the other side has closed.
 
+TTY_OTHER_DONE		Device is a pty and the other side has closed and
+			all pending input processing has been completed.
+
 TTY_NO_WRITE_SPLIT	Prevent driver from splitting up writes into
 			smaller chunks.
MAINTAINERS (+47 -21)
···
 F:	arch/arm/mach-alpine/
 
 ARM/ATMEL AT91RM9200 AND AT91SAM ARM ARCHITECTURES
-M:	Andrew Victor <linux@maxim.org.za>
 M:	Nicolas Ferre <nicolas.ferre@atmel.com>
+M:	Alexandre Belloni <alexandre.belloni@free-electrons.com>
 M:	Jean-Christophe Plagniol-Villard <plagnioj@jcrosoft.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-W:	http://maxim.org.za/at91_26.html
 W:	http://www.linux4sam.org
 S:	Supported
 F:	arch/arm/mach-at91/
···
 ARM/CORTINA SYSTEMS GEMINI ARM ARCHITECTURE
 M:	Hans Ulli Kroll <ulli.kroll@googlemail.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-T:	git git://git.berlios.de/gemini-board
+T:	git git://github.com/ulli-kroll/linux.git
 S:	Maintained
 F:	arch/arm/mach-gemini/
···
 F:	drivers/clocksource/timer-prima2.c
 F:	drivers/clocksource/timer-atlas7.c
 N:	[^a-z]sirf
+
+ARM/CONEXANT DIGICOLOR MACHINE SUPPORT
+M:	Baruch Siach <baruch@tkos.co.il>
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S:	Maintained
+N:	digicolor
 
 ARM/EBSA110 MACHINE SUPPORT
 M:	Russell King <linux@arm.linux.org.uk>
···
 M:	Philipp Zabel <philipp.zabel@gmail.com>
 S:	Maintained
 
-ARM/Marvell Armada 370 and Armada XP SOC support
+ARM/Marvell Kirkwood and Armada 370, 375, 38x, XP SOC support
 M:	Jason Cooper <jason@lakedaemon.net>
 M:	Andrew Lunn <andrew@lunn.ch>
 M:	Gregory Clement <gregory.clement@free-electrons.com>
···
 S:	Maintained
 F:	arch/arm/mach-mvebu/
 F:	drivers/rtc/rtc-armada38x.c
+F:	arch/arm/boot/dts/armada*
+F:	arch/arm/boot/dts/kirkwood*
+
 
 ARM/Marvell Berlin SoC support
 M:	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	arch/arm/mach-berlin/
+F:	arch/arm/boot/dts/berlin*
+
 
 ARM/Marvell Dove/MV78xx0/Orion SOC support
 M:	Jason Cooper <jason@lakedaemon.net>
···
 F:	arch/arm/mach-mv78xx0/
 F:	arch/arm/mach-orion5x/
 F:	arch/arm/plat-orion/
+F:	arch/arm/boot/dts/dove*
+F:	arch/arm/boot/dts/orion5x*
+
 
 ARM/Orion SoC/Technologic Systems TS-78xx platform support
 M:	Alexander Clouter <alex@digriz.org.uk>
···
 
 ARM/SAMSUNG EXYNOS ARM ARCHITECTURES
 M:	Kukjin Kim <kgene@kernel.org>
+M:	Krzysztof Kozlowski <k.kozlowski@samsung.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
 S:	Maintained
···
 M:	Dinh Nguyen <dinguyen@opensource.altera.com>
 S:	Maintained
 F:	arch/arm/mach-socfpga/
+F:	arch/arm/boot/dts/socfpga*
+F:	arch/arm/configs/socfpga_defconfig
 W:	http://www.rocketboards.org
-T:	git://git.rocketboards.org/linux-socfpga.git
-T:	git://git.rocketboards.org/linux-socfpga-next.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/dinguyen/linux.git
 
 ARM/SOCFPGA CLOCK FRAMEWORK SUPPORT
 M:	Dinh Nguyen <dinguyen@opensource.altera.com>
···
 F:	drivers/net/wireless/b43legacy/
 
 BACKLIGHT CLASS/SUBSYSTEM
-M:	Jingoo Han <jg1.han@samsung.com>
+M:	Jingoo Han <jingoohan1@gmail.com>
 M:	Lee Jones <lee.jones@linaro.org>
 S:	Maintained
 F:	drivers/video/backlight/
···
 F:	drivers/net/ethernet/broadcom/bnx2x/
 
 BROADCOM BCM281XX/BCM11XXX/BCM216XX ARM ARCHITECTURE
-M:	Christian Daudt <bcm@fixthebug.org>
 M:	Florian Fainelli <f.fainelli@gmail.com>
+M:	Ray Jui <rjui@broadcom.com>
+M:	Scott Branden <sbranden@broadcom.com>
 L:	bcm-kernel-feedback-list@broadcom.com
 T:	git git://github.com/broadcom/mach-bcm
 S:	Maintained
···
 F:	drivers/usb/gadget/udc/bcm63xx_udc.*
 
 BROADCOM BCM7XXX ARM ARCHITECTURE
-M:	Marc Carino <marc.ceeeee@gmail.com>
 M:	Brian Norris <computersforpeace@gmail.com>
 M:	Gregory Fong <gregory.0xf0@gmail.com>
 M:	Florian Fainelli <f.fainelli@gmail.com>
···
 F:	Documentation/extcon/
 
 EXYNOS DP DRIVER
-M:	Jingoo Han <jg1.han@samsung.com>
+M:	Jingoo Han <jingoohan1@gmail.com>
 L:	dri-devel@lists.freedesktop.org
 S:	Maintained
 F:	drivers/gpu/drm/exynos/exynos_dp*
···
 F:	include/uapi/linux/gfs2_ondisk.h
 
 GIGASET ISDN DRIVERS
-M:	Hansjoerg Lipp <hjlipp@web.de>
-M:	Tilman Schmidt <tilman@imap.cc>
+M:	Paul Bolle <pebolle@tiscali.nl>
 L:	gigaset307x-common@lists.sourceforge.net
 W:	http://gigaset307x.sourceforge.net/
-S:	Maintained
+S:	Odd Fixes
 F:	Documentation/isdn/README.gigaset
 F:	drivers/isdn/gigaset/
 F:	include/uapi/linux/gigaset_dev.h
···
 L:	linux-rdma@vger.kernel.org
 W:	http://www.openfabrics.org/
 Q:	http://patchwork.kernel.org/project/linux-rdma/list/
-T:	git git://github.com/dledford/linux.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma.git
 S:	Supported
 F:	Documentation/infiniband/
 F:	drivers/infiniband/
···
 S:	Maintained
 F:	arch/nios2/
 
+NOKIA N900 POWER SUPPLY DRIVERS
+M:	Pali Rohár <pali.rohar@gmail.com>
+S:	Maintained
+F:	include/linux/power/bq2415x_charger.h
+F:	include/linux/power/bq27x00_battery.h
+F:	include/linux/power/isp1704_charger.h
+F:	drivers/power/bq2415x_charger.c
+F:	drivers/power/bq27x00_battery.c
+F:	drivers/power/isp1704_charger.c
+F:	drivers/power/rx51_battery.c
+
 NTB DRIVER
 M:	Jon Mason <jdmason@kudzu.us>
 M:	Dave Jiang <dave.jiang@intel.com>
···
 F:	drivers/pci/host/*rcar*
 
 PCI DRIVER FOR SAMSUNG EXYNOS
-M:	Jingoo Han <jg1.han@samsung.com>
+M:	Jingoo Han <jingoohan1@gmail.com>
 L:	linux-pci@vger.kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
···
 F:	drivers/pci/host/pci-exynos.c
 
 PCI DRIVER FOR SYNOPSIS DESIGNWARE
-M:	Jingoo Han <jg1.han@samsung.com>
+M:	Jingoo Han <jingoohan1@gmail.com>
 L:	linux-pci@vger.kernel.org
 S:	Maintained
 F:	drivers/pci/host/*designware*
···
 F:	sound/soc/samsung/
 
 SAMSUNG FRAMEBUFFER DRIVER
-M:	Jingoo Han <jg1.han@samsung.com>
+M:	Jingoo Han <jingoohan1@gmail.com>
 L:	linux-fbdev@vger.kernel.org
 S:	Maintained
 F:	drivers/video/fbdev/s3c-fb.c
···
 S:	Supported
 F:	drivers/scsi/be2iscsi/
 
-SERVER ENGINES 10Gbps NIC - BladeEngine 2 DRIVER
-M:	Sathya Perla <sathya.perla@emulex.com>
-M:	Subbu Seetharaman <subbu.seetharaman@emulex.com>
-M:	Ajit Khaparde <ajit.khaparde@emulex.com>
+Emulex 10Gbps NIC BE2, BE3-R, Lancer, Skyhawk-R DRIVER
+M:	Sathya Perla <sathya.perla@avagotech.com>
+M:	Ajit Khaparde <ajit.khaparde@avagotech.com>
+M:	Padmanabh Ratnakar <padmanabh.ratnakar@avagotech.com>
+M:	Sriharsha Basavapatna <sriharsha.basavapatna@avagotech.com>
 L:	netdev@vger.kernel.org
 W:	http://www.emulex.com
 S:	Supported
···
 
 source "lib/Kconfig.debug"
 
-config EARLY_PRINTK
-	bool "Early printk" if EMBEDDED
-	default y
-	help
-	  Write kernel log output directly into the VGA buffer or to a serial
-	  port.
-
-	  This is useful for kernel debugging when your machine crashes very
-	  early before the console code is initialized. For normal operation
-	  it is not recommended because it looks ugly and doesn't cooperate
-	  with klogd/syslogd or the X server. You should normally N here,
-	  unless you want to debug such a crash.
-
 config 16KSTACKS
 	bool "Use 16Kb for kernel stacks instead of 8Kb"
 	help
arch/arc/include/asm/atomic.h (+1 -1)
···
 	atomic_ops_unlock(flags);					\
 }
 
-#define ATOMIC_OP_RETURN(op, c_op)					\
+#define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
 static inline int atomic_##op##_return(int i, atomic_t *v)		\
 {									\
 	unsigned long flags;						\
arch/arc/mm/cache_arc700.c (+2 -2)
···
  * Machine specific helpers for Entire D-Cache or Per Line ops
  */
 
-static unsigned int __before_dc_op(const int op)
+static inline unsigned int __before_dc_op(const int op)
 {
 	unsigned int reg = reg;
···
 	return reg;
 }
 
-static void __after_dc_op(const int op, unsigned int reg)
+static inline void __after_dc_op(const int op, unsigned int reg)
 {
 	if (op & OP_FLUSH)	/* flush / flush-n-inv both wait */
 		while (read_aux_reg(ARC_REG_DC_CTRL) & DC_CTRL_FLUSH_STATUS);
···
 	};
 
 	internal-regs {
+		rtc@10300 {
+			/* No crystal connected to the internal RTC */
+			status = "disabled";
+		};
 		serial@12000 {
 			status = "okay";
 		};
arch/arm/boot/dts/dove-cubox.dts (+1)
···
 
 	/* connect xtal input to 25MHz reference */
 	clocks = <&ref25>;
+	clock-names = "xtal";
 
 	/* connect xtal input as source of pll0 and pll1 */
 	silabs,pll-source = <0 0>, <1 0>;
···
 CONFIG_DMA_OMAP=y
 # CONFIG_IOMMU_SUPPORT is not set
 CONFIG_EXTCON=m
-CONFIG_EXTCON_GPIO=m
+CONFIG_EXTCON_USB_GPIO=m
 CONFIG_EXTCON_PALMAS=m
 CONFIG_TI_EMIF=m
 CONFIG_PWM=y
···
 }
 
 /*
+ * Set or clear the USE_DELAYED_RESET_ASSERTION option. Used by smp code
+ * and suspend.
+ *
+ * This is necessary only on Exynos4 SoCs. When system is running
+ * USE_DELAYED_RESET_ASSERTION should be set so the ARM CLK clock down
+ * feature could properly detect global idle state when secondary CPU is
+ * powered down.
+ *
+ * However this should not be set when such system is going into suspend.
+ */
+void exynos_set_delayed_reset_assertion(bool enable)
+{
+	if (of_machine_is_compatible("samsung,exynos4")) {
+		unsigned int tmp, core_id;
+
+		for (core_id = 0; core_id < num_possible_cpus(); core_id++) {
+			tmp = pmu_raw_readl(EXYNOS_ARM_CORE_OPTION(core_id));
+			if (enable)
+				tmp |= S5P_USE_DELAYED_RESET_ASSERTION;
+			else
+				tmp &= ~(S5P_USE_DELAYED_RESET_ASSERTION);
+			pmu_raw_writel(tmp, EXYNOS_ARM_CORE_OPTION(core_id));
+		}
+	}
+}
+
+/*
  * Apparently, these SoCs are not able to wake-up from suspend using
  * the PMU. Too bad. Should they suddenly become capable of such a
  * feat, the matches below should be moved to suspend.c.
arch/arm/mach-exynos/platsmp.c (+2 -37)
···
 
 extern void exynos4_secondary_startup(void);
 
-/*
- * Set or clear the USE_DELAYED_RESET_ASSERTION option, set on Exynos4 SoCs
- * during hot-(un)plugging CPUx.
- *
- * The feature can be cleared safely during first boot of secondary CPU.
- *
- * Exynos4 SoCs require setting USE_DELAYED_RESET_ASSERTION during powering
- * down a CPU so the CPU idle clock down feature could properly detect global
- * idle state when CPUx is off.
- */
-static void exynos_set_delayed_reset_assertion(u32 core_id, bool enable)
-{
-	if (soc_is_exynos4()) {
-		unsigned int tmp;
-
-		tmp = pmu_raw_readl(EXYNOS_ARM_CORE_OPTION(core_id));
-		if (enable)
-			tmp |= S5P_USE_DELAYED_RESET_ASSERTION;
-		else
-			tmp &= ~(S5P_USE_DELAYED_RESET_ASSERTION);
-		pmu_raw_writel(tmp, EXYNOS_ARM_CORE_OPTION(core_id));
-	}
-}
-
 #ifdef CONFIG_HOTPLUG_CPU
 static inline void cpu_leave_lowpower(u32 core_id)
 {
···
 	  : "=&r" (v)
 	  : "Ir" (CR_C), "Ir" (0x40)
 	  : "cc");
-
-	exynos_set_delayed_reset_assertion(core_id, false);
 }
 
 static inline void platform_do_lowpower(unsigned int cpu, int *spurious)
···
 
 		/* Turn the CPU off on next WFI instruction. */
 		exynos_cpu_power_down(core_id);
-
-		/*
-		 * Exynos4 SoCs require setting
-		 * USE_DELAYED_RESET_ASSERTION so the CPU idle
-		 * clock down feature could properly detect
-		 * global idle state when CPUx is off.
-		 */
-		exynos_set_delayed_reset_assertion(core_id, true);
 
 		wfi();
 
···
 		udelay(10);
 	}
 
-	/* No harm if this is called during first boot of secondary CPU */
-	exynos_set_delayed_reset_assertion(core_id, false);
-
 	/*
 	 * now the secondary core is starting up let it run its
 	 * calibrations, then wait for it to finish
···
 	int i;
 
 	exynos_sysram_init();
+
+	exynos_set_delayed_reset_assertion(true);
 
 	if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9)
 		scu_enable(scu_base_addr());
arch/arm/mach-exynos/pm_domains.c (+2 -2)
···
 		args.np = np;
 		args.args_count = 0;
 		child_domain = of_genpd_get_from_provider(&args);
-		if (!child_domain)
+		if (IS_ERR(child_domain))
 			continue;
 
 		if (of_parse_phandle_with_args(np, "power-domains",
···
 			continue;
 
 		parent_domain = of_genpd_get_from_provider(&args);
-		if (!parent_domain)
+		if (IS_ERR(parent_domain))
 			continue;
 
 		if (pm_genpd_add_subdomain(parent_domain, child_domain))
arch/arm/mach-exynos/suspend.c (+6 -1)
···
 
 static void exynos_pm_prepare(void)
 {
+	exynos_set_delayed_reset_assertion(false);
+
 	/* Set wake-up mask registers */
 	exynos_pm_set_wakeup_mask();
 
···
 
 	/* Clear SLEEP mode set in INFORM1 */
 	pmu_raw_writel(0x0, S5P_INFORM1);
+	exynos_set_delayed_reset_assertion(true);
 }
 
 static void exynos3250_pm_resume(void)
···
 		return;
 	}
 
-	if (WARN_ON(!of_find_property(np, "interrupt-controller", NULL)))
+	if (WARN_ON(!of_find_property(np, "interrupt-controller", NULL))) {
 		pr_warn("Outdated DT detected, suspend/resume will NOT work\n");
+		return;
+	}
 
 	pm_data = (const struct exynos_pm_data *) match->data;
···
 /*
- * Copyright (C) 2010 Pengutronix, Wolfram Sang <w.sang@pengutronix.de>
+ * Copyright (C) 2010 Pengutronix, Wolfram Sang <kernel@pengutronix.de>
  *
  * This program is free software; you can redistribute it and/or modify it under
  * the terms of the GNU General Public License version 2 as published by the
arch/arm/mach-omap2/omap_hwmod.c (+14 -54)
···
  */
 #define LINKS_PER_OCP_IF		2
 
+/*
+ * Address offset (in bytes) between the reset control and the reset
+ * status registers: 4 bytes on OMAP4
+ */
+#define OMAP4_RST_CTRL_ST_OFFSET	4
+
 /**
  * struct omap_hwmod_soc_ops - fn ptrs for some SoC-specific operations
  * @enable_module: function to enable a module (via MODULEMODE)
···
 	if (ohri->st_shift)
 		pr_err("omap_hwmod: %s: %s: hwmod data error: OMAP4 does not support st_shift\n",
 		       oh->name, ohri->name);
-	return omap_prm_deassert_hardreset(ohri->rst_shift, 0,
+	return omap_prm_deassert_hardreset(ohri->rst_shift, ohri->rst_shift,
 					   oh->clkdm->pwrdm.ptr->prcm_partition,
 					   oh->clkdm->pwrdm.ptr->prcm_offs,
-					   oh->prcm.omap4.rstctrl_offs, 0);
+					   oh->prcm.omap4.rstctrl_offs,
+					   oh->prcm.omap4.rstctrl_offs +
+					   OMAP4_RST_CTRL_ST_OFFSET);
 }
 
 /**
···
 }
 
 /**
- * _am33xx_assert_hardreset - call AM33XX PRM hardreset fn with hwmod args
- * @oh: struct omap_hwmod * to assert hardreset
- * @ohri: hardreset line data
- *
- * Call am33xx_prminst_assert_hardreset() with parameters extracted
- * from the hwmod @oh and the hardreset line data @ohri.  Only
- * intended for use as an soc_ops function pointer.  Passes along the
- * return value from am33xx_prminst_assert_hardreset().  XXX This
- * function is scheduled for removal when the PRM code is moved into
- * drivers/.
- */
-static int _am33xx_assert_hardreset(struct omap_hwmod *oh,
-				    struct omap_hwmod_rst_info *ohri)
-
-{
-	return omap_prm_assert_hardreset(ohri->rst_shift, 0,
-					 oh->clkdm->pwrdm.ptr->prcm_offs,
-					 oh->prcm.omap4.rstctrl_offs);
-}
-
-/**
  * _am33xx_deassert_hardreset - call AM33XX PRM hardreset fn with hwmod args
  * @oh: struct omap_hwmod * to deassert hardreset
  * @ohri: hardreset line data
···
 static int _am33xx_deassert_hardreset(struct omap_hwmod *oh,
 				      struct omap_hwmod_rst_info *ohri)
 {
-	return omap_prm_deassert_hardreset(ohri->rst_shift, ohri->st_shift, 0,
+	return omap_prm_deassert_hardreset(ohri->rst_shift, ohri->st_shift,
+					   oh->clkdm->pwrdm.ptr->prcm_partition,
 					   oh->clkdm->pwrdm.ptr->prcm_offs,
 					   oh->prcm.omap4.rstctrl_offs,
 					   oh->prcm.omap4.rstst_offs);
-}
-
-/**
- * _am33xx_is_hardreset_asserted - call AM33XX PRM hardreset fn with hwmod args
- * @oh: struct omap_hwmod * to test hardreset
- * @ohri: hardreset line data
- *
- * Call am33xx_prminst_is_hardreset_asserted() with parameters
- * extracted from the hwmod @oh and the hardreset line data @ohri.
- * Only intended for use as an soc_ops function pointer.  Passes along
- * the return value from am33xx_prminst_is_hardreset_asserted().  XXX
- * This function is scheduled for removal when the PRM code is moved
- * into drivers/.
- */
-static int _am33xx_is_hardreset_asserted(struct omap_hwmod *oh,
-					struct omap_hwmod_rst_info *ohri)
-{
-	return omap_prm_is_hardreset_asserted(ohri->rst_shift, 0,
-					      oh->clkdm->pwrdm.ptr->prcm_offs,
-					      oh->prcm.omap4.rstctrl_offs);
 }
 
 /* Public functions */
···
 		soc_ops.init_clkdm = _init_clkdm;
 		soc_ops.update_context_lost = _omap4_update_context_lost;
 		soc_ops.get_context_lost = _omap4_get_context_lost;
-	} else if (soc_is_am43xx()) {
+	} else if (cpu_is_ti816x() || soc_is_am33xx() || soc_is_am43xx()) {
 		soc_ops.enable_module = _omap4_enable_module;
 		soc_ops.disable_module = _omap4_disable_module;
 		soc_ops.wait_target_ready = _omap4_wait_target_ready;
 		soc_ops.assert_hardreset = _omap4_assert_hardreset;
-		soc_ops.deassert_hardreset = _omap4_deassert_hardreset;
-		soc_ops.is_hardreset_asserted = _omap4_is_hardreset_asserted;
-		soc_ops.init_clkdm = _init_clkdm;
-	} else if (cpu_is_ti816x() || soc_is_am33xx()) {
-		soc_ops.enable_module = _omap4_enable_module;
-		soc_ops.disable_module = _omap4_disable_module;
-		soc_ops.wait_target_ready = _omap4_wait_target_ready;
-		soc_ops.assert_hardreset = _am33xx_assert_hardreset;
 		soc_ops.deassert_hardreset = _am33xx_deassert_hardreset;
-		soc_ops.is_hardreset_asserted = _am33xx_is_hardreset_asserted;
+		soc_ops.is_hardreset_asserted = _omap4_is_hardreset_asserted;
 		soc_ops.init_clkdm = _init_clkdm;
 	} else {
 		WARN(1, "omap_hwmod: unknown SoC type\n");
···
 	return v;
 }
 
-/*
- * Address offset (in bytes) between the reset control and the reset
- * status registers: 4 bytes on OMAP4
- */
-#define OMAP4_RST_CTRL_ST_OFFSET	4
-
 /**
  * omap4_prminst_is_hardreset_asserted - read the HW reset line state of
  * submodules contained in the hwmod module
···
  * omap4_prminst_deassert_hardreset - deassert a submodule hardreset line and
  * wait
  * @shift: register bit shift corresponding to the reset line to deassert
- * @st_shift: status bit offset, not used for OMAP4+
+ * @st_shift: status bit offset corresponding to the reset line
  * @part: PRM partition
  * @inst: PRM instance offset
  * @rstctrl_offs: reset register offset
- * @st_offs: reset status register offset, not used for OMAP4+
+ * @rstst_offs: reset status register offset
  *
  * Some IPs like dsp, ipu or iva contain processors that require an HW
  * reset line to be asserted / deasserted in order to fully enable the
···
  * of reset, or -EBUSY if the submodule did not exit reset promptly.
  */
 int omap4_prminst_deassert_hardreset(u8 shift, u8 st_shift, u8 part, s16 inst,
-				     u16 rstctrl_offs, u16 st_offs)
+				     u16 rstctrl_offs, u16 rstst_offs)
 {
 	int c;
 	u32 mask = 1 << shift;
-	u16 rstst_offs = rstctrl_offs + OMAP4_RST_CTRL_ST_OFFSET;
+	u32 st_mask = 1 << st_shift;
 
 	/* Check the current status to avoid de-asserting the line twice */
 	if (omap4_prminst_is_hardreset_asserted(shift, part, inst,
···
 		return -EEXIST;
 
 	/* Clear the reset status by writing 1 to the status bit */
-	omap4_prminst_rmw_inst_reg_bits(0xffffffff, mask, part, inst,
+	omap4_prminst_rmw_inst_reg_bits(0xffffffff, st_mask, part, inst,
 					rstst_offs);
 	/* de-assert the reset control line */
 	omap4_prminst_rmw_inst_reg_bits(mask, 0, part, inst, rstctrl_offs);
 	/* wait the status to be set */
-	omap_test_timeout(omap4_prminst_is_hardreset_asserted(shift, part, inst,
-							      rstst_offs),
+	omap_test_timeout(omap4_prminst_is_hardreset_asserted(st_shift, part,
+							      inst, rstst_offs),
 			  MAX_MODULE_HARDRESET_WAIT, c);
 
 	return (c == MAX_MODULE_HARDRESET_WAIT) ? -EBUSY : 0;
arch/arm/mach-omap2/timer.c (+5 -8)
···
 	if (IS_ERR(src))
 		return PTR_ERR(src);
 
-	if (clk_get_parent(timer->fclk) != src) {
-		r = clk_set_parent(timer->fclk, src);
-		if (r < 0) {
-			pr_warn("%s: %s cannot set source\n", __func__,
-				oh->name);
-			clk_put(src);
-			return r;
-		}
+	r = clk_set_parent(timer->fclk, src);
+	if (r < 0) {
+		pr_warn("%s: %s cannot set source\n", __func__, oh->name);
+		clk_put(src);
+		return r;
 	}
 
 	clk_put(src);
arch/arm/mach-omap2/vc.c (+10 -2)
···
 	 * idle. And we can also scale voltages to zero for off-idle.
 	 * Note that no actual voltage scaling during off-idle will
 	 * happen unless the board specific twl4030 PMIC scripts are
-	 * loaded.
+	 * loaded. See also omap_vc_i2c_init for comments regarding
+	 * erratum i531.
 	 */
 	val = voltdm->read(OMAP3_PRM_VOLTCTRL_OFFSET);
 	if (!(val & OMAP3430_PRM_VOLTCTRL_SEL_OFF)) {
···
 		return;
 	}
 
+	/*
+	 * Note that for omap3 OMAP3430_SREN_MASK clears SREN to work around
+	 * erratum i531 "Extra Power Consumed When Repeated Start Operation
+	 * Mode Is Enabled on I2C Interface Dedicated for Smart Reflex (I2C4)".
+	 * Otherwise I2C4 eventually leads into about 23mW extra power being
+	 * consumed even during off idle using VMODE.
+	 */
 	i2c_high_speed = voltdm->pmic->i2c_high_speed;
 	if (i2c_high_speed)
-		voltdm->rmw(vc->common->i2c_cfg_hsen_mask,
+		voltdm->rmw(vc->common->i2c_cfg_clear_mask,
 			    vc->common->i2c_cfg_hsen_mask,
 			    vc->common->i2c_cfg_reg);
arch/arm/mach-omap2/vc.h (+2)
···
  * @cmd_ret_shift: RET field shift in PRM_VC_CMD_VAL_* register
  * @cmd_off_shift: OFF field shift in PRM_VC_CMD_VAL_* register
  * @i2c_cfg_reg: I2C configuration register offset
+ * @i2c_cfg_clear_mask: high-speed mode bit clear mask in I2C config register
  * @i2c_cfg_hsen_mask: high-speed mode bit field mask in I2C config register
  * @i2c_mcode_mask: MCODE field mask for I2C config register
  *
···
 	u8 cmd_ret_shift;
 	u8 cmd_off_shift;
 	u8 i2c_cfg_reg;
+	u8 i2c_cfg_clear_mask;
 	u8 i2c_cfg_hsen_mask;
 	u8 i2c_mcode_mask;
 };
···
 config PXA310_ULPI
 	bool
 
+config PXA_SYSTEMS_CPLDS
+	tristate "Motherboard cplds"
+	default ARCH_LUBBOCK || MACH_MAINSTONE
+	help
+	  This driver supports the Lubbock and Mainstone multifunction chip
+	  found on the pxa25x development platform system (Lubbock) and pxa27x
+	  development platform system (Mainstone). This IO board supports the
+	  interrupts handling, ethernet controller, flash chips, etc ...
+
 endif
···
+/*
+ * Intel Reference Systems cplds
+ *
+ * Copyright (C) 2014 Robert Jarzmik
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * Cplds motherboard driver, supporting lubbock and mainstone SoC board.
+ */
+
+#include <linux/bitops.h>
+#include <linux/gpio.h>
+#include <linux/gpio/consumer.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
+#include <linux/mfd/core.h>
+#include <linux/module.h>
+#include <linux/of_platform.h>
+
+#define FPGA_IRQ_MASK_EN	0x0
+#define FPGA_IRQ_SET_CLR	0x10
+
+#define CPLDS_NB_IRQ		32
+
+struct cplds {
+	void __iomem *base;
+	int irq;
+	unsigned int irq_mask;
+	struct gpio_desc *gpio0;
+	struct irq_domain *irqdomain;
+};
+
+static irqreturn_t cplds_irq_handler(int in_irq, void *d)
+{
+	struct cplds *fpga = d;
+	unsigned long pending;
+	unsigned int bit;
+
+	pending = readl(fpga->base + FPGA_IRQ_SET_CLR) & fpga->irq_mask;
+	for_each_set_bit(bit, &pending, CPLDS_NB_IRQ)
+		generic_handle_irq(irq_find_mapping(fpga->irqdomain, bit));
+
+	return IRQ_HANDLED;
+}
+
+static void cplds_irq_mask_ack(struct irq_data *d)
+{
+	struct cplds *fpga = irq_data_get_irq_chip_data(d);
+	unsigned int cplds_irq = irqd_to_hwirq(d);
+	unsigned int set, bit = BIT(cplds_irq);
+
+	fpga->irq_mask &= ~bit;
+	writel(fpga->irq_mask, fpga->base + FPGA_IRQ_MASK_EN);
+	set = readl(fpga->base + FPGA_IRQ_SET_CLR);
+	writel(set & ~bit, fpga->base + FPGA_IRQ_SET_CLR);
+}
+
+static void cplds_irq_unmask(struct irq_data *d)
+{
+	struct cplds *fpga = irq_data_get_irq_chip_data(d);
+	unsigned int cplds_irq = irqd_to_hwirq(d);
+	unsigned int bit = BIT(cplds_irq);
+
+	fpga->irq_mask |= bit;
+	writel(fpga->irq_mask, fpga->base + FPGA_IRQ_MASK_EN);
+}
+
+static struct irq_chip cplds_irq_chip = {
+	.name		= "pxa_cplds",
+	.irq_mask_ack	= cplds_irq_mask_ack,
+	.irq_unmask	= cplds_irq_unmask,
+	.flags		= IRQCHIP_MASK_ON_SUSPEND | IRQCHIP_SKIP_SET_WAKE,
+};
+
+static int cplds_irq_domain_map(struct irq_domain *d, unsigned int irq,
+				irq_hw_number_t hwirq)
+{
+	struct cplds *fpga = d->host_data;
+
+	irq_set_chip_and_handler(irq, &cplds_irq_chip, handle_level_irq);
+	irq_set_chip_data(irq, fpga);
+
+	return 0;
+}
+
+static const struct irq_domain_ops cplds_irq_domain_ops = {
+	.xlate = irq_domain_xlate_twocell,
+	.map = cplds_irq_domain_map,
+};
+
+static int cplds_resume(struct platform_device *pdev)
+{
+	struct cplds *fpga = platform_get_drvdata(pdev);
+
+	writel(fpga->irq_mask, fpga->base + FPGA_IRQ_MASK_EN);
+
+	return 0;
+}
+
+static int cplds_probe(struct platform_device *pdev)
+{
+	struct resource *res;
+	struct cplds *fpga;
+	int ret;
+	unsigned int base_irq = 0;
+	unsigned long irqflags = 0;
+
+	fpga = devm_kzalloc(&pdev->dev, sizeof(*fpga), GFP_KERNEL);
+	if (!fpga)
+		return -ENOMEM;
+
+	res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+	if (res) {
+		fpga->irq = (unsigned int)res->start;
+		irqflags = res->flags;
+	}
+	if (!fpga->irq)
+		return -ENODEV;
+
+	base_irq = platform_get_irq(pdev, 1);
+	if (base_irq < 0)
+		base_irq = 0;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	fpga->base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(fpga->base))
+		return PTR_ERR(fpga->base);
+
+	platform_set_drvdata(pdev, fpga);
+
+	writel(fpga->irq_mask, fpga->base + FPGA_IRQ_MASK_EN);
+	writel(0, fpga->base + FPGA_IRQ_SET_CLR);
+
+	ret = devm_request_irq(&pdev->dev, fpga->irq, cplds_irq_handler,
+			       irqflags, dev_name(&pdev->dev), fpga);
+	if (ret == -ENOSYS)
+		return -EPROBE_DEFER;
+
+	if (ret) {
+		dev_err(&pdev->dev, "couldn't request main irq%d: %d\n",
+			fpga->irq, ret);
+		return ret;
+	}
+
+	irq_set_irq_wake(fpga->irq, 1);
+	fpga->irqdomain = irq_domain_add_linear(pdev->dev.of_node,
+						CPLDS_NB_IRQ,
+						&cplds_irq_domain_ops, fpga);
+	if (!fpga->irqdomain)
+		return -ENODEV;
+
+	if (base_irq) {
+		ret = irq_create_strict_mappings(fpga->irqdomain, base_irq, 0,
+						 CPLDS_NB_IRQ);
+		if (ret) {
+			dev_err(&pdev->dev, "couldn't create the irq mapping %d..%d\n",
+				base_irq, base_irq + CPLDS_NB_IRQ);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int cplds_remove(struct platform_device *pdev)
+{
+	struct cplds *fpga = platform_get_drvdata(pdev);
+
+	irq_set_chip_and_handler(fpga->irq, NULL, NULL);
+
+	return 0;
+}
+
+static const struct of_device_id cplds_id_table[] = {
+	{ .compatible = "intel,lubbock-cplds-irqs", },
+	{ .compatible = "intel,mainstone-cplds-irqs", },
+	{ }
+};
+MODULE_DEVICE_TABLE(of, cplds_id_table);
+
+static struct platform_driver cplds_driver = {
+	.driver = {
+		.name = "pxa_cplds_irqs",
+		.of_match_table = of_match_ptr(cplds_id_table),
+	},
+	.probe = cplds_probe,
+	.remove = cplds_remove,
+	.resume = cplds_resume,
+};
+
+module_platform_driver(cplds_driver);
+
+MODULE_DESCRIPTION("PXA Cplds interrupts driver");
+MODULE_AUTHOR("Robert Jarzmik <robert.jarzmik@free.fr>");
+MODULE_LICENSE("GPL");
arch/arm/mach-rockchip/pm.c (+7)
···
 		     SGRF_PCLK_WDT_GATE | SGRF_FAST_BOOT_EN
 		     | SGRF_PCLK_WDT_GATE_WRITE | SGRF_FAST_BOOT_EN_WRITE);
 
+	/*
+	 * The dapswjdp can not auto reset before resume, that cause it may
+	 * access some illegal address during resume. Let's disable it before
+	 * suspend, and the MASKROM will enable it back.
+	 */
+	regmap_write(sgrf_regmap, RK3288_SGRF_CPU_CON0, SGRF_DAPDEVICEEN_WRITE);
+
 	/* booting address of resuming system is from this register value */
 	regmap_write(sgrf_regmap, RK3288_SGRF_FAST_BOOT_ADDR,
 		     rk3288_bootram_phy);
···3030#include "pm.h"31313232#define RK3288_GRF_SOC_CON0 0x2443333+#define RK3288_TIMER6_7_PHYS 0xff81000033343435static void __init rockchip_timer_init(void)3536{3637 if (of_machine_is_compatible("rockchip,rk3288")) {3738 struct regmap *grf;3939+ void __iomem *reg_base;4040+4141+ /*4242+ * Most/all uboot versions for rk3288 don't enable timer74343+ * which is needed for the architected timer to work.4444+ * So make sure it is running during early boot.4545+ */4646+ reg_base = ioremap(RK3288_TIMER6_7_PHYS, SZ_16K);4747+ if (reg_base) {4848+ writel(0, reg_base + 0x30);4949+ writel(0xffffffff, reg_base + 0x20);5050+ writel(0xffffffff, reg_base + 0x24);5151+ writel(1, reg_base + 0x30);5252+ dsb();5353+ iounmap(reg_base);5454+ } else {5555+ pr_err("rockchip: could not map timer7 registers\n");5656+ }38573958 /*4059 * Disable auto jtag/sdmmc switching that causes issues
+5-8
arch/arm/mm/dma-mapping.c
···18781878 * arm_iommu_attach_device function.18791879 */18801880struct dma_iommu_mapping *18811881-arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t size)18811881+arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, u64 size)18821882{18831883 unsigned int bits = size >> PAGE_SHIFT;18841884 unsigned int bitmap_size = BITS_TO_LONGS(bits) * sizeof(long);18851885 struct dma_iommu_mapping *mapping;18861886 int extensions = 1;18871887 int err = -ENOMEM;18881888+18891889+ /* currently only 32-bit DMA address space is supported */18901890+ if (size > DMA_BIT_MASK(32) + 1)18911891+ return ERR_PTR(-ERANGE);1888189218891893 if (!bitmap_size)18901894 return ERR_PTR(-EINVAL);···20592055 struct dma_iommu_mapping *mapping;2060205620612057 if (!iommu)20622062- return false;20632063-20642064- /*20652065- * currently arm_iommu_create_mapping() takes a max of size_t20662066- * for size param. So check this limit for now.20672067- */20682068- if (size > SIZE_MAX)20692058 return false;2070205920712060 mapping = arm_iommu_create_mapping(dev->bus, dma_base, size);
-2
arch/arm/mm/proc-arm1020.S
···2222 *2323 * These are the low level assembler for performing cache and TLB2424 * functions on the arm1020.2525- *2626- * CONFIG_CPU_ARM1020_CPU_IDLE -> nohlt2725 */2826#include <linux/linkage.h>2927#include <linux/init.h>
-2
arch/arm/mm/proc-arm1020e.S
···2222 *2323 * These are the low level assembler for performing cache and TLB2424 * functions on the arm1020e.2525- *2626- * CONFIG_CPU_ARM1020_CPU_IDLE -> nohlt2725 */2826#include <linux/linkage.h>2927#include <linux/init.h>
···5454#define SEEN_DATA (1 << (BPF_MEMWORDS + 3))55555656#define FLAG_NEED_X_RESET (1 << 0)5757+#define FLAG_IMM_OVERFLOW (1 << 1)57585859struct jit_ctx {5960 const struct bpf_prog *skf;···294293 /* PC in ARM mode == address of the instruction + 8 */295294 imm = offset - (8 + ctx->idx * 4);296295296296+ if (imm & ~0xfff) {297297+ /*298298+ * literal pool is too far, signal it into flags. we299299+ * can only detect it on the second pass unfortunately.300300+ */301301+ ctx->flags |= FLAG_IMM_OVERFLOW;302302+ return 0;303303+ }304304+297305 return imm;298306}299307···459449 return;460450 }461451#endif462462- if (rm != ARM_R0)463463- emit(ARM_MOV_R(ARM_R0, rm), ctx);452452+453453+ /*454454+ * For BPF_ALU | BPF_DIV | BPF_K instructions, rm is ARM_R4455455+ * (r_A) and rn is ARM_R0 (r_scratch) so load rn first into456456+ * ARM_R1 to avoid accidentally overwriting ARM_R0 with rm457457+ * before using it as a source for ARM_R1.458458+ *459459+ * For BPF_ALU | BPF_DIV | BPF_X rm is ARM_R4 (r_A) and rn is460460+ * ARM_R5 (r_X) so there is no particular register overlap461461+ * issues.462462+ */464463 if (rn != ARM_R1)465464 emit(ARM_MOV_R(ARM_R1, rn), ctx);465465+ if (rm != ARM_R0)466466+ emit(ARM_MOV_R(ARM_R0, rm), ctx);466467467468 ctx->seen |= SEEN_CALL;468469 emit_mov_i(ARM_R3, (u32)jit_udiv, ctx);···876855 default:877856 return -1;878857 }858858+859859+ if (ctx->flags & FLAG_IMM_OVERFLOW)860860+ /*861861+ * this instruction generated an overflow when862862+ * trying to access the literal pool, so863863+ * delegate this filter to the kernel interpreter.864864+ */865865+ return -1;879866 }880867881868 /* compute offsets only during the first pass */···946917 ctx.idx = 0;947918948919 build_prologue(&ctx);949949- build_body(&ctx);920920+ if (build_body(&ctx) < 0) {921921+#if __LINUX_ARM_ARCH__ < 7922922+ if (ctx.imm_count)923923+ kfree(ctx.imms);924924+#endif925925+ bpf_jit_binary_free(header);926926+ goto out;927927+ }950928 build_epilogue(&ctx);951929952930 flush_icache_range((u32)ctx.target, (u32)(ctx.target + ctx.idx));
···2424#include <asm/cacheflush.h>2525#include <asm/alternative.h>2626#include <asm/cpufeature.h>2727-#include <asm/insn.h>2827#include <linux/stop_machine.h>29283029extern struct alt_instr __alt_instructions[], __alt_instructions_end[];···3334 struct alt_instr *end;3435};35363636-/*3737- * Decode the imm field of a b/bl instruction, and return the byte3838- * offset as a signed value (so it can be used when computing a new3939- * branch target).4040- */4141-static s32 get_branch_offset(u32 insn)4242-{4343- s32 imm = aarch64_insn_decode_immediate(AARCH64_INSN_IMM_26, insn);4444-4545- /* sign-extend the immediate before turning it into a byte offset */4646- return (imm << 6) >> 4;4747-}4848-4949-static u32 get_alt_insn(u8 *insnptr, u8 *altinsnptr)5050-{5151- u32 insn;5252-5353- aarch64_insn_read(altinsnptr, &insn);5454-5555- /* Stop the world on instructions we don't support... */5656- BUG_ON(aarch64_insn_is_cbz(insn));5757- BUG_ON(aarch64_insn_is_cbnz(insn));5858- BUG_ON(aarch64_insn_is_bcond(insn));5959- /* ... and there is probably more. */6060-6161- if (aarch64_insn_is_b(insn) || aarch64_insn_is_bl(insn)) {6262- enum aarch64_insn_branch_type type;6363- unsigned long target;6464-6565- if (aarch64_insn_is_b(insn))6666- type = AARCH64_INSN_BRANCH_NOLINK;6767- else6868- type = AARCH64_INSN_BRANCH_LINK;6969-7070- target = (unsigned long)altinsnptr + get_branch_offset(insn);7171- insn = aarch64_insn_gen_branch_imm((unsigned long)insnptr,7272- target, type);7373- }7474-7575- return insn;7676-}7777-7837static int __apply_alternatives(void *alt_region)7938{8039 struct alt_instr *alt;···4083 u8 *origptr, *replptr;41844285 for (alt = region->begin; alt < region->end; alt++) {4343- u32 insn;4444- int i;4545-4686 if (!cpus_have_cap(alt->cpufeature))4787 continue;4888···49955096 origptr = (u8 *)&alt->orig_offset + alt->orig_offset;5197 replptr = (u8 *)&alt->alt_offset + alt->alt_offset;5252-5353- for (i = 0; i < alt->alt_len; i += sizeof(insn)) {5454- insn = get_alt_insn(origptr + i, replptr + i);5555- aarch64_insn_write(origptr + i, insn);5656- }5757-9898+ memcpy(origptr, replptr, alt->alt_len);5899 flush_icache_range((uintptr_t)origptr,59100 (uintptr_t)(origptr + alt->alt_len));60101 }
+4-4
arch/arm64/kernel/perf_event.c
···13151315 if (!cpu_pmu)13161316 return -ENODEV;1317131713181318- irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL);13191319- if (!irqs)13201320- return -ENOMEM;13211321-13221318 /* Don't bother with PPIs; they're already affine */13231319 irq = platform_get_irq(pdev, 0);13241320 if (irq >= 0 && irq_is_percpu(irq))13251321 return 0;13221322+13231323+ irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL);13241324+ if (!irqs)13251325+ return -ENOMEM;1326132613271327 for (i = 0; i < pdev->num_resources; ++i) {13281328 struct device_node *dn;
···4545#define SMP_DUMP 0x84646#define SMP_ASK_C0COUNT 0x1047474848-extern volatile cpumask_t cpu_callin_map;4848+extern cpumask_t cpu_callin_map;49495050/* Mask of CPUs which are currently definitely operating coherently */5151extern cpumask_t cpu_coherent_mask;
+17-15
arch/mips/kernel/elf.c
···76767777 /* Lets see if this is an O32 ELF */7878 if (ehdr32->e_ident[EI_CLASS] == ELFCLASS32) {7979- /* FR = 1 for N32 */8080- if (ehdr32->e_flags & EF_MIPS_ABI2)8181- state->overall_fp_mode = FP_FR1;8282- else8383- /* Set a good default FPU mode for O32 */8484- state->overall_fp_mode = cpu_has_mips_r6 ?8585- FP_FRE : FP_FR0;8686-8779 if (ehdr32->e_flags & EF_MIPS_FP64) {8880 /*8981 * Set MIPS_ABI_FP_OLD_64 for EF_MIPS_FP64. We will override it···96104 (char *)&abiflags,97105 sizeof(abiflags));98106 } else {9999- /* FR=1 is really the only option for 64-bit */100100- state->overall_fp_mode = FP_FR1;101101-102107 if (phdr64->p_type != PT_MIPS_ABIFLAGS)103108 return 0;104109 if (phdr64->p_filesz < sizeof(abiflags))···126137 struct elf32_hdr *ehdr = _ehdr;127138 struct mode_req prog_req, interp_req;128139 int fp_abi, interp_fp_abi, abi0, abi1, max_abi;140140+ bool is_mips64;129141130142 if (!config_enabled(CONFIG_MIPS_O32_FP64_SUPPORT))131143 return 0;···142152 abi0 = abi1 = fp_abi;143153 }144154145145- /* ABI limits. O32 = FP_64A, N32/N64 = FP_SOFT */146146- max_abi = ((ehdr->e_ident[EI_CLASS] == ELFCLASS32) &&147147- (!(ehdr->e_flags & EF_MIPS_ABI2))) ?148148- MIPS_ABI_FP_64A : MIPS_ABI_FP_SOFT;155155+ is_mips64 = (ehdr->e_ident[EI_CLASS] == ELFCLASS64) ||156156+ (ehdr->e_flags & EF_MIPS_ABI2);157157+158158+ if (is_mips64) {159159+ /* MIPS64 code always uses FR=1, thus the default is easy */160160+ state->overall_fp_mode = FP_FR1;161161+162162+ /* Disallow access to the various FPXX & FP64 ABIs */163163+ max_abi = MIPS_ABI_FP_SOFT;164164+ } else {165165+ /* Default to a mode capable of running code expecting FR=0 */166166+ state->overall_fp_mode = cpu_has_mips_r6 ? FP_FRE : FP_FR0;167167+168168+ /* Allow all ABIs we know about */169169+ max_abi = MIPS_ABI_FP_64A;170170+ }149171150172 if ((abi0 > max_abi && abi0 != MIPS_ABI_FP_UNKNOWN) ||151173 (abi1 > max_abi && abi1 != MIPS_ABI_FP_UNKNOWN))
+1-1
arch/mips/kernel/ptrace.c
···176176177177 __get_user(value, data + 64);178178 fcr31 = child->thread.fpu.fcr31;179179- mask = current_cpu_data.fpu_msk31;179179+ mask = boot_cpu_data.fpu_msk31;180180 child->thread.fpu.fcr31 = (value & ~mask) | (fcr31 & mask);181181182182 /* FIR may not be written. */
+1-1
arch/mips/kernel/smp-cps.c
···9292#ifdef CONFIG_MIPS_MT_FPAFF9393 /* If we have an FPU, enroll ourselves in the FPU-full mask */9494 if (cpu_has_fpu)9595- cpu_set(0, mt_fpu_cpumask);9595+ cpumask_set_cpu(0, &mt_fpu_cpumask);9696#endif /* CONFIG_MIPS_MT_FPAFF */9797}9898
+4-2
arch/mips/kernel/smp.c
···4343#include <asm/time.h>4444#include <asm/setup.h>45454646-volatile cpumask_t cpu_callin_map; /* Bitmask of started secondaries */4646+cpumask_t cpu_callin_map; /* Bitmask of started secondaries */47474848int __cpu_number_map[NR_CPUS]; /* Map physical to logical */4949EXPORT_SYMBOL(__cpu_number_map);···218218 /*219219 * Trust is futile. We should really have timeouts ...220220 */221221- while (!cpumask_test_cpu(cpu, &cpu_callin_map))221221+ while (!cpumask_test_cpu(cpu, &cpu_callin_map)) {222222 udelay(100);223223+ schedule();224224+ }223225224226 synchronise_count_master(cpu);225227 return 0;
···23892389{23902390 unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr];23912391 enum emulation_result er = EMULATE_DONE;23922392- unsigned long curr_pc;2393239223942393 if (run->mmio.len > sizeof(*gpr)) {23952394 kvm_err("Bad MMIO length: %d", run->mmio.len);···23962397 goto done;23972398 }2398239923992399- /*24002400- * Update PC and hold onto current PC in case there is24012401- * an error and we want to rollback the PC24022402- */24032403- curr_pc = vcpu->arch.pc;24042400 er = update_pc(vcpu, vcpu->arch.pending_load_cause);24052401 if (er == EMULATE_FAIL)24062402 return er;
···495495496496 if (cpu_has_rixi) {497497 /*498498- * Enable the no read, no exec bits, and enable large virtual498498+ * Enable the no read, no exec bits, and enable large physical499499 * address.500500 */501501#ifdef CONFIG_64BIT
+2-2
arch/mips/sgi-ip32/ip32-platform.c
···130130 .resource = ip32_rtc_resources,131131};132132133133-+static int __init sgio2_rtc_devinit(void)133133+static __init int sgio2_rtc_devinit(void)134134{135135 return platform_device_register(&ip32_rtc_device);136136}137137138138-device_initcall(sgio2_cmos_devinit);138138+device_initcall(sgio2_rtc_devinit);
···552552 q->queue_lock = &q->__queue_lock;553553 spin_unlock_irq(lock);554554555555+ bdi_destroy(&q->backing_dev_info);556556+555557 /* @q is and will stay empty, shutdown and put */556558 blk_put_queue(q);557559}
+36-24
block/blk-mq.c
···677677 data.next = blk_rq_timeout(round_jiffies_up(data.next));678678 mod_timer(&q->timeout, data.next);679679 } else {680680- queue_for_each_hw_ctx(q, hctx, i)681681- blk_mq_tag_idle(hctx);680680+ queue_for_each_hw_ctx(q, hctx, i) {681681+ /* the hctx may be unmapped, so check it here */682682+ if (blk_mq_hw_queue_mapped(hctx))683683+ blk_mq_tag_idle(hctx);684684+ }682685 }683686}684687···858855 spin_lock(&hctx->lock);859856 list_splice(&rq_list, &hctx->dispatch);860857 spin_unlock(&hctx->lock);858858+ /*859859+ * the queue is expected stopped with BLK_MQ_RQ_QUEUE_BUSY, but860860+ * it's possible the queue is stopped and restarted again861861+ * before this. Queue restart will dispatch requests. And since862862+ * requests in rq_list aren't added into hctx->dispatch yet,863863+ * the requests in rq_list might get lost.864864+ *865865+ * blk_mq_run_hw_queue() already checks the STOPPED bit866866+ **/867867+ blk_mq_run_hw_queue(hctx, true);861868 }862869}863870···15841571 return NOTIFY_OK;15851572}1586157315871587-static int blk_mq_hctx_cpu_online(struct blk_mq_hw_ctx *hctx, int cpu)15881588-{15891589- struct request_queue *q = hctx->queue;15901590- struct blk_mq_tag_set *set = q->tag_set;15911591-15921592- if (set->tags[hctx->queue_num])15931593- return NOTIFY_OK;15941594-15951595- set->tags[hctx->queue_num] = blk_mq_init_rq_map(set, hctx->queue_num);15961596- if (!set->tags[hctx->queue_num])15971597- return NOTIFY_STOP;15981598-15991599- hctx->tags = set->tags[hctx->queue_num];16001600- return NOTIFY_OK;16011601-}16021602-16031574static int blk_mq_hctx_notify(void *data, unsigned long action,16041575 unsigned int cpu)16051576{···1591159415921595 if (action == CPU_DEAD || action == CPU_DEAD_FROZEN)15931596 return blk_mq_hctx_cpu_offline(hctx, cpu);15941594- else if (action == CPU_ONLINE || action == CPU_ONLINE_FROZEN)15951595- return blk_mq_hctx_cpu_online(hctx, cpu);15971597+15981598+ /*15991599+ * In case of CPU online, tags may be reallocated16001600+ * in blk_mq_map_swqueue() after mapping is updated.16011601+ */1596160215971603 return NOTIFY_OK;15981604}···17751775 unsigned int i;17761776 struct blk_mq_hw_ctx *hctx;17771777 struct blk_mq_ctx *ctx;17781778+ struct blk_mq_tag_set *set = q->tag_set;1778177917791780 queue_for_each_hw_ctx(q, hctx, i) {17801781 cpumask_clear(hctx->cpumask);···18041803 * disable it and free the request entries.18051804 */18061805 if (!hctx->nr_ctx) {18071807- struct blk_mq_tag_set *set = q->tag_set;18081808-18091806 if (set->tags[i]) {18101807 blk_mq_free_rq_map(set, set->tags[i], i);18111808 set->tags[i] = NULL;18121812- hctx->tags = NULL;18131809 }18101810+ hctx->tags = NULL;18141811 continue;18151812 }18131813+18141814+ /* unmapped hw queue can be remapped after CPU topo changed */18151815+ if (!set->tags[i])18161816+ set->tags[i] = blk_mq_init_rq_map(set, i);18171817+ hctx->tags = set->tags[i];18181818+ WARN_ON(!hctx->tags);1816181918171820 /*18181821 * Set the map size to the number of mapped software queues.···20952090 */20962091 list_for_each_entry(q, &all_q_list, all_q_node)20972092 blk_mq_freeze_queue_start(q);20982098- list_for_each_entry(q, &all_q_list, all_q_node)20932093+ list_for_each_entry(q, &all_q_list, all_q_node) {20992094 blk_mq_freeze_queue_wait(q);20952095+20962096+ /*20972097+ * timeout handler can't touch hw queue during the20982098+ * reinitialization20992099+ */21002100+ del_timer_sync(&q->timeout);21012101+ }2100210221012103 list_for_each_entry(q, &all_q_list, all_q_node)21022104 blk_mq_queue_reinit(q);
···270270config SATA_DWC271271 tristate "DesignWare Cores SATA support"272272 depends on 460EX273273+ select DW_DMAC273274 help274275 This option enables support for the on-chip SATA controller of the275276 AppliedMicro processor 460EX.···727726 help728727 This option enables support for the NatSemi/AMD SC1200 SoC729728 companion chip used with the Geode processor family.730730-731731- If unsure, say N.732732-733733-config PATA_SCC734734- tristate "Toshiba's Cell Reference Set IDE support"735735- depends on PCI && PPC_CELLEB736736- help737737- This option enables support for the built-in IDE controller on738738- Toshiba Cell Reference Board.739729740730 If unsure, say N.741731
···6666 board_ahci_yes_fbs,67676868 /* board IDs for specific chipsets in alphabetical order */6969+ board_ahci_avn,6970 board_ahci_mcp65,7071 board_ahci_mcp77,7172 board_ahci_mcp89,···8584static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);8685static int ahci_vt8251_hardreset(struct ata_link *link, unsigned int *class,8786 unsigned long deadline);8787+static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,8888+ unsigned long deadline);8889static void ahci_mcp89_apple_enable(struct pci_dev *pdev);8990static bool is_mcp89_apple(struct pci_dev *pdev);9091static int ahci_p5wdh_hardreset(struct ata_link *link, unsigned int *class,···108105static struct ata_port_operations ahci_p5wdh_ops = {109106 .inherits = &ahci_ops,110107 .hardreset = ahci_p5wdh_hardreset,108108+};109109+110110+static struct ata_port_operations ahci_avn_ops = {111111+ .inherits = &ahci_ops,112112+ .hardreset = ahci_avn_hardreset,111113};112114113115static const struct ata_port_info ahci_port_info[] = {···159151 .port_ops = &ahci_ops,160152 },161153 /* by chipsets */154154+ [board_ahci_avn] = {155155+ .flags = AHCI_FLAG_COMMON,156156+ .pio_mask = ATA_PIO4,157157+ .udma_mask = ATA_UDMA6,158158+ .port_ops = &ahci_avn_ops,159159+ },162160 [board_ahci_mcp65] = {163161 AHCI_HFLAGS (AHCI_HFLAG_NO_FPDMA_AA | AHCI_HFLAG_NO_PMP |164162 AHCI_HFLAG_YES_NCQ),···304290 { PCI_VDEVICE(INTEL, 0x1f27), board_ahci }, /* Avoton RAID */305291 { PCI_VDEVICE(INTEL, 0x1f2e), board_ahci }, /* Avoton RAID */306292 { PCI_VDEVICE(INTEL, 0x1f2f), board_ahci }, /* Avoton RAID */307307- { PCI_VDEVICE(INTEL, 0x1f32), board_ahci }, /* Avoton AHCI */308308- { PCI_VDEVICE(INTEL, 0x1f33), board_ahci }, /* Avoton AHCI */309309- { PCI_VDEVICE(INTEL, 0x1f34), board_ahci }, /* Avoton RAID */310310- { PCI_VDEVICE(INTEL, 0x1f35), board_ahci }, /* Avoton RAID */311311- { PCI_VDEVICE(INTEL, 0x1f36), board_ahci }, /* Avoton RAID */312312- { PCI_VDEVICE(INTEL, 0x1f37), board_ahci }, /* Avoton RAID */313313- { PCI_VDEVICE(INTEL, 0x1f3e), board_ahci }, /* Avoton RAID */314314- { PCI_VDEVICE(INTEL, 0x1f3f), board_ahci }, /* Avoton RAID */293293+ { PCI_VDEVICE(INTEL, 0x1f32), board_ahci_avn }, /* Avoton AHCI */294294+ { PCI_VDEVICE(INTEL, 0x1f33), board_ahci_avn }, /* Avoton AHCI */295295+ { PCI_VDEVICE(INTEL, 0x1f34), board_ahci_avn }, /* Avoton RAID */296296+ { PCI_VDEVICE(INTEL, 0x1f35), board_ahci_avn }, /* Avoton RAID */297297+ { PCI_VDEVICE(INTEL, 0x1f36), board_ahci_avn }, /* Avoton RAID */298298+ { PCI_VDEVICE(INTEL, 0x1f37), board_ahci_avn }, /* Avoton RAID */299299+ { PCI_VDEVICE(INTEL, 0x1f3e), board_ahci_avn }, /* Avoton RAID */300300+ { PCI_VDEVICE(INTEL, 0x1f3f), board_ahci_avn }, /* Avoton RAID */315301 { PCI_VDEVICE(INTEL, 0x2823), board_ahci }, /* Wellsburg RAID */316302 { PCI_VDEVICE(INTEL, 0x2827), board_ahci }, /* Wellsburg RAID */317303 { PCI_VDEVICE(INTEL, 0x8d02), board_ahci }, /* Wellsburg AHCI */···683669 }684670 return rc;685671}672672+673673+/*674674+ * ahci_avn_hardreset - attempt more aggressive recovery of Avoton ports.675675+ *676676+ * It has been observed with some SSDs that the timing of events in the677677+ * link synchronization phase can leave the port in a state that can not678678+ * be recovered by a SATA-hard-reset alone. The failing signature is679679+ * SStatus.DET stuck at 1 ("Device presence detected but Phy680680+ * communication not established"). It was found that unloading and681681+ * reloading the driver when this problem occurs allows the drive682682+ * connection to be recovered (DET advanced to 0x3). The critical683683+ * component of reloading the driver is that the port state machines are684684+ * reset by bouncing "port enable" in the AHCI PCS configuration685685+ * register. So, reproduce that effect by bouncing a port whenever we686686+ * see DET==1 after a reset.687687+ */688688+static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,689689+ unsigned long deadline)690690+{691691+ const unsigned long *timing = sata_ehc_deb_timing(&link->eh_context);692692+ struct ata_port *ap = link->ap;693693+ struct ahci_port_priv *pp = ap->private_data;694694+ struct ahci_host_priv *hpriv = ap->host->private_data;695695+ u8 *d2h_fis = pp->rx_fis + RX_FIS_D2H_REG;696696+ unsigned long tmo = deadline - jiffies;697697+ struct ata_taskfile tf;698698+ bool online;699699+ int rc, i;700700+701701+ DPRINTK("ENTER\n");702702+703703+ ahci_stop_engine(ap);704704+705705+ for (i = 0; i < 2; i++) {706706+ u16 val;707707+ u32 sstatus;708708+ int port = ap->port_no;709709+ struct ata_host *host = ap->host;710710+ struct pci_dev *pdev = to_pci_dev(host->dev);711711+712712+ /* clear D2H reception area to properly wait for D2H FIS */713713+ ata_tf_init(link->device, &tf);714714+ tf.command = ATA_BUSY;715715+ ata_tf_to_fis(&tf, 0, 0, d2h_fis);716716+717717+ rc = sata_link_hardreset(link, timing, deadline, &online,718718+ ahci_check_ready);719719+720720+ if (sata_scr_read(link, SCR_STATUS, &sstatus) != 0 ||721721+ (sstatus & 0xf) != 1)722722+ break;723723+724724+ ata_link_printk(link, KERN_INFO, "avn bounce port%d\n",725725+ port);726726+727727+ pci_read_config_word(pdev, 0x92, &val);728728+ val &= ~(1 << port);729729+ pci_write_config_word(pdev, 0x92, val);730730+ ata_msleep(ap, 1000);731731+ val |= 1 << port;732732+ pci_write_config_word(pdev, 0x92, val);733733+ deadline += tmo;734734+ }735735+736736+ hpriv->start_engine(ap);737737+738738+ if (online)739739+ *class = ahci_dev_classify(ap);740740+741741+ DPRINTK("EXIT, rc=%d, class=%u\n", rc, *class);742742+ return rc;743743+}744744+686745687746#ifdef CONFIG_PM688747static int ahci_pci_device_suspend(struct pci_dev *pdev, pm_message_t mesg)
+24-25
drivers/ata/ahci_st.c
···3737 struct reset_control *pwr;3838 struct reset_control *sw_rst;3939 struct reset_control *pwr_rst;4040- struct ahci_host_priv *hpriv;4140};42414342static void st_ahci_configure_oob(void __iomem *mmio)···5455 writel(new_val, mmio + ST_AHCI_OOBR);5556}56575757-static int st_ahci_deassert_resets(struct device *dev)5858+static int st_ahci_deassert_resets(struct ahci_host_priv *hpriv,5959+ struct device *dev)5860{5959- struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev);6161+ struct st_ahci_drv_data *drv_data = hpriv->plat_data;6062 int err;61636264 if (drv_data->pwr) {···9090static void st_ahci_host_stop(struct ata_host *host)9191{9292 struct ahci_host_priv *hpriv = host->private_data;9393+ struct st_ahci_drv_data *drv_data = hpriv->plat_data;9394 struct device *dev = host->dev;9494- struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev);9595 int err;96969797 if (drv_data->pwr) {···103103 ahci_platform_disable_resources(hpriv);104104}105105106106-static int st_ahci_probe_resets(struct platform_device *pdev)106106+static int st_ahci_probe_resets(struct ahci_host_priv *hpriv,107107+ struct device *dev)107108{108108- struct st_ahci_drv_data *drv_data = platform_get_drvdata(pdev);109109+ struct st_ahci_drv_data *drv_data = hpriv->plat_data;109110110110- drv_data->pwr = devm_reset_control_get(&pdev->dev, "pwr-dwn");111111+ drv_data->pwr = devm_reset_control_get(dev, "pwr-dwn");111112 if (IS_ERR(drv_data->pwr)) {112112- dev_info(&pdev->dev, "power reset control not defined\n");113113+ dev_info(dev, "power reset control not defined\n");113114 drv_data->pwr = NULL;114115 }115116116116- drv_data->sw_rst = devm_reset_control_get(&pdev->dev, "sw-rst");117117+ drv_data->sw_rst = devm_reset_control_get(dev, "sw-rst");117118 if (IS_ERR(drv_data->sw_rst)) {118118- dev_info(&pdev->dev, "soft reset control not defined\n");119119+ dev_info(dev, "soft reset control not defined\n");119120 drv_data->sw_rst = NULL;120121 }121122122122- drv_data->pwr_rst = devm_reset_control_get(&pdev->dev, "pwr-rst");123123+ drv_data->pwr_rst = devm_reset_control_get(dev, "pwr-rst");123124 if (IS_ERR(drv_data->pwr_rst)) {124124- dev_dbg(&pdev->dev, "power soft reset control not defined\n");125125+ dev_dbg(dev, "power soft reset control not defined\n");125126 drv_data->pwr_rst = NULL;126127 }127128128128- return st_ahci_deassert_resets(&pdev->dev);129129+ return st_ahci_deassert_resets(hpriv, dev);129130}130131131132static struct ata_port_operations st_ahci_port_ops = {···155154 if (!drv_data)156155 return -ENOMEM;157156158158- platform_set_drvdata(pdev, drv_data);159159-160157 hpriv = ahci_platform_get_resources(pdev);161158 if (IS_ERR(hpriv))162159 return PTR_ERR(hpriv);160160+ hpriv->plat_data = drv_data;163161164164- drv_data->hpriv = hpriv;165165-166166- err = st_ahci_probe_resets(pdev);162162+ err = st_ahci_probe_resets(hpriv, &pdev->dev);167163 if (err)168164 return err;169165···168170 if (err)169171 return err;170172171171- st_ahci_configure_oob(drv_data->hpriv->mmio);173173+ st_ahci_configure_oob(hpriv->mmio);172174173175 err = ahci_platform_init_host(pdev, hpriv, &st_ahci_port_info,174176 &ahci_platform_sht);···183185#ifdef CONFIG_PM_SLEEP184186static int st_ahci_suspend(struct device *dev)185187{186186- struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev);187187- struct ahci_host_priv *hpriv = drv_data->hpriv;188188+ struct ata_host *host = dev_get_drvdata(dev);189189+ struct ahci_host_priv *hpriv = host->private_data;190190+ struct st_ahci_drv_data *drv_data = hpriv->plat_data;188191 int err;189192190193 err = ahci_platform_suspend_host(dev);···207208208209static int st_ahci_resume(struct device *dev)209210{210210- struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev);211211- struct ahci_host_priv *hpriv = drv_data->hpriv;211211+ struct ata_host *host = dev_get_drvdata(dev);212212+ struct ahci_host_priv *hpriv = host->private_data;212213 int err;213214214215 err = ahci_platform_enable_resources(hpriv);215216 if (err)216217 return err;217218218218- err = st_ahci_deassert_resets(dev);219219+ err = st_ahci_deassert_resets(hpriv, dev);219220 if (err) {220221 ahci_platform_disable_resources(hpriv);221222 return err;222223 }223224224224- st_ahci_configure_oob(drv_data->hpriv->mmio);225225+ st_ahci_configure_oob(hpriv->mmio);225226226227 return ahci_platform_resume_host(dev);227228}
+1-2
drivers/ata/libahci.c
···17071707 if (unlikely(resetting))17081708 status &= ~PORT_IRQ_BAD_PMP;1709170917101710- /* if LPM is enabled, PHYRDY doesn't mean anything */17111711- if (ap->link.lpm_policy > ATA_LPM_MAX_POWER) {17101710+ if (sata_lpm_ignore_phy_events(&ap->link)) {17121711 status &= ~PORT_IRQ_PHYRDY;17131712 ahci_scr_write(&ap->link, SCR_ERROR, SERR_PHYRDY_CHG);17141713 }
+33-1
drivers/ata/libata-core.c
···42354235 ATA_HORKAGE_ZERO_AFTER_TRIM, },42364236 { "Crucial_CT*MX100*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |42374237 ATA_HORKAGE_ZERO_AFTER_TRIM, },42384238- { "Samsung SSD 850 PRO*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |42384238+ { "Samsung SSD 8*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |42394239 ATA_HORKAGE_ZERO_AFTER_TRIM, },4240424042414241 /*···6751675167526752 return tmp;67536753}67546754+67556755+/**67566756+ * sata_lpm_ignore_phy_events - test if PHY event should be ignored67576757+ * @link: Link receiving the event67586758+ *67596759+ * Test whether the received PHY event has to be ignored or not.67606760+ *67616761+ * LOCKING:67626762+ * None:67636763+ *67646764+ * RETURNS:67656765+ * True if the event has to be ignored.67666766+ */67676767+bool sata_lpm_ignore_phy_events(struct ata_link *link)67686768+{67696769+ unsigned long lpm_timeout = link->last_lpm_change +67706770+ msecs_to_jiffies(ATA_TMOUT_SPURIOUS_PHY);67716771+67726772+ /* if LPM is enabled, PHYRDY doesn't mean anything */67736773+ if (link->lpm_policy > ATA_LPM_MAX_POWER)67746774+ return true;67756775+67766776+ /* ignore the first PHY event after the LPM policy changed67776777+ * as it is might be spurious67786778+ */67796779+ if ((link->flags & ATA_LFLAG_CHANGED) &&67806780+ time_before(jiffies, lpm_timeout))67816781+ return true;67826782+67836783+ return false;67846784+}67856785+EXPORT_SYMBOL_GPL(sata_lpm_ignore_phy_events);6754678667556787/*67566788 * Dummy port_ops
···660660 * Initialise the fake PMU. We only need to populate the661661 * used_mask for the purposes of validation.662662 */663663- .used_mask = CPU_BITS_NONE,663663+ .used_mask = { 0 },664664 };665665666666 if (!validate_event(event->pmu, &fake_pmu, leader))
···728728 sysfs_show_32bit_prop(buffer, "max_engine_clk_fcompute",729729 dev->gpu->kfd2kgd->get_max_engine_clock_in_mhz(730730 dev->gpu->kgd));731731+731732 sysfs_show_64bit_prop(buffer, "local_mem_size",732732- dev->gpu->kfd2kgd->get_vmem_size(733733- dev->gpu->kgd));733733+ (unsigned long long int) 0);734734735735 sysfs_show_32bit_prop(buffer, "fw_version",736736 dev->gpu->kfd2kgd->get_fw_version(
+4-5
drivers/gpu/drm/drm_irq.c
···131131132132 /* Reinitialize corresponding vblank timestamp if high-precision query133133 * available. Skip this step if query unsupported or failed. Will134134- * reinitialize delayed at next vblank interrupt in that case.134134+ * reinitialize delayed at next vblank interrupt in that case and135135+ * assign 0 for now, to mark the vblanktimestamp as invalid.135136 */136136- if (rc) {137137- tslot = atomic_read(&vblank->count) + diff;138138- vblanktimestamp(dev, crtc, tslot) = t_vblank;139139- }137137+ tslot = atomic_read(&vblank->count) + diff;138138+ vblanktimestamp(dev, crtc, tslot) = rc ? t_vblank : (struct timeval) {0, 0};140139141140 smp_mb__before_atomic();142141 atomic_add(diff, &vblank->count);
+10-3
drivers/gpu/drm/i915/i915_drv.c
···699699 intel_init_pch_refclk(dev);700700 drm_mode_config_reset(dev);701701702702+ /*703703+ * Interrupts have to be enabled before any batches are run. If not the704704+ * GPU will hang. i915_gem_init_hw() will initiate batches to705705+ * update/restore the context.706706+ *707707+ * Modeset enabling in intel_modeset_init_hw() also needs working708708+ * interrupts.709709+ */710710+ intel_runtime_pm_enable_interrupts(dev_priv);711711+702712 mutex_lock(&dev->struct_mutex);703713 if (i915_gem_init_hw(dev)) {704714 DRM_ERROR("failed to re-initialize GPU, declaring wedged!\n");705715 atomic_set_mask(I915_WEDGED, &dev_priv->gpu_error.reset_counter);706716 }707717 mutex_unlock(&dev->struct_mutex);708708-709709- /* We need working interrupts for modeset enabling ... */710710- intel_runtime_pm_enable_interrupts(dev_priv);711718712719 intel_modeset_init_hw(dev);713720
-3
drivers/gpu/drm/i915/intel_display.c
···1363513635};13636136361363713637static struct intel_quirk intel_quirks[] = {1363813638- /* HP Mini needs pipe A force quirk (LP: #322104) */1363913639- { 0x27ae, 0x103c, 0x361a, quirk_pipea_force },1364013640-1364113638 /* Toshiba Protege R-205, S-209 needs pipe A force quirk */1364213639 { 0x2592, 0x1179, 0x0001, quirk_pipea_force },1364313640
+5-4
drivers/gpu/drm/i915/intel_dp.c
···1348134813491349 pipe_config->has_dp_encoder = true;13501350 pipe_config->has_drrs = false;13511351- pipe_config->has_audio = intel_dp->has_audio;13511351+ pipe_config->has_audio = intel_dp->has_audio && port != PORT_A;1352135213531353 if (is_edp(intel_dp) && intel_connector->panel.fixed_mode) {13541354 intel_fixed_panel_mode(intel_connector->panel.fixed_mode,···22112211 int dotclock;2212221222132213 tmp = I915_READ(intel_dp->output_reg);22142214- if (tmp & DP_AUDIO_OUTPUT_ENABLE)22152215- pipe_config->has_audio = true;22142214+22152215+ pipe_config->has_audio = tmp & DP_AUDIO_OUTPUT_ENABLE && port != PORT_A;2216221622172217 if ((port == PORT_A) || !HAS_PCH_CPT(dev)) {22182218 if (tmp & DP_SYNC_HS_HIGH)···38123812 if (val == 0)38133813 break;3814381438153815- intel_dp->sink_rates[i] = val * 200;38153815+ /* Value read is in kHz while drm clock is saved in deca-kHz */38163816+ intel_dp->sink_rates[i] = (val * 200) / 10;38163817 }38173818 intel_dp->num_sink_rates = i;38183819 }
···43034303 L2_CACHE_BIGK_FRAGMENT_SIZE(4));43044304 /* setup context0 */43054305 WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR, rdev->mc.gtt_start >> 12);43064306- WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, rdev->mc.gtt_end >> 12);43064306+ WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, (rdev->mc.gtt_end >> 12) - 1);43074307 WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR, rdev->gart.table_addr >> 12);43084308 WREG32(VM_CONTEXT0_PROTECTION_FAULT_DEFAULT_ADDR,43094309 (u32)(rdev->dummy_page.addr >> 12));···43184318 /* empty context1-15 */43194319 /* set vm size, must be a multiple of 4 */43204320 WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);43214321- WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn);43214321+ WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn - 1);43224322 /* Assign the pt base to something valid for now; the pts used for43234323 * the VMs are determined by the application and setup and assigned43244324 * on the fly in the vm part of radeon_gart.c
···173173 drm->irq_enabled = true;174174175175 /* syncpoints are used for full 32-bit hardware VBLANK counters */176176- drm->vblank_disable_immediate = true;177176 drm->max_vblank_count = 0xffffffff;178177179178 err = drm_vblank_init(drm, drm->mode_config.num_crtc);
-9
drivers/ide/Kconfig
···643643 help644644 This driver adds support for Toshiba TC86C001 GOKU-S chip.645645646646-config BLK_DEV_CELLEB647647- tristate "Toshiba's Cell Reference Set IDE support"648648- depends on PPC_CELLEB649649- select BLK_DEV_IDEDMA_PCI650650- help651651- This driver provides support for the on-board IDE controller on652652- Toshiba Cell Reference Board.653653- If unsure, say Y.654654-655646endif656647657648# TODO: BLK_DEV_IDEDMA_PCI -> BLK_DEV_IDEDMA_SFF
···304304 struct st_sensors_platform_data *of_pdata;305305 int err = 0;306306307307- mutex_init(&sdata->tb.buf_lock);308308-309307 /* If OF/DT pdata exists, it will take precedence of anything else */310308 of_pdata = st_sensors_of_probe(indio_dev->dev.parent, pdata);311309 if (of_pdata)
···15691569 MLX4_CMD_TIME_CLASS_B,15701570 MLX4_CMD_WRAPPED);15711571 if (err)15721572- pr_warn(KERN_WARNING15731573- "set port %d command failed\n", gw->port);15721572+ pr_warn("set port %d command failed\n", gw->port);15741573 }1575157415761575 mlx4_free_cmd_mailbox(dev, mailbox);
+1-1
drivers/infiniband/hw/mlx5/qp.c
···1392139213931393 if (ah->ah_flags & IB_AH_GRH) {13941394 if (ah->grh.sgid_index >= gen->port[port - 1].gid_table_len) {13951395- pr_err(KERN_ERR "sgid_index (%u) too large. max is %d\n",13951395+ pr_err("sgid_index (%u) too large. max is %d\n",13961396 ah->grh.sgid_index, gen->port[port - 1].gid_table_len);13971397 return -EINVAL;13981398 }
+1-1
drivers/infiniband/hw/qib/qib.h
···903903 /* PCI Device ID (here for NodeInfo) */904904 u16 deviceid;905905 /* for write combining settings */906906- unsigned long wc_cookie;906906+ int wc_cookie;907907 unsigned long wc_base;908908 unsigned long wc_len;909909
+2-1
drivers/infiniband/hw/qib/qib_wc_x86_64.c
···118118 if (!ret) {119119 dd->wc_cookie = arch_phys_wc_add(pioaddr, piolen);120120 if (dd->wc_cookie < 0)121121- ret = -EINVAL;121121+ /* use error from routine */122122+ ret = dd->wc_cookie;122123 }123124124125 return ret;
···10781078 pr_debug("skip op %ld on disc %d for sector %llu\n",10791079 bi->bi_rw, i, (unsigned long long)sh->sector);10801080 clear_bit(R5_LOCKED, &sh->dev[i].flags);10811081- if (sh->batch_head)10821082- set_bit(STRIPE_BATCH_ERR,10831083- &sh->batch_head->state);10841081 set_bit(STRIPE_HANDLE, &sh->state);10851082 }10861083···19681971 put_cpu();19691972}1970197319741974+static struct stripe_head *alloc_stripe(struct kmem_cache *sc, gfp_t gfp)19751975+{19761976+ struct stripe_head *sh;19771977+19781978+ sh = kmem_cache_zalloc(sc, gfp);19791979+ if (sh) {19801980+ spin_lock_init(&sh->stripe_lock);19811981+ spin_lock_init(&sh->batch_lock);19821982+ INIT_LIST_HEAD(&sh->batch_list);19831983+ INIT_LIST_HEAD(&sh->lru);19841984+ atomic_set(&sh->count, 1);19851985+ }19861986+ return sh;19871987+}19711988static int grow_one_stripe(struct r5conf *conf, gfp_t gfp)19721989{19731990 struct stripe_head *sh;19741974- sh = kmem_cache_zalloc(conf->slab_cache, gfp);19911991+19921992+ sh = alloc_stripe(conf->slab_cache, gfp);19751993 if (!sh)19761994 return 0;1977199519781996 sh->raid_conf = conf;19791979-19801980- spin_lock_init(&sh->stripe_lock);1981199719821998 if (grow_buffers(sh, gfp)) {19831999 shrink_buffers(sh);···20001990 sh->hash_lock_index =20011991 conf->max_nr_stripes % NR_STRIPE_HASH_LOCKS;20021992 /* we just created an active stripe so... 
*/20032003- atomic_set(&sh->count, 1);20041993 atomic_inc(&conf->active_stripes);20052005- INIT_LIST_HEAD(&sh->lru);2006199420072007- spin_lock_init(&sh->batch_lock);20082008- INIT_LIST_HEAD(&sh->batch_list);20092009- sh->batch_head = NULL;20101995 release_stripe(sh);20111996 conf->max_nr_stripes++;20121997 return 1;···20652060 return ret;20662061}2067206220632063+static int resize_chunks(struct r5conf *conf, int new_disks, int new_sectors)20642064+{20652065+ unsigned long cpu;20662066+ int err = 0;20672067+20682068+ mddev_suspend(conf->mddev);20692069+ get_online_cpus();20702070+ for_each_present_cpu(cpu) {20712071+ struct raid5_percpu *percpu;20722072+ struct flex_array *scribble;20732073+20742074+ percpu = per_cpu_ptr(conf->percpu, cpu);20752075+ scribble = scribble_alloc(new_disks,20762076+ new_sectors / STRIPE_SECTORS,20772077+ GFP_NOIO);20782078+20792079+ if (scribble) {20802080+ flex_array_free(percpu->scribble);20812081+ percpu->scribble = scribble;20822082+ } else {20832083+ err = -ENOMEM;20842084+ break;20852085+ }20862086+ }20872087+ put_online_cpus();20882088+ mddev_resume(conf->mddev);20892089+ return err;20902090+}20912091+20682092static int resize_stripes(struct r5conf *conf, int newsize)20692093{20702094 /* Make all the stripes able to hold 'newsize' devices.···21222088 struct stripe_head *osh, *nsh;21232089 LIST_HEAD(newstripes);21242090 struct disk_info *ndisks;21252125- unsigned long cpu;21262091 int err;21272092 struct kmem_cache *sc;21282093 int i;···21422109 return -ENOMEM;2143211021442111 for (i = conf->max_nr_stripes; i; i--) {21452145- nsh = kmem_cache_zalloc(sc, GFP_KERNEL);21122112+ nsh = alloc_stripe(sc, GFP_KERNEL);21462113 if (!nsh)21472114 break;2148211521492116 nsh->raid_conf = conf;21502150- spin_lock_init(&nsh->stripe_lock);21512151-21522117 list_add(&nsh->lru, &newstripes);21532118 }21542119 if (i) {···21732142 lock_device_hash_lock(conf, hash));21742143 osh = get_free_stripe(conf, hash);21752144 unlock_device_hash_lock(conf, 
hash);21762176- atomic_set(&nsh->count, 1);21452145+21772146 for(i=0; i<conf->pool_size; i++) {21782147 nsh->dev[i].page = osh->dev[i].page;21792148 nsh->dev[i].orig_page = osh->dev[i].page;21802149 }21812181- for( ; i<newsize; i++)21822182- nsh->dev[i].page = NULL;21832150 nsh->hash_lock_index = hash;21842151 kmem_cache_free(conf->slab_cache, osh);21852152 cnt++;···22032174 } else22042175 err = -ENOMEM;2205217622062206- get_online_cpus();22072207- for_each_present_cpu(cpu) {22082208- struct raid5_percpu *percpu;22092209- struct flex_array *scribble;22102210-22112211- percpu = per_cpu_ptr(conf->percpu, cpu);22122212- scribble = scribble_alloc(newsize, conf->chunk_sectors /22132213- STRIPE_SECTORS, GFP_NOIO);22142214-22152215- if (scribble) {22162216- flex_array_free(percpu->scribble);22172217- percpu->scribble = scribble;22182218- } else {22192219- err = -ENOMEM;22202220- break;22212221- }22222222- }22232223- put_online_cpus();22242224-22252177 /* Step 4, return new stripes to service */22262178 while(!list_empty(&newstripes)) {22272179 nsh = list_entry(newstripes.next, struct stripe_head, lru);···2222221222232213 conf->slab_cache = sc;22242214 conf->active_name = 1-conf->active_name;22252225- conf->pool_size = newsize;22152215+ if (!err)22162216+ conf->pool_size = newsize;22262217 return err;22272218}22282219···24452434 }24462435 rdev_dec_pending(rdev, conf->mddev);2447243624482448- if (sh->batch_head && !uptodate)24372437+ if (sh->batch_head && !uptodate && !replacement)24492438 set_bit(STRIPE_BATCH_ERR, &sh->batch_head->state);2450243924512440 if (!test_and_clear_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags))···32893278 /* reconstruct-write isn't being forced */32903279 return 0;32913280 for (i = 0; i < s->failed; i++) {32923292- if (!test_bit(R5_UPTODATE, &fdev[i]->flags) &&32813281+ if (s->failed_num[i] != sh->pd_idx &&32823282+ s->failed_num[i] != sh->qd_idx &&32833283+ !test_bit(R5_UPTODATE, &fdev[i]->flags) &&32933284 !test_bit(R5_OVERWRITE, 
&fdev[i]->flags))32943285 return 1;32953286 }···33113298 */33123299 BUG_ON(test_bit(R5_Wantcompute, &dev->flags));33133300 BUG_ON(test_bit(R5_Wantread, &dev->flags));33013301+ BUG_ON(sh->batch_head);33143302 if ((s->uptodate == disks - 1) &&33153303 (s->failed && (disk_idx == s->failed_num[0] ||33163304 disk_idx == s->failed_num[1]))) {···33803366{33813367 int i;3382336833833383- BUG_ON(sh->batch_head);33843369 /* look for blocks to read/compute, skip this if a compute33853370 * is already in flight, or if the stripe contents are in the33863371 * midst of changing due to a write···42114198 return;4212419942134200 head_sh = sh;42144214- do {42154215- sh = list_first_entry(&sh->batch_list,42164216- struct stripe_head, batch_list);42174217- BUG_ON(sh == head_sh);42184218- } while (!test_bit(STRIPE_DEGRADED, &sh->state));4219420142204220- while (sh != head_sh) {42214221- next = list_first_entry(&sh->batch_list,42224222- struct stripe_head, batch_list);42024202+ list_for_each_entry_safe(sh, next, &head_sh->batch_list, batch_list) {42034203+42234204 list_del_init(&sh->batch_list);4224420542254206 set_mask_bits(&sh->state, ~STRIPE_EXPAND_SYNC_FLAG,···4233422642344227 set_bit(STRIPE_HANDLE, &sh->state);42354228 release_stripe(sh);42364236-42374237- sh = next;42384229 }42394230}42404231···62266221 percpu->spare_page = alloc_page(GFP_KERNEL);62276222 if (!percpu->scribble)62286223 percpu->scribble = scribble_alloc(max(conf->raid_disks,62296229- conf->previous_raid_disks), conf->chunk_sectors /62306230- STRIPE_SECTORS, GFP_KERNEL);62246224+ conf->previous_raid_disks),62256225+ max(conf->chunk_sectors,62266226+ conf->prev_chunk_sectors)62276227+ / STRIPE_SECTORS,62286228+ GFP_KERNEL);6231622962326230 if (!percpu->scribble || (conf->level == 6 && !percpu->spare_page)) {62336231 free_scratch_buffer(conf, percpu);···72067198 if (!check_stripe_cache(mddev))72077199 return -ENOSPC;7208720072017201+ if (mddev->new_chunk_sectors > mddev->chunk_sectors ||72027202+ mddev->delta_disks > 
0)72037203+ if (resize_chunks(conf,72047204+ conf->previous_raid_disks72057205+ + max(0, mddev->delta_disks),72067206+ max(mddev->new_chunk_sectors,72077207+ mddev->chunk_sectors)72087208+ ) < 0)72097209+ return -ENOMEM;72097210 return resize_stripes(conf, (conf->previous_raid_disks72107211 + mddev->delta_disks));72117212}
+12
drivers/mmc/card/block.c
···10291029 md->reset_done &= ~type;10301030}1031103110321032+int mmc_access_rpmb(struct mmc_queue *mq)10331033+{10341034+ struct mmc_blk_data *md = mq->data;10351035+ /*10361036+ * If this is an RPMB partition access, return true10371037+ */10381038+ if (md && md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)10391039+ return true;10401040+10411041+ return false;10421042+}10431043+10321044static int mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)10331045{10341046 struct mmc_blk_data *md = mq->data;
+1-1
drivers/mmc/card/queue.c
···3838 return BLKPREP_KILL;3939 }40404141- if (mq && mmc_card_removed(mq->card))4141+ if (mq && (mmc_card_removed(mq->card) || mmc_access_rpmb(mq)))4242 return BLKPREP_KILL;43434444 req->cmd_flags |= REQ_DONTPREP;
+2
drivers/mmc/card/queue.h
···7373extern int mmc_packed_init(struct mmc_queue *, struct mmc_card *);7474extern void mmc_packed_clean(struct mmc_queue *);75757676+extern int mmc_access_rpmb(struct mmc_queue *);7777+7678#endif
+1
drivers/mmc/core/core.c
···26512651 switch (mode) {26522652 case PM_HIBERNATION_PREPARE:26532653 case PM_SUSPEND_PREPARE:26542654+ case PM_RESTORE_PREPARE:26542655 spin_lock_irqsave(&host->lock, flags);26552656 host->rescan_disable = 1;26562657 spin_unlock_irqrestore(&host->lock, flags);
+5-2
drivers/mmc/host/dw_mmc.c
···589589 host->ring_size = PAGE_SIZE / sizeof(struct idmac_desc);590590591591 /* Forward link the descriptor list */592592- for (i = 0, p = host->sg_cpu; i < host->ring_size - 1; i++, p++)592592+ for (i = 0, p = host->sg_cpu; i < host->ring_size - 1; i++, p++) {593593 p->des3 = cpu_to_le32(host->sg_dma +594594 (sizeof(struct idmac_desc) * (i + 1)));595595+ p->des1 = 0;596596+ }595597596598 /* Set the last descriptor as the end-of-ring descriptor */597599 p->des3 = cpu_to_le32(host->sg_dma);···13021300 int gpio_cd = mmc_gpio_get_cd(mmc);1303130113041302 /* Use platform get_cd function, else try onboard card detect */13051305- if (brd->quirks & DW_MCI_QUIRK_BROKEN_CARD_DETECTION)13031303+ if ((brd->quirks & DW_MCI_QUIRK_BROKEN_CARD_DETECTION) ||13041304+ (mmc->caps & MMC_CAP_NONREMOVABLE))13061305 present = 1;13071306 else if (!IS_ERR_VALUE(gpio_cd))13081307 present = gpio_cd;
···310310 blk_rq_map_sg(req->q, req, pdu->usgl.sg);311311312312 ret = ubiblock_read(pdu);313313+ rq_flush_dcache_pages(req);314314+313315 blk_mq_end_request(req, ret);314316}315317
+4-3
drivers/net/can/xilinx_can.c
···509509 cf->can_id |= CAN_RTR_FLAG;510510 }511511512512- if (!(id_xcan & XCAN_IDR_SRR_MASK)) {513513- data[0] = priv->read_reg(priv, XCAN_RXFIFO_DW1_OFFSET);514514- data[1] = priv->read_reg(priv, XCAN_RXFIFO_DW2_OFFSET);512512+ /* DW1/DW2 must always be read to remove message from RXFIFO */513513+ data[0] = priv->read_reg(priv, XCAN_RXFIFO_DW1_OFFSET);514514+ data[1] = priv->read_reg(priv, XCAN_RXFIFO_DW2_OFFSET);515515516516+ if (!(cf->can_id & CAN_RTR_FLAG)) {516517 /* Change Xilinx CAN data format to socketCAN data format */517518 if (cf->can_dlc > 0)518519 *(__be32 *)(cf->data) = cpu_to_be32(data[0]);
···11config NET_XGENE22 tristate "APM X-Gene SoC Ethernet Driver"33 depends on HAS_DMA44+ depends on ARCH_XGENE || COMPILE_TEST45 select PHYLIB56 help67 This is the Ethernet driver for the on-chip ethernet interface on the
+5-5
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
···47864786{47874787 struct bnx2x *bp = netdev_priv(dev);4788478847894789+ if (pci_num_vf(bp->pdev)) {47904790+ DP(BNX2X_MSG_IOV, "VFs are enabled, can not change MTU\n");47914791+ return -EPERM;47924792+ }47934793+47894794 if (bp->recovery_state != BNX2X_RECOVERY_DONE) {47904795 BNX2X_ERR("Can't perform change MTU during parity recovery\n");47914796 return -EAGAIN;···49424937 return -ENODEV;49434938 }49444939 bp = netdev_priv(dev);49454945-49464946- if (pci_num_vf(bp->pdev)) {49474947- DP(BNX2X_MSG_IOV, "VFs are enabled, can not change MTU\n");49484948- return -EPERM;49494949- }4950494049514941 if (bp->recovery_state != BNX2X_RECOVERY_DONE) {49524942 BNX2X_ERR("Handling parity error recovery. Try again later\n");
+7-2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
···1337113371 /* Management FW 'remembers' living interfaces. Allow it some time1337213372 * to forget previously living interfaces, allowing a proper re-load.1337313373 */1337413374- if (is_kdump_kernel())1337513375- msleep(5000);1337413374+ if (is_kdump_kernel()) {1337513375+ ktime_t now = ktime_get_boottime();1337613376+ ktime_t fw_ready_time = ktime_set(5, 0);1337713377+1337813378+ if (ktime_before(now, fw_ready_time))1337913379+ msleep(ktime_ms_delta(fw_ready_time, now));1338013380+ }13376133811337713382 /* An estimated maximum supported CoS number according to the chip1337813383 * version.
+10-1
drivers/net/ethernet/cadence/macb.c
···981981 struct macb_queue *queue = dev_id;982982 struct macb *bp = queue->bp;983983 struct net_device *dev = bp->dev;984984- u32 status;984984+ u32 status, ctrl;985985986986 status = queue_readl(queue, ISR);987987···10361036 * Link change detection isn't possible with RMII, so we'll10371037 * add that if/when we get our hands on a full-blown MII PHY.10381038 */10391039+10401040+ if (status & MACB_BIT(RXUBR)) {10411041+ ctrl = macb_readl(bp, NCR);10421042+ macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE));10431043+ macb_writel(bp, NCR, ctrl | MACB_BIT(RE));10441044+10451045+ if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)10461046+ macb_writel(bp, ISR, MACB_BIT(RXUBR));10471047+ }1039104810401049 if (status & MACB_BIT(ISR_ROVR)) {10411050 /* We missed at least one packet */
···610610 unsigned int total_bytes = 0, total_packets = 0;611611 u16 cleaned_count = fm10k_desc_unused(rx_ring);612612613613- do {613613+ while (likely(total_packets < budget)) {614614 union fm10k_rx_desc *rx_desc;615615616616 /* return some buffers to hardware, one at a time is too slow */···659659660660 /* update budget accounting */661661 total_packets++;662662- } while (likely(total_packets < budget));662662+ }663663664664 /* place incomplete frames back on ring for completion */665665 rx_ring->skb = skb;
+3-1
drivers/net/ethernet/intel/igb/igb_main.c
···10361036 adapter->tx_ring[q_vector->tx.ring->queue_index] = NULL;1037103710381038 if (q_vector->rx.ring)10391039- adapter->tx_ring[q_vector->rx.ring->queue_index] = NULL;10391039+ adapter->rx_ring[q_vector->rx.ring->queue_index] = NULL;1040104010411041 netif_napi_del(&q_vector->napi);10421042···12071207 q_vector = adapter->q_vector[v_idx];12081208 if (!q_vector)12091209 q_vector = kzalloc(size, GFP_KERNEL);12101210+ else12111211+ memset(q_vector, 0, size);12101212 if (!q_vector)12111213 return -ENOMEM;12121214
···139139 int i;140140 int offset = next - start;141141142142- for (i = 0; i <= num; i++) {142142+ for (i = 0; i < num; i++) {143143 ret += be64_to_cpu(*curr);144144 curr += offset;145145 }
···2727config AMD_XGBE_PHY2828 tristate "Driver for the AMD 10GbE (amd-xgbe) PHYs"2929 depends on (OF || ACPI) && HAS_IOMEM3030+ depends on ARM64 || COMPILE_TEST3031 ---help---3132 Currently supports the AMD 10GbE PHY3233
+4-1
drivers/net/phy/mdio-gpio.c
···168168 if (!new_bus->irq[i])169169 new_bus->irq[i] = PHY_POLL;170170171171- snprintf(new_bus->id, MII_BUS_ID_SIZE, "gpio-%x", bus_id);171171+ if (bus_id != -1)172172+ snprintf(new_bus->id, MII_BUS_ID_SIZE, "gpio-%x", bus_id);173173+ else174174+ strncpy(new_bus->id, "gpio", MII_BUS_ID_SIZE);172175173176 if (devm_gpio_request(dev, bitbang->mdc, "mdc"))174177 goto out_free_bus;
+2-1
drivers/net/phy/micrel.c
···548548 }549549550550 clk = devm_clk_get(&phydev->dev, "rmii-ref");551551- if (!IS_ERR(clk)) {551551+ /* NOTE: clk may be NULL if building without CONFIG_HAVE_CLK */552552+ if (!IS_ERR_OR_NULL(clk)) {552553 unsigned long rate = clk_get_rate(clk);553554 bool rmii_ref_clk_sel_25_mhz;554555
···12851285 struct net_device *net)12861286{12871287 struct usbnet *dev = netdev_priv(net);12881288- int length;12881288+ unsigned int length;12891289 struct urb *urb = NULL;12901290 struct skb_data *entry;12911291 struct driver_info *info = dev->driver_info;···14131413 }14141414 } else14151415 netif_dbg(dev, tx_queued, dev->net,14161416- "> tx, len %d, type 0x%x\n", length, skb->protocol);14161416+ "> tx, len %u, type 0x%x\n", length, skb->protocol);14171417#ifdef CONFIG_PM14181418deferred:14191419#endif
+25-27
drivers/net/wireless/ath/ath9k/xmit.c
···11031103 struct sk_buff *skb;11041104 struct ath_frame_info *fi;11051105 struct ieee80211_tx_info *info;11061106- struct ieee80211_vif *vif;11071106 struct ath_hw *ah = sc->sc_ah;1108110711091108 if (sc->tx99_state || !ah->tpc_enabled)11101109 return MAX_RATE_POWER;1111111011121111 skb = bf->bf_mpdu;11131113- info = IEEE80211_SKB_CB(skb);11141114- vif = info->control.vif;11151115-11161116- if (!vif) {11171117- max_power = sc->cur_chan->cur_txpower;11181118- goto out;11191119- }11201120-11211121- if (vif->bss_conf.txpower_type != NL80211_TX_POWER_LIMITED) {11221122- max_power = min_t(u8, sc->cur_chan->cur_txpower,11231123- 2 * vif->bss_conf.txpower);11241124- goto out;11251125- }11261126-11271112 fi = get_frame_info(skb);11131113+ info = IEEE80211_SKB_CB(skb);1128111411291115 if (!AR_SREV_9300_20_OR_LATER(ah)) {11301116 int txpower = fi->tx_power;···11471161 txpower -= 2;1148116211491163 txpower = max(txpower, 0);11501150- max_power = min_t(u8, ah->tx_power[rateidx],11511151- 2 * vif->bss_conf.txpower);11521152- max_power = min_t(u8, max_power, txpower);11641164+ max_power = min_t(u8, ah->tx_power[rateidx], txpower);11651165+11661166+ /* XXX: clamp minimum TX power at 1 for AR9160 since if11671167+ * max_power is set to 0, frames are transmitted at max11681168+ * TX power11691169+ */11701170+ if (!max_power && !AR_SREV_9280_20_OR_LATER(ah))11711171+ max_power = 1;11531172 } else if (!bf->bf_state.bfs_paprd) {11541173 if (rateidx < 8 && (info->flags & IEEE80211_TX_CTL_STBC))11551174 max_power = min_t(u8, ah->tx_power_stbc[rateidx],11561156- 2 * vif->bss_conf.txpower);11751175+ fi->tx_power);11571176 else11581177 max_power = min_t(u8, ah->tx_power[rateidx],11591159- 2 * vif->bss_conf.txpower);11601160- max_power = min(max_power, fi->tx_power);11781178+ fi->tx_power);11611179 } else {11621180 max_power = ah->paprd_training_power;11631181 }11641164-out:11651165- /* XXX: clamp minimum TX power at 1 for AR9160 since if max_power11661166- * is set to 0, frames are 
transmitted at max TX power11671167- */11681168- return (!max_power && !AR_SREV_9280_20_OR_LATER(ah)) ? 1 : max_power;11821182+11831183+ return max_power;11691184}1170118511711186static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf,···21162129 struct ath_node *an = NULL;21172130 enum ath9k_key_type keytype;21182131 bool short_preamble = false;21322132+ u8 txpower;2119213321202134 /*21212135 * We check if Short Preamble is needed for the CTS rate by···21332145 if (sta)21342146 an = (struct ath_node *) sta->drv_priv;2135214721482148+ if (tx_info->control.vif) {21492149+ struct ieee80211_vif *vif = tx_info->control.vif;21502150+21512151+ txpower = 2 * vif->bss_conf.txpower;21522152+ } else {21532153+ struct ath_softc *sc = hw->priv;21542154+21552155+ txpower = sc->cur_chan->cur_txpower;21562156+ }21572157+21362158 memset(fi, 0, sizeof(*fi));21372159 fi->txq = -1;21382160 if (hw_key)···21532155 fi->keyix = ATH9K_TXKEYIX_INVALID;21542156 fi->keytype = keytype;21552157 fi->framelen = framelen;21562156- fi->tx_power = MAX_RATE_POWER;21582158+ fi->tx_power = txpower;2157215921582160 if (!rate)21592161 return;
+2
drivers/net/wireless/iwlwifi/iwl-fw-file.h
···244244 * longer than the passive one, which is essential for fragmented scan.245245 * @IWL_UCODE_TLV_API_WIFI_MCC_UPDATE: ucode supports MCC updates with source.246246 * IWL_UCODE_TLV_API_HDC_PHASE_0: ucode supports finer configuration of LTR247247+ * @IWL_UCODE_TLV_API_TX_POWER_DEV: new API for tx power.247248 * @IWL_UCODE_TLV_API_BASIC_DWELL: use only basic dwell time in scan command,248249 * regardless of the band or the number of the probes. FW will calculate249250 * the actual dwell time.···261260 IWL_UCODE_TLV_API_FRAGMENTED_SCAN = BIT(8),262261 IWL_UCODE_TLV_API_WIFI_MCC_UPDATE = BIT(9),263262 IWL_UCODE_TLV_API_HDC_PHASE_0 = BIT(10),263263+ IWL_UCODE_TLV_API_TX_POWER_DEV = BIT(11),264264 IWL_UCODE_TLV_API_BASIC_DWELL = BIT(13),265265 IWL_UCODE_TLV_API_SCD_CFG = BIT(15),266266 IWL_UCODE_TLV_API_SINGLE_SCAN_EBS = BIT(16),
+27-14
drivers/net/wireless/iwlwifi/iwl-trans.h
···66 * GPL LICENSE SUMMARY77 *88 * Copyright(c) 2007 - 2014 Intel Corporation. All rights reserved.99- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH99+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH1010 *1111 * This program is free software; you can redistribute it and/or modify1212 * it under the terms of version 2 of the GNU General Public License as···3232 * BSD LICENSE3333 *3434 * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.3535- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH3535+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH3636 * All rights reserved.3737 *3838 * Redistribution and use in source and binary forms, with or without···421421 *422422 * All the handlers MUST be implemented423423 *424424- * @start_hw: starts the HW- from that point on, the HW can send interrupts425425- * May sleep424424+ * @start_hw: starts the HW. If low_power is true, the NIC needs to be taken425425+ * out of a low power state. From that point on, the HW can send426426+ * interrupts. May sleep.426427 * @op_mode_leave: Turn off the HW RF kill indication if on427428 * May sleep428429 * @start_fw: allocates and inits all the resources for the transport···433432 * the SCD base address in SRAM, then provide it here, or 0 otherwise.434433 * May sleep435434 * @stop_device: stops the whole device (embedded CPU put to reset) and stops436436- * the HW. From that point on, the HW will be in low power but will still437437- * issue interrupt if the HW RF kill is triggered. This callback must do438438- * the right thing and not crash even if start_hw() was called but not439439- * start_fw(). May sleep435435+ * the HW. 
If low_power is true, the NIC will be put in low power state.436436+ * From that point on, the HW will be stopped but will still issue an437437+ * interrupt if the HW RF kill switch is triggered.438438+ * This callback must do the right thing and not crash even if %start_hw()439439+ * was called but not &start_fw(). May sleep.440440 * @d3_suspend: put the device into the correct mode for WoWLAN during441441 * suspend. This is optional, if not implemented WoWLAN will not be442442 * supported. This callback may sleep.···493491 */494492struct iwl_trans_ops {495493496496- int (*start_hw)(struct iwl_trans *iwl_trans);494494+ int (*start_hw)(struct iwl_trans *iwl_trans, bool low_power);497495 void (*op_mode_leave)(struct iwl_trans *iwl_trans);498496 int (*start_fw)(struct iwl_trans *trans, const struct fw_img *fw,499497 bool run_in_rfkill);500498 int (*update_sf)(struct iwl_trans *trans,501499 struct iwl_sf_region *st_fwrd_space);502500 void (*fw_alive)(struct iwl_trans *trans, u32 scd_addr);503503- void (*stop_device)(struct iwl_trans *trans);501501+ void (*stop_device)(struct iwl_trans *trans, bool low_power);504502505503 void (*d3_suspend)(struct iwl_trans *trans, bool test);506504 int (*d3_resume)(struct iwl_trans *trans, enum iwl_d3_status *status,···654652 trans->ops->configure(trans, trans_cfg);655653}656654657657-static inline int iwl_trans_start_hw(struct iwl_trans *trans)655655+static inline int _iwl_trans_start_hw(struct iwl_trans *trans, bool low_power)658656{659657 might_sleep();660658661661- return trans->ops->start_hw(trans);659659+ return trans->ops->start_hw(trans, low_power);660660+}661661+662662+static inline int iwl_trans_start_hw(struct iwl_trans *trans)663663+{664664+ return trans->ops->start_hw(trans, true);662665}663666664667static inline void iwl_trans_op_mode_leave(struct iwl_trans *trans)···710703 return 0;711704}712705713713-static inline void iwl_trans_stop_device(struct iwl_trans *trans)706706+static inline void _iwl_trans_stop_device(struct 
iwl_trans *trans,707707+ bool low_power)714708{715709 might_sleep();716710717717- trans->ops->stop_device(trans);711711+ trans->ops->stop_device(trans, low_power);718712719713 trans->state = IWL_TRANS_NO_FW;714714+}715715+716716+static inline void iwl_trans_stop_device(struct iwl_trans *trans)717717+{718718+ _iwl_trans_stop_device(trans, true);720719}721720722721static inline void iwl_trans_d3_suspend(struct iwl_trans *trans, bool test)
···298298} __packed;299299300300/**301301+ * struct iwl_reduce_tx_power_cmd - TX power reduction command302302+ * REDUCE_TX_POWER_CMD = 0x9f303303+ * @flags: (reserved for future implementation)304304+ * @mac_context_id: id of the mac ctx for which we are reducing TX power.305305+ * @pwr_restriction: TX power restriction in dBms.306306+ */307307+struct iwl_reduce_tx_power_cmd {308308+ u8 flags;309309+ u8 mac_context_id;310310+ __le16 pwr_restriction;311311+} __packed; /* TX_REDUCED_POWER_API_S_VER_1 */312312+313313+/**314314+ * struct iwl_dev_tx_power_cmd - TX power reduction command315315+ * REDUCE_TX_POWER_CMD = 0x9f316316+ * @set_mode: 0 - MAC tx power, 1 - device tx power317317+ * @mac_context_id: id of the mac ctx for which we are reducing TX power.318318+ * @pwr_restriction: TX power restriction in 1/8 dBms.319319+ * @dev_24: device TX power restriction in 1/8 dBms320320+ * @dev_52_low: device TX power restriction upper band - low321321+ * @dev_52_high: device TX power restriction upper band - high322322+ */323323+struct iwl_dev_tx_power_cmd {324324+ __le32 set_mode;325325+ __le32 mac_context_id;326326+ __le16 pwr_restriction;327327+ __le16 dev_24;328328+ __le16 dev_52_low;329329+ __le16 dev_52_high;330330+} __packed; /* TX_REDUCED_POWER_API_S_VER_2 */331331+332332+#define IWL_DEV_MAX_TX_POWER 0x7FFF333333+334334+/**301335 * struct iwl_beacon_filter_cmd302336 * REPLY_BEACON_FILTERING_CMD = 0xd2 (command)303337 * @id_and_color: MAC context identifier
+2-42
drivers/net/wireless/iwlwifi/mvm/fw-api-scan.h
···122122 SCAN_COMP_STATUS_ERR_ALLOC_TE = 0x0C,123123};124124125125-/**126126- * struct iwl_scan_results_notif - scan results for one channel127127- * ( SCAN_RESULTS_NOTIFICATION = 0x83 )128128- * @channel: which channel the results are from129129- * @band: 0 for 5.2 GHz, 1 for 2.4 GHz130130- * @probe_status: SCAN_PROBE_STATUS_*, indicates success of probe request131131- * @num_probe_not_sent: # of request that weren't sent due to not enough time132132- * @duration: duration spent in channel, in usecs133133- * @statistics: statistics gathered for this channel134134- */135135-struct iwl_scan_results_notif {136136- u8 channel;137137- u8 band;138138- u8 probe_status;139139- u8 num_probe_not_sent;140140- __le32 duration;141141- __le32 statistics[SCAN_RESULTS_STATISTICS];142142-} __packed; /* SCAN_RESULT_NTF_API_S_VER_2 */143143-144144-/**145145- * struct iwl_scan_complete_notif - notifies end of scanning (all channels)146146- * ( SCAN_COMPLETE_NOTIFICATION = 0x84 )147147- * @scanned_channels: number of channels scanned (and number of valid results)148148- * @status: one of SCAN_COMP_STATUS_*149149- * @bt_status: BT on/off status150150- * @last_channel: last channel that was scanned151151- * @tsf_low: TSF timer (lower half) in usecs152152- * @tsf_high: TSF timer (higher half) in usecs153153- * @results: array of scan results, only "scanned_channels" of them are valid154154- */155155-struct iwl_scan_complete_notif {156156- u8 scanned_channels;157157- u8 status;158158- u8 bt_status;159159- u8 last_channel;160160- __le32 tsf_low;161161- __le32 tsf_high;162162- struct iwl_scan_results_notif results[];163163-} __packed; /* SCAN_COMPLETE_NTF_API_S_VER_2 */164164-165125/* scan offload */166126#define IWL_SCAN_MAX_BLACKLIST_LEN 64167127#define IWL_SCAN_SHORT_BLACKLIST_LEN 16···514554} __packed;515555516556/**517517- * struct iwl_lmac_scan_results_notif - scan results for one channel -557557+ * struct iwl_scan_results_notif - scan results for one channel -518558 * 
SCAN_RESULT_NTF_API_S_VER_3519559 * @channel: which channel the results are from520560 * @band: 0 for 5.2 GHz, 1 for 2.4 GHz···522562 * @num_probe_not_sent: # of request that weren't sent due to not enough time523563 * @duration: duration spent in channel, in usecs524564 */525525-struct iwl_lmac_scan_results_notif {565565+struct iwl_scan_results_notif {526566 u8 channel;527567 u8 band;528568 u8 probe_status;
-13
drivers/net/wireless/iwlwifi/mvm/fw-api.h
···281281 __le32 valid;282282} __packed;283283284284-/**285285- * struct iwl_reduce_tx_power_cmd - TX power reduction command286286- * REDUCE_TX_POWER_CMD = 0x9f287287- * @flags: (reserved for future implementation)288288- * @mac_context_id: id of the mac ctx for which we are reducing TX power.289289- * @pwr_restriction: TX power restriction in dBms.290290- */291291-struct iwl_reduce_tx_power_cmd {292292- u8 flags;293293- u8 mac_context_id;294294- __le16 pwr_restriction;295295-} __packed; /* TX_REDUCED_POWER_API_S_VER_1 */296296-297284/*298285 * Calibration control struct.299286 * Sent as part of the phy configuration command.
drivers/net/wireless/iwlwifi/mvm/fw.c | +21 -33
···
  * GPL LICENSE SUMMARY
  *
  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of version 2 of the GNU General Public License as
···
  * BSD LICENSE
  *
  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
···
 
 	lockdep_assert_held(&mvm->mutex);
 
-	if (WARN_ON_ONCE(mvm->init_ucode_complete || mvm->calibrating))
+	if (WARN_ON_ONCE(mvm->calibrating))
 		return 0;
 
 	iwl_init_notification_wait(&mvm->notif_wait,
···
 	 */
 	ret = iwl_wait_notification(&mvm->notif_wait, &calib_wait,
 				    MVM_UCODE_CALIB_TIMEOUT);
-	if (!ret)
-		mvm->init_ucode_complete = true;
 
 	if (ret && iwl_mvm_is_radio_killed(mvm)) {
 		IWL_DEBUG_RF_KILL(mvm, "RFKILL while calibrating.\n");
···
 		 le32_to_cpu(desc->trig_desc.type));
 
 	mvm->fw_dump_desc = desc;
-
-	/* stop recording */
-	if (mvm->cfg->device_family == IWL_DEVICE_FAMILY_7000) {
-		iwl_set_bits_prph(mvm->trans, MON_BUFF_SAMPLE_CTL, 0x100);
-	} else {
-		iwl_write_prph(mvm->trans, DBGC_IN_SAMPLE, 0);
-		/* wait before we collect the data till the DBGC stop */
-		udelay(100);
-	}
 
 	queue_delayed_work(system_wq, &mvm->fw_dump_wk, delay);
···
 	 * module loading, load init ucode now
 	 * (for example, if we were in RFKILL)
 	 */
-	if (!mvm->init_ucode_complete) {
-		ret = iwl_run_init_mvm_ucode(mvm, false);
-		if (ret && !iwlmvm_mod_params.init_dbg) {
-			IWL_ERR(mvm, "Failed to run INIT ucode: %d\n", ret);
-			/* this can't happen */
-			if (WARN_ON(ret > 0))
-				ret = -ERFKILL;
-			goto error;
-		}
-		if (!iwlmvm_mod_params.init_dbg) {
-			/*
-			 * should stop and start HW since that INIT
-			 * image just loaded
-			 */
-			iwl_trans_stop_device(mvm->trans);
-			ret = iwl_trans_start_hw(mvm->trans);
-			if (ret)
-				return ret;
-		}
+	ret = iwl_run_init_mvm_ucode(mvm, false);
+	if (ret && !iwlmvm_mod_params.init_dbg) {
+		IWL_ERR(mvm, "Failed to run INIT ucode: %d\n", ret);
+		/* this can't happen */
+		if (WARN_ON(ret > 0))
+			ret = -ERFKILL;
+		goto error;
+	}
+	if (!iwlmvm_mod_params.init_dbg) {
+		/*
+		 * Stop and start the transport without entering low power
+		 * mode. This will save the state of other components on the
+		 * device that are triggered by the INIT firmware (MFUART).
+		 */
+		_iwl_trans_stop_device(mvm->trans, false);
+		_iwl_trans_start_hw(mvm->trans, false);
+		if (ret)
+			return ret;
 	}
 
 	if (iwlmvm_mod_params.init_dbg)
drivers/net/wireless/iwlwifi/mvm/mac80211.c | +23 -3
···
 
 	clear_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status);
 	iwl_mvm_d0i3_enable_tx(mvm, NULL);
-	ret = iwl_mvm_update_quotas(mvm, false, NULL);
+	ret = iwl_mvm_update_quotas(mvm, true, NULL);
 	if (ret)
 		IWL_ERR(mvm, "Failed to update quotas after restart (%d)\n",
 			ret);
···
 	return NULL;
 }
 
-static int iwl_mvm_set_tx_power(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
-				s8 tx_power)
+static int iwl_mvm_set_tx_power_old(struct iwl_mvm *mvm,
+				    struct ieee80211_vif *vif, s8 tx_power)
 {
 	/* FW is in charge of regulatory enforcement */
 	struct iwl_reduce_tx_power_cmd reduce_txpwr_cmd = {
···
 	return iwl_mvm_send_cmd_pdu(mvm, REDUCE_TX_POWER_CMD, 0,
 				    sizeof(reduce_txpwr_cmd),
 				    &reduce_txpwr_cmd);
+}
+
+static int iwl_mvm_set_tx_power(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+				s16 tx_power)
+{
+	struct iwl_dev_tx_power_cmd cmd = {
+		.set_mode = 0,
+		.mac_context_id =
+			cpu_to_le32(iwl_mvm_vif_from_mac80211(vif)->id),
+		.pwr_restriction = cpu_to_le16(8 * tx_power),
+	};
+
+	if (!(mvm->fw->ucode_capa.api[0] & IWL_UCODE_TLV_API_TX_POWER_DEV))
+		return iwl_mvm_set_tx_power_old(mvm, vif, tx_power);
+
+	if (tx_power == IWL_DEFAULT_MAX_TX_POWER)
+		cmd.pwr_restriction = cpu_to_le16(IWL_DEV_MAX_TX_POWER);
+
+	return iwl_mvm_send_cmd_pdu(mvm, REDUCE_TX_POWER_CMD, 0,
+				    sizeof(cmd), &cmd);
 }
 
 static int iwl_mvm_mac_add_interface(struct ieee80211_hw *hw,
···
 		return;
 
 	mutex_lock(&mvm->mutex);
+
+	/* stop recording */
+	if (mvm->cfg->device_family == IWL_DEVICE_FAMILY_7000) {
+		iwl_set_bits_prph(mvm->trans, MON_BUFF_SAMPLE_CTL, 0x100);
+	} else {
+		iwl_write_prph(mvm->trans, DBGC_IN_SAMPLE, 0);
+		/* wait before we collect the data till the DBGC stop */
+		udelay(100);
+	}
+
 	iwl_mvm_fw_error_dump(mvm);
 
 	/* start recording again if the firmware is not crashed */
drivers/net/wireless/iwlwifi/mvm/rx.c | +5
···
 	if (vif->type != NL80211_IFTYPE_STATION)
 		return;
 
+	if (sig == 0) {
+		IWL_DEBUG_RX(mvm, "RSSI is 0 - skip signal based decision\n");
+		return;
+	}
+
 	mvmvif->bf_data.ave_beacon_signal = sig;
 
 	/* BT Coex */
···
 *
 * GPL LICENSE SUMMARY
 *
- * Copyright(c) 2007 - 2014 Intel Corporation. All rights reserved.
- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2007 - 2015 Intel Corporation. All rights reserved.
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of version 2 of the GNU General Public License as
···
 *
 * BSD LICENSE
 *
- * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.
- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2005 - 2015 Intel Corporation. All rights reserved.
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
···
static void iwl_pcie_alloc_fw_monitor(struct iwl_trans *trans)
{
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
-	struct page *page;
+	struct page *page = NULL;
 	dma_addr_t phys;
 	u32 size;
 	u8 power;
···
 				DMA_FROM_DEVICE);
 		if (dma_mapping_error(trans->dev, phys)) {
 			__free_pages(page, order);
+			page = NULL;
 			continue;
 		}
 		IWL_INFO(trans,
···
 	iwl_pcie_tx_start(trans, scd_addr);
}

-static void iwl_trans_pcie_stop_device(struct iwl_trans *trans)
+static void iwl_trans_pcie_stop_device(struct iwl_trans *trans, bool low_power)
{
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
 	bool hw_rfkill, was_hw_rfkill;
···
void iwl_trans_pcie_rf_kill(struct iwl_trans *trans, bool state)
{
 	if (iwl_op_mode_hw_rf_kill(trans->op_mode, state))
-		iwl_trans_pcie_stop_device(trans);
+		iwl_trans_pcie_stop_device(trans, true);
}

static void iwl_trans_pcie_d3_suspend(struct iwl_trans *trans, bool test)
···
 	return 0;
}

-static int iwl_trans_pcie_start_hw(struct iwl_trans *trans)
+static int iwl_trans_pcie_start_hw(struct iwl_trans *trans, bool low_power)
{
 	bool hw_rfkill;
 	int err;
drivers/net/wireless/rtlwifi/usb.c | +1 -1
···
 
 	do {
 		status = usb_control_msg(udev, pipe, request, reqtype, value,
-					 index, pdata, len, 0); /*max. timeout*/
+					 index, pdata, len, 1000);
 		if (status < 0) {
 			/* firmware download is checksumed, don't retry */
 			if ((value >= FW_8192C_START_ADDRESS &&
drivers/parisc/superio.c | +1 -1
···
 		BUG();
 		return -1;
 	}
-	printk("superio_fixup_irq(%s) ven 0x%x dev 0x%x from %pf\n",
+	printk(KERN_DEBUG "superio_fixup_irq(%s) ven 0x%x dev 0x%x from %ps\n",
 		pci_name(pcidev),
 		pcidev->vendor, pcidev->device,
 		__builtin_return_address(0));
drivers/pinctrl/qcom/pinctrl-spmi-gpio.c | +7 -6
···
 		return ret;
 
 	val = pad->buffer_type << PMIC_GPIO_REG_OUT_TYPE_SHIFT;
-	val = pad->strength << PMIC_GPIO_REG_OUT_STRENGTH_SHIFT;
+	val |= pad->strength << PMIC_GPIO_REG_OUT_STRENGTH_SHIFT;
 
 	ret = pmic_gpio_write(state, pad, PMIC_GPIO_REG_DIG_OUT_CTL, val);
 	if (ret < 0)
···
 		seq_puts(s, " ---");
 	} else {
 
-		if (!pad->input_enabled) {
+		if (pad->input_enabled) {
 			ret = pmic_gpio_read(state, pad, PMIC_MPP_REG_RT_STS);
-			if (!ret) {
-				ret &= PMIC_MPP_REG_RT_STS_VAL_MASK;
-				pad->out_value = ret;
-			}
+			if (ret < 0)
+				return;
+
+			ret &= PMIC_MPP_REG_RT_STS_VAL_MASK;
+			pad->out_value = ret;
 		}
 
 		seq_printf(s, " %-4s", pad->output_enabled ? "out" : "in");
···
config POWER_RESET_BRCMSTB
 	bool "Broadcom STB reset driver"
 	depends on ARM || MIPS || COMPILE_TEST
+	depends on MFD_SYSCON
 	default ARCH_BRCMSTB
 	help
 	  This driver provides restart support for Broadcom STB boards.
drivers/power/reset/at91-reset.c | +2 -2
···
 		res = platform_get_resource(pdev, IORESOURCE_MEM, idx + 1);
 		at91_ramc_base[idx] = devm_ioremap(&pdev->dev, res->start,
 						   resource_size(res));
-		if (IS_ERR(at91_ramc_base[idx])) {
+		if (!at91_ramc_base[idx]) {
 			dev_err(&pdev->dev, "Could not map ram controller address\n");
-			return PTR_ERR(at91_ramc_base[idx]);
+			return -ENOMEM;
 		}
 	}
drivers/power/reset/ltc2952-poweroff.c | +3 -15
···
 
static void ltc2952_poweroff_start_wde(struct ltc2952_poweroff *data)
{
-	if (hrtimer_start(&data->timer_wde, data->wde_interval,
-			  HRTIMER_MODE_REL)) {
-		/*
-		 * The device will not toggle the watchdog reset,
-		 * thus shut down is only safe if the PowerPath controller
-		 * has a long enough time-off before triggering a hardware
-		 * power-off.
-		 *
-		 * Only sending a warning as the system will power-off anyway
-		 */
-		dev_err(data->dev, "unable to start the timer\n");
-	}
+	hrtimer_start(&data->timer_wde, data->wde_interval, HRTIMER_MODE_REL);
}

static enum hrtimer_restart
···
 	}
 
 	if (gpiod_get_value(data->gpio_trigger)) {
-		if (hrtimer_start(&data->timer_trigger, data->trigger_delay,
-				  HRTIMER_MODE_REL))
-			dev_err(data->dev, "unable to start the wait timer\n");
+		hrtimer_start(&data->timer_trigger, data->trigger_delay,
+			      HRTIMER_MODE_REL);
 	} else {
 		hrtimer_cancel(&data->timer_trigger);
 		/* omitting return value check, timer should have been valid */
drivers/rtc/rtc-armada38x.c | +1 -1
···
static int armada38x_rtc_read_time(struct device *dev, struct rtc_time *tm)
{
 	struct armada38x_rtc *rtc = dev_get_drvdata(dev);
-	unsigned long time, time_check, flags;
+	unsigned long time, time_check;
 
 	mutex_lock(&rtc->mutex_time);
 	time = readl(rtc->regs + RTC_TIME);
drivers/spi/Kconfig | +2 -1
···
config SPI_BCM2835
 	tristate "BCM2835 SPI controller"
 	depends on ARCH_BCM2835 || COMPILE_TEST
+	depends on GPIOLIB
 	help
 	  This selects a driver for the Broadcom BCM2835 SPI master.
···
config SPI_FSL_DSPI
 	tristate "Freescale DSPI controller"
 	select REGMAP_MMIO
-	depends on SOC_VF610 || COMPILE_TEST
+	depends on SOC_VF610 || SOC_LS1021A || COMPILE_TEST
 	help
 	  This enables support for the Freescale DSPI controller in master
 	  mode. VF610 platform uses the controller.
drivers/spi/spi-bcm2835.c | +2 -3
···
 			unsigned long xfer_time_us)
{
 	struct bcm2835_spi *bs = spi_master_get_devdata(master);
-	unsigned long timeout = jiffies +
-		max(4 * xfer_time_us * HZ / 1000000, 2uL);
+	/* set timeout to 1 second of maximum polling */
+	unsigned long timeout = jiffies + HZ;
 
 	/* enable HW block without interrupts */
 	bcm2835_wr(bs, BCM2835_SPI_CS, cs | BCM2835_SPI_CS_TA);
 
-	/* set timeout to 4x the expected time, or 2 jiffies */
 	/* loop until finished the transfer */
 	while (bs->rx_len) {
 		/* read from fifo as much as possible */
drivers/spi/spi-bitbang.c | +10 -7
···
{
 	struct spi_bitbang_cs *cs = spi->controller_state;
 	struct spi_bitbang *bitbang;
-	int retval;
 	unsigned long flags;
 
 	bitbang = spi_master_get_devdata(spi->master);
···
 	if (!cs->txrx_word)
 		return -EINVAL;
 
-	retval = bitbang->setup_transfer(spi, NULL);
-	if (retval < 0)
-		return retval;
+	if (bitbang->setup_transfer) {
+		int retval = bitbang->setup_transfer(spi, NULL);
+		if (retval < 0)
+			return retval;
+	}
 
 	dev_dbg(&spi->dev, "%s, %u nsec/bit\n", __func__, 2 * cs->nsecs);
···
 
 		/* init (-1) or override (1) transfer params */
 		if (do_setup != 0) {
-			status = bitbang->setup_transfer(spi, t);
-			if (status < 0)
-				break;
+			if (bitbang->setup_transfer) {
+				status = bitbang->setup_transfer(spi, t);
+				if (status < 0)
+					break;
+			}
 			if (do_setup == -1)
 				do_setup = 0;
 		}
···
 			struct fsl_espi_transfer *trans, u8 *rx_buff)
{
 	struct fsl_espi_transfer *espi_trans = trans;
-	unsigned int n_tx = espi_trans->n_tx;
-	unsigned int n_rx = espi_trans->n_rx;
+	unsigned int total_len = espi_trans->len;
 	struct spi_transfer *t;
 	u8 *local_buf;
 	u8 *rx_buf = rx_buff;
 	unsigned int trans_len;
 	unsigned int addr;
-	int i, pos, loop;
+	unsigned int tx_only;
+	unsigned int rx_pos = 0;
+	unsigned int pos;
+	int i, loop;
 
 	local_buf = kzalloc(SPCOM_TRANLEN_MAX, GFP_KERNEL);
 	if (!local_buf) {
···
 		return;
 	}
 
-	for (pos = 0, loop = 0; pos < n_rx; pos += trans_len, loop++) {
-		trans_len = n_rx - pos;
-		if (trans_len > SPCOM_TRANLEN_MAX - n_tx)
-			trans_len = SPCOM_TRANLEN_MAX - n_tx;
+	for (pos = 0, loop = 0; pos < total_len; pos += trans_len, loop++) {
+		trans_len = total_len - pos;
 
 		i = 0;
+		tx_only = 0;
 		list_for_each_entry(t, &m->transfers, transfer_list) {
 			if (t->tx_buf) {
 				memcpy(local_buf + i, t->tx_buf, t->len);
 				i += t->len;
+				if (!t->rx_buf)
+					tx_only += t->len;
 			}
 		}
 
+		/* Add additional TX bytes to compensate SPCOM_TRANLEN_MAX */
+		if (loop > 0)
+			trans_len += tx_only;
+
+		if (trans_len > SPCOM_TRANLEN_MAX)
+			trans_len = SPCOM_TRANLEN_MAX;
+
+		/* Update device offset */
 		if (pos > 0) {
 			addr = fsl_espi_cmd2addr(local_buf);
-			addr += pos;
+			addr += rx_pos;
 			fsl_espi_addr2cmd(addr, local_buf);
 		}
 
-		espi_trans->n_tx = n_tx;
-		espi_trans->n_rx = trans_len;
-		espi_trans->len = trans_len + n_tx;
+		espi_trans->len = trans_len;
 		espi_trans->tx_buf = local_buf;
 		espi_trans->rx_buf = local_buf;
 		fsl_espi_do_trans(m, espi_trans);
 
-		memcpy(rx_buf + pos, espi_trans->rx_buf + n_tx, trans_len);
+		/* If there is at least one RX byte then copy it to rx_buf */
+		if (tx_only < SPCOM_TRANLEN_MAX)
+			memcpy(rx_buf + rx_pos, espi_trans->rx_buf + tx_only,
+			       trans_len - tx_only);
+
+		rx_pos += trans_len - tx_only;
 
 		if (loop > 0)
-			espi_trans->actual_length += espi_trans->len - n_tx;
+			espi_trans->actual_length += espi_trans->len - tx_only;
 		else
 			espi_trans->actual_length += espi_trans->len;
 	}
···
 	u8 *rx_buf = NULL;
 	unsigned int n_tx = 0;
 	unsigned int n_rx = 0;
+	unsigned int xfer_len = 0;
 	struct fsl_espi_transfer espi_trans;
 
 	list_for_each_entry(t, &m->transfers, transfer_list) {
···
 			n_rx += t->len;
 			rx_buf = t->rx_buf;
 		}
+		if ((t->tx_buf) || (t->rx_buf))
+			xfer_len += t->len;
 	}
 
 	espi_trans.n_tx = n_tx;
 	espi_trans.n_rx = n_rx;
-	espi_trans.len = n_tx + n_rx;
+	espi_trans.len = xfer_len;
 	espi_trans.actual_length = 0;
 	espi_trans.status = 0;
drivers/spi/spi-omap2-mcspi.c | +12 -4
···
 	struct omap2_mcspi	*mcspi;
 	struct omap2_mcspi_dma	*mcspi_dma;
 	struct spi_transfer	*t;
+	int status;
 
 	spi = m->spi;
 	mcspi = spi_master_get_devdata(master);
···
 				tx_buf ? "tx" : "",
 				rx_buf ? "rx" : "",
 				t->bits_per_word);
-			return -EINVAL;
+			status = -EINVAL;
+			goto out;
 		}
 
 		if (m->is_dma_mapped || len < DMA_MIN_BYTES)
···
 			if (dma_mapping_error(mcspi->dev, t->tx_dma)) {
 				dev_dbg(mcspi->dev, "dma %cX %d bytes error\n",
 						'T', len);
-				return -EINVAL;
+				status = -EINVAL;
+				goto out;
 			}
 		}
 		if (mcspi_dma->dma_rx && rx_buf != NULL) {
···
 				if (tx_buf != NULL)
 					dma_unmap_single(mcspi->dev, t->tx_dma,
 							len, DMA_TO_DEVICE);
-				return -EINVAL;
+				status = -EINVAL;
+				goto out;
 			}
 		}
 	}
 
 	omap2_mcspi_work(mcspi, m);
+	/* spi_finalize_current_message() changes the status inside the
+	 * spi_message, save the status here. */
+	status = m->status;
+out:
 	spi_finalize_current_message(master);
-	return 0;
+	return status;
}

static int omap2_mcspi_master_setup(struct omap2_mcspi *mcspi)
drivers/spi/spi.c | +9
···
 		rx_dev = master->dma_rx->device->dev;
 
 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+		/*
+		 * Restore the original value of tx_buf or rx_buf if they are
+		 * NULL.
+		 */
+		if (xfer->tx_buf == master->dummy_tx)
+			xfer->tx_buf = NULL;
+		if (xfer->rx_buf == master->dummy_rx)
+			xfer->rx_buf = NULL;
+
 		if (!master->can_dma(master, msg->spi, xfer))
 			continue;
drivers/staging/gdm724x/gdm_mux.c | +7 -9
···
 	unsigned int start_flag;
 	unsigned int payload_size;
 	unsigned short packet_type;
-	int dummy_cnt;
+	int total_len;
 	u32 packet_size_sum = r->offset;
 	int index;
 	int ret = TO_HOST_INVALID_PACKET;
···
 			break;
 		}
 
-		dummy_cnt = ALIGN(MUX_HEADER_SIZE + payload_size, 4);
+		total_len = ALIGN(MUX_HEADER_SIZE + payload_size, 4);
 
 		if (len - packet_size_sum <
-		    MUX_HEADER_SIZE + payload_size + dummy_cnt) {
+		    total_len) {
 			pr_err("invalid payload : %d %d %04x\n",
 			       payload_size, len, packet_type);
 			break;
···
 			break;
 		}
 
-		packet_size_sum += MUX_HEADER_SIZE + payload_size + dummy_cnt;
+		packet_size_sum += total_len;
 		if (len - packet_size_sum <= MUX_HEADER_SIZE + 2) {
 			ret = r->callback(NULL,
 					  0,
···
 	struct mux_pkt_header *mux_header;
 	struct mux_tx *t = NULL;
 	static u32 seq_num = 1;
-	int dummy_cnt;
 	int total_len;
 	int ret;
 	unsigned long flags;
···
 
 	spin_lock_irqsave(&mux_dev->write_lock, flags);
 
-	dummy_cnt = ALIGN(MUX_HEADER_SIZE + len, 4);
-
-	total_len = len + MUX_HEADER_SIZE + dummy_cnt;
+	total_len = ALIGN(MUX_HEADER_SIZE + len, 4);
 
 	t = alloc_mux_tx(total_len);
 	if (!t) {
···
 	mux_header->packet_type = __cpu_to_le16(packet_type[tty_index]);
 
 	memcpy(t->buf+MUX_HEADER_SIZE, data, len);
-	memset(t->buf+MUX_HEADER_SIZE+len, 0, dummy_cnt);
+	memset(t->buf+MUX_HEADER_SIZE+len, 0, total_len - MUX_HEADER_SIZE -
+	       len);
 
 	t->len = total_len;
 	t->callback = cb;
drivers/staging/rtl8712/rtl871x_ioctl_linux.c | +7 -10
···
 	struct mp_ioctl_handler *phandler;
 	struct mp_ioctl_param *poidparam;
 	unsigned long BytesRead, BytesWritten, BytesNeeded;
-	u8 *pparmbuf = NULL, bset;
+	u8 *pparmbuf, bset;
 	u16 len;
 	uint status;
 	int ret = 0;
 
-	if ((!p->length) || (!p->pointer)) {
-		ret = -EINVAL;
-		goto _r871x_mp_ioctl_hdl_exit;
-	}
+	if ((!p->length) || (!p->pointer))
+		return -EINVAL;
+
 	bset = (u8)(p->flags & 0xFFFF);
 	len = p->length;
-	pparmbuf = NULL;
 	pparmbuf = memdup_user(p->pointer, len);
-	if (IS_ERR(pparmbuf)) {
-		ret = PTR_ERR(pparmbuf);
-		goto _r871x_mp_ioctl_hdl_exit;
-	}
+	if (IS_ERR(pparmbuf))
+		return PTR_ERR(pparmbuf);
+
 	poidparam = (struct mp_ioctl_param *)pparmbuf;
 	if (poidparam->subcode >= MAX_MP_IOCTL_SUBCODE) {
 		ret = -EINVAL;
···
 	add_wait_queue(&tty->read_wait, &wait);
 
 	for (;;) {
-		if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) {
+		if (test_bit(TTY_OTHER_DONE, &tty->flags)) {
 			ret = -EIO;
 			break;
 		}
···
 	/* set bits for operations that won't block */
 	if (n_hdlc->rx_buf_list.head)
 		mask |= POLLIN | POLLRDNORM;	/* readable */
-	if (test_bit(TTY_OTHER_CLOSED, &tty->flags))
+	if (test_bit(TTY_OTHER_DONE, &tty->flags))
 		mask |= POLLHUP;
 	if (tty_hung_up_p(filp))
 		mask |= POLLHUP;
drivers/tty/n_tty.c | +18 -4
···
 	return ldata->commit_head - ldata->read_tail >= amt;
}

+static inline int check_other_done(struct tty_struct *tty)
+{
+	int done = test_bit(TTY_OTHER_DONE, &tty->flags);
+	if (done) {
+		/* paired with cmpxchg() in check_other_closed(); ensures
+		 * read buffer head index is not stale
+		 */
+		smp_mb__after_atomic();
+	}
+	return done;
+}
+
/**
 *	copy_from_read_buf	-	copy read data directly
 *	@tty: terminal device
···
 	struct n_tty_data *ldata = tty->disc_data;
 	unsigned char __user *b = buf;
 	DEFINE_WAIT_FUNC(wait, woken_wake_function);
-	int c;
+	int c, done;
 	int minimum, time;
 	ssize_t retval = 0;
 	long timeout;
···
 		    ((minimum - (b - buf)) >= 1))
 			ldata->minimum_to_wake = (minimum - (b - buf));
 
+		done = check_other_done(tty);
+
 		if (!input_available_p(tty, 0)) {
-			if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) {
+			if (done) {
 				retval = -EIO;
 				break;
 			}
···
 
 	poll_wait(file, &tty->read_wait, wait);
 	poll_wait(file, &tty->write_wait, wait);
+	if (check_other_done(tty))
+		mask |= POLLHUP;
 	if (input_available_p(tty, 1))
 		mask |= POLLIN | POLLRDNORM;
 	if (tty->packet && tty->link->ctrl_status)
 		mask |= POLLPRI | POLLIN | POLLRDNORM;
-	if (test_bit(TTY_OTHER_CLOSED, &tty->flags))
-		mask |= POLLHUP;
 	if (tty_hung_up_p(file))
 		mask |= POLLHUP;
 	if (!(mask & (POLLHUP | POLLIN | POLLRDNORM))) {
drivers/tty/pty.c | +3 -2
···
 	/* Review - krefs on tty_link ?? */
 	if (!tty->link)
 		return;
-	tty_flush_to_ldisc(tty->link);
 	set_bit(TTY_OTHER_CLOSED, &tty->link->flags);
-	wake_up_interruptible(&tty->link->read_wait);
+	tty_flip_buffer_push(tty->link->port);
 	wake_up_interruptible(&tty->link->write_wait);
 	if (tty->driver->subtype == PTY_TYPE_MASTER) {
 		set_bit(TTY_OTHER_CLOSED, &tty->flags);
···
 		goto out;
 
 	clear_bit(TTY_IO_ERROR, &tty->flags);
+	/* TTY_OTHER_CLOSED must be cleared before TTY_OTHER_DONE */
 	clear_bit(TTY_OTHER_CLOSED, &tty->link->flags);
+	clear_bit(TTY_OTHER_DONE, &tty->link->flags);
 	set_bit(TTY_THROTTLED, &tty->flags);
 	return 0;
drivers/tty/serial/amba-pl011.c | +4 -1
···
 
 	writew(uap->vendor->ifls, uap->port.membase + UART011_IFLS);
 
+	/* Assume that TX IRQ doesn't work until we see one: */
+	uap->tx_irq_seen = 0;
+
 	spin_lock_irq(&uap->port.lock);
 
 	/* restore RTS and DTR */
···
 	spin_lock_irq(&uap->port.lock);
 	uap->im = 0;
 	writew(uap->im, uap->port.membase + UART011_IMSC);
-	writew(0xffff & ~UART011_TXIS, uap->port.membase + UART011_ICR);
+	writew(0xffff, uap->port.membase + UART011_ICR);
 	spin_unlock_irq(&uap->port.lock);
 
 	pl011_dma_shutdown(uap);
drivers/tty/serial/earlycon.c | +2 -7
···
 		return 0;
 
 	err = setup_earlycon(buf);
-	if (err == -ENOENT) {
-		pr_warn("no match for %s\n", buf);
-		err = 0;
-	} else if (err == -EALREADY) {
-		pr_warn("already registered\n");
-		err = 0;
-	}
+	if (err == -ENOENT || err == -EALREADY)
+		return 0;
 	return err;
}
early_param("earlycon", param_setup_earlycon);
···
 
#define TTY_BUFFER_PAGE	(((PAGE_SIZE - sizeof(struct tty_buffer)) / 2) & ~0xFF)
 
+/*
+ * If all tty flip buffers have been processed by flush_to_ldisc() or
+ * dropped by tty_buffer_flush(), check if the linked pty has been closed.
+ * If so, wake the reader/poll to process
+ */
+static inline void check_other_closed(struct tty_struct *tty)
+{
+	unsigned long flags, old;
+
+	/* transition from TTY_OTHER_CLOSED => TTY_OTHER_DONE must be atomic */
+	for (flags = ACCESS_ONCE(tty->flags);
+	     test_bit(TTY_OTHER_CLOSED, &flags);
+	     ) {
+		old = flags;
+		__set_bit(TTY_OTHER_DONE, &flags);
+		flags = cmpxchg(&tty->flags, old, flags);
+		if (old == flags) {
+			wake_up_interruptible(&tty->read_wait);
+			break;
+		}
+	}
+}
+
/**
 *	tty_buffer_lock_exclusive	-	gain exclusive access to buffer
···
 
 	if (ld && ld->ops->flush_buffer)
 		ld->ops->flush_buffer(tty);
+
+	check_other_closed(tty);
 
 	atomic_dec(&buf->priority);
 	mutex_unlock(&buf->lock);
···
 		smp_rmb();
 		count = head->commit - head->read;
 		if (!count) {
-			if (next == NULL)
+			if (next == NULL) {
+				check_other_closed(tty);
 				break;
+			}
 			buf->head = next;
 			tty_buffer_free(port, head);
 			continue;
···
 	mutex_unlock(&buf->lock);
 
 	tty_ldisc_deref(disc);
-}
-
-/**
- *	tty_flush_to_ldisc
- *	@tty: tty to push
- *
- *	Push the terminal flip buffers to the line discipline.
- *
- *	Must not be called from IRQ context.
- */
-void tty_flush_to_ldisc(struct tty_struct *tty)
-{
-	flush_work(&tty->port->buf.work);
}

/**
drivers/usb/chipidea/debug.c | +5 -1
···
 	char buf[32];
 	int ret;
 
-	if (copy_from_user(buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
+	count = min_t(size_t, sizeof(buf) - 1, count);
+	if (copy_from_user(buf, ubuf, count))
 		return -EFAULT;
 
+	/* sscanf requires a zero terminated string */
+	buf[count] = '\0';
+
 	if (sscanf(buf, "%u", &mode) != 1)
 		return -EINVAL;
···
 	    | USB_REQ_GET_DESCRIPTOR):
 		switch (value >> 8) {
 		case HID_DT_HID:
+		{
+			struct hid_descriptor hidg_desc_copy = hidg_desc;
+
 			VDBG(cdev, "USB_REQ_GET_DESCRIPTOR: HID\n");
+			hidg_desc_copy.desc[0].bDescriptorType = HID_DT_REPORT;
+			hidg_desc_copy.desc[0].wDescriptorLength =
+				cpu_to_le16(hidg->report_desc_length);
+
 			length = min_t(unsigned short, length,
-				       hidg_desc.bLength);
-			memcpy(req->buf, &hidg_desc, length);
+				       hidg_desc_copy.bLength);
+			memcpy(req->buf, &hidg_desc_copy, length);
 			goto respond;
 			break;
+		}
 		case HID_DT_REPORT:
 			VDBG(cdev, "USB_REQ_GET_DESCRIPTOR: REPORT\n");
 			length = min_t(unsigned short, length,
···
 	hidg_fs_in_ep_desc.wMaxPacketSize = cpu_to_le16(hidg->report_length);
 	hidg_hs_out_ep_desc.wMaxPacketSize = cpu_to_le16(hidg->report_length);
 	hidg_fs_out_ep_desc.wMaxPacketSize = cpu_to_le16(hidg->report_length);
+	/*
+	 * We can use the hidg_desc struct here, but we should not rely
+	 * on its content staying unchanged after returning from this
+	 * function.
+	 */
 	hidg_desc.desc[0].bDescriptorType = HID_DT_REPORT;
 	hidg_desc.desc[0].wDescriptorLength =
 		cpu_to_le16(hidg->report_desc_length);
drivers/usb/gadget/function/u_serial.c | +4 -1
···
 	int write_allocated;
 	struct gs_buf		port_write_buf;
 	wait_queue_head_t	drain_wait;	/* wait while writes drain */
+	bool			write_busy;
 
 	/* REVISIT this state ... */
 	struct usb_cdc_line_coding port_line_coding;	/* 8-N-1 etc */
···
 	int status = 0;
 	bool do_tty_wake = false;
 
-	while (!list_empty(pool)) {
+	while (!port->write_busy && !list_empty(pool)) {
 		struct usb_request *req;
 		int len;
 
···
 		 * NOTE that we may keep sending data for a while after
 		 * the TTY closed (dev->ioport->port_tty is NULL).
 		 */
+		port->write_busy = true;
 		spin_unlock(&port->port_lock);
 		status = usb_ep_queue(in, req, GFP_ATOMIC);
 		spin_lock(&port->port_lock);
+		port->write_busy = false;
 
 		if (status) {
 			pr_debug("%s: %s %s err %d\n",
drivers/usb/gadget/legacy/acm_ms.c | +5 -5
···
/*
 * We _always_ have both ACM and mass storage functions.
 */
-static int __init acm_ms_do_config(struct usb_configuration *c)
+static int acm_ms_do_config(struct usb_configuration *c)
{
 	struct fsg_opts *opts;
 	int status;
···
 
/*-------------------------------------------------------------------------*/
 
-static int __init acm_ms_bind(struct usb_composite_dev *cdev)
+static int acm_ms_bind(struct usb_composite_dev *cdev)
{
 	struct usb_gadget *gadget = cdev->gadget;
 	struct fsg_opts *opts;
···
 	return status;
}

-static int __exit acm_ms_unbind(struct usb_composite_dev *cdev)
+static int acm_ms_unbind(struct usb_composite_dev *cdev)
{
 	usb_put_function(f_msg);
 	usb_put_function_instance(fi_msg);
···
 	return 0;
}

-static __refdata struct usb_composite_driver acm_ms_driver = {
+static struct usb_composite_driver acm_ms_driver = {
 	.name		= "g_acm_ms",
 	.dev		= &device_desc,
 	.max_speed	= USB_SPEED_SUPER,
 	.strings	= dev_strings,
 	.bind		= acm_ms_bind,
-	.unbind		= __exit_p(acm_ms_unbind),
+	.unbind		= acm_ms_unbind,
};

module_usb_composite_driver(acm_ms_driver);
drivers/usb/gadget/legacy/audio.c | +5 -5
···
 
/*-------------------------------------------------------------------------*/
 
-static int __init audio_do_config(struct usb_configuration *c)
+static int audio_do_config(struct usb_configuration *c)
{
 	int status;
 
···
 
/*-------------------------------------------------------------------------*/
 
-static int __init audio_bind(struct usb_composite_dev *cdev)
+static int audio_bind(struct usb_composite_dev *cdev)
{
#ifndef CONFIG_GADGET_UAC1
 	struct f_uac2_opts *uac2_opts;
···
 	return status;
}

-static int __exit audio_unbind(struct usb_composite_dev *cdev)
+static int audio_unbind(struct usb_composite_dev *cdev)
{
#ifdef CONFIG_GADGET_UAC1
 	if (!IS_ERR_OR_NULL(f_uac1))
···
 	return 0;
}

-static __refdata struct usb_composite_driver audio_driver = {
+static struct usb_composite_driver audio_driver = {
 	.name		= "g_audio",
 	.dev		= &device_desc,
 	.strings	= audio_strings,
 	.max_speed	= USB_SPEED_HIGH,
 	.bind		= audio_bind,
-	.unbind		= __exit_p(audio_unbind),
+	.unbind		= audio_unbind,
};

module_usb_composite_driver(audio_driver);
+5-5
drivers/usb/gadget/legacy/cdc2.c
···
 /*
  * We _always_ have both CDC ECM and CDC ACM functions.
  */
-static int __init cdc_do_config(struct usb_configuration *c)
+static int cdc_do_config(struct usb_configuration *c)
 {
        int status;
 
···
 
 /*-------------------------------------------------------------------------*/
 
-static int __init cdc_bind(struct usb_composite_dev *cdev)
+static int cdc_bind(struct usb_composite_dev *cdev)
 {
        struct usb_gadget *gadget = cdev->gadget;
        struct f_ecm_opts *ecm_opts;
···
        return status;
 }
 
-static int __exit cdc_unbind(struct usb_composite_dev *cdev)
+static int cdc_unbind(struct usb_composite_dev *cdev)
 {
        usb_put_function(f_acm);
        usb_put_function_instance(fi_serial);
···
        return 0;
 }
 
-static __refdata struct usb_composite_driver cdc_driver = {
+static struct usb_composite_driver cdc_driver = {
        .name          = "g_cdc",
        .dev           = &device_desc,
        .strings       = dev_strings,
        .max_speed     = USB_SPEED_HIGH,
        .bind          = cdc_bind,
-       .unbind        = __exit_p(cdc_unbind),
+       .unbind        = cdc_unbind,
 };
 
 module_usb_composite_driver(cdc_driver);
···
  * since the command ring is 64-byte aligned.
  * It must also be greater than 16.
  */
-#define TRBS_PER_SEGMENT       64
+#define TRBS_PER_SEGMENT       256
 /* Allow two commands + a link TRB, along with any reserved command TRBs */
 #define MAX_RSVD_CMD_TRBS      (TRBS_PER_SEGMENT - 3)
 #define TRB_SEGMENT_SIZE       (TRBS_PER_SEGMENT*16)
···
                USB_SC_DEVICE, USB_PR_DEVICE, NULL,
                US_FL_GO_SLOW ),
 
+/* Reported by Christian Schaller <cschalle@redhat.com> */
+UNUSUAL_DEV(  0x059f, 0x0651, 0x0000, 0x0000,
+               "LaCie",
+               "External HDD",
+               USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+               US_FL_NO_WP_DETECT ),
+
 /* Submitted by Joel Bourquard <numlock@freesurf.ch>
  * Some versions of this device need the SubClass and Protocol overrides
  * while others don't.
+27-4
fs/btrfs/extent-tree.c
···
        btrfs_mark_buffer_dirty(leaf);
 fail:
        btrfs_release_path(path);
-       if (ret)
-               btrfs_abort_transaction(trans, root, ret);
        return ret;
 
 }
···
                        ret = 0;
                }
        }
-       if (!ret)
+       if (!ret) {
                ret = write_one_cache_group(trans, root, path, cache);
+               /*
+                * Our block group might still be attached to the list
+                * of new block groups in the transaction handle of some
+                * other task (struct btrfs_trans_handle->new_bgs). This
+                * means its block group item isn't yet in the extent
+                * tree. If this happens ignore the error, as we will
+                * try again later in the critical section of the
+                * transaction commit.
+                */
+               if (ret == -ENOENT) {
+                       ret = 0;
+                       spin_lock(&cur_trans->dirty_bgs_lock);
+                       if (list_empty(&cache->dirty_list)) {
+                               list_add_tail(&cache->dirty_list,
+                                             &cur_trans->dirty_bgs);
+                               btrfs_get_block_group(cache);
+                       }
+                       spin_unlock(&cur_trans->dirty_bgs_lock);
+               } else if (ret) {
+                       btrfs_abort_transaction(trans, root, ret);
+               }
+       }
 
        /* if its not on the io list, we need to put the block group */
        if (should_put)
···
                        ret = 0;
                }
        }
-       if (!ret)
+       if (!ret) {
                ret = write_one_cache_group(trans, root, path, cache);
+               if (ret)
+                       btrfs_abort_transaction(trans, root, ret);
+       }
 
        /* if its not on the io list, we need to put the block group */
        if (should_put)
+19
fs/btrfs/extent_io.c
···
                               start >> PAGE_CACHE_SHIFT);
        if (eb && atomic_inc_not_zero(&eb->refs)) {
                rcu_read_unlock();
+               /*
+                * Lock our eb's refs_lock to avoid races with
+                * free_extent_buffer. When we get our eb it might be flagged
+                * with EXTENT_BUFFER_STALE and another task running
+                * free_extent_buffer might have seen that flag set,
+                * eb->refs == 2, that the buffer isn't under IO (dirty and
+                * writeback flags not set) and it's still in the tree (flag
+                * EXTENT_BUFFER_TREE_REF set), therefore being in the process
+                * of decrementing the extent buffer's reference count twice.
+                * So here we could race and increment the eb's reference count,
+                * clear its stale flag, mark it as dirty and drop our reference
+                * before the other task finishes executing free_extent_buffer,
+                * which would later result in an attempt to free an extent
+                * buffer that is dirty.
+                */
+               if (test_bit(EXTENT_BUFFER_STALE, &eb->bflags)) {
+                       spin_lock(&eb->refs_lock);
+                       spin_unlock(&eb->refs_lock);
+               }
                mark_extent_buffer_accessed(eb, NULL);
                return eb;
        }
+13-3
fs/btrfs/free-space-cache.c
···
 
        mapping_set_gfp_mask(inode->i_mapping,
                        mapping_gfp_mask(inode->i_mapping) &
-                       ~(GFP_NOFS & ~__GFP_HIGHMEM));
+                       ~(__GFP_FS | __GFP_HIGHMEM));
 
        return inode;
 }
···
        struct btrfs_free_space_ctl *ctl = root->free_ino_ctl;
        int ret;
        struct btrfs_io_ctl io_ctl;
+       bool release_metadata = true;
 
        if (!btrfs_test_opt(root, INODE_MAP_CACHE))
                return 0;
···
        memset(&io_ctl, 0, sizeof(io_ctl));
        ret = __btrfs_write_out_cache(root, inode, ctl, NULL, &io_ctl,
                                      trans, path, 0);
-       if (!ret)
+       if (!ret) {
+               /*
+                * At this point writepages() didn't error out, so our metadata
+                * reservation is released when the writeback finishes, at
+                * inode.c:btrfs_finish_ordered_io(), regardless of it finishing
+                * with or without an error.
+                */
+               release_metadata = false;
                ret = btrfs_wait_cache_io(root, trans, NULL, &io_ctl, path, 0);
+       }
 
        if (ret) {
-               btrfs_delalloc_release_metadata(inode, inode->i_size);
+               if (release_metadata)
+                       btrfs_delalloc_release_metadata(inode, inode->i_size);
 #ifdef DEBUG
                btrfs_err(root->fs_info,
                        "failed to write free ino cache for root %llu",
+10-4
fs/btrfs/ordered-data.c
···
 int btrfs_wait_ordered_range(struct inode *inode, u64 start, u64 len)
 {
        int ret = 0;
+       int ret_wb = 0;
        u64 end;
        u64 orig_end;
        struct btrfs_ordered_extent *ordered;
···
        if (ret)
                return ret;
 
-       ret = filemap_fdatawait_range(inode->i_mapping, start, orig_end);
-       if (ret)
-               return ret;
+       /*
+        * If we have a writeback error don't return immediately. Wait first
+        * for any ordered extents that haven't completed yet. This is to make
+        * sure no one can dirty the same page ranges and call writepages()
+        * before the ordered extents complete - to avoid failures (-EEXIST)
+        * when adding the new ordered extents to the ordered tree.
+        */
+       ret_wb = filemap_fdatawait_range(inode->i_mapping, start, orig_end);
 
        end = orig_end;
        while (1) {
···
                        break;
                end--;
        }
-       return ret;
+       return ret_wb ? ret_wb : ret;
 }
 
 /*
+3
fs/exec.c
···
        if (stack_base > STACK_SIZE_MAX)
                stack_base = STACK_SIZE_MAX;
 
+       /* Add space for stack randomization. */
+       stack_base += (STACK_RND_MASK << PAGE_SHIFT);
+
        /* Make sure we didn't let the argument array grow too large. */
        if (vma->vm_end - vma->vm_start > stack_base)
                return -ENOMEM;
-1
fs/ext4/ext4.h
···
                                       struct ext4_map_blocks *map, int flags);
 extern int ext4_ext_calc_metadata_amount(struct inode *inode,
                                         ext4_lblk_t lblocks);
-extern int ext4_extent_tree_init(handle_t *, struct inode *);
 extern int ext4_ext_calc_credits_for_single_extent(struct inode *inode,
                                                   int num,
                                                   struct ext4_ext_path *path);
···
        ext4_lblk_t lblock = le32_to_cpu(ext->ee_block);
        ext4_lblk_t last = lblock + len - 1;
 
-       if (lblock > last)
+       if (len == 0 || lblock > last)
                return 0;
        return ext4_data_block_valid(EXT4_SB(inode->i_sb), block, len);
 }
···
        unsigned int credits;
        loff_t new_size, ioffset;
        int ret;
+
+       /*
+        * We need to test this early because xfstests assumes that a
+        * collapse range of (0, 1) will return EOPNOTSUPP if the file
+        * system does not support collapse range.
+        */
+       if (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+               return -EOPNOTSUPP;
 
        /* Collapse range works only on fs block size aligned offsets. */
        if (offset & (EXT4_CLUSTER_SIZE(sb) - 1) ||
+1-1
fs/ext4/inode.c
···
        int inode_size = EXT4_INODE_SIZE(sb);
 
        oi.orig_ino = orig_ino;
-       ino = orig_ino & ~(inodes_per_block - 1);
+       ino = (orig_ino & ~(inodes_per_block - 1)) + 1;
        for (i = 0; i < inodes_per_block; i++, ino++, buf += inode_size) {
                if (ino == orig_ino)
                        continue;
···
 {
        jbd2_journal_revoke_header_t *header;
        int offset, max;
+       int csum_size = 0;
+       __u32 rcount;
        int record_len = 4;
 
        header = (jbd2_journal_revoke_header_t *) bh->b_data;
        offset = sizeof(jbd2_journal_revoke_header_t);
-       max = be32_to_cpu(header->r_count);
+       rcount = be32_to_cpu(header->r_count);
 
        if (!jbd2_revoke_block_csum_verify(journal, header))
                return -EINVAL;
+
+       if (jbd2_journal_has_csum_v2or3(journal))
+               csum_size = sizeof(struct jbd2_journal_revoke_tail);
+       if (rcount > journal->j_blocksize - csum_size)
+               return -EINVAL;
+       max = rcount;
 
        if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
                record_len = 8;
+10-8
fs/jbd2/revoke.c
···
 {
        int csum_size = 0;
        struct buffer_head *descriptor;
-       int offset;
+       int sz, offset;
        journal_header_t *header;
 
        /* If we are already aborting, this all becomes a noop.  We
···
        if (jbd2_journal_has_csum_v2or3(journal))
                csum_size = sizeof(struct jbd2_journal_revoke_tail);
 
+       if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
+               sz = 8;
+       else
+               sz = 4;
+
        /* Make sure we have a descriptor with space left for the record */
        if (descriptor) {
-               if (offset >= journal->j_blocksize - csum_size) {
+               if (offset + sz > journal->j_blocksize - csum_size) {
                        flush_descriptor(journal, descriptor, offset, write_op);
                        descriptor = NULL;
                }
···
                *descriptorp = descriptor;
        }
 
-       if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT)) {
+       if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
                * ((__be64 *)(&descriptor->b_data[offset])) =
                        cpu_to_be64(record->blocknr);
-               offset += 8;
-
-       } else {
+       else
                * ((__be32 *)(&descriptor->b_data[offset])) =
                        cpu_to_be32(record->blocknr);
-               offset += 4;
-       }
+       offset += sz;
 
        *offsetp = offset;
 }
+16-9
fs/jbd2/transaction.c
···
        int result;
        int wanted;
 
-       WARN_ON(!transaction);
        if (is_handle_aborted(handle))
                return -EROFS;
        journal = transaction->t_journal;
···
        tid_t   tid;
        int     need_to_start, ret;
 
-       WARN_ON(!transaction);
        /* If we've had an abort of any type, don't even think about
         * actually doing the restart! */
        if (is_handle_aborted(handle))
···
        int need_copy = 0;
        unsigned long start_lock, time_lock;
 
-       WARN_ON(!transaction);
        if (is_handle_aborted(handle))
                return -EROFS;
        journal = transaction->t_journal;
···
        int err;
 
        jbd_debug(5, "journal_head %p\n", jh);
-       WARN_ON(!transaction);
        err = -EROFS;
        if (is_handle_aborted(handle))
                goto out;
···
        struct journal_head *jh;
        int ret = 0;
 
-       WARN_ON(!transaction);
        if (is_handle_aborted(handle))
                return -EROFS;
        journal = transaction->t_journal;
···
        int err = 0;
        int was_modified = 0;
 
-       WARN_ON(!transaction);
        if (is_handle_aborted(handle))
                return -EROFS;
        journal = transaction->t_journal;
···
        tid_t tid;
        pid_t pid;
 
-       if (!transaction)
-               goto free_and_exit;
+       if (!transaction) {
+               /*
+                * Handle is already detached from the transaction so
+                * there is nothing to do other than decrease a refcount,
+                * or free the handle if refcount drops to zero
+                */
+               if (--handle->h_ref > 0) {
+                       jbd_debug(4, "h_ref %d -> %d\n", handle->h_ref + 1,
+                                         handle->h_ref);
+                       return err;
+               } else {
+                       if (handle->h_rsv_handle)
+                               jbd2_free_handle(handle->h_rsv_handle);
+                       goto free_and_exit;
+               }
+       }
        journal = transaction->t_journal;
 
        J_ASSERT(journal_current_handle() == handle);
···
        transaction_t *transaction = handle->h_transaction;
        journal_t *journal;
 
-       WARN_ON(!transaction);
        if (is_handle_aborted(handle))
                return -EROFS;
        journal = transaction->t_journal;
+8-1
fs/kernfs/dir.c
···
        if (!kn)
                goto err_out1;
 
-       ret = ida_simple_get(&root->ino_ida, 1, 0, GFP_KERNEL);
+       /*
+        * If the ino of the sysfs entry created for a kmem cache gets
+        * allocated from an ida layer, which is accounted to the memcg that
+        * owns the cache, the memcg will get pinned forever. So do not account
+        * ino ida allocations.
+        */
+       ret = ida_simple_get(&root->ino_ida, 1, 0,
+                            GFP_KERNEL | __GFP_NOACCOUNT);
        if (ret < 0)
                goto err_out2;
        kn->ino = ret;
+15-7
fs/namei.c
···
         */
        if (nd->flags & LOOKUP_RCU) {
                unsigned seq;
+               bool negative;
                dentry = __d_lookup_rcu(parent, &nd->last, &seq);
                if (!dentry)
                        goto unlazy;
···
                 * the dentry name information from lookup.
                 */
                *inode = dentry->d_inode;
+               negative = d_is_negative(dentry);
                if (read_seqcount_retry(&dentry->d_seq, seq))
                        return -ECHILD;
+               if (negative)
+                       return -ENOENT;
 
                /*
                 * This sequence count validates that the parent had no
···
                goto need_lookup;
        }
 
+       if (unlikely(d_is_negative(dentry))) {
+               dput(dentry);
+               return -ENOENT;
+       }
        path->mnt = mnt;
        path->dentry = dentry;
        err = follow_managed(path, nd->flags);
···
                        goto out_err;
 
                inode = path->dentry->d_inode;
+               err = -ENOENT;
+               if (d_is_negative(path->dentry))
+                       goto out_path_put;
        }
-       err = -ENOENT;
-       if (d_is_negative(path->dentry))
-               goto out_path_put;
 
        if (should_follow_link(path->dentry, follow)) {
                if (nd->flags & LOOKUP_RCU) {
···
 
        BUG_ON(nd->flags & LOOKUP_RCU);
        inode = path->dentry->d_inode;
-finish_lookup:
-       /* we _can_ be in RCU mode here */
        error = -ENOENT;
        if (d_is_negative(path->dentry)) {
                path_to_nameidata(path, nd);
                goto out;
        }
-
+finish_lookup:
+       /* we _can_ be in RCU mode here */
        if (should_follow_link(path->dentry, !symlink_ok)) {
                if (nd->flags & LOOKUP_RCU) {
                        if (unlikely(nd->path.mnt != path->mnt ||
···
        if (unlikely(file->f_flags & __O_TMPFILE)) {
                error = do_tmpfile(dfd, pathname, nd, flags, op, file, &opened);
-               goto out;
+               goto out2;
        }
 
        error = path_init(dfd, pathname, flags, nd);
···
        }
 out:
        path_cleanup(nd);
+out2:
        if (!(opened & FILE_OPENED)) {
                BUG_ON(!error);
                put_filp(file);
+6
fs/namespace.c
···
                if (mnt->mnt.mnt_sb->s_type != type)
                        continue;
 
+               /* This mount is not fully visible if it's root directory
+                * is not the root directory of the filesystem.
+                */
+               if (mnt->mnt.mnt_root != mnt->mnt.mnt_sb->s_root)
+                       continue;
+
                /* This mount is not fully visible if there are any child mounts
                 * that cover anything except for empty directories.
                 */
+11
fs/nfsd/blocklayout.c
···
 }
 
 const struct nfsd4_layout_ops bl_layout_ops = {
+       /*
+        * Pretend that we send notification to the client.  This is a blatant
+        * lie to force recent Linux clients to cache our device IDs.
+        * We rarely ever change the device ID, so the harm of leaking deviceids
+        * for a while isn't too bad.  Unfortunately RFC5661 is a complete mess
+        * in this regard, but I filed errata 4119 for this a while ago, and
+        * hopefully the Linux client will eventually start caching deviceids
+        * without this again.
+        */
+       .notify_types           =
+                       NOTIFY_DEVICEID4_DELETE | NOTIFY_DEVICEID4_CHANGE,
        .proc_getdeviceinfo     = nfsd4_block_proc_getdeviceinfo,
        .encode_getdeviceinfo   = nfsd4_block_encode_getdeviceinfo,
        .proc_layoutget         = nfsd4_block_proc_layoutget,
+55-64
fs/nfsd/nfs4callback.c
···
 }
 
 static int decode_cb_op_status(struct xdr_stream *xdr, enum nfs_opnum4 expected,
-                              enum nfsstat4 *status)
+                              int *status)
 {
        __be32 *p;
        u32 op;
···
        op = be32_to_cpup(p++);
        if (unlikely(op != expected))
                goto out_unexpected;
-       *status = be32_to_cpup(p);
+       *status = nfs_cb_stat_to_errno(be32_to_cpup(p));
        return 0;
 out_overflow:
        print_overflow_msg(__func__, xdr);
···
 static int decode_cb_sequence4res(struct xdr_stream *xdr,
                                  struct nfsd4_callback *cb)
 {
-       enum nfsstat4 nfserr;
        int status;
 
        if (cb->cb_minorversion == 0)
                return 0;
 
-       status = decode_cb_op_status(xdr, OP_CB_SEQUENCE, &nfserr);
-       if (unlikely(status))
-               goto out;
-       if (unlikely(nfserr != NFS4_OK))
-               goto out_default;
-       status = decode_cb_sequence4resok(xdr, cb);
-out:
-       return status;
-out_default:
-       return nfs_cb_stat_to_errno(nfserr);
+       status = decode_cb_op_status(xdr, OP_CB_SEQUENCE, &cb->cb_status);
+       if (unlikely(status || cb->cb_status))
+               return status;
+
+       return decode_cb_sequence4resok(xdr, cb);
 }
 
 /*
···
                            struct nfsd4_callback *cb)
 {
        struct nfs4_cb_compound_hdr hdr;
-       enum nfsstat4 nfserr;
        int status;
 
        status = decode_cb_compound4res(xdr, &hdr);
        if (unlikely(status))
-               goto out;
+               return status;
 
        if (cb != NULL) {
                status = decode_cb_sequence4res(xdr, cb);
-               if (unlikely(status))
-                       goto out;
+               if (unlikely(status || cb->cb_status))
+                       return status;
        }
 
-       status = decode_cb_op_status(xdr, OP_CB_RECALL, &nfserr);
-       if (unlikely(status))
-               goto out;
-       if (unlikely(nfserr != NFS4_OK))
-               status = nfs_cb_stat_to_errno(nfserr);
-out:
-       return status;
+       return decode_cb_op_status(xdr, OP_CB_RECALL, &cb->cb_status);
 }
 
 #ifdef CONFIG_NFSD_PNFS
···
                            struct nfsd4_callback *cb)
 {
        struct nfs4_cb_compound_hdr hdr;
-       enum nfsstat4 nfserr;
        int status;
 
        status = decode_cb_compound4res(xdr, &hdr);
        if (unlikely(status))
-               goto out;
+               return status;
+
        if (cb) {
                status = decode_cb_sequence4res(xdr, cb);
-               if (unlikely(status))
-                       goto out;
+               if (unlikely(status || cb->cb_status))
+                       return status;
        }
-       status = decode_cb_op_status(xdr, OP_CB_LAYOUTRECALL, &nfserr);
-       if (unlikely(status))
-               goto out;
-       if (unlikely(nfserr != NFS4_OK))
-               status = nfs_cb_stat_to_errno(nfserr);
-out:
-       return status;
+       return decode_cb_op_status(xdr, OP_CB_LAYOUTRECALL, &cb->cb_status);
 }
 #endif /* CONFIG_NFSD_PNFS */
···
                if (!nfsd41_cb_get_slot(clp, task))
                        return;
        }
-       spin_lock(&clp->cl_lock);
-       if (list_empty(&cb->cb_per_client)) {
-               /* This is the first call, not a restart */
-               cb->cb_done = false;
-               list_add(&cb->cb_per_client, &clp->cl_callbacks);
-       }
-       spin_unlock(&clp->cl_lock);
        rpc_call_start(task);
 }
···
 
        if (clp->cl_minorversion) {
                /* No need for lock, access serialized in nfsd4_cb_prepare */
-               ++clp->cl_cb_session->se_cb_seq_nr;
+               if (!task->tk_status)
+                       ++clp->cl_cb_session->se_cb_seq_nr;
                clear_bit(0, &clp->cl_cb_slot_busy);
                rpc_wake_up_next(&clp->cl_cb_waitq);
                dprintk("%s: freed slot, new seqid=%d\n", __func__,
                        clp->cl_cb_session->se_cb_seq_nr);
        }
 
-       if (clp->cl_cb_client != task->tk_client) {
-               /* We're shutting down or changing cl_cb_client; leave
-                * it to nfsd4_process_cb_update to restart the call if
-                * necessary. */
+       /*
+        * If the backchannel connection was shut down while this
+        * task was queued, we need to resubmit it after setting up
+        * a new backchannel connection.
+        *
+        * Note that if we lost our callback connection permanently
+        * the submission code will error out, so we don't need to
+        * handle that case here.
+        */
+       if (task->tk_flags & RPC_TASK_KILLED) {
+               task->tk_status = 0;
+               cb->cb_need_restart = true;
                return;
        }
 
-       if (cb->cb_done)
-               return;
+       if (cb->cb_status) {
+               WARN_ON_ONCE(task->tk_status);
+               task->tk_status = cb->cb_status;
+       }
 
        switch (cb->cb_ops->done(cb, task)) {
        case 0:
···
        default:
                BUG();
        }
-       cb->cb_done = true;
 }
 
 static void nfsd4_cb_release(void *calldata)
 {
        struct nfsd4_callback *cb = calldata;
-       struct nfs4_client *clp = cb->cb_clp;
 
-       if (cb->cb_done) {
-               spin_lock(&clp->cl_lock);
-               list_del(&cb->cb_per_client);
-               spin_unlock(&clp->cl_lock);
-
+       if (cb->cb_need_restart)
+               nfsd4_run_cb(cb);
+       else
                cb->cb_ops->release(cb);
-       }
+
 }
 
 static const struct rpc_call_ops nfsd4_cb_ops = {
···
                nfsd4_mark_cb_down(clp, err);
                return;
        }
-       /* Yay, the callback channel's back! Restart any callbacks: */
-       list_for_each_entry(cb, &clp->cl_callbacks, cb_per_client)
-               queue_work(callback_wq, &cb->cb_work);
 }
 
 static void
···
        struct nfs4_client *clp = cb->cb_clp;
        struct rpc_clnt *clnt;
 
-       if (cb->cb_ops && cb->cb_ops->prepare)
-               cb->cb_ops->prepare(cb);
+       if (cb->cb_need_restart) {
+               cb->cb_need_restart = false;
+       } else {
+               if (cb->cb_ops && cb->cb_ops->prepare)
+                       cb->cb_ops->prepare(cb);
+       }
 
        if (clp->cl_flags & NFSD4_CLIENT_CB_FLAG_MASK)
                nfsd4_process_cb_update(cb);
···
                        cb->cb_ops->release(cb);
                return;
        }
+
+       /*
+        * Don't send probe messages for 4.1 or later.
+        */
+       if (!cb->cb_ops && clp->cl_minorversion) {
+               clp->cl_cb_state = NFSD4_CB_UP;
+               return;
+       }
+
        cb->cb_msg.rpc_cred = clp->cl_cb_cred;
        rpc_call_async(clnt, &cb->cb_msg, RPC_TASK_SOFT | RPC_TASK_SOFTCONN,
                        cb->cb_ops ? &nfsd4_cb_ops : &nfsd4_cb_probe_ops, cb);
···
        cb->cb_msg.rpc_resp = cb;
        cb->cb_ops = ops;
        INIT_WORK(&cb->cb_work, nfsd4_run_cb_work);
-       INIT_LIST_HEAD(&cb->cb_per_client);
-       cb->cb_done = true;
+       cb->cb_status = 0;
+       cb->cb_need_restart = false;
 }
 
 void nfsd4_run_cb(struct nfsd4_callback *cb)
+129-18
fs/nfsd/nfs4state.c
···
 static struct kmem_cache *file_slab;
 static struct kmem_cache *stateid_slab;
 static struct kmem_cache *deleg_slab;
+static struct kmem_cache *odstate_slab;
 
 static void free_session(struct nfsd4_session *);
···
        if (atomic_dec_and_lock(&fi->fi_ref, &state_lock)) {
                hlist_del_rcu(&fi->fi_hash);
                spin_unlock(&state_lock);
+               WARN_ON_ONCE(!list_empty(&fi->fi_clnt_odstate));
                WARN_ON_ONCE(!list_empty(&fi->fi_delegations));
                call_rcu(&fi->fi_rcu, nfsd4_free_file_rcu);
        }
···
                __nfs4_file_put_access(fp, O_RDONLY);
 }
 
+/*
+ * Allocate a new open/delegation state counter. This is needed for
+ * pNFS for proper return on close semantics.
+ *
+ * Note that we only allocate it for pNFS-enabled exports, otherwise
+ * all pointers to struct nfs4_clnt_odstate are always NULL.
+ */
+static struct nfs4_clnt_odstate *
+alloc_clnt_odstate(struct nfs4_client *clp)
+{
+       struct nfs4_clnt_odstate *co;
+
+       co = kmem_cache_zalloc(odstate_slab, GFP_KERNEL);
+       if (co) {
+               co->co_client = clp;
+               atomic_set(&co->co_odcount, 1);
+       }
+       return co;
+}
+
+static void
+hash_clnt_odstate_locked(struct nfs4_clnt_odstate *co)
+{
+       struct nfs4_file *fp = co->co_file;
+
+       lockdep_assert_held(&fp->fi_lock);
+       list_add(&co->co_perfile, &fp->fi_clnt_odstate);
+}
+
+static inline void
+get_clnt_odstate(struct nfs4_clnt_odstate *co)
+{
+       if (co)
+               atomic_inc(&co->co_odcount);
+}
+
+static void
+put_clnt_odstate(struct nfs4_clnt_odstate *co)
+{
+       struct nfs4_file *fp;
+
+       if (!co)
+               return;
+
+       fp = co->co_file;
+       if (atomic_dec_and_lock(&co->co_odcount, &fp->fi_lock)) {
+               list_del(&co->co_perfile);
+               spin_unlock(&fp->fi_lock);
+
+               nfsd4_return_all_file_layouts(co->co_client, fp);
+               kmem_cache_free(odstate_slab, co);
+       }
+}
+
+static struct nfs4_clnt_odstate *
+find_or_hash_clnt_odstate(struct nfs4_file *fp, struct nfs4_clnt_odstate *new)
+{
+       struct nfs4_clnt_odstate *co;
+       struct nfs4_client *cl;
+
+       if (!new)
+               return NULL;
+
+       cl = new->co_client;
+
+       spin_lock(&fp->fi_lock);
+       list_for_each_entry(co, &fp->fi_clnt_odstate, co_perfile) {
+               if (co->co_client == cl) {
+                       get_clnt_odstate(co);
+                       goto out;
+               }
+       }
+       co = new;
+       co->co_file = fp;
+       hash_clnt_odstate_locked(new);
+out:
+       spin_unlock(&fp->fi_lock);
+       return co;
+}
+
 struct nfs4_stid *nfs4_alloc_stid(struct nfs4_client *cl,
                                         struct kmem_cache *slab)
 {
···
 }
 
 static struct nfs4_delegation *
-alloc_init_deleg(struct nfs4_client *clp, struct svc_fh *current_fh)
+alloc_init_deleg(struct nfs4_client *clp, struct svc_fh *current_fh,
+                struct nfs4_clnt_odstate *odstate)
 {
        struct nfs4_delegation *dp;
        long n;
···
        INIT_LIST_HEAD(&dp->dl_perfile);
        INIT_LIST_HEAD(&dp->dl_perclnt);
        INIT_LIST_HEAD(&dp->dl_recall_lru);
+       dp->dl_clnt_odstate = odstate;
+       get_clnt_odstate(odstate);
        dp->dl_type = NFS4_OPEN_DELEGATE_READ;
        dp->dl_retries = 1;
        nfsd4_init_cb(&dp->dl_recall, dp->dl_stid.sc_client,
···
        spin_lock(&state_lock);
        unhash_delegation_locked(dp);
        spin_unlock(&state_lock);
+       put_clnt_odstate(dp->dl_clnt_odstate);
        nfs4_put_deleg_lease(dp->dl_stid.sc_file);
        nfs4_put_stid(&dp->dl_stid);
 }
···
 
        WARN_ON(!list_empty(&dp->dl_recall_lru));
 
+       put_clnt_odstate(dp->dl_clnt_odstate);
        nfs4_put_deleg_lease(dp->dl_stid.sc_file);
 
        if (clp->cl_minorversion == 0)
···
 {
        struct nfs4_ol_stateid *stp = openlockstateid(stid);
 
+       put_clnt_odstate(stp->st_clnt_odstate);
        release_all_access(stp);
        if (stp->st_stateowner)
                nfs4_put_stateowner(stp->st_stateowner);
···
        INIT_LIST_HEAD(&clp->cl_openowners);
        INIT_LIST_HEAD(&clp->cl_delegations);
        INIT_LIST_HEAD(&clp->cl_lru);
-       INIT_LIST_HEAD(&clp->cl_callbacks);
        INIT_LIST_HEAD(&clp->cl_revoked);
 #ifdef CONFIG_NFSD_PNFS
        INIT_LIST_HEAD(&clp->cl_lo_states);
···
        while (!list_empty(&reaplist)) {
                dp = list_entry(reaplist.next, struct nfs4_delegation, dl_recall_lru);
                list_del_init(&dp->dl_recall_lru);
+               put_clnt_odstate(dp->dl_clnt_odstate);
                nfs4_put_deleg_lease(dp->dl_stid.sc_file);
                nfs4_put_stid(&dp->dl_stid);
        }
···
        spin_lock_init(&fp->fi_lock);
        INIT_LIST_HEAD(&fp->fi_stateids);
        INIT_LIST_HEAD(&fp->fi_delegations);
+       INIT_LIST_HEAD(&fp->fi_clnt_odstate);
        fh_copy_shallow(&fp->fi_fhandle, fh);
        fp->fi_deleg_file = NULL;
        fp->fi_had_conflict = false;
···
 void
 nfsd4_free_slabs(void)
 {
+       kmem_cache_destroy(odstate_slab);
        kmem_cache_destroy(openowner_slab);
        kmem_cache_destroy(lockowner_slab);
        kmem_cache_destroy(file_slab);
···
                        sizeof(struct nfs4_delegation), 0, 0, NULL);
        if (deleg_slab == NULL)
                goto out_free_stateid_slab;
+       odstate_slab = kmem_cache_create("nfsd4_odstate",
+                       sizeof(struct nfs4_clnt_odstate), 0, 0, NULL);
+       if (odstate_slab == NULL)
+               goto out_free_deleg_slab;
        return 0;
 
+out_free_deleg_slab:
+       kmem_cache_destroy(deleg_slab);
 out_free_stateid_slab:
        kmem_cache_destroy(stateid_slab);
 out_free_file_slab:
···
        open->op_stp = nfs4_alloc_open_stateid(clp);
        if (!open->op_stp)
                return nfserr_jukebox;
+
+       if (nfsd4_has_session(cstate) &&
+           (cstate->current_fh.fh_export->ex_flags & NFSEXP_PNFS)) {
+               open->op_odstate = alloc_clnt_odstate(clp);
+               if (!open->op_odstate)
+                       return nfserr_jukebox;
+       }
+
        return nfs_ok;
 }
···
 
 static struct nfs4_delegation *
 nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
-                   struct nfs4_file *fp)
+                   struct nfs4_file *fp, struct nfs4_clnt_odstate *odstate)
 {
        int status;
        struct nfs4_delegation *dp;
···
        if (fp->fi_had_conflict)
                return ERR_PTR(-EAGAIN);
 
-       dp = alloc_init_deleg(clp, fh);
+       dp = alloc_init_deleg(clp, fh, odstate);
        if (!dp)
                return ERR_PTR(-ENOMEM);
···
        spin_unlock(&state_lock);
 out:
        if (status) {
+               put_clnt_odstate(dp->dl_clnt_odstate);
                nfs4_put_stid(&dp->dl_stid);
                return ERR_PTR(status);
        }
···
        default:
                goto out_no_deleg;
        }
-       dp = nfs4_set_delegation(clp, fh, stp->st_stid.sc_file);
+       dp = nfs4_set_delegation(clp, fh, stp->st_stid.sc_file, stp->st_clnt_odstate);
        if (IS_ERR(dp))
                goto out_no_deleg;
···
                        release_open_stateid(stp);
                        goto out;
                }
+
+               stp->st_clnt_odstate = find_or_hash_clnt_odstate(fp,
+                                                       open->op_odstate);
+               if (stp->st_clnt_odstate == open->op_odstate)
+                       open->op_odstate = NULL;
        }
        update_stateid(&stp->st_stid.sc_stateid);
        memcpy(&open->op_stateid, &stp->st_stid.sc_stateid, sizeof(stateid_t));
···
                kmem_cache_free(file_slab, open->op_file);
        if (open->op_stp)
                nfs4_put_stid(&open->op_stp->st_stid);
+       if (open->op_odstate)
+               kmem_cache_free(odstate_slab, open->op_odstate);
 }
 
 __be32
···
                return nfserr_old_stateid;
 }
 
+static __be32 nfsd4_check_openowner_confirmed(struct nfs4_ol_stateid *ols)
+{
+       if (ols->st_stateowner->so_is_open_owner &&
+           !(openowner(ols->st_stateowner)->oo_flags & NFS4_OO_CONFIRMED))
+               return nfserr_bad_stateid;
+       return nfs_ok;
+}
+
 static __be32 nfsd4_validate_stateid(struct nfs4_client *cl, stateid_t *stateid)
 {
        struct nfs4_stid *s;
-       struct nfs4_ol_stateid *ols;
        __be32 status = nfserr_bad_stateid;
 
        if (ZERO_STATEID(stateid) || ONE_STATEID(stateid))
···
                break;
        case NFS4_OPEN_STID:
        case NFS4_LOCK_STID:
-               ols = openlockstateid(s);
-               if (ols->st_stateowner->so_is_open_owner
-                               && !(openowner(ols->st_stateowner)->oo_flags
-                                               & NFS4_OO_CONFIRMED))
-                       status = nfserr_bad_stateid;
-               else
-                       status = nfs_ok;
+               status = nfsd4_check_openowner_confirmed(openlockstateid(s));
                break;
        default:
                printk("unknown stateid type %x\n", s->sc_type);
···
        status = nfs4_check_fh(current_fh, stp);
        if (status)
                goto out;
-       if (stp->st_stateowner->so_is_open_owner
-           && !(openowner(stp->st_stateowner)->oo_flags & NFS4_OO_CONFIRMED))
+       status = nfsd4_check_openowner_confirmed(stp);
+       if (status)
                goto out;
        status = nfs4_check_openmode(stp, flags);
        if (status)
···
                goto out;
        update_stateid(&stp->st_stid.sc_stateid);
        memcpy(&close->cl_stateid, &stp->st_stid.sc_stateid, sizeof(stateid_t));
-
-       nfsd4_return_all_file_layouts(stp->st_stateowner->so_client,
-                                     stp->st_stid.sc_file);
 
        nfsd4_close_open_stateid(stp);
···
        list_for_each_safe(pos, next, &reaplist) {
                dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
                list_del_init(&dp->dl_recall_lru);
+               put_clnt_odstate(dp->dl_clnt_odstate);
                nfs4_put_deleg_lease(dp->dl_stid.sc_file);
                nfs4_put_stid(&dp->dl_stid);
        }
+16-3
fs/nfsd/state.h
···

struct nfsd4_callback {
	struct nfs4_client *cb_clp;
-	struct list_head cb_per_client;
	u32 cb_minorversion;
	struct rpc_message cb_msg;
	struct nfsd4_callback_ops *cb_ops;
	struct work_struct cb_work;
-	bool cb_done;
+	int cb_status;
+	bool cb_need_restart;
};

struct nfsd4_callback_ops {
···
	struct list_head dl_perfile;
	struct list_head dl_perclnt;
	struct list_head dl_recall_lru; /* delegation recalled */
+	struct nfs4_clnt_odstate *dl_clnt_odstate;
	u32 dl_type;
	time_t dl_time;
/* For recall: */
···
	int cl_cb_state;
	struct nfsd4_callback cl_cb_null;
	struct nfsd4_session *cl_cb_session;
-	struct list_head cl_callbacks; /* list of in-progress callbacks */

	/* for all client information that callback code might need: */
	spinlock_t cl_lock;
···
}

/*
+ * Per-client state indicating no. of opens and outstanding delegations
+ * on a file from a particular client. 'od' stands for 'open & delegation'
+ */
+struct nfs4_clnt_odstate {
+	struct nfs4_client	*co_client;
+	struct nfs4_file	*co_file;
+	struct list_head	co_perfile;
+	atomic_t		co_odcount;
+};
+
+/*
 * nfs4_file: a file opened by some number of (open) nfs4_stateowners.
 *
 * These objects are global. nfsd keeps one instance of a nfs4_file per
···
		struct list_head	fi_delegations;
		struct rcu_head		fi_rcu;
	};
+	struct list_head	fi_clnt_odstate;
	/* One each for O_RDONLY, O_WRONLY, O_RDWR: */
	struct file *		fi_fds[3];
	/*
···
	struct list_head              st_perstateowner;
	struct list_head              st_locks;
	struct nfs4_stateowner      * st_stateowner;
+	struct nfs4_clnt_odstate    * st_clnt_odstate;
	unsigned char                 st_access_bmap;
	unsigned char                 st_deny_bmap;
	struct nfs4_ol_stateid      * st_openstp;
+1
fs/nfsd/xdr4.h
···
	struct nfs4_openowner *op_openowner;	/* used during processing */
	struct nfs4_file *op_file;		/* used during processing */
	struct nfs4_ol_stateid *op_stp;		/* used during processing */
+	struct nfs4_clnt_odstate *op_odstate;	/* used during processing */
	struct nfs4_acl *op_acl;
	struct xdr_netobj op_label;
};
+11-1
fs/splice.c
···
	long ret, bytes;
	umode_t i_mode;
	size_t len;
-	int i, flags;
+	int i, flags, more;

	/*
	 * We require the input being a regular file, as we don't want to
···
	 * Don't block on output, we have to drain the direct pipe.
	 */
	sd->flags &= ~SPLICE_F_NONBLOCK;
+	more = sd->flags & SPLICE_F_MORE;

	while (len) {
		size_t read_len;
···
		read_len = ret;
		sd->total_len = read_len;

+		/*
+		 * If more data is pending, set SPLICE_F_MORE.
+		 * If this is the last data and SPLICE_F_MORE was not set
+		 * initially, clear it.
+		 */
+		if (read_len < len)
+			sd->flags |= SPLICE_F_MORE;
+		else if (!more)
+			sd->flags &= ~SPLICE_F_MORE;
		/*
		 * NOTE: nonblocking mode only applies to the input. We
		 * must not do the output in nonblocking mode as then we
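The fs/splice.c hunk above keeps SPLICE_F_MORE asserted while the direct-splice loop still has data to drain, and on the final chunk restores whatever the caller originally requested. A minimal stand-alone sketch of just that flag logic (the function name, flag value, and chunked loop are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

#define SPLICE_F_MORE 0x4	/* illustrative flag value */

/* Model of the per-iteration flag handling: set SPLICE_F_MORE while more
 * data follows, and on the last chunk clear it only if the caller had not
 * set it initially. Returns the flags in effect for the final chunk. */
static int last_chunk_flags(int initial_flags, size_t total, size_t chunk)
{
	int flags = initial_flags;
	int more = initial_flags & SPLICE_F_MORE;
	size_t len = total;

	while (len) {
		size_t read_len = len < chunk ? len : chunk;

		if (read_len < len)
			flags |= SPLICE_F_MORE;		/* more data pending */
		else if (!more)
			flags &= ~SPLICE_F_MORE;	/* last chunk, caller didn't ask */
		len -= read_len;
	}
	return flags;
}
```

The point of the fix is visible in the last iteration: intermediate chunks always carry the MORE hint, the final one carries it only if the caller did.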
include/linux/libata.h
···
	ATA_LFLAG_SW_ACTIVITY	= (1 << 7), /* keep activity stats */
	ATA_LFLAG_NO_LPM	= (1 << 8), /* disable LPM on this link */
	ATA_LFLAG_RST_ONCE	= (1 << 9), /* limit recovery to one reset */
+	ATA_LFLAG_CHANGED	= (1 << 10), /* LPM state changed on this link */

	/* struct ata_port flags */
	ATA_FLAG_SLAVE_POSS	= (1 << 0), /* host supports slave dev */
···
	 * doing SRST.
	 */
	ATA_TMOUT_PMP_SRST_WAIT	= 5000,
+
+	/* When the LPM policy is set to ATA_LPM_MAX_POWER, there might
+	 * be a spurious PHY event, so ignore the first PHY event that
+	 * occurs within 10s after the policy change.
+	 */
+	ATA_TMOUT_SPURIOUS_PHY	= 10000,

	/* ATA bus states */
	BUS_UNKNOWN		= 0,
···
	struct ata_eh_context	eh_context;

	struct ata_device	device[ATA_MAX_DEVICES];
+
+	unsigned long		last_lpm_change; /* when last LPM change happened */
};
#define ATA_LINK_CLEAR_BEGIN	offsetof(struct ata_link, active_tag)
#define ATA_LINK_CLEAR_END	offsetof(struct ata_link, device[0])
···
extern int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev);
extern void ata_scsi_port_error_handler(struct Scsi_Host *host, struct ata_port *ap);
extern void ata_scsi_cmd_error_handler(struct Scsi_Host *host, struct ata_port *ap, struct list_head *eh_q);
+extern bool sata_lpm_ignore_phy_events(struct ata_link *link);

extern int ata_cable_40wire(struct ata_port *ap);
extern int ata_cable_80wire(struct ata_port *ap);
+4
include/linux/memcontrol.h
···
	if (!memcg_kmem_enabled())
		return true;

+	if (gfp & __GFP_NOACCOUNT)
+		return true;
	/*
	 * __GFP_NOFAIL allocations will move on even if charging is not
	 * possible. Therefore we don't even try, and have this allocation
···
memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp)
{
	if (!memcg_kmem_enabled())
+		return cachep;
+	if (gfp & __GFP_NOACCOUNT)
		return cachep;
	if (gfp & __GFP_NOFAIL)
		return cachep;
-3
include/linux/netdevice.h
···
#ifndef _LINUX_NETDEVICE_H
#define _LINUX_NETDEVICE_H

-#include <linux/pm_qos.h>
#include <linux/timer.h>
#include <linux/bug.h>
#include <linux/delay.h>
···
 *			for hardware timestamping
 *
 *	@qdisc_tx_busylock: XXX: need comments on this one
- *
- *	@pm_qos_req:	Power Management QoS object
 *
 *	FIXME: cleanup struct net_device such that network protocol info
 *	moves out.
+4-3
include/linux/sched/rt.h
···
#ifdef CONFIG_RT_MUTEXES
extern int rt_mutex_getprio(struct task_struct *p);
extern void rt_mutex_setprio(struct task_struct *p, int prio);
-extern int rt_mutex_check_prio(struct task_struct *task, int newprio);
+extern int rt_mutex_get_effective_prio(struct task_struct *task, int newprio);
extern struct task_struct *rt_mutex_get_top_task(struct task_struct *task);
extern void rt_mutex_adjust_pi(struct task_struct *p);
static inline bool tsk_is_pi_blocked(struct task_struct *tsk)
···
	return p->normal_prio;
}

-static inline int rt_mutex_check_prio(struct task_struct *task, int newprio)
+static inline int rt_mutex_get_effective_prio(struct task_struct *task,
+					      int newprio)
{
-	return 0;
+	return newprio;
}

static inline struct task_struct *rt_mutex_get_top_task(struct task_struct *task)
+8
include/linux/tcp.h
···
	 * read the code and the spec side by side (and laugh ...)
	 * See RFC793 and RFC1122. The RFC writes these in capitals.
	 */
+	u64	bytes_received;	/* RFC4898 tcpEStatsAppHCThruOctetsReceived
+				 * sum(delta(rcv_nxt)), or how many bytes
+				 * were acked.
+				 */
	u32	rcv_nxt;	/* What we want to receive next */
	u32	copied_seq;	/* Head of yet unread data */
	u32	rcv_wup;	/* rcv_nxt on last window update sent */
	u32	snd_nxt;	/* Next sequence we send */

+	u64	bytes_acked;	/* RFC4898 tcpEStatsAppHCThruOctetsAcked
+				 * sum(delta(snd_una)), or how many bytes
+				 * were acked.
+				 */
	u32	snd_una;	/* First byte we want an ack for */
	u32	snd_sml;	/* Last byte of the most recently transmitted small packet */
	u32	rcv_tstamp;	/* timestamp of last received ACK (for keepalives) */
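The new `bytes_acked`/`bytes_received` fields above are 64-bit accumulators of sum(delta(snd_una)) and sum(delta(rcv_nxt)); the sequence numbers themselves are modulo-2^32, so the running total must be built from wrap-safe 32-bit deltas. A small stand-alone model of that bookkeeping (the helper name is mine, not a kernel function):

```c
#include <stdint.h>

/* Accumulate sum(delta(snd_una)) into a 64-bit counter. Unsigned 32-bit
 * subtraction yields the correct byte delta even when the sequence
 * number wraps past 2^32. (Simplified model of the tp->bytes_acked
 * update; not kernel code.) */
static void tcp_update_bytes_acked(uint64_t *bytes_acked,
				   uint32_t old_una, uint32_t new_una)
{
	*bytes_acked += (uint32_t)(new_una - old_una);
}
```

This is why the fields are u64 while the sequence counters stay u32: the counter keeps growing monotonically across wraps.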
include/net/codel.h
···
 * struct codel_params - contains codel parameters
 * @target:	target queue size (in time units)
 * @interval:	width of moving time window
+ * @mtu:	device mtu, or minimal queue backlog in bytes.
 * @ecn:	is Explicit Congestion Notification enabled
 */
struct codel_params {
	codel_time_t	target;
	codel_time_t	interval;
+	u32		mtu;
	bool		ecn;
};
···
	u32		ecn_mark;
};

-static void codel_params_init(struct codel_params *params)
+static void codel_params_init(struct codel_params *params,
+			      const struct Qdisc *sch)
{
	params->interval = MS2TIME(100);
	params->target = MS2TIME(5);
+	params->mtu = psched_mtu(qdisc_dev(sch));
	params->ecn = false;
}
···

static void codel_stats_init(struct codel_stats *stats)
{
-	stats->maxpacket = 256;
+	stats->maxpacket = 0;
}

/*
···
		stats->maxpacket = qdisc_pkt_len(skb);

	if (codel_time_before(vars->ldelay, params->target) ||
-	    sch->qstats.backlog <= stats->maxpacket) {
+	    sch->qstats.backlog <= params->mtu) {
		/* went below - stay below for at least interval */
		vars->first_above_time = 0;
		return false;
+2
include/net/mac80211.h
···
 * @sta: station table entry, %NULL for per-vif queue
 * @tid: the TID for this queue (unused for per-vif queue)
 * @ac: the AC for this queue
+ * @drv_priv: data area for driver use, will always be aligned to
+ *	sizeof(void *).
 *
 * The driver can obtain packets from this queue by calling
 * ieee80211_tx_dequeue().
+92-2
include/net/mac802154.h
···
	__put_unaligned_memmove64(swab64p(le64_src), be64_dst);
}

-/* Basic interface to register ieee802154 device */
+/**
+ * ieee802154_alloc_hw - Allocate a new hardware device
+ *
+ * This must be called once for each hardware device. The returned pointer
+ * must be used to refer to this device when calling other functions.
+ * mac802154 allocates a private data area for the driver pointed to by
+ * @priv in &struct ieee802154_hw, the size of this area is given as
+ * @priv_data_len.
+ *
+ * @priv_data_len: length of private data
+ * @ops: callbacks for this device
+ *
+ * Return: A pointer to the new hardware device, or %NULL on error.
+ */
struct ieee802154_hw *
ieee802154_alloc_hw(size_t priv_data_len, const struct ieee802154_ops *ops);
+
+/**
+ * ieee802154_free_hw - free hardware descriptor
+ *
+ * This function frees everything that was allocated, including the
+ * private data for the driver. You must call ieee802154_unregister_hw()
+ * before calling this function.
+ *
+ * @hw: the hardware to free
+ */
void ieee802154_free_hw(struct ieee802154_hw *hw);
+
+/**
+ * ieee802154_register_hw - Register hardware device
+ *
+ * You must call this function before any other functions in
+ * mac802154. Note that before a hardware can be registered, you
+ * need to fill the contained wpan_phy's information.
+ *
+ * @hw: the device to register as returned by ieee802154_alloc_hw()
+ *
+ * Return: 0 on success. An error code otherwise.
+ */
int ieee802154_register_hw(struct ieee802154_hw *hw);
+
+/**
+ * ieee802154_unregister_hw - Unregister a hardware device
+ *
+ * This function instructs mac802154 to free allocated resources
+ * and unregister netdevices from the networking subsystem.
+ *
+ * @hw: the hardware to unregister
+ */
void ieee802154_unregister_hw(struct ieee802154_hw *hw);

+/**
+ * ieee802154_rx - receive frame
+ *
+ * Use this function to hand received frames to mac802154. The receive
+ * buffer in @skb must start with an IEEE 802.15.4 header. In case of a
+ * paged @skb is used, the driver is recommended to put the ieee802154
+ * header of the frame on the linear part of the @skb to avoid memory
+ * allocation and/or memcpy by the stack.
+ *
+ * This function may not be called in IRQ context. Calls to this function
+ * for a single hardware must be synchronized against each other.
+ *
+ * @hw: the hardware this frame came in on
+ * @skb: the buffer to receive, owned by mac802154 after this call
+ */
void ieee802154_rx(struct ieee802154_hw *hw, struct sk_buff *skb);
+
+/**
+ * ieee802154_rx_irqsafe - receive frame
+ *
+ * Like ieee802154_rx() but can be called in IRQ context
+ * (internally defers to a tasklet.)
+ *
+ * @hw: the hardware this frame came in on
+ * @skb: the buffer to receive, owned by mac802154 after this call
+ * @lqi: link quality indicator
+ */
void ieee802154_rx_irqsafe(struct ieee802154_hw *hw, struct sk_buff *skb,
			   u8 lqi);
-
+/**
+ * ieee802154_wake_queue - wake ieee802154 queue
+ * @hw: pointer as obtained from ieee802154_alloc_hw().
+ *
+ * Drivers should use this function instead of netif_wake_queue.
+ */
void ieee802154_wake_queue(struct ieee802154_hw *hw);
+
+/**
+ * ieee802154_stop_queue - stop ieee802154 queue
+ * @hw: pointer as obtained from ieee802154_alloc_hw().
+ *
+ * Drivers should use this function instead of netif_stop_queue.
+ */
void ieee802154_stop_queue(struct ieee802154_hw *hw);
+
+/**
+ * ieee802154_xmit_complete - frame transmission complete
+ *
+ * @hw: pointer as obtained from ieee802154_alloc_hw().
+ * @skb: buffer for transmission
+ * @ifs_handling: indicate interframe space handling
+ */
void ieee802154_xmit_complete(struct ieee802154_hw *hw, struct sk_buff *skb,
			      bool ifs_handling);
+5-2
include/net/tcp.h
···
}

/* tcp.c */
-void tcp_get_info(const struct sock *, struct tcp_info *);
+void tcp_get_info(struct sock *, struct tcp_info *);

/* Read 'sendfile()'-style from a TCP socket */
typedef int (*sk_read_actor_t)(read_descriptor_t *, struct sk_buff *,
···
/* Requires ECN/ECT set on all packets */
#define TCP_CONG_NEEDS_ECN	0x2

+union tcp_cc_info;
+
struct tcp_congestion_ops {
	struct list_head	list;
	u32 key;
···
	/* hook for packet ack accounting (optional) */
	void (*pkts_acked)(struct sock *sk, u32 num_acked, s32 rtt_us);
	/* get info for inet_diag (optional) */
-	int (*get_info)(struct sock *sk, u32 ext, struct sk_buff *skb);
+	size_t (*get_info)(struct sock *sk, u32 ext, int *attr,
+			   union tcp_cc_info *info);

	char		name[TCP_CA_NAME_MAX];
	struct module	*owner;
include/uapi/linux/tcp.h
···
#define TCP_FASTOPEN		23	/* Enable FastOpen on listeners */
#define TCP_TIMESTAMP		24
#define TCP_NOTSENT_LOWAT	25	/* limit number of unsent bytes in write queue */
+#define TCP_CC_INFO		26	/* Get Congestion Control (optional) info */

struct tcp_repair_opt {
	__u32	opt_code;
···

	__u64	tcpi_pacing_rate;
	__u64	tcpi_max_pacing_rate;
+	__u64	tcpi_bytes_acked;    /* RFC4898 tcpEStatsAppHCThruOctetsAcked */
+	__u64	tcpi_bytes_received; /* RFC4898 tcpEStatsAppHCThruOctetsReceived */
};

/* for TCP_MD5SIG socket option */
+3-2
init/do_mounts.c
···
#endif

	if (strncmp(name, "/dev/", 5) != 0) {
-		unsigned maj, min;
+		unsigned maj, min, offset;
		char dummy;

-		if (sscanf(name, "%u:%u%c", &maj, &min, &dummy) == 2) {
+		if ((sscanf(name, "%u:%u%c", &maj, &min, &dummy) == 2) ||
+		    (sscanf(name, "%u:%u:%u:%c", &maj, &min, &offset, &dummy) == 3)) {
			res = MKDEV(maj, min);
			if (maj != MAJOR(res) || min != MINOR(res))
				goto fail;
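The init/do_mounts.c hunk above widens the `root=` parser: besides plain `maj:min`, it now also accepts a three-number `maj:min:offset:` form by adding a second sscanf pattern. A user-space sketch of the same acceptance test, using the two format strings from the patch (the wrapper function name is mine):

```c
#include <stdio.h>

/* Mirror the device-number parsing above: "maj:min" matches the first
 * pattern exactly (2 conversions, no trailing character), and the
 * three-number form matches when the second pattern assigns exactly
 * maj, min and offset. Returns nonzero if the name parses. */
static int parse_root_numbers(const char *name, unsigned *maj, unsigned *min)
{
	unsigned offset;
	char dummy;

	return (sscanf(name, "%u:%u%c", maj, min, &dummy) == 2) ||
	       (sscanf(name, "%u:%u:%u:%c", maj, min, &offset, &dummy) == 3);
}
```

Note the `%c` at the end of each pattern: it only matches if there is trailing input, so comparing the return value against the count *without* it rejects strings with extra characters after the expected numbers.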
+33-8
kernel/events/core.c
···
 * Those places that change perf_event::ctx will hold both
 * perf_event_ctx::mutex of the 'old' and 'new' ctx value.
 *
- * Lock ordering is by mutex address. There is one other site where
- * perf_event_context::mutex nests and that is put_event(). But remember that
- * that is a parent<->child context relation, and migration does not affect
- * children, therefore these two orderings should not interact.
+ * Lock ordering is by mutex address. There are two other sites where
+ * perf_event_context::mutex nests and those are:
+ *
+ *  - perf_event_exit_task_context()	[ child , 0 ]
+ *      __perf_event_exit_task()
+ *        sync_child_event()
+ *          put_event()			[ parent, 1 ]
+ *
+ *  - perf_event_init_context()		[ parent, 0 ]
+ *      inherit_task_group()
+ *        inherit_group()
+ *          inherit_event()
+ *            perf_event_alloc()
+ *              perf_init_event()
+ *                perf_try_init_event()	[ child , 1 ]
+ *
+ * While it appears there is an obvious deadlock here -- the parent and child
+ * nesting levels are inverted between the two. This is in fact safe because
+ * life-time rules separate them. That is an exiting task cannot fork, and a
+ * spawning task cannot (yet) exit.
+ *
+ * But remember that these are parent<->child context relations, and
+ * migration does not affect children, therefore these two orderings should not
+ * interact.
 *
 * The change in perf_event::ctx does not affect children (as claimed above)
 * because the sys_perf_event_open() case will install a new event and break
···
	}
}

-/*
- * Called when the last reference to the file is gone.
- */
static void put_event(struct perf_event *event)
{
	struct perf_event_context *ctx;
···
}
EXPORT_SYMBOL_GPL(perf_event_release_kernel);

+/*
+ * Called when the last reference to the file is gone.
+ */
static int perf_release(struct inode *inode, struct file *file)
{
	put_event(file->private_data);
···
		return -ENODEV;

	if (event->group_leader != event) {
-		ctx = perf_event_ctx_lock(event->group_leader);
+		/*
+		 * This ctx->mutex can nest when we're called through
+		 * inheritance. See the perf_event_ctx_lock_nested() comment.
+		 */
+		ctx = perf_event_ctx_lock_nested(event->group_leader,
+						 SINGLE_DEPTH_NESTING);
		BUG_ON(!ctx);
	}
kernel/locking/rtmutex.c
···
}

/*
- * Called by sched_setscheduler() to check whether the priority change
- * is overruled by a possible priority boosting.
+ * Called by sched_setscheduler() to get the priority which will be
+ * effective after the change.
 */
-int rt_mutex_check_prio(struct task_struct *task, int newprio)
+int rt_mutex_get_effective_prio(struct task_struct *task, int newprio)
{
	if (!task_has_pi_waiters(task))
-		return 0;
+		return newprio;

-	return task_top_pi_waiter(task)->task->prio <= newprio;
+	if (task_top_pi_waiter(task)->task->prio <= newprio)
+		return task_top_pi_waiter(task)->task->prio;
+	return newprio;
}

/*
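The rewritten helper above no longer answers "is the change overruled?" but instead returns the priority that will actually be in effect: the boosted top-waiter priority when it is at least as high as (numerically no greater than) the requested one, otherwise the requested priority. A toy stand-alone model of that decision (lower value means higher priority; function and parameters are illustrative, not kernel code):

```c
/* Model of rt_mutex_get_effective_prio(): if the task has PI waiters and
 * the top waiter's priority beats or equals the requested newprio, the
 * boost wins; otherwise newprio takes effect. */
static int effective_prio(int has_pi_waiters, int top_waiter_prio, int newprio)
{
	if (!has_pi_waiters)
		return newprio;
	if (top_waiter_prio <= newprio)
		return top_waiter_prio;
	return newprio;
}
```

Returning the concrete priority (rather than a boolean) is what lets the sched/core.c caller below compare it against the old priority and skip the requeue when nothing effectively changes.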
+26-28
kernel/sched/core.c
···

/* Actually do priority change: must hold pi & rq lock. */
static void __setscheduler(struct rq *rq, struct task_struct *p,
-			   const struct sched_attr *attr)
+			   const struct sched_attr *attr, bool keep_boost)
{
	__setscheduler_params(p, attr);

	/*
-	 * If we get here, there was no pi waiters boosting the
-	 * task. It is safe to use the normal prio.
+	 * Keep a potential priority boosting if called from
+	 * sched_setscheduler().
	 */
-	p->prio = normal_prio(p);
+	if (keep_boost)
+		p->prio = rt_mutex_get_effective_prio(p, normal_prio(p));
+	else
+		p->prio = normal_prio(p);

	if (dl_prio(p->prio))
		p->sched_class = &dl_sched_class;
···
	int newprio = dl_policy(attr->sched_policy) ? MAX_DL_PRIO - 1 :
		      MAX_RT_PRIO - 1 - attr->sched_priority;
	int retval, oldprio, oldpolicy = -1, queued, running;
-	int policy = attr->sched_policy;
+	int new_effective_prio, policy = attr->sched_policy;
	unsigned long flags;
	const struct sched_class *prev_class;
	struct rq *rq;
···
	oldprio = p->prio;

	/*
-	 * Special case for priority boosted tasks.
-	 *
-	 * If the new priority is lower or equal (user space view)
-	 * than the current (boosted) priority, we just store the new
+	 * Take priority boosted tasks into account. If the new
+	 * effective priority is unchanged, we just store the new
	 * normal parameters and do not touch the scheduler class and
	 * the runqueue. This will be done when the task deboost
	 * itself.
	 */
-	if (rt_mutex_check_prio(p, newprio)) {
+	new_effective_prio = rt_mutex_get_effective_prio(p, newprio);
+	if (new_effective_prio == oldprio) {
		__setscheduler_params(p, attr);
		task_rq_unlock(rq, p, &flags);
		return 0;
···
		put_prev_task(rq, p);

	prev_class = p->sched_class;
-	__setscheduler(rq, p, attr);
+	__setscheduler(rq, p, attr, true);

	if (running)
		p->sched_class->set_curr_task(rq);
···
	unsigned long flags;
	long cpu = (long)hcpu;
	struct dl_bw *dl_b;
+	bool overflow;
+	int cpus;

-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
	case CPU_DOWN_PREPARE:
-		/* explicitly allow suspend */
-		if (!(action & CPU_TASKS_FROZEN)) {
-			bool overflow;
-			int cpus;
+		rcu_read_lock_sched();
+		dl_b = dl_bw_of(cpu);

-			rcu_read_lock_sched();
-			dl_b = dl_bw_of(cpu);
+		raw_spin_lock_irqsave(&dl_b->lock, flags);
+		cpus = dl_bw_cpus(cpu);
+		overflow = __dl_overflow(dl_b, cpus, 0, 0);
+		raw_spin_unlock_irqrestore(&dl_b->lock, flags);

-			raw_spin_lock_irqsave(&dl_b->lock, flags);
-			cpus = dl_bw_cpus(cpu);
-			overflow = __dl_overflow(dl_b, cpus, 0, 0);
-			raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+		rcu_read_unlock_sched();

-			rcu_read_unlock_sched();
-
-			if (overflow)
-				return notifier_from_errno(-EBUSY);
-		}
+		if (overflow)
+			return notifier_from_errno(-EBUSY);
		cpuset_update_active_cpus(false);
		break;
	case CPU_DOWN_PREPARE_FROZEN:
···
	queued = task_on_rq_queued(p);
	if (queued)
		dequeue_task(rq, p, 0);
-	__setscheduler(rq, p, &attr);
+	__setscheduler(rq, p, &attr, false);
	if (queued) {
		enqueue_task(rq, p, 0);
		resched_curr(rq);
+1-5
kernel/time/clockevents.c
···
	/* Transition with new state-specific callbacks */
	switch (state) {
	case CLOCK_EVT_STATE_DETACHED:
-		/*
-		 * This is an internal state, which is guaranteed to go from
-		 * SHUTDOWN to DETACHED. No driver interaction required.
-		 */
-		return 0;
+		/* The clockevent device is getting replaced. Shut it down. */

	case CLOCK_EVT_STATE_SHUTDOWN:
		return dev->set_state_shutdown(dev);
mm/mempolicy.c
···
	if (numabalancing_override)
		set_numabalancing_state(numabalancing_override == 1);

-	if (nr_node_ids > 1 && !numabalancing_override) {
+	if (num_online_nodes() > 1 && !numabalancing_override) {
		pr_info("%s automatic NUMA balancing. "
			"Configure with numa_balancing= or the "
			"kernel.numa_balancing sysctl",
+3-3
mm/page-writeback.c
···
	long x;

	x = div64_s64(((s64)setpoint - (s64)dirty) << RATELIMIT_CALC_SHIFT,
-		      limit - setpoint + 1);
+		      (limit - setpoint) | 1);
	pos_ratio = x;
	pos_ratio = pos_ratio * x >> RATELIMIT_CALC_SHIFT;
	pos_ratio = pos_ratio * x >> RATELIMIT_CALC_SHIFT;
···
	 * scale global setpoint to bdi's:
	 * bdi_setpoint = setpoint * bdi_thresh / thresh
	 */
-	x = div_u64((u64)bdi_thresh << 16, thresh + 1);
+	x = div_u64((u64)bdi_thresh << 16, thresh | 1);
	bdi_setpoint = setpoint * (u64)x >> 16;
	/*
	 * Use span=(8*write_bw) in single bdi case as indicated by
···

	if (bdi_dirty < x_intercept - span / 4) {
		pos_ratio = div64_u64(pos_ratio * (x_intercept - bdi_dirty),
-				      x_intercept - bdi_setpoint + 1);
+				      (x_intercept - bdi_setpoint) | 1);
	} else
		pos_ratio /= 4;
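The three mm/page-writeback.c hunks above all swap the divide-by-zero guard from `x + 1` to `x | 1`: OR-ing in the low bit guarantees a nonzero divisor while leaving every odd value untouched and perturbing even values by at most one, whereas `+ 1` skewed every value. A quick stand-alone demonstration of the guard (the helper name is mine):

```c
#include <stdint.h>

/* Divisor guard used in the hunks above: force the low bit on so the
 * divisor can never be zero. Odd inputs pass through unchanged; even
 * inputs are bumped by exactly one. */
static uint64_t guard_divisor(uint64_t x)
{
	return x | 1;
}
```

Compared with `x + 1`, this keeps the common-case ratio computations exact for odd operands and only protects the degenerate zero case.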
net/ipv4/tcp_westwood.c
···
}

/* Extract info for Tcp socket info provided via netlink. */
-static int tcp_westwood_info(struct sock *sk, u32 ext, struct sk_buff *skb)
+static size_t tcp_westwood_info(struct sock *sk, u32 ext, int *attr,
+				union tcp_cc_info *info)
{
	const struct westwood *ca = inet_csk_ca(sk);

	if (ext & (1 << (INET_DIAG_VEGASINFO - 1))) {
-		struct tcpvegas_info info = {
-			.tcpv_enabled = 1,
-			.tcpv_rtt = jiffies_to_usecs(ca->rtt),
-			.tcpv_minrtt = jiffies_to_usecs(ca->rtt_min),
-		};
+		info->vegas.tcpv_enabled = 1;
+		info->vegas.tcpv_rttcnt = 0;
+		info->vegas.tcpv_rtt = jiffies_to_usecs(ca->rtt);
+		info->vegas.tcpv_minrtt = jiffies_to_usecs(ca->rtt_min);

-		return nla_put(skb, INET_DIAG_VEGASINFO, sizeof(info), &info);
+		*attr = INET_DIAG_VEGASINFO;
+		return sizeof(struct tcpvegas_info);
	}
	return 0;
}
+32-9
net/ipv6/ip6_output.c
···
#endif
	int err;

+	/* The correct way to handle this would be to do
+	 * ip6_route_get_saddr, and then ip6_route_output; however,
+	 * the route-specific preferred source forces the
+	 * ip6_route_output call _before_ ip6_route_get_saddr.
+	 *
+	 * In source specific routing (no src=any default route),
+	 * ip6_route_output will fail given src=any saddr, though, so
+	 * that's why we try it again later.
+	 */
+	if (ipv6_addr_any(&fl6->saddr) && (!*dst || !(*dst)->error)) {
+		struct rt6_info *rt;
+		bool had_dst = *dst != NULL;
+
+		if (!had_dst)
+			*dst = ip6_route_output(net, sk, fl6);
+		rt = (*dst)->error ? NULL : (struct rt6_info *)*dst;
+		err = ip6_route_get_saddr(net, rt, &fl6->daddr,
+					  sk ? inet6_sk(sk)->srcprefs : 0,
+					  &fl6->saddr);
+		if (err)
+			goto out_err_release;
+
+		/* If we had an erroneous initial result, pretend it
+		 * never existed and let the SA-enabled version take
+		 * over.
+		 */
+		if (!had_dst && (*dst)->error) {
+			dst_release(*dst);
+			*dst = NULL;
+		}
+	}
+
	if (!*dst)
		*dst = ip6_route_output(net, sk, fl6);

	err = (*dst)->error;
	if (err)
		goto out_err_release;
-
-	if (ipv6_addr_any(&fl6->saddr)) {
-		struct rt6_info *rt = (struct rt6_info *) *dst;
-		err = ip6_route_get_saddr(net, rt, &fl6->daddr,
-					  sk ? inet6_sk(sk)->srcprefs : 0,
-					  &fl6->saddr);
-		if (err)
-			goto out_err_release;
-	}

#ifdef CONFIG_IPV6_OPTIMISTIC_DAD
	/*
+3-2
net/ipv6/route.c
···
			 unsigned int prefs,
			 struct in6_addr *saddr)
{
-	struct inet6_dev *idev = ip6_dst_idev((struct dst_entry *)rt);
+	struct inet6_dev *idev =
+		rt ? ip6_dst_idev((struct dst_entry *)rt) : NULL;
	int err = 0;
-	if (rt->rt6i_prefsrc.plen)
+	if (rt && rt->rt6i_prefsrc.plen)
		*saddr = rt->rt6i_prefsrc.addr;
	else
		err = ipv6_dev_get_saddr(net, idev ? idev->dev : NULL,
+7-5
net/mac80211/iface.c
···
	 * (because if we remove a STA after ops->remove_interface()
	 * the driver will have removed the vif info already!)
	 *
-	 * This is relevant only in WDS mode, in all other modes we've
-	 * already removed all stations when disconnecting or similar,
-	 * so warn otherwise.
+	 * In WDS mode a station must exist here and be flushed, for
+	 * AP_VLANs stations may exist since there's nothing else that
+	 * would have removed them, but in other modes there shouldn't
+	 * be any stations.
	 */
	flushed = sta_info_flush(sdata);
-	WARN_ON_ONCE((sdata->vif.type != NL80211_IFTYPE_WDS && flushed > 0) ||
-		     (sdata->vif.type == NL80211_IFTYPE_WDS && flushed != 1));
+	WARN_ON_ONCE(sdata->vif.type != NL80211_IFTYPE_AP_VLAN &&
+		     ((sdata->vif.type != NL80211_IFTYPE_WDS && flushed > 0) ||
+		      (sdata->vif.type == NL80211_IFTYPE_WDS && flushed != 1)));

	/* don't count this interface for promisc/allmulti while it is down */
	if (sdata->flags & IEEE80211_SDATA_ALLMULTI)
+18-1
net/mac80211/sta_info.c
···

static const struct rhashtable_params sta_rht_params = {
	.nelem_hint = 3, /* start small */
+	.automatic_shrinking = true,
	.head_offset = offsetof(struct sta_info, hash_node),
	.key_offset = offsetof(struct sta_info, sta.addr),
	.key_len = ETH_ALEN,
···
			      const u8 *addr)
{
	struct ieee80211_local *local = sdata->local;
+	struct sta_info *sta;
+	struct rhash_head *tmp;
+	const struct bucket_table *tbl;

-	return rhashtable_lookup_fast(&local->sta_hash, addr, sta_rht_params);
+	rcu_read_lock();
+	tbl = rht_dereference_rcu(local->sta_hash.tbl, &local->sta_hash);
+
+	for_each_sta_info(local, tbl, addr, sta, tmp) {
+		if (sta->sdata == sdata) {
+			rcu_read_unlock();
+			/* this is safe as the caller must already hold
+			 * another rcu read section or the mutex
+			 */
+			return sta;
+		}
+	}
+	rcu_read_unlock();
+	return NULL;
}

/*
net/mac802154/llsec.c
···
	for (i = 0; i < ARRAY_SIZE(key->tfm); i++) {
		key->tfm[i] = crypto_alloc_aead("ccm(aes)", 0,
						CRYPTO_ALG_ASYNC);
-		if (!key->tfm[i])
+		if (IS_ERR(key->tfm[i]))
			goto err_tfm;
		if (crypto_aead_setkey(key->tfm[i], template->key,
				       IEEE802154_LLSEC_KEY_SIZE))
···
	}

	key->tfm0 = crypto_alloc_blkcipher("ctr(aes)", 0, CRYPTO_ALG_ASYNC);
-	if (!key->tfm0)
+	if (IS_ERR(key->tfm0))
		goto err_tfm;

	if (crypto_blkcipher_setkey(key->tfm0, template->key,
···
		return -EINVAL;

	switch (dec.label) {
-	case LABEL_IMPLICIT_NULL:
+	case MPLS_LABEL_IMPLNULL:
		/* RFC3032: This is a label that an LSR may
		 * assign and distribute, but which never
		 * actually appears in the encapsulation.
···
	}

	/* In case the predefined labels need to be populated */
-	if (limit > LABEL_IPV4_EXPLICIT_NULL) {
+	if (limit > MPLS_LABEL_IPV4NULL) {
		struct net_device *lo = net->loopback_dev;
		rt0 = mpls_rt_alloc(lo->addr_len);
		if (!rt0)
···
		rt0->rt_via_table = NEIGH_LINK_TABLE;
		memcpy(rt0->rt_via, lo->dev_addr, lo->addr_len);
	}
-	if (limit > LABEL_IPV6_EXPLICIT_NULL) {
+	if (limit > MPLS_LABEL_IPV6NULL) {
		struct net_device *lo = net->loopback_dev;
		rt2 = mpls_rt_alloc(lo->addr_len);
		if (!rt2)
···
	memcpy(labels, old, cp_size);

	/* If needed set the predefined labels */
-	if ((old_limit <= LABEL_IPV6_EXPLICIT_NULL) &&
-	    (limit > LABEL_IPV6_EXPLICIT_NULL)) {
-		RCU_INIT_POINTER(labels[LABEL_IPV6_EXPLICIT_NULL], rt2);
+	if ((old_limit <= MPLS_LABEL_IPV6NULL) &&
+	    (limit > MPLS_LABEL_IPV6NULL)) {
+		RCU_INIT_POINTER(labels[MPLS_LABEL_IPV6NULL], rt2);
		rt2 = NULL;
	}

-	if ((old_limit <= LABEL_IPV4_EXPLICIT_NULL) &&
-	    (limit > LABEL_IPV4_EXPLICIT_NULL)) {
-		RCU_INIT_POINTER(labels[LABEL_IPV4_EXPLICIT_NULL], rt0);
+	if ((old_limit <= MPLS_LABEL_IPV4NULL) &&
+	    (limit > MPLS_LABEL_IPV4NULL)) {
+		RCU_INIT_POINTER(labels[MPLS_LABEL_IPV4NULL], rt0);
		rt0 = NULL;
	}
···
	tlen = dev->needed_tailroom;
	skb = sock_alloc_send_skb(&po->sk,
			hlen + tlen + sizeof(struct sockaddr_ll),
-			0, &err);
+			!need_wait, &err);

-	if (unlikely(skb == NULL))
+	if (unlikely(skb == NULL)) {
+		/* we assume the socket was initially writeable ... */
+		if (likely(len_sum > 0))
+			err = len_sum;
		goto out_status;
-
+	}
	tp_len = tpacket_fill_skb(po, skb, ph, dev, size_max, proto,
				  addr, hlen);
	if (tp_len > dev->mtu + dev->hard_header_len) {
+15-2
net/rds/connection.c
···
	struct rds_transport *loop_trans;
	unsigned long flags;
	int ret;
+	struct rds_transport *otrans = trans;

+	if (!is_outgoing && otrans->t_type == RDS_TRANS_TCP)
+		goto new_conn;
	rcu_read_lock();
	conn = rds_conn_lookup(head, laddr, faddr, trans);
	if (conn && conn->c_loopback && conn->c_trans != &rds_loop_transport &&
···
	if (conn)
		goto out;

+new_conn:
	conn = kmem_cache_zalloc(rds_conn_slab, gfp);
	if (!conn) {
		conn = ERR_PTR(-ENOMEM);
···
		/* Creating normal conn */
		struct rds_connection *found;

-		found = rds_conn_lookup(head, laddr, faddr, trans);
+		if (!is_outgoing && otrans->t_type == RDS_TRANS_TCP)
+			found = NULL;
+		else
+			found = rds_conn_lookup(head, laddr, faddr, trans);
		if (found) {
			trans->conn_free(conn->c_transport_data);
			kmem_cache_free(rds_conn_slab, conn);
			conn = found;
		} else {
-			hlist_add_head_rcu(&conn->c_hash_node, head);
+			if ((is_outgoing && otrans->t_type == RDS_TRANS_TCP) ||
+			    (otrans->t_type != RDS_TRANS_TCP)) {
+				/* Only the active side should be added to
+				 * reconnect list for TCP.
+				 */
+				hlist_add_head_rcu(&conn->c_hash_node, head);
+			}
			rds_cong_add_conn(conn);
			rds_conn_count++;
		}
+11-2
net/rds/ib_cm.c
···

	/* If the peer gave us the last packet it saw, process this as if
	 * we had received a regular ACK. */
-	if (dp && dp->dp_ack_seq)
-		rds_send_drop_acked(conn, be64_to_cpu(dp->dp_ack_seq), NULL);
+	if (dp) {
+		/* dp structure start is not guaranteed to be 8 bytes aligned.
+		 * Since dp_ack_seq is 64-bit extended load operations can be
+		 * used so go through get_unaligned to avoid unaligned errors.
+		 */
+		__be64 dp_ack_seq = get_unaligned(&dp->dp_ack_seq);
+
+		if (dp_ack_seq)
+			rds_send_drop_acked(conn, be64_to_cpu(dp_ack_seq),
+					    NULL);
+	}

	rds_connect_complete(conn);
}
+1
net/rds/tcp_connect.c
···
	case TCP_ESTABLISHED:
		rds_connect_complete(conn);
		break;
+	case TCP_CLOSE_WAIT:
	case TCP_CLOSE:
		rds_conn_drop(conn);
	default:
+46
net/rds/tcp_listen.c
···
static DECLARE_WORK(rds_tcp_listen_work, rds_tcp_accept_worker);
static struct socket *rds_tcp_listen_sock;

+static int rds_tcp_keepalive(struct socket *sock)
+{
+	/* values below based on xs_udp_default_timeout */
+	int keepidle = 5; /* send a probe 'keepidle' secs after last data */
+	int keepcnt = 5; /* number of unack'ed probes before declaring dead */
+	int keepalive = 1;
+	int ret = 0;
+
+	ret = kernel_setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
+				(char *)&keepalive, sizeof(keepalive));
+	if (ret < 0)
+		goto bail;
+
+	ret = kernel_setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT,
+				(char *)&keepcnt, sizeof(keepcnt));
+	if (ret < 0)
+		goto bail;
+
+	ret = kernel_setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE,
+				(char *)&keepidle, sizeof(keepidle));
+	if (ret < 0)
+		goto bail;
+
+	/* KEEPINTVL is the interval between successive probes. We follow
+	 * the model in xs_tcp_finish_connecting() and re-use keepidle.
+	 */
+	ret = kernel_setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL,
+				(char *)&keepidle, sizeof(keepidle));
+bail:
+	return ret;
+}
+
static int rds_tcp_accept_one(struct socket *sock)
{
	struct socket *new_sock = NULL;
	struct rds_connection *conn;
	int ret;
	struct inet_sock *inet;
+	struct rds_tcp_connection *rs_tcp;

	ret = sock_create_lite(sock->sk->sk_family, sock->sk->sk_type,
			       sock->sk->sk_protocol, &new_sock);
···
	new_sock->type = sock->type;
	new_sock->ops = sock->ops;
	ret = sock->ops->accept(sock, new_sock, O_NONBLOCK);
+	if (ret < 0)
+		goto out;
+
+	ret = rds_tcp_keepalive(new_sock);
	if (ret < 0)
		goto out;
···
		ret = PTR_ERR(conn);
		goto out;
	}
+	/* An incoming SYN request came in, and TCP just accepted it.
+	 * We always create a new conn for listen side of TCP, and do not
+	 * add it to the c_hash_list.
+	 *
+	 * If the client reboots, this conn will need to be cleaned up.
+	 * rds_tcp_state_change() will do that cleanup
+	 */
+	rs_tcp = (struct rds_tcp_connection *)conn->c_transport_data;
+	WARN_ON(!rs_tcp || rs_tcp->t_sock);

	/*
	 * see the comment above rds_queue_delayed_reconnect()
+3-4
net/sched/cls_api.c
···
	case RTM_DELTFILTER:
		err = tp->ops->delete(tp, fh);
		if (err == 0) {
-			tfilter_notify(net, skb, n, tp, fh, RTM_DELTFILTER);
-			if (tcf_destroy(tp, false)) {
-				struct tcf_proto *next = rtnl_dereference(tp->next);
+			struct tcf_proto *next = rtnl_dereference(tp->next);

+			tfilter_notify(net, skb, n, tp, fh, RTM_DELTFILTER);
+			if (tcf_destroy(tp, false))
				RCU_INIT_POINTER(*back, next);
-			}
		}
		goto errout;
	case RTM_GETTFILTER:
···
{
	u32 value_follows;
	int err;
+	struct page *scratch;
+
+	scratch = alloc_page(GFP_KERNEL);
+	if (!scratch)
+		return -ENOMEM;
+	xdr_set_scratch_buffer(xdr, page_address(scratch), PAGE_SIZE);

	/* res->status */
	err = gssx_dec_status(xdr, &res->status);
	if (err)
-		return err;
+		goto out_free;

	/* res->context_handle */
	err = gssx_dec_bool(xdr, &value_follows);
	if (err)
-		return err;
+		goto out_free;
	if (value_follows) {
		err = gssx_dec_ctx(xdr, res->context_handle);
		if (err)
-			return err;
+			goto out_free;
	} else {
		res->context_handle = NULL;
	}
···
	/* res->output_token */
	err = gssx_dec_bool(xdr, &value_follows);
	if (err)
-		return err;
+		goto out_free;
	if (value_follows) {
		err = gssx_dec_buffer(xdr, res->output_token);
		if (err)
-			return err;
+			goto out_free;
	} else {
		res->output_token = NULL;
	}
···
	/* res->delegated_cred_handle */
	err = gssx_dec_bool(xdr, &value_follows);
	if (err)
-		return err;
+		goto out_free;
	if (value_follows) {
		/* we do not support upcall servers sending this data. */
-		return -EINVAL;
+		err = -EINVAL;
+		goto out_free;
	}

	/* res->options */
	err = gssx_dec_option_array(xdr, &res->options);

+out_free:
+	__free_page(scratch);
	return err;
}
+2-1
tools/lib/lockdep/Makefile
···
	$(eval $(1) = $(2)))
endef

-# Allow setting CC and AR, or setting CROSS_COMPILE as a prefix.
+# Allow setting CC and AR and LD, or setting CROSS_COMPILE as a prefix.
$(call allow-override,CC,$(CROSS_COMPILE)gcc)
$(call allow-override,AR,$(CROSS_COMPILE)ar)
+$(call allow-override,LD,$(CROSS_COMPILE)ld)

INSTALL = install

···11+/*22+ * Trivial program to check that we have a valid 32-bit build environment.33+ * Copyright (c) 2015 Andy Lutomirski44+ * GPL v255+ */66+77+#ifndef __x86_64__88+# error wrong architecture99+#endif1010+1111+#include <stdio.h>1212+1313+int main()1414+{1515+ printf("\n");1616+1717+ return 0;1818+}