···
+What:		/sys/class/scsi_host/hostX/isci_id
+Date:		June 2011
+Contact:	Dave Jiang <dave.jiang@intel.com>
+Description:
+		This file contains the enumerated host ID for the Intel
+		SCU controller. The Intel(R) C600 Series Chipset SATA/SAS
+		Storage Control Unit embeds up to two 4-port controllers in
+		a single PCI device. The controllers are enumerated in order,
+		which usually means the lowest-numbered scsi_host corresponds
+		with the first controller, but this association is not
+		guaranteed. The 'isci_id' attribute unambiguously identifies
+		the controller index: '0' for the first controller,
+		'1' for the second.
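The attribute described above can be read like any other sysfs file. A minimal inspection sketch (host names and the loop are illustrative; only hosts driven by the isci driver expose this attribute):

```shell
# Print the controller index for each SCSI host that exposes isci_id;
# '0' is the first embedded controller, '1' the second.
for h in /sys/class/scsi_host/host*; do
    [ -f "$h/isci_id" ] && echo "$h -> $(cat "$h/isci_id")"
done
```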
+1-84
Documentation/cgroups/memory.txt
···
 5.2 stat file

-5.2.1 memory.stat file includes following statistics
+memory.stat file includes following statistics

 # per-memory cgroup local status
 cache		- # of bytes of page cache memory.
···
	(Note: file and shmem may be shared among other cgroups. In that case,
	 file_mapped is accounted only when the memory cgroup is owner of page
	 cache.)
-
-5.2.2 memory.vmscan_stat
-
-memory.vmscan_stat includes statistics information for memory scanning and
-freeing, reclaiming. The statistics shows memory scanning information since
-memory cgroup creation and can be reset to 0 by writing 0 as
-
- #echo 0 > ../memory.vmscan_stat
-
-This file contains following statistics.
-
-[param]_[file_or_anon]_pages_by_[reason]_[under_heararchy]
-[param]_elapsed_ns_by_[reason]_[under_hierarchy]
-
-For example,
-
-  scanned_file_pages_by_limit indicates the number of scanned
-  file pages at vmscan.
-
-Now, 3 parameters are supported
-
-  scanned - the number of pages scanned by vmscan
-  rotated - the number of pages activated at vmscan
-  freed   - the number of pages freed by vmscan
-
-If "rotated" is high against scanned/freed, the memcg seems busy.
-
-Now, 2 reason are supported
-
-  limit - the memory cgroup's limit
-  system - global memory pressure + softlimit
-   (global memory pressure not under softlimit is not handled now)
-
-When under_hierarchy is added in the tail, the number indicates the
-total memcg scan of its children and itself.
-
-elapsed_ns is a elapsed time in nanosecond. This may include sleep time
-and not indicates CPU usage. So, please take this as just showing
-latency.
-
-Here is an example.
-
-# cat /cgroup/memory/A/memory.vmscan_stat
-scanned_pages_by_limit 9471864
-scanned_anon_pages_by_limit 6640629
-scanned_file_pages_by_limit 2831235
-rotated_pages_by_limit 4243974
-rotated_anon_pages_by_limit 3971968
-rotated_file_pages_by_limit 272006
-freed_pages_by_limit 2318492
-freed_anon_pages_by_limit 962052
-freed_file_pages_by_limit 1356440
-elapsed_ns_by_limit 351386416101
-scanned_pages_by_system 0
-scanned_anon_pages_by_system 0
-scanned_file_pages_by_system 0
-rotated_pages_by_system 0
-rotated_anon_pages_by_system 0
-rotated_file_pages_by_system 0
-freed_pages_by_system 0
-freed_anon_pages_by_system 0
-freed_file_pages_by_system 0
-elapsed_ns_by_system 0
-scanned_pages_by_limit_under_hierarchy 9471864
-scanned_anon_pages_by_limit_under_hierarchy 6640629
-scanned_file_pages_by_limit_under_hierarchy 2831235
-rotated_pages_by_limit_under_hierarchy 4243974
-rotated_anon_pages_by_limit_under_hierarchy 3971968
-rotated_file_pages_by_limit_under_hierarchy 272006
-freed_pages_by_limit_under_hierarchy 2318492
-freed_anon_pages_by_limit_under_hierarchy 962052
-freed_file_pages_by_limit_under_hierarchy 1356440
-elapsed_ns_by_limit_under_hierarchy 351386416101
-scanned_pages_by_system_under_hierarchy 0
-scanned_anon_pages_by_system_under_hierarchy 0
-scanned_file_pages_by_system_under_hierarchy 0
-rotated_pages_by_system_under_hierarchy 0
-rotated_anon_pages_by_system_under_hierarchy 0
-rotated_file_pages_by_system_under_hierarchy 0
-freed_pages_by_system_under_hierarchy 0
-freed_anon_pages_by_system_under_hierarchy 0
-freed_file_pages_by_system_under_hierarchy 0
-elapsed_ns_by_system_under_hierarchy 0

 5.3 swappiness
+4-10
Documentation/hwmon/coretemp
···
 All Sysfs entries are named with their core_id (represented here by 'X').
 tempX_input	 - Core temperature (in millidegrees Celsius).
 tempX_max	 - All cooling devices should be turned on (on Core2).
-		   Initialized with IA32_THERM_INTERRUPT. When the CPU
-		   temperature reaches this temperature, an interrupt is
-		   generated and tempX_max_alarm is set.
-tempX_max_hyst	 - If the CPU temperature falls below than temperature,
-		   an interrupt is generated and tempX_max_alarm is reset.
-tempX_max_alarm	 - Set if the temperature reaches or exceeds tempX_max.
-		   Reset if the temperature drops to or below tempX_max_hyst.
 tempX_crit	 - Maximum junction temperature (in millidegrees Celsius).
 tempX_crit_alarm - Set when Out-of-spec bit is set, never clears.
		   Correct CPU operation is no longer guaranteed.
···
	 number. For Package temp, this will be "Physical id Y",
	 where Y is the package number.

-The TjMax temperature is set to 85 degrees C if undocumented model specific
-register (UMSR) 0xee has bit 30 set. If not the TjMax is 100 degrees C as
-(sometimes) documented in processor datasheet.
+On CPU models which support it, TjMax is read from a model-specific register.
+On other models, it is set to an arbitrary value based on weak heuristics.
+If these heuristics don't work for you, you can pass the correct TjMax value
+as a module parameter (tjmax).

 Appendix A. Known TjMax lists (TBD):
 Some information comes from ark.intel.com
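The tjmax override mentioned in the new text can be passed at module load time. A sketch, assuming the parameter takes a value in degrees Celsius and that 100 is the correct value for the CPU in question:

```shell
# Hypothetical override for a CPU whose TjMax heuristics are wrong.
modprobe coretemp tjmax=100

# Or persistently, via modprobe configuration, e.g. in
# /etc/modprobe.d/coretemp.conf:
#   options coretemp tjmax=100
```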
+6-3
Documentation/kernel-parameters.txt
···
 			Override pmtimer IOPort with a hex value.
 			e.g. pmtmr=0x508

-	pnp.debug	[PNP]
-			Enable PNP debug messages. This depends on the
-			CONFIG_PNP_DEBUG_MESSAGES option.
+	pnp.debug=1	[PNP]
+			Enable PNP debug messages (depends on the
+			CONFIG_PNP_DEBUG_MESSAGES option). Change at run-time
+			via /sys/module/pnp/parameters/debug. We always show
+			current resource usage; turning this on also shows
+			possible settings and some assignment information.

 	pnpacpi=	[ACPI]
 			{ off }
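The updated text names both ways of setting the flag. As a sketch (the kernel must be built with CONFIG_PNP_DEBUG_MESSAGES=y for either to have an effect):

```shell
# At boot: append to the kernel command line:
#   pnp.debug=1

# At run time, toggle via the module parameter exported in sysfs:
echo 1 > /sys/module/pnp/parameters/debug   # enable PNP debug messages
echo 0 > /sys/module/pnp/parameters/debug   # disable again
```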
+2-1
Documentation/networking/dmfe.txt
···
+Note: This driver doesn't have a maintainer.
+
 Davicom DM9102(A)/DM9132/DM9801 fast ethernet driver for Linux.

 This program is free software; you can redistribute it and/or
···
 Authors:

 Sten Wang <sten_wang@davicom.com.tw >   : Original Author
-Tobias Ringstrom <tori@unhappy.mine.nu> : Current Maintainer

 Contributors:
+2-2
Documentation/networking/ip-sysctl.txt
···
	The functional behaviour for certain settings is different
	depending on whether local forwarding is enabled or not.

-accept_ra - BOOLEAN
+accept_ra - INTEGER
	Accept Router Advertisements; autoconfigure using them.

	Possible values are:
···
	The amount of Duplicate Address Detection probes to send.
	Default: 1

-forwarding - BOOLEAN
+forwarding - INTEGER
	Configure interface-specific Host/Router behaviour.

	Note: It is recommended to have the same setting on all
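The BOOLEAN-to-INTEGER change reflects that these sysctls now take more than two values. An illustrative sketch (the interface name eth0 is an assumption; on kernels with this change, accept_ra=2 is commonly used to keep accepting Router Advertisements even while forwarding is enabled):

```shell
# Inspect the current per-interface value (an integer, not just 0/1).
sysctl net.ipv6.conf.eth0.accept_ra

# Hypothetical: accept RAs on eth0 even though the box forwards IPv6.
sysctl -w net.ipv6.conf.eth0.accept_ra=2
```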
+1-1
Documentation/networking/scaling.txt
···

 The number of entries in the per-queue flow table are set through:

- /sys/class/net/<dev>/queues/tx-<n>/rps_flow_cnt
+ /sys/class/net/<dev>/queues/rx-<n>/rps_flow_cnt

 == Suggested Configuration
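The corrected path is an RX-queue attribute. As a sketch of the sizing arithmetic this document's configuration advice uses (the 32768-entry table and 16-queue NIC are illustrative values, not recommendations):

```shell
# Split a global RFS table across the RX queues: each rx-<n>/rps_flow_cnt
# typically gets rps_sock_flow_entries divided by the queue count.
entries=32768   # global table size (illustrative)
nqueues=16      # number of RX queues (illustrative)
per_queue=$((entries / nqueues))
echo "$per_queue"
# This is the value one would write to each
# /sys/class/net/<dev>/queues/rx-<n>/rps_flow_cnt
```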
+4-3
Documentation/vm/transhuge.txt
···
 khugepaged runs usually at low frequency so while one may not want to
 invoke defrag algorithms synchronously during the page faults, it
 should be worth invoking defrag at least in khugepaged. However it's
-also possible to disable defrag in khugepaged:
+also possible to disable defrag in khugepaged by writing 0 or enable
+defrag in khugepaged by writing 1:

-echo yes >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
-echo no >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
+echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
+echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag

 You can also control how many pages khugepaged should scan at each
 pass:
+18-6
MAINTAINERS
···
 ATLX ETHERNET DRIVERS
 M:	Jay Cliburn <jcliburn@gmail.com>
 M:	Chris Snook <chris.snook@gmail.com>
-M:	Jie Yang <jie.yang@atheros.com>
 L:	netdev@vger.kernel.org
 W:	http://sourceforge.net/projects/atl1
 W:	http://atl1.sourceforge.net
···

 BROCADE BNA 10 GIGABIT ETHERNET DRIVER
 M:	Rasesh Mody <rmody@brocade.com>
-M:	Debashis Dutt <ddutt@brocade.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/bna/
···

 CISCO VIC ETHERNET NIC DRIVER
 M:	Christian Benvenuti <benve@cisco.com>
-M:	Vasanthy Kolluri <vkolluri@cisco.com>
 M:	Roopa Prabhu <roprabhu@cisco.com>
 M:	David Wang <dwang2@cisco.com>
 S:	Supported
···
 F:	drivers/input/input-mt.c
 K:	\b(ABS|SYN)_MT_

+INTEL C600 SERIES SAS CONTROLLER DRIVER
+M:	Intel SCU Linux support <intel-linux-scu@intel.com>
+M:	Dan Williams <dan.j.williams@intel.com>
+M:	Dave Jiang <dave.jiang@intel.com>
+M:	Ed Nadolski <edmund.nadolski@intel.com>
+L:	linux-scsi@vger.kernel.org
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/djbw/isci.git
+S:	Maintained
+F:	drivers/scsi/isci/
+F:	firmware/isci/
+
 INTEL IDLE DRIVER
 M:	Len Brown <lenb@kernel.org>
 L:	linux-pm@lists.linux-foundation.org
···
 L:	coreteam@netfilter.org
 W:	http://www.netfilter.org/
 W:	http://www.iptables.org/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/kaber/nf-2.6.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-2.6.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-next-2.6.git
 S:	Supported
 F:	include/linux/netfilter*
 F:	include/linux/netfilter/
···

 OSD LIBRARY and FILESYSTEM
 M:	Boaz Harrosh <bharrosh@panasas.com>
-M:	Benny Halevy <bhalevy@panasas.com>
+M:	Benny Halevy <bhalevy@tonian.com>
 L:	osd-dev@open-osd.org
 W:	http://open-osd.org
 T:	git git://git.open-osd.org/open-osd.git
···
 F:	arch/arm/mach-tegra

 TEHUTI ETHERNET DRIVER
-M:	Alexander Indenbaum <baum@tehutinetworks.net>
 M:	Andy Gospodarek <andy@greyhouse.net>
 L:	netdev@vger.kernel.org
 S:	Supported
···
 S:	Supported
 F:	Documentation/hwmon/wm83??
 F:	drivers/leds/leds-wm83*.c
+F:	drivers/input/misc/wm831x-on.c
+F:	drivers/input/touchscreen/wm831x-ts.c
+F:	drivers/input/touchscreen/wm97*.c
 F:	drivers/mfd/wm8*.c
 F:	drivers/power/wm83*.c
 F:	drivers/rtc/rtc-wm83*.c
···
 F:	include/linux/mfd/wm831x/
 F:	include/linux/mfd/wm8350/
 F:	include/linux/mfd/wm8400*
+F:	include/linux/wm97xx.h
 F:	include/sound/wm????.h
 F:	sound/soc/codecs/wm*
···
	  processor into full low interrupt latency mode. ARM11MPCore
	  is not affected.

+config ARM_ERRATA_764369
+	bool "ARM errata: Data cache line maintenance operation by MVA may not succeed"
+	depends on CPU_V7 && SMP
+	help
+	  This option enables the workaround for erratum 764369
+	  affecting Cortex-A9 MPCore with two or more processors (all
+	  current revisions). Under certain timing circumstances, a data
+	  cache line maintenance operation by MVA targeting an Inner
+	  Shareable memory region may fail to proceed up to either the
+	  Point of Coherency or to the Point of Unification of the
+	  system. This workaround adds a DSB instruction before the
+	  relevant cache maintenance functions and sets a specific bit
+	  in the diagnostic control register of the SCU.
+
 endmenu

 source "arch/arm/common/Kconfig"
···
 CONFIG_MACH_MX28EVK=y
 CONFIG_MACH_STMP378X_DEVB=y
 CONFIG_MACH_TX28=y
+CONFIG_MACH_M28EVK=y
 # CONFIG_ARM_THUMB is not set
 CONFIG_NO_HZ=y
 CONFIG_HIGH_RES_TIMERS=y
···

 #if defined(CONFIG_SMP_ON_UP) && !defined(CONFIG_DEBUG_SPINLOCK)
 #define ARM_EXIT_KEEP(x)	x
+#define ARM_EXIT_DISCARD(x)
 #else
 #define ARM_EXIT_KEEP(x)
+#define ARM_EXIT_DISCARD(x)	x
 #endif

 OUTPUT_ARCH(arm)
···
 SECTIONS
 {
	/*
+	 * XXX: The linker does not define how output sections are
+	 * assigned to input sections when there are multiple statements
+	 * matching the same input section name.  There is no documented
+	 * order of matching.
+	 *
	 * unwind exit sections must be discarded before the rest of the
	 * unwind sections get included.
	 */
···
		*(.ARM.extab.exit.text)
		ARM_CPU_DISCARD(*(.ARM.exidx.cpuexit.text))
		ARM_CPU_DISCARD(*(.ARM.extab.cpuexit.text))
+		ARM_EXIT_DISCARD(EXIT_TEXT)
+		ARM_EXIT_DISCARD(EXIT_DATA)
+		EXIT_CALL
 #ifndef CONFIG_HOTPLUG
		*(.ARM.exidx.devexit.text)
		*(.ARM.extab.devexit.text)
···
 #ifndef CONFIG_SMP_ON_UP
		*(.alt.smp.init)
 #endif
+		*(.discard)
+		*(.discard.*)
	}

 #ifdef CONFIG_XIP_KERNEL
···

	STABS_DEBUG
	.comment 0 : { *(.comment) }
-
-	/* Default discards */
-	DISCARDS
 }

 /*
···
		mxs_duart_base = MX23_DUART_BASE_ADDR;
		break;
	case MACH_TYPE_MX28EVK:
+	case MACH_TYPE_M28EVK:
	case MACH_TYPE_TX28:
		mxs_duart_base = MX28_DUART_BASE_ADDR;
		break;
···
+/*
+ * Copyright (C) 2011 Freescale Semiconductor, Inc. All Rights Reserved.
+ */
+
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+#include <linux/io.h>
+#include <linux/clk.h>
+#include <linux/err.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <asm/sizes.h>
+#include <mach/hardware.h>
+#include <mach/devices-common.h>
+
+#define imx_ahci_imx_data_entry_single(soc, _devid)	\
+	{						\
+		.devid = _devid,			\
+		.iobase = soc ## _SATA_BASE_ADDR,	\
+		.irq = soc ## _INT_SATA,		\
+	}
+
+#ifdef CONFIG_SOC_IMX53
+const struct imx_ahci_imx_data imx53_ahci_imx_data __initconst =
+	imx_ahci_imx_data_entry_single(MX53, "imx53-ahci");
+#endif
+
+enum {
+	HOST_CAP = 0x00,
+	HOST_CAP_SSS = (1 << 27), /* Staggered Spin-up */
+	HOST_PORTS_IMPL	= 0x0c,
+	HOST_TIMER1MS = 0xe0, /* Timer 1-ms */
+};
+
+static struct clk *sata_clk, *sata_ref_clk;
+
+/* AHCI module Initialization, if return 0, initialization is successful. */
+static int imx_sata_init(struct device *dev, void __iomem *addr)
+{
+	u32 tmpdata;
+	int ret = 0;
+	struct clk *clk;
+
+	sata_clk = clk_get(dev, "ahci");
+	if (IS_ERR(sata_clk)) {
+		dev_err(dev, "no sata clock.\n");
+		return PTR_ERR(sata_clk);
+	}
+	ret = clk_enable(sata_clk);
+	if (ret) {
+		dev_err(dev, "can't enable sata clock.\n");
+		goto put_sata_clk;
+	}
+
+	/* Get the AHCI SATA PHY CLK */
+	sata_ref_clk = clk_get(dev, "ahci_phy");
+	if (IS_ERR(sata_ref_clk)) {
+		dev_err(dev, "no sata ref clock.\n");
+		ret = PTR_ERR(sata_ref_clk);
+		goto release_sata_clk;
+	}
+	ret = clk_enable(sata_ref_clk);
+	if (ret) {
+		dev_err(dev, "can't enable sata ref clock.\n");
+		goto put_sata_ref_clk;
+	}
+
+	/* Get the AHB clock rate, and configure the TIMER1MS reg later */
+	clk = clk_get(dev, "ahci_dma");
+	if (IS_ERR(clk)) {
+		dev_err(dev, "no dma clock.\n");
+		ret = PTR_ERR(clk);
+		goto release_sata_ref_clk;
+	}
+	tmpdata = clk_get_rate(clk) / 1000;
+	clk_put(clk);
+
+	writel(tmpdata, addr + HOST_TIMER1MS);
+
+	tmpdata = readl(addr + HOST_CAP);
+	if (!(tmpdata & HOST_CAP_SSS)) {
+		tmpdata |= HOST_CAP_SSS;
+		writel(tmpdata, addr + HOST_CAP);
+	}
+
+	if (!(readl(addr + HOST_PORTS_IMPL) & 0x1))
+		writel((readl(addr + HOST_PORTS_IMPL) | 0x1),
+			addr + HOST_PORTS_IMPL);
+
+	return 0;
+
+release_sata_ref_clk:
+	clk_disable(sata_ref_clk);
+put_sata_ref_clk:
+	clk_put(sata_ref_clk);
+release_sata_clk:
+	clk_disable(sata_clk);
+put_sata_clk:
+	clk_put(sata_clk);
+
+	return ret;
+}
+
+static void imx_sata_exit(struct device *dev)
+{
+	clk_disable(sata_ref_clk);
+	clk_put(sata_ref_clk);
+
+	clk_disable(sata_clk);
+	clk_put(sata_clk);
+
+}
+struct platform_device *__init imx_add_ahci_imx(
+		const struct imx_ahci_imx_data *data,
+		const struct ahci_platform_data *pdata)
+{
+	struct resource res[] = {
+		{
+			.start = data->iobase,
+			.end = data->iobase + SZ_4K - 1,
+			.flags = IORESOURCE_MEM,
+		}, {
+			.start = data->irq,
+			.end = data->irq,
+			.flags = IORESOURCE_IRQ,
+		},
+	};
+
+	return imx_add_platform_device_dmamask(data->devid, 0,
+			res, ARRAY_SIZE(res),
+			pdata, sizeof(*pdata), DMA_BIT_MASK(32));
+}
+
+struct platform_device *__init imx53_add_ahci_imx(void)
+{
+	struct ahci_platform_data pdata = {
+		.init = imx_sata_init,
+		.exit = imx_sata_exit,
+	};
+
+	return imx_add_ahci_imx(&imx53_ahci_imx_data, &pdata);
+}
···
 {
	static int used_gpioint_groups = 0;
	int group = chip->group;
-	struct s5p_gpioint_bank *bank = NULL;
+	struct s5p_gpioint_bank *b, *bank = NULL;
	struct irq_chip_generic *gc;
	struct irq_chip_type *ct;

	if (used_gpioint_groups >= S5P_GPIOINT_GROUP_COUNT)
		return -ENOMEM;

-	list_for_each_entry(bank, &banks, list) {
-		if (group >= bank->start &&
-		    group < bank->start + bank->nr_groups)
+	list_for_each_entry(b, &banks, list) {
+		if (group >= b->start && group < b->start + b->nr_groups) {
+			bank = b;
			break;
+		}
	}
	if (!bank)
		return -EINVAL;
+11
arch/arm/plat-samsung/clock.c
···
 */
 DEFINE_SPINLOCK(clocks_lock);

+/* Global watchdog clock used by arch_wtd_reset() callback */
+struct clk *s3c2410_wdtclk;
+static int __init s3c_wdt_reset_init(void)
+{
+	s3c2410_wdtclk = clk_get(NULL, "watchdog");
+	if (IS_ERR(s3c2410_wdtclk))
+		printk(KERN_WARNING "%s: warning: cannot get watchdog clock\n", __func__);
+	return 0;
+}
+arch_initcall(s3c_wdt_reset_init);
+
 /* enable and disable calls for use with the clk struct */

 static int clk_null_enable(struct clk *clk, int enable)
+8
arch/arm/plat-samsung/include/plat/clock.h
···
 * published by the Free Software Foundation.
 */

+#ifndef __ASM_PLAT_CLOCK_H
+#define __ASM_PLAT_CLOCK_H __FILE__
+
 #include <linux/spinlock.h>
 #include <linux/clkdev.h>
···

 extern void s3c_pwmclk_init(void);

+/* Global watchdog clock used by arch_wtd_reset() callback */
+
+extern struct clk *s3c2410_wdtclk;
+
+#endif /* __ASM_PLAT_CLOCK_H */
···
 * published by the Free Software Foundation.
 */

+#include <plat/clock.h>
 #include <plat/regs-watchdog.h>
 #include <mach/map.h>
···

 static inline void arch_wdt_reset(void)
 {
-	struct clk *wdtclk;
-
	printk("arch_reset: attempting watchdog reset\n");

	__raw_writel(0, S3C2410_WTCON);	  /* disable watchdog, to be safe  */

-	wdtclk = clk_get(NULL, "watchdog");
-	if (!IS_ERR(wdtclk)) {
-		clk_enable(wdtclk);
-	} else
-		printk(KERN_WARNING "%s: warning: cannot get watchdog clock\n", __func__);
+	if (s3c2410_wdtclk)
+		clk_enable(s3c2410_wdtclk);

	/* put initial values into count and data */
	__raw_writel(0x80, S3C2410_WTCNT);
+14
arch/powerpc/platforms/powermac/pci.c
···
	.write = u4_pcie_write_config,
 };

+static void __devinit pmac_pci_fixup_u4_of_node(struct pci_dev *dev)
+{
+	/* Apple's device-tree "hides" the root complex virtual P2P bridge
+	 * on U4. However, Linux sees it, causing the PCI <-> OF matching
+	 * code to fail to properly match devices below it. This works around
+	 * it by setting the node of the bridge to point to the PHB node,
+	 * which is not entirely correct but fixes the matching code and
+	 * doesn't break anything else. It's also the simplest possible fix.
+	 */
+	if (dev->dev.of_node == NULL)
+		dev->dev.of_node = pcibios_get_phb_of_node(dev->bus);
+}
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_APPLE, 0x5b, pmac_pci_fixup_u4_of_node);
+
 #endif /* CONFIG_PPC64 */

 #ifdef CONFIG_PPC32
···
 * struct gmap_struct - guest address space
 * @mm: pointer to the parent mm_struct
 * @table: pointer to the page directory
+ * @asce: address space control element for gmap page table
 * @crst_list: list of all crst tables used in the guest address space
 */
 struct gmap {
	struct list_head list;
	struct mm_struct *mm;
	unsigned long *table;
+	unsigned long asce;
	struct list_head crst_list;
 };
+3
arch/s390/kernel/asm-offsets.c
···
 #include <linux/sched.h>
 #include <asm/vdso.h>
 #include <asm/sigp.h>
+#include <asm/pgtable.h>

 /*
 * Make sure that the compiler is new enough. We want a compiler that
···
	DEFINE(__LC_KERNEL_STACK, offsetof(struct _lowcore, kernel_stack));
	DEFINE(__LC_ASYNC_STACK, offsetof(struct _lowcore, async_stack));
	DEFINE(__LC_PANIC_STACK, offsetof(struct _lowcore, panic_stack));
+	DEFINE(__LC_USER_ASCE, offsetof(struct _lowcore, user_asce));
	DEFINE(__LC_INT_CLOCK, offsetof(struct _lowcore, int_clock));
	DEFINE(__LC_MCCK_CLOCK, offsetof(struct _lowcore, mcck_clock));
	DEFINE(__LC_MACHINE_FLAGS, offsetof(struct _lowcore, machine_flags));
···
	DEFINE(__LC_VDSO_PER_CPU, offsetof(struct _lowcore, vdso_per_cpu_data));
	DEFINE(__LC_GMAP, offsetof(struct _lowcore, gmap));
	DEFINE(__LC_CMF_HPP, offsetof(struct _lowcore, cmf_hpp));
+	DEFINE(__GMAP_ASCE, offsetof(struct gmap, asce));
 #endif /* CONFIG_32BIT */
	return 0;
 }
+6
arch/s390/kernel/entry64.S
···
	lg	%r14,__LC_THREAD_INFO		# pointer thread_info struct
	tm	__TI_flags+7(%r14),_TIF_EXIT_SIE
	jnz	sie_exit
+	lg	%r14,__LC_GMAP			# get gmap pointer
+	ltgr	%r14,%r14
+	jz	sie_gmap
+	lctlg	%c1,%c1,__GMAP_ASCE(%r14)	# load primary asce
+sie_gmap:
	lg	%r14,__SF_EMPTY(%r15)		# get control block pointer
	SPP	__SF_EMPTY(%r15)		# set guest id
	sie	0(%r14)
···
	SPP	__LC_CMF_HPP			# set host id
	lg	%r14,__LC_THREAD_INFO		# pointer thread_info struct
 sie_exit:
+	lctlg	%c1,%c1,__LC_USER_ASCE		# load primary asce
	ni	__TI_flags+6(%r14),255-(_TIF_SIE>>8)
	lg	%r14,__SF_EMPTY+8(%r15)		# load guest register save area
	stmg	%r0,%r13,0(%r14)		# save guest gprs 0-13
+3-2
arch/s390/kvm/kvm-s390.c
···

	switch (ext) {
	case KVM_CAP_S390_PSW:
+	case KVM_CAP_S390_GMAP:
		r = 1;
		break;
	default:
···
	vcpu->arch.guest_fpregs.fpc &= FPC_VALID_MASK;
	restore_fp_regs(&vcpu->arch.guest_fpregs);
	restore_access_regs(vcpu->arch.guest_acrs);
+	gmap_enable(vcpu->arch.gmap);
 }

 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
+	gmap_disable(vcpu->arch.gmap);
	save_fp_regs(&vcpu->arch.guest_fpregs);
	save_access_regs(vcpu->arch.guest_acrs);
	restore_fp_regs(&vcpu->arch.host_fpregs);
···
	local_irq_disable();
	kvm_guest_enter();
	local_irq_enable();
-	gmap_enable(vcpu->arch.gmap);
	VCPU_EVENT(vcpu, 6, "entering sie flags %x",
		   atomic_read(&vcpu->arch.sie_block->cpuflags));
	if (sie64a(vcpu->arch.sie_block, vcpu->arch.guest_gprs)) {
···
	}
	VCPU_EVENT(vcpu, 6, "exit sie icptcode %d",
		   vcpu->arch.sie_block->icptcode);
-	gmap_disable(vcpu->arch.gmap);
	local_irq_disable();
	kvm_guest_exit();
	local_irq_enable();
···
	case SUN4V_CHIP_NIAGARA1:
	case SUN4V_CHIP_NIAGARA2:
	case SUN4V_CHIP_NIAGARA3:
+	case SUN4V_CHIP_NIAGARA4:
+	case SUN4V_CHIP_NIAGARA5:
		rover_inc_table = niagara_iterate_method;
		break;
	default:
···
 #endif
	}

-	/* Now, this task is no longer a kernel thread. */
-	current->thread.current_ds = USER_DS;
+	/* This task is no longer a kernel thread. */
	if (current->thread.flags & SPARC_FLAG_KTHREAD) {
		current->thread.flags &= ~SPARC_FLAG_KTHREAD;
-3
arch/sparc/kernel/process_64.c
···

	/* Clear FPU register state. */
	t->fpsaved[0] = 0;
-
-	if (get_thread_current_ds() != ASI_AIUS)
-		set_fs(USER_DS);
 }

 /* It's a bit more tricky when 64-bit tasks are involved... */
+1-1
arch/sparc/kernel/setup_32.c
···
		prom_halt();
		break;
	case 'p':
-		/* Just ignore, this behavior is now the default.  */
+		prom_early_console.flags &= ~CON_BOOT;
		break;
	default:
		printk("Unknown boot switch (-%c)\n", c);
+13-5
arch/sparc/kernel/setup_64.c
···
		prom_halt();
		break;
	case 'p':
-		/* Just ignore, this behavior is now the default.  */
+		prom_early_console.flags &= ~CON_BOOT;
		break;
	case 'P':
		/* Force UltraSPARC-III P-Cache on. */
···
	else if (tlb_type == hypervisor) {
		if (sun4v_chip_type == SUN4V_CHIP_NIAGARA1 ||
		    sun4v_chip_type == SUN4V_CHIP_NIAGARA2 ||
-		    sun4v_chip_type == SUN4V_CHIP_NIAGARA3)
+		    sun4v_chip_type == SUN4V_CHIP_NIAGARA3 ||
+		    sun4v_chip_type == SUN4V_CHIP_NIAGARA4 ||
+		    sun4v_chip_type == SUN4V_CHIP_NIAGARA5)
			cap |= HWCAP_SPARC_BLKINIT;
		if (sun4v_chip_type == SUN4V_CHIP_NIAGARA2 ||
-		    sun4v_chip_type == SUN4V_CHIP_NIAGARA3)
+		    sun4v_chip_type == SUN4V_CHIP_NIAGARA3 ||
+		    sun4v_chip_type == SUN4V_CHIP_NIAGARA4 ||
+		    sun4v_chip_type == SUN4V_CHIP_NIAGARA5)
			cap |= HWCAP_SPARC_N2;
	}
···
		if (sun4v_chip_type == SUN4V_CHIP_NIAGARA1)
			cap |= AV_SPARC_ASI_BLK_INIT;
		if (sun4v_chip_type == SUN4V_CHIP_NIAGARA2 ||
-		    sun4v_chip_type == SUN4V_CHIP_NIAGARA3)
+		    sun4v_chip_type == SUN4V_CHIP_NIAGARA3 ||
+		    sun4v_chip_type == SUN4V_CHIP_NIAGARA4 ||
+		    sun4v_chip_type == SUN4V_CHIP_NIAGARA5)
			cap |= (AV_SPARC_VIS | AV_SPARC_VIS2 |
				AV_SPARC_ASI_BLK_INIT |
				AV_SPARC_POPC);
-		if (sun4v_chip_type == SUN4V_CHIP_NIAGARA3)
+		if (sun4v_chip_type == SUN4V_CHIP_NIAGARA3 ||
+		    sun4v_chip_type == SUN4V_CHIP_NIAGARA4 ||
+		    sun4v_chip_type == SUN4V_CHIP_NIAGARA5)
			cap |= (AV_SPARC_VIS3 | AV_SPARC_HPC |
				AV_SPARC_FMAF);
	}
+5
arch/sparc/mm/init_64.c
···
		for (i = 0; i < prom_trans_ents; i++)
			prom_trans[i].data &= ~0x0003fe0000000000UL;
	}
+
+	/* Force execute bit on.  */
+	for (i = 0; i < prom_trans_ents; i++)
+		prom_trans[i].data |= (tlb_type == hypervisor ?
+				       _PAGE_EXEC_4V : _PAGE_EXEC_4U);
 }

 static void __init hypervisor_tlb_lock(unsigned long vaddr,
···
		    unsigned long addr, unsigned long data);
 extern unsigned long getreg(struct task_struct *child, int regno);
 extern int putreg(struct task_struct *child, int regno, unsigned long value);
-extern int get_fpregs(struct user_i387_struct __user *buf,
-		      struct task_struct *child);
-extern int set_fpregs(struct user_i387_struct __user *buf,
-		      struct task_struct *child);

 extern int arch_copy_tls(struct task_struct *new);
 extern void clear_flushed_tls(struct task_struct *task);
+1
arch/um/include/shared/line.h
···
 struct line {
	struct tty_struct *tty;
	spinlock_t count_lock;
+	unsigned long count;
	int valid;

	char *init_str;
+1-1
arch/um/include/shared/registers.h
···
 extern int save_registers(int pid, struct uml_pt_regs *regs);
 extern int restore_registers(int pid, struct uml_pt_regs *regs);
 extern int init_registers(int pid);
-extern void get_safe_registers(unsigned long *regs);
+extern void get_safe_registers(unsigned long *regs, unsigned long *fp_regs);
 extern unsigned long get_thread_reg(int reg, jmp_buf *buf);
 extern int get_fp_registers(int pid, unsigned long *regs);
 extern int put_fp_registers(int pid, unsigned long *regs);
···
	void __user *vp = p;

	switch (request) {
-	/* read word at location addr. */
-	case PTRACE_PEEKTEXT:
-	case PTRACE_PEEKDATA:
-		ret = generic_ptrace_peekdata(child, addr, data);
-		break;
-
	/* read the word at location addr in the USER area. */
	case PTRACE_PEEKUSR:
		ret = peek_user(child, addr, data);
-		break;
-
-	/* write the word at location addr. */
-	case PTRACE_POKETEXT:
-	case PTRACE_POKEDATA:
-		ret = generic_ptrace_pokedata(child, addr, data);
		break;

	/* write the word at location addr in the USER area */
···
		break;
	}
 #endif
-#ifdef PTRACE_GETFPREGS
-	case PTRACE_GETFPREGS: /* Get the child FPU state. */
-		ret = get_fpregs(vp, child);
-		break;
-#endif
-#ifdef PTRACE_SETFPREGS
-	case PTRACE_SETFPREGS: /* Set the child FPU state. */
-		ret = set_fpregs(vp, child);
-		break;
-#endif
	case PTRACE_GET_THREAD_AREA:
		ret = ptrace_get_thread_area(child, addr, vp);
		break;
···
		ret = -EIO;
		break;
	}
-#endif
-#ifdef PTRACE_ARCH_PRCTL
-	case PTRACE_ARCH_PRCTL:
-		/* XXX Calls ptrace on the host - needs some SMP thinking */
-		ret = arch_prctl(child, data, (void __user *) addr);
-		break;
 #endif
	default:
		ret = ptrace_request(child, request, addr, data);
+8-1
arch/um/os-Linux/registers.c
···
 #include <string.h>
 #include <sys/ptrace.h>
 #include "sysdep/ptrace.h"
+#include "sysdep/ptrace_user.h"
+#include "registers.h"

 int save_registers(int pid, struct uml_pt_regs *regs)
 {
···
 /* This is set once at boot time and not changed thereafter */

 static unsigned long exec_regs[MAX_REG_NR];
+static unsigned long exec_fp_regs[FP_SIZE];

 int init_registers(int pid)
 {
···
		return -errno;

	arch_init_registers(pid);
+	get_fp_registers(pid, exec_fp_regs);
	return 0;
 }

-void get_safe_registers(unsigned long *regs)
+void get_safe_registers(unsigned long *regs, unsigned long *fp_regs)
 {
	memcpy(regs, exec_regs, sizeof(exec_regs));
+
+	if (fp_regs)
+		memcpy(fp_regs, exec_fp_regs, sizeof(exec_fp_regs));
 }
···
	if (ptrace(PTRACE_SETREGS, pid, 0, regs->gp))
		fatal_sigsegv();

+	if (put_fp_registers(pid, regs->fp))
+		fatal_sigsegv();
+
	/* Now we set local_using_sysemu to be used for one loop */
	local_using_sysemu = get_using_sysemu();
···
	regs->is_user = 1;
	if (ptrace(PTRACE_GETREGS, pid, 0, regs->gp)) {
		printk(UM_KERN_ERR "userspace - PTRACE_GETREGS failed, "
+		       "errno = %d\n", errno);
+		fatal_sigsegv();
+	}
+
+	if (get_fp_registers(pid, regs->fp)) {
+		printk(UM_KERN_ERR "userspace - get_fp_registers failed, "
		       "errno = %d\n", errno);
		fatal_sigsegv();
	}
···
 }

 static unsigned long thread_regs[MAX_REG_NR];
+static unsigned long thread_fp_regs[FP_SIZE];

 static int __init init_thread_regs(void)
 {
-	get_safe_registers(thread_regs);
+	get_safe_registers(thread_regs, thread_fp_regs);
	/* Set parent's instruction pointer to start of clone-stub */
	thread_regs[REGS_IP_INDEX] = STUB_CODE +
				(unsigned long) stub_clone_handler -
···
		err = -errno;
		printk(UM_KERN_ERR "copy_context_skas0 : PTRACE_SETREGS "
		       "failed, pid = %d, errno = %d\n", pid, -err);
		return err;
	}

+	err = put_fp_registers(pid, thread_fp_regs);
+	if (err < 0) {
+		printk(UM_KERN_ERR "copy_context_skas0 : put_fp_registers "
+		       "failed, pid = %d, err = %d\n", pid, err);
		return err;
	}
-5
arch/um/sys-i386/asm/ptrace.h
···
  */
 struct user_desc;

-extern int get_fpxregs(struct user_fxsr_struct __user *buf,
-                       struct task_struct *child);
-extern int set_fpxregs(struct user_fxsr_struct __user *buf,
-                       struct task_struct *tsk);
-
 extern int ptrace_get_thread_area(struct task_struct *child, int idx,
                                   struct user_desc __user *user_desc);
+23-5
arch/um/sys-i386/ptrace.c
···
         return put_user(tmp, (unsigned long __user *) data);
 }

-int get_fpregs(struct user_i387_struct __user *buf, struct task_struct *child)
+static int get_fpregs(struct user_i387_struct __user *buf, struct task_struct *child)
 {
         int err, n, cpu = ((struct thread_info *) child->stack)->cpu;
         struct user_i387_struct fpregs;
···
         return n;
 }

-int set_fpregs(struct user_i387_struct __user *buf, struct task_struct *child)
+static int set_fpregs(struct user_i387_struct __user *buf, struct task_struct *child)
 {
         int n, cpu = ((struct thread_info *) child->stack)->cpu;
         struct user_i387_struct fpregs;
···
                                (unsigned long *) &fpregs);
 }

-int get_fpxregs(struct user_fxsr_struct __user *buf, struct task_struct *child)
+static int get_fpxregs(struct user_fxsr_struct __user *buf, struct task_struct *child)
 {
         int err, n, cpu = ((struct thread_info *) child->stack)->cpu;
         struct user_fxsr_struct fpregs;
···
         return n;
 }

-int set_fpxregs(struct user_fxsr_struct __user *buf, struct task_struct *child)
+static int set_fpxregs(struct user_fxsr_struct __user *buf, struct task_struct *child)
 {
         int n, cpu = ((struct thread_info *) child->stack)->cpu;
         struct user_fxsr_struct fpregs;
···
 long subarch_ptrace(struct task_struct *child, long request,
                     unsigned long addr, unsigned long data)
 {
-        return -EIO;
+        int ret = -EIO;
+        void __user *datap = (void __user *) data;
+
+        switch (request) {
+        case PTRACE_GETFPREGS: /* Get the child FPU state. */
+                ret = get_fpregs(datap, child);
+                break;
+        case PTRACE_SETFPREGS: /* Set the child FPU state. */
+                ret = set_fpregs(datap, child);
+                break;
+        case PTRACE_GETFPXREGS: /* Get the child extended FPU state. */
+                ret = get_fpxregs(datap, child);
+                break;
+        case PTRACE_SETFPXREGS: /* Set the child extended FPU state. */
+                ret = set_fpxregs(datap, child);
+                break;
+        default:
+                ret = -EIO;
+        }
+        return ret;
 }
+1
arch/um/sys-i386/shared/sysdep/ptrace.h
···

 struct uml_pt_regs {
         unsigned long gp[MAX_REG_NR];
+        unsigned long fp[HOST_FPX_SIZE];
         struct faultinfo faultinfo;
         long syscall;
         int is_user;
+8-4
arch/um/sys-x86_64/ptrace.c
···
         return instr == 0x050f;
 }

-int get_fpregs(struct user_i387_struct __user *buf, struct task_struct *child)
+static int get_fpregs(struct user_i387_struct __user *buf, struct task_struct *child)
 {
         int err, n, cpu = ((struct thread_info *) child->stack)->cpu;
         long fpregs[HOST_FP_SIZE];
···
         return n;
 }

-int set_fpregs(struct user_i387_struct __user *buf, struct task_struct *child)
+static int set_fpregs(struct user_i387_struct __user *buf, struct task_struct *child)
 {
         int n, cpu = ((struct thread_info *) child->stack)->cpu;
         long fpregs[HOST_FP_SIZE];
···
         void __user *datap = (void __user *) data;

         switch (request) {
-        case PTRACE_GETFPXREGS: /* Get the child FPU state. */
+        case PTRACE_GETFPREGS: /* Get the child FPU state. */
                 ret = get_fpregs(datap, child);
                 break;
-        case PTRACE_SETFPXREGS: /* Set the child FPU state. */
+        case PTRACE_SETFPREGS: /* Set the child FPU state. */
                 ret = set_fpregs(datap, child);
+                break;
+        case PTRACE_ARCH_PRCTL:
+                /* XXX Calls ptrace on the host - needs some SMP thinking */
+                ret = arch_prctl(child, data, (void __user *) addr);
                 break;
         }
+1
arch/um/sys-x86_64/shared/sysdep/ptrace.h
···

 struct uml_pt_regs {
         unsigned long gp[MAX_REG_NR];
+        unsigned long fp[HOST_FP_SIZE];
         struct faultinfo faultinfo;
         long syscall;
         int is_user;
···

         /* xchg acts as a barrier before the setting of the high bits */
         orig.spte_low = xchg(&ssptep->spte_low, sspte.spte_low);
-        orig.spte_high = ssptep->spte_high = sspte.spte_high;
+        orig.spte_high = ssptep->spte_high;
+        ssptep->spte_high = sspte.spte_high;
         count_spte_clear(sptep, spte);

         return orig.spte;
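The hunk above replaces a chained assignment that overwrote the old high word before it was saved, so the caller received the new high bits instead of the original ones. A minimal userspace sketch of the two patterns (the struct, function names, and the plain assignment standing in for `xchg()` are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical two-word PTE, mirroring a 64-bit spte split across two
 * 32-bit words on a 32-bit host. */
struct split_pte {
        uint32_t low;
        uint32_t high;
};

/* Buggy shape: the chained assignment stores the NEW high word into both
 * the PTE and 'orig', so the old high bits are lost. */
static uint64_t swap_pte_buggy(struct split_pte *p, struct split_pte v)
{
        struct split_pte orig;

        orig.low = p->low;              /* stands in for xchg() */
        p->low = v.low;
        orig.high = p->high = v.high;   /* bug: old high word is clobbered */
        return ((uint64_t)orig.high << 32) | orig.low;
}

/* Fixed shape from the hunk: capture the old high word, then overwrite. */
static uint64_t swap_pte_fixed(struct split_pte *p, struct split_pte v)
{
        struct split_pte orig;

        orig.low = p->low;              /* stands in for xchg() */
        p->low = v.low;
        orig.high = p->high;            /* save the old value first... */
        p->high = v.high;               /* ...then set the new high bits */
        return ((uint64_t)orig.high << 32) | orig.low;
}
```

The fixed variant returns the pre-update 64-bit value while leaving the new value in place, which is what the lockless spte walker depends on.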
+9
arch/x86/platform/mrst/vrtc.c
···
 unsigned long vrtc_get_time(void)
 {
         u8 sec, min, hour, mday, mon;
+        unsigned long flags;
         u32 year;
+
+        spin_lock_irqsave(&rtc_lock, flags);

         while ((vrtc_cmos_read(RTC_FREQ_SELECT) & RTC_UIP))
                 cpu_relax();
···
         mday = vrtc_cmos_read(RTC_DAY_OF_MONTH);
         mon = vrtc_cmos_read(RTC_MONTH);
         year = vrtc_cmos_read(RTC_YEAR);
+
+        spin_unlock_irqrestore(&rtc_lock, flags);

         /* vRTC YEAR reg contains the offset to 1960 */
         year += 1960;
···
 int vrtc_set_mmss(unsigned long nowtime)
 {
         int real_sec, real_min;
+        unsigned long flags;
         int vrtc_min;

+        spin_lock_irqsave(&rtc_lock, flags);
         vrtc_min = vrtc_cmos_read(RTC_MINUTES);

         real_sec = nowtime % 60;
···
         vrtc_cmos_write(real_sec, RTC_SECONDS);
         vrtc_cmos_write(real_min, RTC_MINUTES);
+        spin_unlock_irqrestore(&rtc_lock, flags);
+
         return 0;
 }
···
         WARN_ON(xen_smp_intr_init(0));

         xen_init_lock_cpu(0);
-        xen_init_spinlocks();
 }

 static int __cpuinit xen_hvm_cpu_up(unsigned int cpu)
···
 {
         char *s[4], *p, *major_s = NULL, *minor_s = NULL;
         int ret;
-        unsigned long major, minor, temp;
+        unsigned long major, minor;
         int i = 0;
         dev_t dev;
-        u64 bps, iops;
+        u64 temp;

         memset(s, 0, sizeof(s));
···

         dev = MKDEV(major, minor);

-        ret = blkio_check_dev_num(dev);
+        ret = strict_strtoull(s[1], 10, &temp);
         if (ret)
-                return ret;
+                return -EINVAL;
+
+        /* For rule removal, do not check for device presence. */
+        if (temp) {
+                ret = blkio_check_dev_num(dev);
+                if (ret)
+                        return ret;
+        }

         newpn->dev = dev;

-        if (s[1] == NULL)
-                return -EINVAL;
-
         switch (plid) {
         case BLKIO_POLICY_PROP:
-                ret = strict_strtoul(s[1], 10, &temp);
-                if (ret || (temp < BLKIO_WEIGHT_MIN && temp > 0) ||
-                    temp > BLKIO_WEIGHT_MAX)
+                if ((temp < BLKIO_WEIGHT_MIN && temp > 0) ||
+                    temp > BLKIO_WEIGHT_MAX)
                         return -EINVAL;

                 newpn->plid = plid;
···
                 switch(fileid) {
                 case BLKIO_THROTL_read_bps_device:
                 case BLKIO_THROTL_write_bps_device:
-                        ret = strict_strtoull(s[1], 10, &bps);
-                        if (ret)
-                                return -EINVAL;
-
                         newpn->plid = plid;
                         newpn->fileid = fileid;
-                        newpn->val.bps = bps;
+                        newpn->val.bps = temp;
                         break;
                 case BLKIO_THROTL_read_iops_device:
                 case BLKIO_THROTL_write_iops_device:
-                        ret = strict_strtoull(s[1], 10, &iops);
-                        if (ret)
-                                return -EINVAL;
-
-                        if (iops > THROTL_IOPS_MAX)
+                        if (temp > THROTL_IOPS_MAX)
                                 return -EINVAL;

                         newpn->plid = plid;
                         newpn->fileid = fileid;
-                        newpn->val.iops = (unsigned int)iops;
+                        newpn->val.iops = (unsigned int)temp;
                         break;
                 }
                 break;
+15-15
block/blk-core.c
···
 EXPORT_SYMBOL(blk_put_queue);

 /*
- * Note: If a driver supplied the queue lock, it should not zap that lock
- * unexpectedly as some queue cleanup components like elevator_exit() and
- * blk_throtl_exit() need queue lock.
+ * Note: If a driver supplied the queue lock, it is disconnected
+ * by this function. The actual state of the lock doesn't matter
+ * here as the request_queue isn't accessible after this point
+ * (QUEUE_FLAG_DEAD is set) and no other requests will be queued.
  */
 void blk_cleanup_queue(struct request_queue *q)
 {
···
         queue_flag_set_unlocked(QUEUE_FLAG_DEAD, q);
         mutex_unlock(&q->sysfs_lock);

-        if (q->elevator)
-                elevator_exit(q->elevator);
-
-        blk_throtl_exit(q);
+        if (q->queue_lock != &q->__queue_lock)
+                q->queue_lock = &q->__queue_lock;

         blk_put_queue(q);
 }
···
  * true if merge was successful, otherwise false.
  */
 static bool attempt_plug_merge(struct task_struct *tsk, struct request_queue *q,
-                               struct bio *bio)
+                               struct bio *bio, unsigned int *request_count)
 {
         struct blk_plug *plug;
         struct request *rq;
···
         plug = tsk->plug;
         if (!plug)
                 goto out;
+        *request_count = 0;

         list_for_each_entry_reverse(rq, &plug->list, queuelist) {
                 int el_ret;
+
+                (*request_count)++;

                 if (rq->q != q)
                         continue;
···
         struct blk_plug *plug;
         int el_ret, rw_flags, where = ELEVATOR_INSERT_SORT;
         struct request *req;
+        unsigned int request_count = 0;

         /*
          * low level driver can indicate that it wants pages above a
···
          * Check if we can merge with the plugged list before grabbing
          * any locks.
          */
-        if (attempt_plug_merge(current, q, bio))
+        if (attempt_plug_merge(current, q, bio, &request_count))
                 goto out;

         spin_lock_irq(q->queue_lock);
···
                         if (__rq->q != q)
                                 plug->should_sort = 1;
                 }
-                list_add_tail(&req->queuelist, &plug->list);
-                plug->count++;
-                drive_stat_acct(req, 1);
-                if (plug->count >= BLK_MAX_REQUEST_COUNT)
+                if (request_count >= BLK_MAX_REQUEST_COUNT)
                         blk_flush_plug_list(plug, false);
+                list_add_tail(&req->queuelist, &plug->list);
+                drive_stat_acct(req, 1);
         } else {
                 spin_lock_irq(q->queue_lock);
                 add_acct_request(q, req, where);
···
         INIT_LIST_HEAD(&plug->list);
         INIT_LIST_HEAD(&plug->cb_list);
         plug->should_sort = 0;
-        plug->count = 0;

         /*
          * If this is a nested plug, don't actually assign it. It will be
···
                 return;

         list_splice_init(&plug->list, &list);
-        plug->count = 0;

         if (plug->should_sort) {
                 list_sort(NULL, &list, plug_rq_cmp);
+1-1
block/blk-softirq.c
···
         /*
          * Select completion CPU
          */
-        if (test_bit(QUEUE_FLAG_SAME_COMP, &q->queue_flags) && req->cpu != -1) {
+        if (req->cpu != -1) {
                 ccpu = req->cpu;
                 if (!test_bit(QUEUE_FLAG_SAME_FORCE, &q->queue_flags)) {
                         ccpu = blk_cpu_to_group(ccpu);
+11-4
block/blk-sysfs.c
···

         ret = queue_var_store(&val, page, count);
         spin_lock_irq(q->queue_lock);
-        if (val) {
+        if (val == 2) {
                 queue_flag_set(QUEUE_FLAG_SAME_COMP, q);
-                if (val == 2)
-                        queue_flag_set(QUEUE_FLAG_SAME_FORCE, q);
-        } else {
+                queue_flag_set(QUEUE_FLAG_SAME_FORCE, q);
+        } else if (val == 1) {
+                queue_flag_set(QUEUE_FLAG_SAME_COMP, q);
+                queue_flag_clear(QUEUE_FLAG_SAME_FORCE, q);
+        } else if (val == 0) {
                 queue_flag_clear(QUEUE_FLAG_SAME_COMP, q);
                 queue_flag_clear(QUEUE_FLAG_SAME_FORCE, q);
         }
···
         struct request_list *rl = &q->rq;

         blk_sync_queue(q);
+
+        if (q->elevator)
+                elevator_exit(q->elevator);
+
+        blk_throtl_exit(q);

         if (rl->rq_pool)
                 mempool_destroy(rl->rq_pool);
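The rq_affinity store above gives the three sysfs values distinct meanings: 2 forces completion on the submitting CPU, 1 keeps group affinity only, 0 disables both, and any other value now leaves the flags untouched. A small userspace sketch of that flag logic (the bit values are illustrative stand-ins, not the kernel's queue-flag numbers):

```c
#include <assert.h>

/* Illustrative stand-ins for QUEUE_FLAG_SAME_COMP / QUEUE_FLAG_SAME_FORCE. */
#define SAME_COMP       (1u << 0)
#define SAME_FORCE      (1u << 1)

/* Mirror of the fixed store logic: 2 -> force same-CPU completion,
 * 1 -> group affinity only, 0 -> both off, anything else -> unchanged. */
static unsigned int rq_affinity_apply(unsigned int flags, unsigned long val)
{
        if (val == 2) {
                flags |= SAME_COMP;
                flags |= SAME_FORCE;
        } else if (val == 1) {
                flags |= SAME_COMP;
                flags &= ~SAME_FORCE;
        } else if (val == 0) {
                flags &= ~SAME_COMP;
                flags &= ~SAME_FORCE;
        }
        return flags;
}
```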
+10-10
block/cfq-iosched.c
···
         unsigned long slice_end;
         long slice_resid;

-        /* pending metadata requests */
-        int meta_pending;
+        /* pending priority requests */
+        int prio_pending;
         /* number of requests that are on the dispatch list or inside driver */
         int dispatched;
···
         if (rq_is_sync(rq1) != rq_is_sync(rq2))
                 return rq_is_sync(rq1) ? rq1 : rq2;

-        if ((rq1->cmd_flags ^ rq2->cmd_flags) & REQ_META)
-                return rq1->cmd_flags & REQ_META ? rq1 : rq2;
+        if ((rq1->cmd_flags ^ rq2->cmd_flags) & REQ_PRIO)
+                return rq1->cmd_flags & REQ_PRIO ? rq1 : rq2;

         s1 = blk_rq_pos(rq1);
         s2 = blk_rq_pos(rq2);
···
         cfqq->cfqd->rq_queued--;
         cfq_blkiocg_update_io_remove_stats(&(RQ_CFQG(rq))->blkg,
                                         rq_data_dir(rq), rq_is_sync(rq));
-        if (rq->cmd_flags & REQ_META) {
-                WARN_ON(!cfqq->meta_pending);
-                cfqq->meta_pending--;
+        if (rq->cmd_flags & REQ_PRIO) {
+                WARN_ON(!cfqq->prio_pending);
+                cfqq->prio_pending--;
         }
 }
···
          * So both queues are sync. Let the new request get disk time if
          * it's a metadata request and the current queue is doing regular IO.
          */
-        if ((rq->cmd_flags & REQ_META) && !cfqq->meta_pending)
+        if ((rq->cmd_flags & REQ_PRIO) && !cfqq->prio_pending)
                 return true;

         /*
···
         struct cfq_io_context *cic = RQ_CIC(rq);

         cfqd->rq_queued++;
-        if (rq->cmd_flags & REQ_META)
-                cfqq->meta_pending++;
+        if (rq->cmd_flags & REQ_PRIO)
+                cfqq->prio_pending++;

         cfq_update_io_thinktime(cfqd, cfqq, cic);
         cfq_update_io_seektime(cfqd, cfqq, rq);
+1-1
drivers/acpi/acpica/acconfig.h
···

 /* Maximum sleep allowed via Sleep() operator */

-#define ACPI_MAX_SLEEP                  20000   /* Two seconds */
+#define ACPI_MAX_SLEEP                  2000    /* Two seconds */

 /******************************************************************************
  *
···

         /*
          * Enforce precondition before potential leak point.
-         * blkif_disconnect() is idempotent.
+         * xen_blkif_disconnect() is idempotent.
          */
         xen_blkif_disconnect(be->blkif);

···
                 break;

         case XenbusStateClosing:
-                xen_blkif_disconnect(be->blkif);
                 xenbus_switch_state(dev, XenbusStateClosing);
                 break;

         case XenbusStateClosed:
+                xen_blkif_disconnect(be->blkif);
                 xenbus_switch_state(dev, XenbusStateClosed);
                 if (xenbus_dev_is_online(dev))
                         break;
                 /* fall through if not online */
         case XenbusStateUnknown:
-                /* implies blkif_disconnect() via blkback_remove() */
+                /* implies xen_blkif_disconnect() via xen_blkbk_remove() */
                 device_unregister(&dev->dev);
                 break;
+6
drivers/bluetooth/btusb.c
···
         /* Apple MacBookAir3,1, MacBookAir3,2 */
         { USB_DEVICE(0x05ac, 0x821b) },

+        /* Apple MacBookAir4,1 */
+        { USB_DEVICE(0x05ac, 0x821f) },
+
         /* Apple MacBookPro8,2 */
         { USB_DEVICE(0x05ac, 0x821a) },
+
+        /* Apple MacMini5,1 */
+        { USB_DEVICE(0x05ac, 0x8281) },

         /* AVM BlueFRITZ! USB v2.0 */
         { USB_DEVICE(0x057c, 0x3800) },
···

 config TCG_ATMEL
         tristate "Atmel TPM Interface"
+        depends on PPC64 || HAS_IOPORT
         ---help---
           If you have a TPM security chip from Atmel say Yes and it
           will be accessible from within Linux. To compile this driver
···
         if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort) {
                 int saved_dpms = connector->dpms;

-                if (radeon_hpd_sense(rdev, radeon_connector->hpd.hpd) &&
-                    radeon_dp_needs_link_train(radeon_connector))
-                        drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
-                else
+                /* Only turn off the display if it's physically disconnected */
+                if (!radeon_hpd_sense(rdev, radeon_connector->hpd.hpd))
                         drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
+                else if (radeon_dp_needs_link_train(radeon_connector))
+                        drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
                 connector->dpms = saved_dpms;
         }
 }
+19-21
drivers/gpu/drm/radeon/radeon_cursor.c
···
         int xorigin = 0, yorigin = 0;
         int w = radeon_crtc->cursor_width;

-        if (x < 0)
-                xorigin = -x + 1;
-        if (y < 0)
-                yorigin = -y + 1;
-        if (xorigin >= CURSOR_WIDTH)
-                xorigin = CURSOR_WIDTH - 1;
-        if (yorigin >= CURSOR_HEIGHT)
-                yorigin = CURSOR_HEIGHT - 1;
+        if (ASIC_IS_AVIVO(rdev)) {
+                /* avivo cursor are offset into the total surface */
+                x += crtc->x;
+                y += crtc->y;
+        }
+        DRM_DEBUG("x %d y %d c->x %d c->y %d\n", x, y, crtc->x, crtc->y);
+
+        if (x < 0) {
+                xorigin = min(-x, CURSOR_WIDTH - 1);
+                x = 0;
+        }
+        if (y < 0) {
+                yorigin = min(-y, CURSOR_HEIGHT - 1);
+                y = 0;
+        }

         if (ASIC_IS_AVIVO(rdev)) {
                 int i = 0;
                 struct drm_crtc *crtc_p;
-
-                /* avivo cursor are offset into the total surface */
-                x += crtc->x;
-                y += crtc->y;
-                DRM_DEBUG("x %d y %d c->x %d c->y %d\n", x, y, crtc->x, crtc->y);

                 /* avivo cursor image can't end on 128 pixel boundary or
                  * go past the end of the frame if both crtcs are enabled
···

         radeon_lock_cursor(crtc, true);
         if (ASIC_IS_DCE4(rdev)) {
-                WREG32(EVERGREEN_CUR_POSITION + radeon_crtc->crtc_offset,
-                       ((xorigin ? 0 : x) << 16) |
-                       (yorigin ? 0 : y));
+                WREG32(EVERGREEN_CUR_POSITION + radeon_crtc->crtc_offset, (x << 16) | y);
                 WREG32(EVERGREEN_CUR_HOT_SPOT + radeon_crtc->crtc_offset, (xorigin << 16) | yorigin);
                 WREG32(EVERGREEN_CUR_SIZE + radeon_crtc->crtc_offset,
                        ((w - 1) << 16) | (radeon_crtc->cursor_height - 1));
         } else if (ASIC_IS_AVIVO(rdev)) {
-                WREG32(AVIVO_D1CUR_POSITION + radeon_crtc->crtc_offset,
-                       ((xorigin ? 0 : x) << 16) |
-                       (yorigin ? 0 : y));
+                WREG32(AVIVO_D1CUR_POSITION + radeon_crtc->crtc_offset, (x << 16) | y);
                 WREG32(AVIVO_D1CUR_HOT_SPOT + radeon_crtc->crtc_offset, (xorigin << 16) | yorigin);
                 WREG32(AVIVO_D1CUR_SIZE + radeon_crtc->crtc_offset,
                        ((w - 1) << 16) | (radeon_crtc->cursor_height - 1));
···
                        | yorigin));
                 WREG32(RADEON_CUR_HORZ_VERT_POSN + radeon_crtc->crtc_offset,
                        (RADEON_CUR_LOCK
-                        | ((xorigin ? 0 : x) << 16)
-                        | (yorigin ? 0 : y)));
+                        | (x << 16)
+                        | y));
                 /* offset is from DISP(2)_BASE_ADDRESS */
                 WREG32(RADEON_CUR_OFFSET + radeon_crtc->crtc_offset, (radeon_crtc->legacy_cursor_offset +
                                                       (yorigin * 256)));
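The cursor fix above turns a negative screen coordinate into a hot-spot offset capped at the cursor size, and clamps the position written to the hardware at 0, so the two registers stay consistent. A one-axis userspace sketch of that clamp (the bound and names are illustrative; the driver's CURSOR_WIDTH/CURSOR_HEIGHT are hardware constants):

```c
#include <assert.h>

/* Illustrative cursor bound standing in for CURSOR_WIDTH. */
#define CUR_W   64

static int min_int(int a, int b)
{
        return a < b ? a : b;
}

/* Mirror of the fixed clamp: a negative coordinate becomes a hot-spot
 * offset capped at the cursor size, and the position itself is clamped
 * to 0 so the hardware never sees a negative value. */
static void clamp_axis(int *pos, int *origin)
{
        *origin = 0;
        if (*pos < 0) {
                *origin = min_int(-*pos, CUR_W - 1);
                *pos = 0;
        }
}
```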
···
         return backend_map;
 }

-static void rv770_program_channel_remap(struct radeon_device *rdev)
-{
-        u32 tcp_chan_steer, mc_shared_chremap, tmp;
-        bool force_no_swizzle;
-
-        switch (rdev->family) {
-        case CHIP_RV770:
-        case CHIP_RV730:
-                force_no_swizzle = false;
-                break;
-        case CHIP_RV710:
-        case CHIP_RV740:
-        default:
-                force_no_swizzle = true;
-                break;
-        }
-
-        tmp = RREG32(MC_SHARED_CHMAP);
-        switch ((tmp & NOOFCHAN_MASK) >> NOOFCHAN_SHIFT) {
-        case 0:
-        case 1:
-        default:
-                /* default mapping */
-                mc_shared_chremap = 0x00fac688;
-                break;
-        case 2:
-        case 3:
-                if (force_no_swizzle)
-                        mc_shared_chremap = 0x00fac688;
-                else
-                        mc_shared_chremap = 0x00bbc298;
-                break;
-        }
-
-        if (rdev->family == CHIP_RV740)
-                tcp_chan_steer = 0x00ef2a60;
-        else
-                tcp_chan_steer = 0x00fac688;
-
-        /* RV770 CE has special chremap setup */
-        if (rdev->pdev->device == 0x944e) {
-                tcp_chan_steer = 0x00b08b08;
-                mc_shared_chremap = 0x00b08b08;
-        }
-
-        WREG32(TCP_CHAN_STEER, tcp_chan_steer);
-        WREG32(MC_SHARED_CHREMAP, mc_shared_chremap);
-}
-
 static void rv770_gpu_init(struct radeon_device *rdev)
 {
         int i, j, num_qd_pipes;
···
         WREG32(GB_TILING_CONFIG, gb_tiling_config);
         WREG32(DCP_TILING_CONFIG, (gb_tiling_config & 0xffff));
         WREG32(HDP_TILING_CONFIG, (gb_tiling_config & 0xffff));
-
-        rv770_program_channel_remap(rdev);

         WREG32(CC_RB_BACKEND_DISABLE, cc_rb_backend_disable);
         WREG32(CC_GC_SHADER_PIPE_CONFIG, cc_gc_shader_pipe_config);
+2-1
drivers/gpu/drm/ttm/ttm_bo.c
···

         if (!(new_man->flags & TTM_MEMTYPE_FLAG_FIXED)) {
                 if (bo->ttm == NULL) {
-                        ret = ttm_bo_add_ttm(bo, false);
+                        bool zero = !(old_man->flags & TTM_MEMTYPE_FLAG_FIXED);
+                        ret = ttm_bo_add_ttm(bo, zero);
                         if (ret)
                                 goto out_err;
                 }
···
 #include <linux/cpu.h>
 #include <linux/pci.h>
 #include <linux/smp.h>
+#include <linux/moduleparam.h>
 #include <asm/msr.h>
 #include <asm/processor.h>

 #define DRVNAME "coretemp"

+/*
+ * force_tjmax only matters when TjMax can't be read from the CPU itself.
+ * When set, it replaces the driver's suboptimal heuristic.
+ */
+static int force_tjmax;
+module_param_named(tjmax, force_tjmax, int, 0444);
+MODULE_PARM_DESC(tjmax, "TjMax value in degrees Celsius");
+
 #define BASE_SYSFS_ATTR_NO      2       /* Sysfs Base attr no for coretemp */
 #define NUM_REAL_CORES          16      /* Number of Real cores per cpu */
 #define CORETEMP_NAME_LENGTH    17      /* String Length of attrs */
 #define MAX_CORE_ATTRS          4       /* Maximum no of basic attrs */
-#define MAX_THRESH_ATTRS        3       /* Maximum no of Threshold attrs */
-#define TOTAL_ATTRS             (MAX_CORE_ATTRS + MAX_THRESH_ATTRS)
+#define TOTAL_ATTRS             (MAX_CORE_ATTRS + 1)
 #define MAX_CORE_DATA           (NUM_REAL_CORES + BASE_SYSFS_ATTR_NO)

 #ifdef CONFIG_SMP
···
  *              This value is passed as "id" field to rdmsr/wrmsr functions.
  * @status_reg: One of IA32_THERM_STATUS or IA32_PACKAGE_THERM_STATUS,
  *              from where the temperature values should be read.
- * @intrpt_reg: One of IA32_THERM_INTERRUPT or IA32_PACKAGE_THERM_INTERRUPT,
- *              from where the thresholds are read.
  * @attr_size:  Total number of pre-core attrs displayed in the sysfs.
  * @is_pkg_data: If this is 1, the temp_data holds pkgtemp data.
  *              Otherwise, temp_data holds coretemp data.
···
 struct temp_data {
         int temp;
         int ttarget;
-        int tmin;
         int tjmax;
         unsigned long last_updated;
         unsigned int cpu;
         u32 cpu_core_id;
         u32 status_reg;
-        u32 intrpt_reg;
         int attr_size;
         bool is_pkg_data;
         bool valid;
···
         return sprintf(buf, "%d\n", (eax >> 5) & 1);
 }

-static ssize_t show_max_alarm(struct device *dev,
-                              struct device_attribute *devattr, char *buf)
-{
-        u32 eax, edx;
-        struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
-        struct platform_data *pdata = dev_get_drvdata(dev);
-        struct temp_data *tdata = pdata->core_data[attr->index];
-
-        rdmsr_on_cpu(tdata->cpu, tdata->status_reg, &eax, &edx);
-
-        return sprintf(buf, "%d\n", !!(eax & THERM_STATUS_THRESHOLD1));
-}
-
 static ssize_t show_tjmax(struct device *dev,
                           struct device_attribute *devattr, char *buf)
 {
···
         struct platform_data *pdata = dev_get_drvdata(dev);

         return sprintf(buf, "%d\n", pdata->core_data[attr->index]->ttarget);
-}
-
-static ssize_t store_ttarget(struct device *dev,
-                             struct device_attribute *devattr,
-                             const char *buf, size_t count)
-{
-        struct platform_data *pdata = dev_get_drvdata(dev);
-        struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
-        struct temp_data *tdata = pdata->core_data[attr->index];
-        u32 eax, edx;
-        unsigned long val;
-        int diff;
-
-        if (strict_strtoul(buf, 10, &val))
-                return -EINVAL;
-
-        /*
-         * THERM_MASK_THRESHOLD1 is 7 bits wide. Values are entered in terms
-         * of milli degree celsius. Hence don't accept val > (127 * 1000)
-         */
-        if (val > tdata->tjmax || val > 127000)
-                return -EINVAL;
-
-        diff = (tdata->tjmax - val) / 1000;
-
-        mutex_lock(&tdata->update_lock);
-        rdmsr_on_cpu(tdata->cpu, tdata->intrpt_reg, &eax, &edx);
-        eax = (eax & ~THERM_MASK_THRESHOLD1) |
-                                (diff << THERM_SHIFT_THRESHOLD1);
-        wrmsr_on_cpu(tdata->cpu, tdata->intrpt_reg, eax, edx);
-        tdata->ttarget = val;
-        mutex_unlock(&tdata->update_lock);
-
-        return count;
-}
-
-static ssize_t show_tmin(struct device *dev,
-                         struct device_attribute *devattr, char *buf)
-{
-        struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
-        struct platform_data *pdata = dev_get_drvdata(dev);
-
-        return sprintf(buf, "%d\n", pdata->core_data[attr->index]->tmin);
-}
-
-static ssize_t store_tmin(struct device *dev,
-                          struct device_attribute *devattr,
-                          const char *buf, size_t count)
-{
-        struct platform_data *pdata = dev_get_drvdata(dev);
-        struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
-        struct temp_data *tdata = pdata->core_data[attr->index];
-        u32 eax, edx;
-        unsigned long val;
-        int diff;
-
-        if (strict_strtoul(buf, 10, &val))
-                return -EINVAL;
-
-        /*
-         * THERM_MASK_THRESHOLD0 is 7 bits wide. Values are entered in terms
-         * of milli degree celsius. Hence don't accept val > (127 * 1000)
-         */
-        if (val > tdata->tjmax || val > 127000)
-                return -EINVAL;
-
-        diff = (tdata->tjmax - val) / 1000;
-
-        mutex_lock(&tdata->update_lock);
-        rdmsr_on_cpu(tdata->cpu, tdata->intrpt_reg, &eax, &edx);
-        eax = (eax & ~THERM_MASK_THRESHOLD0) |
-                                (diff << THERM_SHIFT_THRESHOLD0);
-        wrmsr_on_cpu(tdata->cpu, tdata->intrpt_reg, eax, edx);
-        tdata->tmin = val;
-        mutex_unlock(&tdata->update_lock);
-
-        return count;
 }

 static ssize_t show_temp(struct device *dev,
···

 static int get_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *dev)
 {
-        /* The 100C is default for both mobile and non mobile CPUs */
         int err;
         u32 eax, edx;
         u32 val;
···
          */
         err = rdmsr_safe_on_cpu(id, MSR_IA32_TEMPERATURE_TARGET, &eax, &edx);
         if (err) {
-                dev_warn(dev, "Unable to read TjMax from CPU.\n");
+                if (c->x86_model > 0xe && c->x86_model != 0x1c)
+                        dev_warn(dev, "Unable to read TjMax from CPU %u\n", id);
         } else {
                 val = (eax >> 16) & 0xff;
                 /*
···
                  * will be used
                  */
                 if (val) {
-                        dev_info(dev, "TjMax is %d C.\n", val);
+                        dev_dbg(dev, "TjMax is %d degrees C\n", val);
                         return val * 1000;
                 }
+        }
+
+        if (force_tjmax) {
+                dev_notice(dev, "TjMax forced to %d degrees C by user\n",
+                           force_tjmax);
+                return force_tjmax * 1000;
         }

         /*
···
         rdmsr(MSR_IA32_UCODE_REV, eax, *(u32 *)edx);
 }

-static int get_pkg_tjmax(unsigned int cpu, struct device *dev)
-{
-        int err;
-        u32 eax, edx, val;
-
-        err = rdmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET, &eax, &edx);
-        if (!err) {
-                val = (eax >> 16) & 0xff;
-                if (val)
-                        return val * 1000;
-        }
-        dev_warn(dev, "Unable to read Pkg-TjMax from CPU:%u\n", cpu);
-        return 100000; /* Default TjMax: 100 degree celsius */
-}
-
 static int create_name_attr(struct platform_data *pdata, struct device *dev)
 {
         sysfs_attr_init(&pdata->name_attr.attr);
···
                              int attr_no)
 {
         int err, i;
-        static ssize_t (*rd_ptr[TOTAL_ATTRS]) (struct device *dev,
+        static ssize_t (*const rd_ptr[TOTAL_ATTRS]) (struct device *dev,
                         struct device_attribute *devattr, char *buf) = {
                         show_label, show_crit_alarm, show_temp, show_tjmax,
-                        show_max_alarm, show_ttarget, show_tmin };
-        static ssize_t (*rw_ptr[TOTAL_ATTRS]) (struct device *dev,
-                        struct device_attribute *devattr, const char *buf,
-                        size_t count) = { NULL, NULL, NULL, NULL, NULL,
-                                        store_ttarget, store_tmin };
-        static const char *names[TOTAL_ATTRS] = {
+                        show_ttarget };
+        static const char *const names[TOTAL_ATTRS] = {
                                         "temp%d_label", "temp%d_crit_alarm",
                                         "temp%d_input", "temp%d_crit",
-                                        "temp%d_max_alarm", "temp%d_max",
-                                        "temp%d_max_hyst" };
+                                        "temp%d_max" };

         for (i = 0; i < tdata->attr_size; i++) {
                 snprintf(tdata->attr_name[i], CORETEMP_NAME_LENGTH, names[i],
···
                 sysfs_attr_init(&tdata->sd_attrs[i].dev_attr.attr);
                 tdata->sd_attrs[i].dev_attr.attr.name = tdata->attr_name[i];
                 tdata->sd_attrs[i].dev_attr.attr.mode = S_IRUGO;
-                if (rw_ptr[i]) {
-                        tdata->sd_attrs[i].dev_attr.attr.mode |= S_IWUSR;
-                        tdata->sd_attrs[i].dev_attr.store = rw_ptr[i];
-                }
                 tdata->sd_attrs[i].dev_attr.show = rd_ptr[i];
                 tdata->sd_attrs[i].index = attr_no;
                 err = device_create_file(dev, &tdata->sd_attrs[i].dev_attr);
···
 }

-static int __devinit chk_ucode_version(struct platform_device *pdev)
+static int __cpuinit chk_ucode_version(unsigned int cpu)
 {
-        struct cpuinfo_x86 *c = &cpu_data(pdev->id);
+        struct cpuinfo_x86 *c = &cpu_data(cpu);
         int err;
         u32 edx;
···
          */
         if (c->x86_model == 0xe && c->x86_mask < 0xc) {
                 /* check for microcode update */
-                err = smp_call_function_single(pdev->id, get_ucode_rev_on_cpu,
+                err = smp_call_function_single(cpu, get_ucode_rev_on_cpu,
                                                &edx, 1);
                 if (err) {
-                        dev_err(&pdev->dev,
-                                "Cannot determine microcode revision of "
-                                "CPU#%u (%d)!\n", pdev->id, err);
+                        pr_err("Cannot determine microcode revision of "
+                               "CPU#%u (%d)!\n", cpu, err);
                         return -ENODEV;
                 } else if (edx < 0x39) {
-                        dev_err(&pdev->dev,
-                                "Errata AE18 not fixed, update BIOS or "
-                                "microcode of the CPU!\n");
+                        pr_err("Errata AE18 not fixed, update BIOS or "
+                               "microcode of the CPU!\n");
                         return -ENODEV;
                 }
         }
···

         tdata->status_reg = pkg_flag ? MSR_IA32_PACKAGE_THERM_STATUS :
                                                         MSR_IA32_THERM_STATUS;
-        tdata->intrpt_reg = pkg_flag ? MSR_IA32_PACKAGE_THERM_INTERRUPT :
-                                                MSR_IA32_THERM_INTERRUPT;
         tdata->is_pkg_data = pkg_flag;
         tdata->cpu = cpu;
         tdata->cpu_core_id = TO_CORE_ID(cpu);
···
         return tdata;
 }

-static int create_core_data(struct platform_data *pdata,
-                            struct platform_device *pdev,
+static int create_core_data(struct platform_device *pdev,
                             unsigned int cpu, int pkg_flag)
 {
         struct temp_data *tdata;
+        struct platform_data *pdata = platform_get_drvdata(pdev);
         struct cpuinfo_x86 *c = &cpu_data(cpu);
         u32 eax, edx;
         int err, attr_no;
···
                 goto exit_free;

         /* We can access status register. Get Critical Temperature */
-        if (pkg_flag)
-                tdata->tjmax = get_pkg_tjmax(pdev->id, &pdev->dev);
-        else
-                tdata->tjmax = get_tjmax(c, cpu, &pdev->dev);
+        tdata->tjmax = get_tjmax(c, cpu, &pdev->dev);

         /*
-         * Test if we can access the intrpt register. If so, increase the
-         * 'size' enough to have ttarget/tmin/max_alarm interfaces.
-         * Initialize ttarget with bits 16:22 of MSR_IA32_THERM_INTERRUPT
+         * Read the still undocumented bits 8:15 of IA32_TEMPERATURE_TARGET.
+         * The target temperature is available on older CPUs but not in this
+         * register. Atoms don't have the register at all.
          */
-        err = rdmsr_safe_on_cpu(cpu, tdata->intrpt_reg, &eax, &edx);
-        if (!err) {
-                tdata->attr_size += MAX_THRESH_ATTRS;
-                tdata->ttarget = tdata->tjmax - ((eax >> 16) & 0x7f) * 1000;
+        if (c->x86_model > 0xe && c->x86_model != 0x1c) {
+                err = rdmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET,
+                                        &eax, &edx);
+                if (!err) {
+                        tdata->ttarget
+                          = tdata->tjmax - ((eax >> 8) & 0xff) * 1000;
+                        tdata->attr_size++;
+                }
         }

         pdata->core_data[attr_no] = tdata;
···

         return 0;
 exit_free:
+        pdata->core_data[attr_no] = NULL;
         kfree(tdata);
         return err;
 }

 static void coretemp_add_core(unsigned int cpu, int pkg_flag)
 {
-        struct platform_data *pdata;
         struct platform_device *pdev = coretemp_get_pdev(cpu);
         int err;

         if (!pdev)
                 return;

-        pdata = platform_get_drvdata(pdev);
-
-        err = create_core_data(pdata, pdev, cpu, pkg_flag);
+        err = create_core_data(pdev, cpu, pkg_flag);
         if (err)
                 dev_err(&pdev->dev, "Adding Core %u failed\n", cpu);
 }
···
         struct platform_data *pdata;
         int err;

-        /* Check the microcode version of the CPU */
-        err = chk_ucode_version(pdev);
-        if (err)
-                return err;
-
         /* Initialize the per-package data structures */
         pdata = kzalloc(sizeof(struct platform_data), GFP_KERNEL);
         if (!pdata)
···
         if (err)
                 goto exit_free;

-        pdata->phys_proc_id = TO_PHYS_ID(pdev->id);
+        pdata->phys_proc_id = pdev->id;
         platform_set_drvdata(pdev, pdata);

         pdata->hwmon_dev = hwmon_device_register(&pdev->dev);
···

         mutex_lock(&pdev_list_mutex);

-        pdev = platform_device_alloc(DRVNAME, cpu);
+        pdev = platform_device_alloc(DRVNAME, TO_PHYS_ID(cpu));
         if (!pdev) {
                 err = -ENOMEM;
                 pr_err("Device allocation failed\n");
···
         }

         pdev_entry->pdev = pdev;
-        pdev_entry->phys_proc_id = TO_PHYS_ID(cpu);
+        pdev_entry->phys_proc_id = pdev->id;

         list_add_tail(&pdev_entry->list, &pdev_list);
         mutex_unlock(&pdev_list_mutex);
···
                 return;

         if (!pdev) {
+                /* Check the microcode version of the CPU */
+                if (chk_ucode_version(cpu))
+                        return;
+
                 /*
                  * Alright, we have DTS support.
                  * We are bringing the _first_ core in this pkg
--- a/drivers/hwmon/ds620.c
+++ b/drivers/hwmon/ds620.c
@@ -72,7 +72,7 @@
 	char valid;		/* !=0 if following fields are valid */
 	unsigned long last_updated;	/* In jiffies */
 
-	u16 temp[3];		/* Register values, word */
+	s16 temp[3];		/* Register values, word */
 };
 
 /*
--- a/drivers/media/video/uvc/uvc_video.c
+++ b/drivers/media/video/uvc/uvc_video.c
@@ -1104,9 +1104,17 @@
  * buffers, making sure userspace applications are notified of the problem
  * instead of waiting forever.
  */
-int uvc_video_resume(struct uvc_streaming *stream)
+int uvc_video_resume(struct uvc_streaming *stream, int reset)
 {
 	int ret;
+
+	/* If the bus has been reset on resume, set the alternate setting to 0.
+	 * This should be the default value, but some devices crash or otherwise
+	 * misbehave if they don't receive a SET_INTERFACE request before any
+	 * other video control request.
+	 */
+	if (reset)
+		usb_set_interface(stream->dev->udev, stream->intfnum, 0);
 
 	stream->frozen = 0;
 
--- a/drivers/media/video/uvc/uvcvideo.h
+++ b/drivers/media/video/uvc/uvcvideo.h
@@ -638,7 +638,7 @@
 /* Video */
 extern int uvc_video_init(struct uvc_streaming *stream);
 extern int uvc_video_suspend(struct uvc_streaming *stream);
-extern int uvc_video_resume(struct uvc_streaming *stream);
+extern int uvc_video_resume(struct uvc_streaming *stream, int reset);
 extern int uvc_video_enable(struct uvc_streaming *stream, int enable);
 extern int uvc_probe_video(struct uvc_streaming *stream,
 		struct uvc_streaming_control *probe);
--- a/drivers/media/video/v4l2-dev.c
+++ b/drivers/media/video/v4l2-dev.c
@@ -173,6 +173,17 @@
 	media_device_unregister_entity(&vdev->entity);
 #endif
 
+	/* Do not call v4l2_device_put if there is no release callback set.
+	 * Drivers that have no v4l2_device release callback might free the
+	 * v4l2_dev instance in the video_device release callback below, so we
+	 * must perform this check here.
+	 *
+	 * TODO: In the long run all drivers that use v4l2_device should use the
+	 * v4l2_device release callback. This check will then be unnecessary.
+	 */
+	if (v4l2_dev->release == NULL)
+		v4l2_dev = NULL;
+
 	/* Release video_device and perform other
 	   cleanups as needed. */
 	vdev->release(vdev);
--- a/drivers/media/video/v4l2-device.c
+++ b/drivers/media/video/v4l2-device.c
@@ -38,6 +38,7 @@
 	mutex_init(&v4l2_dev->ioctl_lock);
 	v4l2_prio_init(&v4l2_dev->prio);
 	kref_init(&v4l2_dev->ref);
+	get_device(dev);
 	v4l2_dev->dev = dev;
 	if (dev == NULL) {
 		/* If dev == NULL, then name must be filled in by the caller */
@@ -94,5 +93,6 @@
 	if (dev_get_drvdata(v4l2_dev->dev) == v4l2_dev)
 		dev_set_drvdata(v4l2_dev->dev, NULL);
+	put_device(v4l2_dev->dev);
 	v4l2_dev->dev = NULL;
 }
 EXPORT_SYMBOL_GPL(v4l2_device_disconnect);
--- a/drivers/mfd/omap-usb-host.c
+++ b/drivers/mfd/omap-usb-host.c
@@ -17,6 +17,7 @@
  * along with this program. If not, see <http://www.gnu.org/licenses/>.
  */
 #include <linux/kernel.h>
+#include <linux/module.h>
 #include <linux/types.h>
 #include <linux/slab.h>
 #include <linux/delay.h>
@@ -677,7 +676,6 @@
 				| OMAP_TLL_CHANNEL_CONF_ULPINOBITSTUFF
 				| OMAP_TLL_CHANNEL_CONF_ULPIDDRMODE);
 
-			reg |= (1 << (i + 1));
 		} else
 			continue;
 
--- a/drivers/mfd/tps65910-irq.c
+++ b/drivers/mfd/tps65910-irq.c
@@ -178,8 +178,10 @@
 	switch (tps65910_chip_id(tps65910)) {
 	case TPS65910:
 		tps65910->irq_num = TPS65910_NUM_IRQ;
+		break;
 	case TPS65911:
 		tps65910->irq_num = TPS65911_NUM_IRQ;
+		break;
 	}
 
 	/* Register with genirq */
--- a/drivers/mfd/twl4030-madc.c
+++ b/drivers/mfd/twl4030-madc.c
@@ -510,8 +510,9 @@
 	u8 ch_msb, ch_lsb;
 	int ret;
 
-	if (!req)
+	if (!req || !twl4030_madc)
 		return -EINVAL;
+
 	mutex_lock(&twl4030_madc->lock);
 	if (req->method < TWL4030_MADC_RT || req->method > TWL4030_MADC_SW2) {
 		ret = -EINVAL;
@@ -706,6 +705,8 @@
 	madc = kzalloc(sizeof(*madc), GFP_KERNEL);
 	if (!madc)
 		return -ENOMEM;
+
+	madc->dev = &pdev->dev;
 
 	/*
 	 * Phoenix provides 2 interrupt lines. The first one is connected to
--- a/drivers/mfd/wm8350-gpio.c
+++ b/drivers/mfd/wm8350-gpio.c
@@ -37,7 +37,7 @@
 	return ret;
 }
 
-static int gpio_set_debounce(struct wm8350 *wm8350, int gpio, int db)
+static int wm8350_gpio_set_debounce(struct wm8350 *wm8350, int gpio, int db)
 {
 	if (db == WM8350_GPIO_DEBOUNCE_ON)
 		return wm8350_set_bits(wm8350, WM8350_GPIO_DEBOUNCE,
@@ -210,7 +210,7 @@
 		goto err;
 	if (gpio_set_polarity(wm8350, gpio, pol))
 		goto err;
-	if (gpio_set_debounce(wm8350, gpio, debounce))
+	if (wm8350_gpio_set_debounce(wm8350, gpio, debounce))
 		goto err;
 	if (gpio_set_dir(wm8350, gpio, dir))
 		goto err;
--- a/drivers/misc/lis3lv02d/lis3lv02d.c
+++ b/drivers/misc/lis3lv02d/lis3lv02d.c
@@ -375,12 +375,14 @@
 	 * both have been read. So the value read will always be correct.
 	 * Set BOOT bit to refresh factory tuning values.
 	 */
-	lis3->read(lis3, CTRL_REG2, &reg);
-	if (lis3->whoami == WAI_12B)
-		reg |= CTRL2_BDU | CTRL2_BOOT;
-	else
-		reg |= CTRL2_BOOT_8B;
-	lis3->write(lis3, CTRL_REG2, reg);
+	if (lis3->pdata) {
+		lis3->read(lis3, CTRL_REG2, &reg);
+		if (lis3->whoami == WAI_12B)
+			reg |= CTRL2_BDU | CTRL2_BOOT;
+		else
+			reg |= CTRL2_BOOT_8B;
+		lis3->write(lis3, CTRL_REG2, reg);
+	}
 
 	/* LIS3 power on delay is quite long */
 	msleep(lis3->pwron_delay / lis3lv02d_get_odr());
--- a/drivers/misc/pti.c
+++ b/drivers/misc/pti.c
@@ -165,6 +165,11 @@
 static void pti_control_frame_built_and_sent(struct pti_masterchannel *mc,
 					const char *thread_name)
 {
+	/*
+	 * Since we access the comm member in current's task_struct, we only
+	 * need to be as large as what 'comm' in that structure is.
+	 */
+	char comm[TASK_COMM_LEN];
 	struct pti_masterchannel mccontrol = {.master = CONTROL_ID,
 					      .channel = 0};
 	const char *thread_name_p;
@@ -177,13 +172,6 @@
 	u8 control_frame[CONTROL_FRAME_LEN];
 
 	if (!thread_name) {
-		/*
-		 * Since we access the comm member in current's task_struct,
-		 * we only need to be as large as what 'comm' in that
-		 * structure is.
-		 */
-		char comm[TASK_COMM_LEN];
-
 		if (!in_interrupt())
 			get_task_comm(comm, current);
 		else
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -926,6 +926,9 @@
 	/*
 	 * Reliable writes are used to implement Forced Unit Access and
 	 * REQ_META accesses, and are supported only on MMCs.
+	 *
+	 * XXX: this really needs a good explanation of why REQ_META
+	 * is treated special.
 	 */
 	bool do_rel_wr = ((req->cmd_flags & REQ_FUA) ||
 			  (req->cmd_flags & REQ_META)) &&
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -2535,7 +2535,7 @@
 source "drivers/net/stmmac/Kconfig"
 
 config PCH_GBE
-	tristate "Intel EG20T PCH / OKI SEMICONDUCTOR ML7223 IOH GbE"
+	tristate "Intel EG20T PCH/OKI SEMICONDUCTOR IOH(ML7223/ML7831) GbE"
 	depends on PCI
 	select MII
 	---help---
@@ -2548,10 +2548,11 @@
 	  This driver enables Gigabit Ethernet function.
 
 	  This driver also can be used for OKI SEMICONDUCTOR IOH(Input/
-	  Output Hub), ML7223.
-	  ML7223 IOH is for MP(Media Phone) use.
-	  ML7223 is companion chip for Intel Atom E6xx series.
-	  ML7223 is completely compatible for Intel EG20T PCH.
+	  Output Hub), ML7223/ML7831.
+	  ML7223 IOH is for MP(Media Phone) use. ML7831 IOH is for general
+	  purpose use.
+	  ML7223/ML7831 is companion chip for Intel Atom E6xx series.
+	  ML7223/ML7831 is completely compatible for Intel EG20T PCH.
 
 config FTGMAC100
 	tristate "Faraday FTGMAC100 Gigabit Ethernet support"
--- a/drivers/net/bnx2x/bnx2x.h
+++ b/drivers/net/bnx2x/bnx2x.h
@@ -315,6 +315,14 @@
 	u32 raw;
 };
 
+/* dropless fc FW/HW related params */
+#define BRB_SIZE(bp)		(CHIP_IS_E3(bp) ? 1024 : 512)
+#define MAX_AGG_QS(bp)		(CHIP_IS_E1(bp) ? \
+					ETH_MAX_AGGREGATION_QUEUES_E1 :\
+					ETH_MAX_AGGREGATION_QUEUES_E1H_E2)
+#define FW_DROP_LEVEL(bp)	(3 + MAX_SPQ_PENDING + MAX_AGG_QS(bp))
+#define FW_PREFETCH_CNT		16
+#define DROPLESS_FC_HEADROOM	100
 
 /* MC hsi */
 #define BCM_PAGE_SHIFT		12
@@ -339,13 +331,33 @@
 /* SGE ring related macros */
 #define NUM_RX_SGE_PAGES	2
 #define RX_SGE_CNT		(BCM_PAGE_SIZE / sizeof(struct eth_rx_sge))
-#define MAX_RX_SGE_CNT		(RX_SGE_CNT - 2)
+#define NEXT_PAGE_SGE_DESC_CNT	2
+#define MAX_RX_SGE_CNT		(RX_SGE_CNT - NEXT_PAGE_SGE_DESC_CNT)
 /* RX_SGE_CNT is promised to be a power of 2 */
 #define RX_SGE_MASK		(RX_SGE_CNT - 1)
 #define NUM_RX_SGE		(RX_SGE_CNT * NUM_RX_SGE_PAGES)
 #define MAX_RX_SGE		(NUM_RX_SGE - 1)
 #define NEXT_SGE_IDX(x)		((((x) & RX_SGE_MASK) == \
-				  (MAX_RX_SGE_CNT - 1)) ? (x) + 3 : (x) + 1)
+				  (MAX_RX_SGE_CNT - 1)) ? \
+					(x) + 1 + NEXT_PAGE_SGE_DESC_CNT : \
+					(x) + 1)
 #define RX_SGE(x)		((x) & MAX_RX_SGE)
+
+/*
+ * Number of required SGEs is the sum of two:
+ * 1. Number of possible opened aggregations (next packet for
+ *    these aggregations will probably consume SGE immidiatelly)
+ * 2. Rest of BRB blocks divided by 2 (block will consume new SGE only
+ *    after placement on BD for new TPA aggregation)
+ *
+ * Takes into account NEXT_PAGE_SGE_DESC_CNT "next" elements on each page
+ */
+#define NUM_SGE_REQ		(MAX_AGG_QS(bp) + \
+					(BRB_SIZE(bp) - MAX_AGG_QS(bp)) / 2)
+#define NUM_SGE_PG_REQ		((NUM_SGE_REQ + MAX_RX_SGE_CNT - 1) / \
+					MAX_RX_SGE_CNT)
+#define SGE_TH_LO(bp)		(NUM_SGE_REQ + \
+				 NUM_SGE_PG_REQ * NEXT_PAGE_SGE_DESC_CNT)
+#define SGE_TH_HI(bp)		(SGE_TH_LO(bp) + DROPLESS_FC_HEADROOM)
 
 /* Manipulate a bit vector defined as an array of u64 */
@@ -579,24 +551,43 @@
 
 #define NUM_TX_RINGS		16
 #define TX_DESC_CNT		(BCM_PAGE_SIZE / sizeof(union eth_tx_bd_types))
-#define MAX_TX_DESC_CNT		(TX_DESC_CNT - 1)
+#define NEXT_PAGE_TX_DESC_CNT	1
+#define MAX_TX_DESC_CNT		(TX_DESC_CNT - NEXT_PAGE_TX_DESC_CNT)
 #define NUM_TX_BD		(TX_DESC_CNT * NUM_TX_RINGS)
 #define MAX_TX_BD		(NUM_TX_BD - 1)
 #define MAX_TX_AVAIL		(MAX_TX_DESC_CNT * NUM_TX_RINGS - 2)
 #define NEXT_TX_IDX(x)		((((x) & MAX_TX_DESC_CNT) == \
-				  (MAX_TX_DESC_CNT - 1)) ? (x) + 2 : (x) + 1)
+				  (MAX_TX_DESC_CNT - 1)) ? \
+					(x) + 1 + NEXT_PAGE_TX_DESC_CNT : \
+					(x) + 1)
 #define TX_BD(x)		((x) & MAX_TX_BD)
 #define TX_BD_POFF(x)		((x) & MAX_TX_DESC_CNT)
 
 /* The RX BD ring is special, each bd is 8 bytes but the last one is 16 */
 #define NUM_RX_RINGS		8
 #define RX_DESC_CNT		(BCM_PAGE_SIZE / sizeof(struct eth_rx_bd))
-#define MAX_RX_DESC_CNT		(RX_DESC_CNT - 2)
+#define NEXT_PAGE_RX_DESC_CNT	2
+#define MAX_RX_DESC_CNT		(RX_DESC_CNT - NEXT_PAGE_RX_DESC_CNT)
 #define RX_DESC_MASK		(RX_DESC_CNT - 1)
 #define NUM_RX_BD		(RX_DESC_CNT * NUM_RX_RINGS)
 #define MAX_RX_BD		(NUM_RX_BD - 1)
 #define MAX_RX_AVAIL		(MAX_RX_DESC_CNT * NUM_RX_RINGS - 2)
-#define MIN_RX_AVAIL		128
+
+/* dropless fc calculations for BDs
+ *
+ * Number of BDs should as number of buffers in BRB:
+ * Low threshold takes into account NEXT_PAGE_RX_DESC_CNT
+ * "next" elements on each page
+ */
+#define NUM_BD_REQ		BRB_SIZE(bp)
+#define NUM_BD_PG_REQ		((NUM_BD_REQ + MAX_RX_DESC_CNT - 1) / \
+					MAX_RX_DESC_CNT)
+#define BD_TH_LO(bp)		(NUM_BD_REQ + \
+				 NUM_BD_PG_REQ * NEXT_PAGE_RX_DESC_CNT + \
+				 FW_DROP_LEVEL(bp))
+#define BD_TH_HI(bp)		(BD_TH_LO(bp) + DROPLESS_FC_HEADROOM)
+
+#define MIN_RX_AVAIL		((bp)->dropless_fc ? BD_TH_HI(bp) + 128 : 128)
 
 #define MIN_RX_SIZE_TPA_HW	(CHIP_IS_E1(bp) ? \
 					ETH_MIN_RX_CQES_WITH_TPA_E1 : \
@@ -626,7 +579,9 @@
 					MIN_RX_AVAIL))
 
 #define NEXT_RX_IDX(x)		((((x) & RX_DESC_MASK) == \
-				  (MAX_RX_DESC_CNT - 1)) ? (x) + 3 : (x) + 1)
+				  (MAX_RX_DESC_CNT - 1)) ? \
+					(x) + 1 + NEXT_PAGE_RX_DESC_CNT : \
+					(x) + 1)
 #define RX_BD(x)		((x) & MAX_RX_BD)
 
 /*
@@ -638,13 +589,30 @@
 #define CQE_BD_REL	(sizeof(union eth_rx_cqe) / sizeof(struct eth_rx_bd))
 #define NUM_RCQ_RINGS		(NUM_RX_RINGS * CQE_BD_REL)
 #define RCQ_DESC_CNT		(BCM_PAGE_SIZE / sizeof(union eth_rx_cqe))
-#define MAX_RCQ_DESC_CNT	(RCQ_DESC_CNT - 1)
+#define NEXT_PAGE_RCQ_DESC_CNT	1
+#define MAX_RCQ_DESC_CNT	(RCQ_DESC_CNT - NEXT_PAGE_RCQ_DESC_CNT)
 #define NUM_RCQ_BD		(RCQ_DESC_CNT * NUM_RCQ_RINGS)
 #define MAX_RCQ_BD		(NUM_RCQ_BD - 1)
 #define MAX_RCQ_AVAIL		(MAX_RCQ_DESC_CNT * NUM_RCQ_RINGS - 2)
 #define NEXT_RCQ_IDX(x)		((((x) & MAX_RCQ_DESC_CNT) == \
-				  (MAX_RCQ_DESC_CNT - 1)) ? (x) + 2 : (x) + 1)
+				  (MAX_RCQ_DESC_CNT - 1)) ? \
+					(x) + 1 + NEXT_PAGE_RCQ_DESC_CNT : \
+					(x) + 1)
 #define RCQ_BD(x)		((x) & MAX_RCQ_BD)
+
+/* dropless fc calculations for RCQs
+ *
+ * Number of RCQs should be as number of buffers in BRB:
+ * Low threshold takes into account NEXT_PAGE_RCQ_DESC_CNT
+ * "next" elements on each page
+ */
+#define NUM_RCQ_REQ		BRB_SIZE(bp)
+#define NUM_RCQ_PG_REQ		((NUM_BD_REQ + MAX_RCQ_DESC_CNT - 1) / \
+					MAX_RCQ_DESC_CNT)
+#define RCQ_TH_LO(bp)		(NUM_RCQ_REQ + \
+				 NUM_RCQ_PG_REQ * NEXT_PAGE_RCQ_DESC_CNT + \
+				 FW_DROP_LEVEL(bp))
+#define RCQ_TH_HI(bp)		(RCQ_TH_LO(bp) + DROPLESS_FC_HEADROOM)
 
 
 /* This is needed for determining of last_max */
@@ -751,24 +685,17 @@
 #define FP_CSB_FUNC_OFF \
 	offsetof(struct cstorm_status_block_c, func)
 
-#define HC_INDEX_TOE_RX_CQ_CONS		0 /* Formerly Ustorm TOE CQ index */
-					  /* (HC_INDEX_U_TOE_RX_CQ_CONS)  */
-#define HC_INDEX_ETH_RX_CQ_CONS		1 /* Formerly Ustorm ETH CQ index */
-					  /* (HC_INDEX_U_ETH_RX_CQ_CONS)  */
-#define HC_INDEX_ETH_RX_BD_CONS		2 /* Formerly Ustorm ETH BD index */
-					  /* (HC_INDEX_U_ETH_RX_BD_CONS)  */
+#define HC_INDEX_ETH_RX_CQ_CONS		1
 
-#define HC_INDEX_TOE_TX_CQ_CONS		4 /* Formerly Cstorm TOE CQ index */
-					  /* (HC_INDEX_C_TOE_TX_CQ_CONS)  */
-#define HC_INDEX_ETH_TX_CQ_CONS_COS0	5 /* Formerly Cstorm ETH CQ index */
-					  /* (HC_INDEX_C_ETH_TX_CQ_CONS)  */
-#define HC_INDEX_ETH_TX_CQ_CONS_COS1	6 /* Formerly Cstorm ETH CQ index */
-					  /* (HC_INDEX_C_ETH_TX_CQ_CONS)  */
-#define HC_INDEX_ETH_TX_CQ_CONS_COS2	7 /* Formerly Cstorm ETH CQ index */
-					  /* (HC_INDEX_C_ETH_TX_CQ_CONS)  */
+#define HC_INDEX_OOO_TX_CQ_CONS		4
+
+#define HC_INDEX_ETH_TX_CQ_CONS_COS0	5
+
+#define HC_INDEX_ETH_TX_CQ_CONS_COS1	6
+
+#define HC_INDEX_ETH_TX_CQ_CONS_COS2	7
 
 #define HC_INDEX_ETH_FIRST_TX_CQ_CONS	HC_INDEX_ETH_TX_CQ_CONS_COS0
-
 
 #define BNX2X_RX_SB_INDEX \
 	(&fp->sb_index_values[HC_INDEX_ETH_RX_CQ_CONS])
@@ -1159,11 +1100,12 @@
 #define BP_PORT(bp)			(bp->pfid & 1)
 #define BP_FUNC(bp)			(bp->pfid)
 #define BP_ABS_FUNC(bp)			(bp->pf_num)
-#define BP_E1HVN(bp)			(bp->pfid >> 1)
-#define BP_VN(bp)			(BP_E1HVN(bp)) /*remove when approved*/
-#define BP_L_ID(bp)			(BP_E1HVN(bp) << 2)
-#define BP_FW_MB_IDX(bp)		(BP_PORT(bp) +\
-	  BP_VN(bp) * ((CHIP_IS_E1x(bp) || (CHIP_MODE_IS_4_PORT(bp))) ? 2 : 1))
+#define BP_VN(bp)			((bp)->pfid >> 1)
+#define BP_MAX_VN_NUM(bp)		(CHIP_MODE_IS_4_PORT(bp) ? 2 : 4)
+#define BP_L_ID(bp)			(BP_VN(bp) << 2)
+#define BP_FW_MB_IDX_VN(bp, vn)		(BP_PORT(bp) +\
+	  (vn) * ((CHIP_IS_E1x(bp) || (CHIP_MODE_IS_4_PORT(bp))) ? 2 : 1))
+#define BP_FW_MB_IDX(bp)		BP_FW_MB_IDX_VN(bp, BP_VN(bp))
 
 	struct net_device	*dev;
 	struct pci_dev		*pdev;
@@ -1827,7 +1767,7 @@
 
 #define MAX_DMAE_C_PER_PORT		8
 #define INIT_DMAE_C(bp)			(BP_PORT(bp) * MAX_DMAE_C_PER_PORT + \
-					BP_E1HVN(bp))
+					BP_VN(bp))
 #define PMF_DMAE_C(bp)			(BP_PORT(bp) * MAX_DMAE_C_PER_PORT + \
 					E1HVN_MAX)
 
@@ -1853,7 +1793,7 @@
 
 /* must be used on a CID before placing it on a HW ring */
 #define HW_CID(bp, x)			((BP_PORT(bp) << 23) | \
-					(BP_E1HVN(bp) << BNX2X_SWCID_SHIFT) | \
+					(BP_VN(bp) << BNX2X_SWCID_SHIFT) | \
 					(x))
 
 #define SP_DESC_CNT		(BCM_PAGE_SIZE / sizeof(struct eth_spe))
--- a/drivers/net/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/bnx2x/bnx2x_cmn.c
@@ -987,8 +987,6 @@
 void bnx2x_init_rx_rings(struct bnx2x *bp)
 {
 	int func = BP_FUNC(bp);
-	int max_agg_queues = CHIP_IS_E1(bp) ? ETH_MAX_AGGREGATION_QUEUES_E1 :
-					      ETH_MAX_AGGREGATION_QUEUES_E1H_E2;
 	u16 ring_prod;
 	int i, j;
 
@@ -999,7 +1001,7 @@
 
 		if (!fp->disable_tpa) {
 			/* Fill the per-aggregtion pool */
-			for (i = 0; i < max_agg_queues; i++) {
+			for (i = 0; i < MAX_AGG_QS(bp); i++) {
 				struct bnx2x_agg_info *tpa_info =
 					&fp->tpa_info[i];
 				struct sw_rx_bd *first_buf =
@@ -1039,7 +1041,7 @@
 					bnx2x_free_rx_sge_range(bp, fp,
 								ring_prod);
 					bnx2x_free_tpa_pool(bp, fp,
-							    max_agg_queues);
+							    MAX_AGG_QS(bp));
 					fp->disable_tpa = 1;
 					ring_prod = 0;
 					break;
@@ -1135,8 +1137,6 @@
 		bnx2x_free_rx_bds(fp);
 
 		if (!fp->disable_tpa)
-			bnx2x_free_tpa_pool(bp, fp, CHIP_IS_E1(bp) ?
-					    ETH_MAX_AGGREGATION_QUEUES_E1 :
-					    ETH_MAX_AGGREGATION_QUEUES_E1H_E2);
+			bnx2x_free_tpa_pool(bp, fp, MAX_AGG_QS(bp));
 	}
 }
@@ -3091,15 +3095,20 @@
 	struct bnx2x_fastpath *fp = &bp->fp[index];
 	int ring_size = 0;
 	u8 cos;
+	int rx_ring_size = 0;
 
 	/* if rx_ring_size specified - use it */
-	int rx_ring_size = bp->rx_ring_size ? bp->rx_ring_size :
-			   MAX_RX_AVAIL/BNX2X_NUM_RX_QUEUES(bp);
+	if (!bp->rx_ring_size) {
 
-	/* allocate at least number of buffers required by FW */
-	rx_ring_size = max_t(int, bp->disable_tpa ? MIN_RX_SIZE_NONTPA :
-						    MIN_RX_SIZE_TPA,
-				  rx_ring_size);
+		rx_ring_size = MAX_RX_AVAIL/BNX2X_NUM_RX_QUEUES(bp);
+
+		/* allocate at least number of buffers required by FW */
+		rx_ring_size = max_t(int, bp->disable_tpa ? MIN_RX_SIZE_NONTPA :
+				     MIN_RX_SIZE_TPA, rx_ring_size);
+
+		bp->rx_ring_size = rx_ring_size;
+	} else
+		rx_ring_size = bp->rx_ring_size;
 
 	/* Common */
 	sb = &bnx2x_fp(bp, index, status_blk);
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -777,6 +777,9 @@
 
 	read_lock(&bond->lock);
 
+	if (bond->kill_timers)
+		goto out;
+
 	/* rejoin all groups on bond device */
 	__bond_resend_igmp_join_requests(bond->dev);
 
@@ -793,9 +790,9 @@
 			__bond_resend_igmp_join_requests(vlan_dev);
 	}
 
-	if (--bond->igmp_retrans > 0)
+	if ((--bond->igmp_retrans > 0) && !bond->kill_timers)
 		queue_delayed_work(bond->wq, &bond->mcast_work, HZ/5);
-
+out:
 	read_unlock(&bond->lock);
 }
 
@@ -2541,7 +2538,7 @@
 	}
 
 re_arm:
-	if (bond->params.miimon)
+	if (bond->params.miimon && !bond->kill_timers)
 		queue_delayed_work(bond->wq, &bond->mii_work,
 				   msecs_to_jiffies(bond->params.miimon));
 out:
@@ -2889,7 +2886,7 @@
 	}
 
 re_arm:
-	if (bond->params.arp_interval)
+	if (bond->params.arp_interval && !bond->kill_timers)
 		queue_delayed_work(bond->wq, &bond->arp_work, delta_in_ticks);
 out:
 	read_unlock(&bond->lock);
@@ -3157,7 +3154,7 @@
 		bond_ab_arp_probe(bond);
 
 re_arm:
-	if (bond->params.arp_interval)
+	if (bond->params.arp_interval && !bond->kill_timers)
 		queue_delayed_work(bond->wq, &bond->arp_work, delta_in_ticks);
 out:
 	read_unlock(&bond->lock);
--- a/drivers/net/cxgb3/l2t.c
+++ b/drivers/net/cxgb3/l2t.c
@@ -300,13 +300,20 @@
 struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct neighbour *neigh,
 			     struct net_device *dev)
 {
-	struct l2t_entry *e;
-	struct l2t_data *d = L2DATA(cdev);
+	struct l2t_entry *e = NULL;
+	struct l2t_data *d;
+	int hash;
 	u32 addr = *(u32 *) neigh->primary_key;
 	int ifidx = neigh->dev->ifindex;
-	int hash = arp_hash(addr, ifidx, d);
 	struct port_info *p = netdev_priv(dev);
 	int smt_idx = p->port_id;
+
+	rcu_read_lock();
+	d = L2DATA(cdev);
+	if (!d)
+		goto done_rcu;
+
+	hash = arp_hash(addr, ifidx, d);
 
 	write_lock_bh(&d->lock);
 	for (e = d->l2tab[hash].first; e; e = e->next)
@@ -345,6 +338,8 @@
 	}
 done:
 	write_unlock_bh(&d->lock);
+done_rcu:
+	rcu_read_unlock();
 	return e;
 }
 
--- a/drivers/net/cxgb3/l2t.h
+++ b/drivers/net/cxgb3/l2t.h
@@ -76,6 +76,7 @@
 	atomic_t nfree;		/* number of free entries */
 	rwlock_t lock;
 	struct l2t_entry l2tab[0];
+	struct rcu_head rcu_head;	/* to handle rcu cleanup */
 };
 
 typedef void (*arp_failure_handler_func)(struct t3cdev * dev,
@@ -100,7 +99,7 @@
 /*
  * Getting to the L2 data from an offload device.
  */
-#define L2DATA(dev) ((dev)->l2opt)
+#define L2DATA(cdev) (rcu_dereference((cdev)->l2opt))
 
 #define W_TCB_L2T_IX    0
 #define S_TCB_L2T_IX    7
@@ -127,15 +126,22 @@
 		return t3_l2t_send_slow(dev, skb, e);
 }
 
-static inline void l2t_release(struct l2t_data *d, struct l2t_entry *e)
+static inline void l2t_release(struct t3cdev *t, struct l2t_entry *e)
 {
-	if (atomic_dec_and_test(&e->refcnt))
+	struct l2t_data *d;
+
+	rcu_read_lock();
+	d = L2DATA(t);
+
+	if (atomic_dec_and_test(&e->refcnt) && d)
 		t3_l2e_free(d, e);
+
+	rcu_read_unlock();
 }
 
 static inline void l2t_hold(struct l2t_data *d, struct l2t_entry *e)
 {
-	if (atomic_add_return(1, &e->refcnt) == 1)  /* 0 -> 1 transition */
+	if (d && atomic_add_return(1, &e->refcnt) == 1)  /* 0 -> 1 transition */
 		atomic_dec(&d->nfree);
 }
 
--- a/drivers/net/cxgb4/cxgb4_main.c
+++ b/drivers/net/cxgb4/cxgb4_main.c
@@ -3712,6 +3712,9 @@
 		setup_debugfs(adapter);
 	}
 
+	/* PCIe EEH recovery on powerpc platforms needs fundamental reset */
+	pdev->needs_freset = 1;
+
 	if (is_offload(adapter))
 		attach_ulds(adapter);
 
--- a/drivers/net/e1000/e1000_hw.c
+++ b/drivers/net/e1000/e1000_hw.c
@@ -4026,6 +4026,12 @@
 		checksum += eeprom_data;
 	}
 
+#ifdef CONFIG_PARISC
+	/* This is a signature and not a checksum on HP c8000 */
+	if ((hw->subsystem_vendor_id == 0x103C) && (eeprom_data == 0x16d6))
+		return E1000_SUCCESS;
+
+#endif
 	if (checksum == (u16) EEPROM_SUM)
 		return E1000_SUCCESS;
 	else {
--- a/drivers/net/gianfar_ethtool.c
+++ b/drivers/net/gianfar_ethtool.c
@@ -1669,10 +1669,10 @@
 	u32 i = 0;
 
 	list_for_each_entry(comp, &priv->rx_list.list, list) {
-		if (i <= cmd->rule_cnt) {
-			rule_locs[i] = comp->fs.location;
-			i++;
-		}
+		if (i == cmd->rule_cnt)
+			return -EMSGSIZE;
+		rule_locs[i] = comp->fs.location;
+		i++;
 	}
 
 	cmd->data = MAX_FILER_IDX;
--- a/drivers/net/greth.c
+++ b/drivers/net/greth.c
@@ -428,6 +428,7 @@
 	dma_sync_single_for_device(greth->dev, dma_addr, skb->len, DMA_TO_DEVICE);
 
 	status = GRETH_BD_EN | GRETH_BD_IE | (skb->len & GRETH_BD_LEN);
+	greth->tx_bufs_length[greth->tx_next] = skb->len & GRETH_BD_LEN;
 
 	/* Wrap around descriptor ring */
 	if (greth->tx_next == GRETH_TXBD_NUM_MASK) {
@@ -491,7 +490,8 @@
 	if (nr_frags != 0)
 		status = GRETH_TXBD_MORE;
 
-	status |= GRETH_TXBD_CSALL;
+	if (skb->ip_summed == CHECKSUM_PARTIAL)
+		status |= GRETH_TXBD_CSALL;
 	status |= skb_headlen(skb) & GRETH_BD_LEN;
 	if (greth->tx_next == GRETH_TXBD_NUM_MASK)
 		status |= GRETH_BD_WR;
@@ -515,7 +513,9 @@
 		greth->tx_skbuff[curr_tx] = NULL;
 		bdp = greth->tx_bd_base + curr_tx;
 
-		status = GRETH_TXBD_CSALL | GRETH_BD_EN;
+		status = GRETH_BD_EN;
+		if (skb->ip_summed == CHECKSUM_PARTIAL)
+			status |= GRETH_TXBD_CSALL;
 		status |= frag->size & GRETH_BD_LEN;
 
 		/* Wrap around descriptor ring */
@@ -645,6 +641,7 @@
 			dev->stats.tx_fifo_errors++;
 		}
 		dev->stats.tx_packets++;
+		dev->stats.tx_bytes += greth->tx_bufs_length[greth->tx_last];
 		greth->tx_last = NEXT_TX(greth->tx_last);
 		greth->tx_free++;
 	}
@@ -700,6 +695,7 @@
 		greth->tx_skbuff[greth->tx_last] = NULL;
 
 		greth_update_tx_stats(dev, stat);
+		dev->stats.tx_bytes += skb->len;
 
 		bdp = greth->tx_bd_base + greth->tx_last;
 
@@ -802,6 +796,7 @@
 			memcpy(skb_put(skb, pkt_len), phys_to_virt(dma_addr), pkt_len);
 
 			skb->protocol = eth_type_trans(skb, dev);
+			dev->stats.rx_bytes += pkt_len;
 			dev->stats.rx_packets++;
 			netif_receive_skb(skb);
 		}
@@ -917,6 +910,7 @@
 
 				skb->protocol = eth_type_trans(skb, dev);
 				dev->stats.rx_packets++;
+				dev->stats.rx_bytes += pkt_len;
 				netif_receive_skb(skb);
 
 				greth->rx_skbuff[greth->rx_cur] = newskb;
--- a/drivers/net/ibmveth.c
+++ b/drivers/net/ibmveth.c
@@ -636,8 +636,8 @@
 		netdev_err(netdev, "unable to request irq 0x%x, rc %d\n",
 			   netdev->irq, rc);
 		do {
-			rc = h_free_logical_lan(adapter->vdev->unit_address);
-		} while (H_IS_LONG_BUSY(rc) || (rc == H_BUSY));
+			lpar_rc = h_free_logical_lan(adapter->vdev->unit_address);
+		} while (H_IS_LONG_BUSY(lpar_rc) || (lpar_rc == H_BUSY));
 
 		goto err_out;
 	}
@@ -757,7 +757,7 @@
 	struct ibmveth_adapter *adapter = netdev_priv(dev);
 	unsigned long set_attr, clr_attr, ret_attr;
 	unsigned long set_attr6, clr_attr6;
-	long ret, ret6;
+	long ret, ret4, ret6;
 	int rc1 = 0, rc2 = 0;
 	int restart = 0;
 
@@ -770,6 +770,8 @@
 
 	set_attr = 0;
 	clr_attr = 0;
+	set_attr6 = 0;
+	clr_attr6 = 0;
 
 	if (data) {
 		set_attr = IBMVETH_ILLAN_IPV4_TCP_CSUM;
@@ -786,16 +784,20 @@
 	if (ret == H_SUCCESS && !(ret_attr & IBMVETH_ILLAN_ACTIVE_TRUNK) &&
 	    !(ret_attr & IBMVETH_ILLAN_TRUNK_PRI_MASK) &&
 	    (ret_attr & IBMVETH_ILLAN_PADDED_PKT_CSUM)) {
-		ret = h_illan_attributes(adapter->vdev->unit_address, clr_attr,
+		ret4 = h_illan_attributes(adapter->vdev->unit_address, clr_attr,
					 set_attr, &ret_attr);
 
-		if (ret != H_SUCCESS) {
+		if (ret4 != H_SUCCESS) {
 			netdev_err(dev, "unable to change IPv4 checksum "
 					"offload settings. %d rc=%ld\n",
-					data, ret);
+					data, ret4);
 
-			ret = h_illan_attributes(adapter->vdev->unit_address,
-						 set_attr, clr_attr, &ret_attr);
+			h_illan_attributes(adapter->vdev->unit_address,
+					   set_attr, clr_attr, &ret_attr);
+
+			if (data == 1)
+				dev->features &= ~NETIF_F_IP_CSUM;
+
 		} else {
 			adapter->fw_ipv4_csum_support = data;
 		}
@@ -810,15 +804,18 @@
 		if (ret6 != H_SUCCESS) {
 			netdev_err(dev, "unable to change IPv6 checksum "
 					"offload settings. %d rc=%ld\n",
-					data, ret);
+					data, ret6);
 
-			ret = h_illan_attributes(adapter->vdev->unit_address,
-						 set_attr6, clr_attr6,
-						 &ret_attr);
+			h_illan_attributes(adapter->vdev->unit_address,
+					   set_attr6, clr_attr6, &ret_attr);
+
+			if (data == 1)
+				dev->features &= ~NETIF_F_IPV6_CSUM;
+
 		} else
 			adapter->fw_ipv6_csum_support = data;
 
-	if (ret != H_SUCCESS || ret6 != H_SUCCESS)
+	if (ret4 == H_SUCCESS || ret6 == H_SUCCESS)
 		adapter->rx_csum = data;
 	else
 		rc1 = -EIO;
@@ -939,6 +930,7 @@
 	union ibmveth_buf_desc descs[6];
 	int last, i;
 	int force_bounce = 0;
+	dma_addr_t dma_addr;
 
 	/*
 	 * veth handles a maximum of 6 segments including the header, so
@@ -1004,17 +994,16 @@
 	}
 
 	/* Map the header */
-	descs[0].fields.address = dma_map_single(&adapter->vdev->dev, skb->data,
-						 skb_headlen(skb),
-						 DMA_TO_DEVICE);
-	if (dma_mapping_error(&adapter->vdev->dev, descs[0].fields.address))
+	dma_addr = dma_map_single(&adapter->vdev->dev, skb->data,
+				  skb_headlen(skb), DMA_TO_DEVICE);
+	if (dma_mapping_error(&adapter->vdev->dev, dma_addr))
 		goto map_failed;
 
 	descs[0].fields.flags_len = desc_flags | skb_headlen(skb);
+	descs[0].fields.address = dma_addr;
 
 	/* Map the frags */
 	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
-		unsigned long dma_addr;
 		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
 
 		dma_addr = dma_map_page(&adapter->vdev->dev, frag->page,
@@ -1035,7 +1026,12 @@
 		netdev->stats.tx_bytes += skb->len;
 	}
 
-	for (i = 0; i < skb_shinfo(skb)->nr_frags + 1; i++)
+	dma_unmap_single(&adapter->vdev->dev,
+			 descs[0].fields.address,
+			 descs[0].fields.flags_len & IBMVETH_BUF_LEN_MASK,
+			 DMA_TO_DEVICE);
+
+	for (i = 1; i < skb_shinfo(skb)->nr_frags + 1; i++)
 		dma_unmap_page(&adapter->vdev->dev, descs[i].fields.address,
 			       descs[i].fields.flags_len & IBMVETH_BUF_LEN_MASK,
 			       DMA_TO_DEVICE);
--- a/drivers/net/ixgbe/ixgbe_main.c
+++ b/drivers/net/ixgbe/ixgbe_main.c
@@ -1321,8 +1321,8 @@
 		if (ring_is_rsc_enabled(rx_ring))
 			pkt_is_rsc = ixgbe_get_rsc_state(rx_desc);
 
-		/* if this is a skb from previous receive DMA will be 0 */
-		if (rx_buffer_info->dma) {
+		/* linear means we are building an skb from multiple pages */
+		if (!skb_is_nonlinear(skb)) {
 			u16 hlen;
 			if (pkt_is_rsc &&
 			    !(staterr & IXGBE_RXD_STAT_EOP) &&
--- a/drivers/net/netconsole.c
+++ b/drivers/net/netconsole.c
@@ -799,5 +799,11 @@
 	}
 }
 
-module_init(init_netconsole);
+/*
+ * Use late_initcall to ensure netconsole is
+ * initialized after network device driver if built-in.
+ *
+ * late_initcall() and module_init() are identical if built as module.
+ */
+late_initcall(init_netconsole);
 module_exit(cleanup_netconsole);
--- a/drivers/net/pch_gbe/pch_gbe.h
+++ b/drivers/net/pch_gbe/pch_gbe.h
@@ -127,8 +127,8 @@
 
 /* Reset */
 #define PCH_GBE_ALL_RST		0x80000000 /* All reset */
-#define PCH_GBE_TX_RST		0x40000000 /* TX MAC, TX FIFO, TX DMA reset */
-#define PCH_GBE_RX_RST		0x04000000 /* RX MAC, RX FIFO, RX DMA reset */
+#define PCH_GBE_TX_RST		0x00008000 /* TX MAC, TX FIFO, TX DMA reset */
+#define PCH_GBE_RX_RST		0x00004000 /* RX MAC, RX FIFO, RX DMA reset */
 
 /* TCP/IP Accelerator Control */
 #define PCH_GBE_EX_LIST_EN	0x00000008 /* External List Enable */
@@ -275,6 +275,9 @@
 /* DMA Control */
 #define PCH_GBE_RX_DMA_EN	0x00000002 /* Enables Receive DMA */
 #define PCH_GBE_TX_DMA_EN	0x00000001 /* Enables Transmission DMA */
+
+/* RX DMA STATUS */
+#define PCH_GBE_IDLE_CHECK	0xFFFFFFFE
 
 /* Wake On LAN Status */
 #define PCH_GBE_WLS_BR		0x00000008 /* Broadcas Address */
@@ -474,6 +471,7 @@
 struct pch_gbe_buffer {
 	struct sk_buff *skb;
 	dma_addr_t dma;
+	unsigned char *rx_buffer;
 	unsigned long time_stamp;
 	u16 length;
 	bool mapped;
@@ -515,6 +511,9 @@
 struct pch_gbe_rx_ring {
 	struct pch_gbe_rx_desc *desc;
 	dma_addr_t dma;
+	unsigned char *rx_buff_pool;
+	dma_addr_t rx_buff_pool_logic;
+	unsigned int rx_buff_pool_size;
 	unsigned int size;
 	unsigned int count;
 	unsigned int next_to_use;
@@ -629,6 +622,7 @@
 	unsigned long rx_buffer_len;
 	unsigned long tx_queue_len;
 	bool have_msi;
+	bool rx_stop_flag;
 };
 
 extern const char pch_driver_version[];
--- a/drivers/net/pch_gbe/pch_gbe_main.c
+++ b/drivers/net/pch_gbe/pch_gbe_main.c
···
 
 #include "pch_gbe.h"
 #include "pch_gbe_api.h"
-#include <linux/prefetch.h>
 
 #define DRV_VERSION     "1.00"
 const char pch_driver_version[] = DRV_VERSION;
···
 #define PCH_GBE_WATCHDOG_PERIOD		(1 * HZ)	/* watchdog time */
 #define PCH_GBE_COPYBREAK_DEFAULT	256
 #define PCH_GBE_PCI_BAR			1
+#define PCH_GBE_RESERVE_MEMORY		0x200000	/* 2MB */
 
 /* Macros for ML7223 */
 #define PCI_VENDOR_ID_ROHM			0x10db
 #define PCI_DEVICE_ID_ROHM_ML7223_GBE		0x8013
+
+/* Macros for ML7831 */
+#define PCI_DEVICE_ID_ROHM_ML7831_GBE		0x8802
+
 #define PCH_GBE_TX_WEIGHT         64
 #define PCH_GBE_RX_WEIGHT         64
···
 	)
 
 /* Ethertype field values */
+#define PCH_GBE_MAX_RX_BUFFER_SIZE      0x2880
 #define PCH_GBE_MAX_JUMBO_FRAME_SIZE    10318
 #define PCH_GBE_FRAME_SIZE_2048         2048
 #define PCH_GBE_FRAME_SIZE_4096         4096
···
 #define PCH_GBE_INT_ENABLE_MASK ( \
 	PCH_GBE_INT_RX_DMA_CMPLT | \
 	PCH_GBE_INT_RX_DSC_EMP   | \
+	PCH_GBE_INT_RX_FIFO_ERR  | \
 	PCH_GBE_INT_WOL_DET      | \
 	PCH_GBE_INT_TX_CMPLT \
 	)
 
+#define PCH_GBE_INT_DISABLE_ALL		0
 
 static unsigned int copybreak __read_mostly = PCH_GBE_COPYBREAK_DEFAULT;
 
···
 	if (!tmp)
 		pr_err("Error: busy bit is not cleared\n");
 }
+
+/**
+ * pch_gbe_wait_clr_bit_irq - Wait to clear a bit for interrupt context
+ * @reg:	Pointer of register
+ * @busy:	Busy bit
+ */
+static int pch_gbe_wait_clr_bit_irq(void *reg, u32 bit)
+{
+	u32 tmp;
+	int ret = -1;
+	/* wait busy */
+	tmp = 20;
+	while ((ioread32(reg) & bit) && --tmp)
+		udelay(5);
+	if (!tmp)
+		pr_err("Error: busy bit is not cleared\n");
+	else
+		ret = 0;
+	return ret;
+}
+
 /**
  * pch_gbe_mac_mar_set - Set MAC address register
  * @hw:	    Pointer to the HW structure
···
 #endif
 	pch_gbe_wait_clr_bit(&hw->reg->RESET, PCH_GBE_ALL_RST);
 	/* Setup the receive address */
+	pch_gbe_mac_mar_set(hw, hw->mac.addr, 0);
+	return;
+}
+
+static void pch_gbe_mac_reset_rx(struct pch_gbe_hw *hw)
+{
+	/* Read the MAC address. and store to the private data */
+	pch_gbe_mac_read_mac_addr(hw);
+	iowrite32(PCH_GBE_RX_RST, &hw->reg->RESET);
+	pch_gbe_wait_clr_bit_irq(&hw->reg->RESET, PCH_GBE_RX_RST);
+	/* Setup the MAC address */
 	pch_gbe_mac_mar_set(hw, hw->mac.addr, 0);
 	return;
 }
···
 
 	tcpip = ioread32(&hw->reg->TCPIP_ACC);
 
-	if (netdev->features & NETIF_F_RXCSUM) {
-		tcpip &= ~PCH_GBE_RX_TCPIPACC_OFF;
-		tcpip |= PCH_GBE_RX_TCPIPACC_EN;
-	} else {
-		tcpip |= PCH_GBE_RX_TCPIPACC_OFF;
-		tcpip &= ~PCH_GBE_RX_TCPIPACC_EN;
-	}
+	tcpip |= PCH_GBE_RX_TCPIPACC_OFF;
+	tcpip &= ~PCH_GBE_RX_TCPIPACC_EN;
 	iowrite32(tcpip, &hw->reg->TCPIP_ACC);
 	return;
 }
···
 	iowrite32(rdba, &hw->reg->RX_DSC_BASE);
 	iowrite32(rdlen, &hw->reg->RX_DSC_SIZE);
 	iowrite32((rdba + rdlen), &hw->reg->RX_DSC_SW_P);
-
-	/* Enables Receive DMA */
-	rxdma = ioread32(&hw->reg->DMA_CTRL);
-	rxdma |= PCH_GBE_RX_DMA_EN;
-	iowrite32(rxdma, &hw->reg->DMA_CTRL);
-	/* Enables Receive */
-	iowrite32(PCH_GBE_MRE_MAC_RX_EN, &hw->reg->MAC_RX_EN);
 }
···
 	spin_unlock_irqrestore(&adapter->stats_lock, flags);
 }
 
+static void pch_gbe_stop_receive(struct pch_gbe_adapter *adapter)
+{
+	struct pch_gbe_hw *hw = &adapter->hw;
+	u32 rxdma;
+	u16 value;
+	int ret;
+
+	/* Disable Receive DMA */
+	rxdma = ioread32(&hw->reg->DMA_CTRL);
+	rxdma &= ~PCH_GBE_RX_DMA_EN;
+	iowrite32(rxdma, &hw->reg->DMA_CTRL);
+	/* Wait Rx DMA BUS is IDLE */
+	ret = pch_gbe_wait_clr_bit_irq(&hw->reg->RX_DMA_ST, PCH_GBE_IDLE_CHECK);
+	if (ret) {
+		/* Disable Bus master */
+		pci_read_config_word(adapter->pdev, PCI_COMMAND, &value);
+		value &= ~PCI_COMMAND_MASTER;
+		pci_write_config_word(adapter->pdev, PCI_COMMAND, value);
+		/* Stop Receive */
+		pch_gbe_mac_reset_rx(hw);
+		/* Enable Bus master */
+		value |= PCI_COMMAND_MASTER;
+		pci_write_config_word(adapter->pdev, PCI_COMMAND, value);
+	} else {
+		/* Stop Receive */
+		pch_gbe_mac_reset_rx(hw);
+	}
+}
+
+static void pch_gbe_start_receive(struct pch_gbe_hw *hw)
+{
+	u32 rxdma;
+
+	/* Enables Receive DMA */
+	rxdma = ioread32(&hw->reg->DMA_CTRL);
+	rxdma |= PCH_GBE_RX_DMA_EN;
+	iowrite32(rxdma, &hw->reg->DMA_CTRL);
+	/* Enables Receive */
+	iowrite32(PCH_GBE_MRE_MAC_RX_EN, &hw->reg->MAC_RX_EN);
+	return;
+}
+
 /**
  * pch_gbe_intr - Interrupt Handler
  * @irq:   Interrupt number
···
 	if (int_st & PCH_GBE_INT_RX_FRAME_ERR)
 		adapter->stats.intr_rx_frame_err_count++;
 	if (int_st & PCH_GBE_INT_RX_FIFO_ERR)
-		adapter->stats.intr_rx_fifo_err_count++;
+		if (!adapter->rx_stop_flag) {
+			adapter->stats.intr_rx_fifo_err_count++;
+			pr_debug("Rx fifo over run\n");
+			adapter->rx_stop_flag = true;
+			int_en = ioread32(&hw->reg->INT_EN);
+			iowrite32((int_en & ~PCH_GBE_INT_RX_FIFO_ERR),
+				  &hw->reg->INT_EN);
+			pch_gbe_stop_receive(adapter);
+			int_st |= ioread32(&hw->reg->INT_ST);
+			int_st = int_st & ioread32(&hw->reg->INT_EN);
+		}
 	if (int_st & PCH_GBE_INT_RX_DMA_ERR)
 		adapter->stats.intr_rx_dma_err_count++;
 	if (int_st & PCH_GBE_INT_TX_FIFO_ERR)
···
 	/* When Rx descriptor is empty  */
 	if ((int_st & PCH_GBE_INT_RX_DSC_EMP)) {
 		adapter->stats.intr_rx_dsc_empty_count++;
-		pr_err("Rx descriptor is empty\n");
+		pr_debug("Rx descriptor is empty\n");
 		int_en = ioread32(&hw->reg->INT_EN);
 		iowrite32((int_en & ~PCH_GBE_INT_RX_DSC_EMP), &hw->reg->INT_EN);
 		if (hw->mac.tx_fc_enable) {
 			/* Set Pause packet */
 			pch_gbe_mac_set_pause_packet(hw);
 		}
-		if ((int_en & (PCH_GBE_INT_RX_DMA_CMPLT | PCH_GBE_INT_TX_CMPLT))
-		    == 0) {
-			return IRQ_HANDLED;
-		}
 	}
 
 	/* When request status is Receive interruption */
-	if ((int_st & (PCH_GBE_INT_RX_DMA_CMPLT | PCH_GBE_INT_TX_CMPLT))) {
+	if ((int_st & (PCH_GBE_INT_RX_DMA_CMPLT | PCH_GBE_INT_TX_CMPLT)) ||
+	    (adapter->rx_stop_flag == true)) {
 		if (likely(napi_schedule_prep(&adapter->napi))) {
 			/* Enable only Rx Descriptor empty */
 			atomic_inc(&adapter->irq_sem);
···
 	unsigned int i;
 	unsigned int bufsz;
 
-	bufsz = adapter->rx_buffer_len + PCH_GBE_DMA_ALIGN;
+	bufsz = adapter->rx_buffer_len + NET_IP_ALIGN;
 	i = rx_ring->next_to_use;
 
 	while ((cleaned_count--)) {
 		buffer_info = &rx_ring->buffer_info[i];
-		skb = buffer_info->skb;
-		if (skb) {
-			skb_trim(skb, 0);
-		} else {
-			skb = netdev_alloc_skb(netdev, bufsz);
-			if (unlikely(!skb)) {
-				/* Better luck next round */
-				adapter->stats.rx_alloc_buff_failed++;
-				break;
-			}
-			/* 64byte align */
-			skb_reserve(skb, PCH_GBE_DMA_ALIGN);
-
-			buffer_info->skb = skb;
-			buffer_info->length = adapter->rx_buffer_len;
+		skb = netdev_alloc_skb(netdev, bufsz);
+		if (unlikely(!skb)) {
+			/* Better luck next round */
+			adapter->stats.rx_alloc_buff_failed++;
+			break;
 		}
+		/* align */
+		skb_reserve(skb, NET_IP_ALIGN);
+		buffer_info->skb = skb;
+
 		buffer_info->dma = dma_map_single(&pdev->dev,
-						  skb->data,
+						  buffer_info->rx_buffer,
 						  buffer_info->length,
 						  DMA_FROM_DEVICE);
 		if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma)) {
···
 			  &hw->reg->RX_DSC_SW_P);
 	}
 	return;
+}
+
+static int
+pch_gbe_alloc_rx_buffers_pool(struct pch_gbe_adapter *adapter,
+			 struct pch_gbe_rx_ring *rx_ring, int cleaned_count)
+{
+	struct pci_dev *pdev = adapter->pdev;
+	struct pch_gbe_buffer *buffer_info;
+	unsigned int i;
+	unsigned int bufsz;
+	unsigned int size;
+
+	bufsz = adapter->rx_buffer_len;
+
+	size = rx_ring->count * bufsz + PCH_GBE_RESERVE_MEMORY;
+	rx_ring->rx_buff_pool = dma_alloc_coherent(&pdev->dev, size,
+						&rx_ring->rx_buff_pool_logic,
+						GFP_KERNEL);
+	if (!rx_ring->rx_buff_pool) {
+		pr_err("Unable to allocate memory for the receive poll buffer\n");
+		return -ENOMEM;
+	}
+	memset(rx_ring->rx_buff_pool, 0, size);
+	rx_ring->rx_buff_pool_size = size;
+	for (i = 0; i < rx_ring->count; i++) {
+		buffer_info = &rx_ring->buffer_info[i];
+		buffer_info->rx_buffer = rx_ring->rx_buff_pool + bufsz * i;
+		buffer_info->length = bufsz;
+	}
+	return 0;
 }
···
 	struct sk_buff *skb;
 	unsigned int i;
 	unsigned int cleaned_count = 0;
-	bool cleaned = false;
+	bool cleaned = true;
 
 	pr_debug("next_to_clean : %d\n", tx_ring->next_to_clean);
 
···
 
 	while ((tx_desc->gbec_status & DSC_INIT16) == 0x0000) {
 		pr_debug("gbec_status:0x%04x\n", tx_desc->gbec_status);
-		cleaned = true;
 		buffer_info = &tx_ring->buffer_info[i];
 		skb = buffer_info->skb;
 
···
 		tx_desc = PCH_GBE_TX_DESC(*tx_ring, i);
 
 		/* weight of a sort for tx, to avoid endless transmit cleanup */
-		if (cleaned_count++ == PCH_GBE_TX_WEIGHT)
+		if (cleaned_count++ == PCH_GBE_TX_WEIGHT) {
+			cleaned = false;
 			break;
+		}
 	}
 	pr_debug("called pch_gbe_unmap_and_free_tx_resource() %d count\n",
 		 cleaned_count);
···
 	struct sk_buff *skb;
 	unsigned int i;
 	unsigned int cleaned_count = 0;
 	bool cleaned = false;
-	struct sk_buff *skb, *new_skb;
+	struct sk_buff *skb;
 	u8 dma_status;
 	u16 gbec_status;
 	u32 tcp_ip_status;
···
 		rx_desc->gbec_status = DSC_INIT16;
 		buffer_info = &rx_ring->buffer_info[i];
 		skb = buffer_info->skb;
+		buffer_info->skb = NULL;
 
 		/* unmap dma */
 		dma_unmap_single(&pdev->dev, buffer_info->dma,
 				 buffer_info->length, DMA_FROM_DEVICE);
 		buffer_info->mapped = false;
-		/* Prefetch the packet */
-		prefetch(skb->data);
 
 		pr_debug("RxDecNo = 0x%04x  Status[DMA:0x%02x GBE:0x%04x "
 			 "TCP:0x%08x]  BufInf = 0x%p\n",
···
 			pr_err("Receive CRC Error\n");
 		} else {
 			/* get receive length */
-			/* length convert[-3] */
-			length = (rx_desc->rx_words_eob) - 3;
+			/* length convert[-3], length includes FCS length */
+			length = (rx_desc->rx_words_eob) - 3 - ETH_FCS_LEN;
+			if (rx_desc->rx_words_eob & 0x02)
+				length = length - 4;
+			/*
+			 * buffer_info->rx_buffer: [Header:14][payload]
+			 * skb->data: [Reserve:2][Header:14][payload]
+			 */
+			memcpy(skb->data, buffer_info->rx_buffer, length);
 
-			/* Decide the data conversion method */
-			if (!(netdev->features & NETIF_F_RXCSUM)) {
-				/* [Header:14][payload] */
-				if (NET_IP_ALIGN) {
-					/* Because alignment differs,
-					 * the new_skb is newly allocated,
-					 * and data is copied to new_skb.*/
-					new_skb = netdev_alloc_skb(netdev,
-							 length + NET_IP_ALIGN);
-					if (!new_skb) {
-						/* dorrop error */
-						pr_err("New skb allocation "
-							"Error\n");
-						goto dorrop;
-					}
-					skb_reserve(new_skb, NET_IP_ALIGN);
-					memcpy(new_skb->data, skb->data,
-					       length);
-					skb = new_skb;
-				} else {
-					/* DMA buffer is used as SKB as it is.*/
-					buffer_info->skb = NULL;
-				}
-			} else {
-				/* [Header:14][padding:2][payload] */
-				/* The length includes padding length */
-				length = length - PCH_GBE_DMA_PADDING;
-				if ((length < copybreak) ||
-				    (NET_IP_ALIGN != PCH_GBE_DMA_PADDING)) {
-					/* Because alignment differs,
-					 * the new_skb is newly allocated,
-					 * and data is copied to new_skb.
-					 * Padding data is deleted
-					 * at the time of a copy.*/
-					new_skb = netdev_alloc_skb(netdev,
-							 length + NET_IP_ALIGN);
-					if (!new_skb) {
-						/* dorrop error */
-						pr_err("New skb allocation "
-							"Error\n");
-						goto dorrop;
-					}
-					skb_reserve(new_skb, NET_IP_ALIGN);
-					memcpy(new_skb->data, skb->data,
-					       ETH_HLEN);
-					memcpy(&new_skb->data[ETH_HLEN],
-					       &skb->data[ETH_HLEN +
-					       PCH_GBE_DMA_PADDING],
-					       length - ETH_HLEN);
-					skb = new_skb;
-				} else {
-					/* Padding data is deleted
-					 * by moving header data.*/
-					memmove(&skb->data[PCH_GBE_DMA_PADDING],
-						&skb->data[0], ETH_HLEN);
-					skb_reserve(skb, NET_IP_ALIGN);
-					buffer_info->skb = NULL;
-				}
-			}
-			/* The length includes FCS length */
-			length = length - ETH_FCS_LEN;
 			/* update status of driver */
 			adapter->stats.rx_bytes += length;
 			adapter->stats.rx_packets++;
···
 			pr_debug("Receive skb->ip_summed: %d length: %d\n",
 				 skb->ip_summed, length);
 		}
-dorrop:
 		/* return some buffers to hardware, one at a time is too slow */
 		if (unlikely(cleaned_count >= PCH_GBE_RX_BUFFER_WRITE)) {
 			pch_gbe_alloc_rx_buffers(adapter, rx_ring,
···
 		pr_err("Error: can't bring device up\n");
 		return err;
 	}
+	err = pch_gbe_alloc_rx_buffers_pool(adapter, rx_ring, rx_ring->count);
+	if (err) {
+		pr_err("Error: can't bring device up\n");
+		return err;
+	}
 	pch_gbe_alloc_tx_buffers(adapter, tx_ring);
 	pch_gbe_alloc_rx_buffers(adapter, rx_ring, rx_ring->count);
 	adapter->tx_queue_len = netdev->tx_queue_len;
+	pch_gbe_start_receive(&adapter->hw);
 
 	mod_timer(&adapter->watchdog_timer, jiffies);
 
···
 void pch_gbe_down(struct pch_gbe_adapter *adapter)
 {
 	struct net_device *netdev = adapter->netdev;
+	struct pch_gbe_rx_ring *rx_ring = adapter->rx_ring;
 
 	/* signal that we're down so the interrupt handler does not
 	 * reschedule our watchdog timer */
···
 	pch_gbe_reset(adapter);
 	pch_gbe_clean_tx_ring(adapter, adapter->tx_ring);
 	pch_gbe_clean_rx_ring(adapter, adapter->rx_ring);
+
+	pci_free_consistent(adapter->pdev, rx_ring->rx_buff_pool_size,
+			    rx_ring->rx_buff_pool, rx_ring->rx_buff_pool_logic);
+	rx_ring->rx_buff_pool_logic = 0;
+	rx_ring->rx_buff_pool_size = 0;
+	rx_ring->rx_buff_pool = NULL;
 }
 
 /**
···
 {
 	struct pch_gbe_adapter *adapter = netdev_priv(netdev);
 	int max_frame;
+	unsigned long old_rx_buffer_len = adapter->rx_buffer_len;
+	int err;
 
 	max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN;
 	if ((max_frame < ETH_ZLEN + ETH_FCS_LEN) ||
···
 	else if (max_frame <= PCH_GBE_FRAME_SIZE_8192)
 		adapter->rx_buffer_len = PCH_GBE_FRAME_SIZE_8192;
 	else
-		adapter->rx_buffer_len = PCH_GBE_MAX_JUMBO_FRAME_SIZE;
-	netdev->mtu = new_mtu;
-	adapter->hw.mac.max_frame_size = max_frame;
+		adapter->rx_buffer_len = PCH_GBE_MAX_RX_BUFFER_SIZE;
 
-	if (netif_running(netdev))
-		pch_gbe_reinit_locked(adapter);
-	else
+	if (netif_running(netdev)) {
+		pch_gbe_down(adapter);
+		err = pch_gbe_up(adapter);
+		if (err) {
+			adapter->rx_buffer_len = old_rx_buffer_len;
+			pch_gbe_up(adapter);
+			return -ENOMEM;
+		} else {
+			netdev->mtu = new_mtu;
+			adapter->hw.mac.max_frame_size = max_frame;
+		}
+	} else {
 		pch_gbe_reset(adapter);
+		netdev->mtu = new_mtu;
+		adapter->hw.mac.max_frame_size = max_frame;
+	}
 
 	pr_debug("max_frame : %d  rx_buffer_len : %d  mtu : %d  max_frame_size : %d\n",
 		 max_frame, (u32) adapter->rx_buffer_len, netdev->mtu,
···
 {
 	struct pch_gbe_adapter *adapter =
 	    container_of(napi, struct pch_gbe_adapter, napi);
-	struct net_device *netdev = adapter->netdev;
 	int work_done = 0;
 	bool poll_end_flag = false;
 	bool cleaned = false;
+	u32 int_en;
 
 	pr_debug("budget : %d\n", budget);
 
-	/* Keep link state information with original netdev */
-	if (!netif_carrier_ok(netdev)) {
-		poll_end_flag = true;
-	} else {
-		cleaned = pch_gbe_clean_tx(adapter, adapter->tx_ring);
-		pch_gbe_clean_rx(adapter, adapter->rx_ring, &work_done, budget);
+	pch_gbe_clean_rx(adapter, adapter->rx_ring, &work_done, budget);
+	cleaned = pch_gbe_clean_tx(adapter, adapter->tx_ring);
 
-		if (cleaned)
-			work_done = budget;
-		/* If no Tx and not enough Rx work done,
-		 * exit the polling mode
-		 */
-		if ((work_done < budget) || !netif_running(netdev))
-			poll_end_flag = true;
-	}
+	if (!cleaned)
+		work_done = budget;
+	/* If no Tx and not enough Rx work done,
+	 * exit the polling mode
+	 */
+	if (work_done < budget)
+		poll_end_flag = true;
 
 	if (poll_end_flag) {
 		napi_complete(napi);
+		if (adapter->rx_stop_flag) {
+			adapter->rx_stop_flag = false;
+			pch_gbe_start_receive(&adapter->hw);
+		}
 		pch_gbe_irq_enable(adapter);
-	}
+	} else
+		if (adapter->rx_stop_flag) {
+			adapter->rx_stop_flag = false;
+			pch_gbe_start_receive(&adapter->hw);
+			int_en = ioread32(&adapter->hw.reg->INT_EN);
+			iowrite32((int_en | PCH_GBE_INT_RX_FIFO_ERR),
+				  &adapter->hw.reg->INT_EN);
+		}
 
 	pr_debug("poll_end_flag : %d  work_done : %d  budget : %d\n",
 		 poll_end_flag, work_done, budget);
···
 	 },
 	{.vendor = PCI_VENDOR_ID_ROHM,
 	 .device = PCI_DEVICE_ID_ROHM_ML7223_GBE,
+	 .subvendor = PCI_ANY_ID,
+	 .subdevice = PCI_ANY_ID,
+	 .class = (PCI_CLASS_NETWORK_ETHERNET << 8),
+	 .class_mask = (0xFFFF00)
+	 },
+	{.vendor = PCI_VENDOR_ID_ROHM,
+	 .device = PCI_DEVICE_ID_ROHM_ML7831_GBE,
 	 .subvendor = PCI_ANY_ID,
 	 .subdevice = PCI_ANY_ID,
 	 .class = (PCI_CLASS_NETWORK_ETHERNET << 8),
+2-2
drivers/net/phy/dp83640.c
···
 	prune_rx_ts(dp83640);
 
 	if (list_empty(&dp83640->rxpool)) {
-		pr_warning("dp83640: rx timestamp pool is empty\n");
+		pr_debug("dp83640: rx timestamp pool is empty\n");
 		goto out;
 	}
 	rxts = list_first_entry(&dp83640->rxpool, struct rxts, list);
···
 	skb = skb_dequeue(&dp83640->tx_queue);
 
 	if (!skb) {
-		pr_warning("dp83640: have timestamp but tx_queue empty\n");
+		pr_debug("dp83640: have timestamp but tx_queue empty\n");
 		return;
 	}
 	ns = phy2txts(phy_txts);
+6-1
drivers/net/ppp_generic.c
···
 			continue;
 		}
 
-		mtu = pch->chan->mtu - hdrlen;
+		/*
+		 * hdrlen includes the 2-byte PPP protocol field, but the
+		 * MTU counts only the payload excluding the protocol field.
+		 * (RFC1661 Section 2)
+		 */
+		mtu = pch->chan->mtu - (hdrlen - 2);
 		if (mtu < 4)
 			mtu = 4;
 		if (flen > mtu)
···
 {
 	struct pci_dev *pci_dev = efx->pci_dev;
 	dma_addr_t dma_mask = efx->type->max_dma_mask;
-	bool use_wc;
 	int rc;
 
 	netif_dbg(efx, probe, efx->net_dev, "initialising I/O\n");
···
 		rc = -EIO;
 		goto fail3;
 	}
-
-	/* bug22643: If SR-IOV is enabled then tx push over a write combined
-	 * mapping is unsafe. We need to disable write combining in this case.
-	 * MSI is unsupported when SR-IOV is enabled, and the firmware will
-	 * have removed the MSI capability. So write combining is safe if
-	 * there is an MSI capability.
-	 */
-	use_wc = (!EFX_WORKAROUND_22643(efx) ||
-		  pci_find_capability(pci_dev, PCI_CAP_ID_MSI));
-	if (use_wc)
-		efx->membase = ioremap_wc(efx->membase_phys,
-					  efx->type->mem_map_size);
-	else
-		efx->membase = ioremap_nocache(efx->membase_phys,
-					       efx->type->mem_map_size);
+	efx->membase = ioremap_nocache(efx->membase_phys,
+				       efx->type->mem_map_size);
 	if (!efx->membase) {
 		netif_err(efx, probe, efx->net_dev,
 			  "could not map memory BAR at %llx+%x\n",
···
 	case ADC_DC_CAL:
 		/* Run ADC Gain Cal for non-CCK & non 2GHz-HT20 only */
 		if (!IS_CHAN_B(chan) &&
-		    !(IS_CHAN_2GHZ(chan) && IS_CHAN_HT20(chan)))
+		    !((IS_CHAN_2GHZ(chan) || IS_CHAN_A_FAST_CLOCK(ah, chan)) &&
+		      IS_CHAN_HT20(chan)))
 			supported = true;
 		break;
 	}
···
 	REG_WRITE_ARRAY(&ah->iniModesAdditional,
 			modesIndex, regWrites);
 
-	if (AR_SREV_9300(ah))
+	if (AR_SREV_9330(ah))
 		REG_WRITE_ARRAY(&ah->iniModesAdditional, 1, regWrites);
 
 	if (AR_SREV_9340(ah) && !ah->is_clk_25mhz)
+6
drivers/net/wireless/ath/ath9k/main.c
···
 	mutex_lock(&sc->mutex);
 	cancel_delayed_work_sync(&sc->tx_complete_work);
 
+	if (ah->ah_flags & AH_UNPLUGGED) {
+		ath_dbg(common, ATH_DBG_ANY, "Device has been unplugged!\n");
+		mutex_unlock(&sc->mutex);
+		return;
+	}
+
 	if (sc->sc_flags & SC_OP_INVALID) {
 		ath_dbg(common, ATH_DBG_ANY, "Device not present\n");
 		mutex_unlock(&sc->mutex);
···
 	u32 cmd, beacon0_valid, beacon1_valid;
 
 	if (!b43_is_mode(wl, NL80211_IFTYPE_AP) &&
-	    !b43_is_mode(wl, NL80211_IFTYPE_MESH_POINT))
+	    !b43_is_mode(wl, NL80211_IFTYPE_MESH_POINT) &&
+	    !b43_is_mode(wl, NL80211_IFTYPE_ADHOC))
 		return;
 
 	/* This is the bottom half of the asynchronous beacon update. */
+14-7
drivers/net/wireless/ipw2x00/ipw2100.c
···
 static int ipw2100_net_init(struct net_device *dev)
 {
 	struct ipw2100_priv *priv = libipw_priv(dev);
+
+	return ipw2100_up(priv, 1);
+}
+
+static int ipw2100_wdev_init(struct net_device *dev)
+{
+	struct ipw2100_priv *priv = libipw_priv(dev);
 	const struct libipw_geo *geo = libipw_get_geo(priv->ieee);
 	struct wireless_dev *wdev = &priv->ieee->wdev;
-	int ret;
 	int i;
-
-	ret = ipw2100_up(priv, 1);
-	if (ret)
-		return ret;
 
 	memcpy(wdev->wiphy->perm_addr, priv->mac_addr, ETH_ALEN);
···
 			"Error calling register_netdev.\n");
 		goto fail;
 	}
+	registered = 1;
+
+	err = ipw2100_wdev_init(dev);
+	if (err)
+		goto fail;
 
 	mutex_lock(&priv->action_mutex);
-	registered = 1;
 
 	IPW_DEBUG_INFO("%s: Bound to %s\n", dev->name, pci_name(pci_dev));
 
···
 
 fail_unlock:
 	mutex_unlock(&priv->action_mutex);
-
+	wiphy_unregister(priv->ieee->wdev.wiphy);
+	kfree(priv->ieee->bg_band.channels);
 fail:
 	if (dev) {
 		if (registered)
+26-13
drivers/net/wireless/ipw2x00/ipw2200.c
···
 /* Called by register_netdev() */
 static int ipw_net_init(struct net_device *dev)
 {
+	int rc = 0;
+	struct ipw_priv *priv = libipw_priv(dev);
+
+	mutex_lock(&priv->mutex);
+	if (ipw_up(priv))
+		rc = -EIO;
+	mutex_unlock(&priv->mutex);
+
+	return rc;
+}
+
+static int ipw_wdev_init(struct net_device *dev)
+{
 	int i, rc = 0;
 	struct ipw_priv *priv = libipw_priv(dev);
 	const struct libipw_geo *geo = libipw_get_geo(priv->ieee);
 	struct wireless_dev *wdev = &priv->ieee->wdev;
-	mutex_lock(&priv->mutex);
-
-	if (ipw_up(priv)) {
-		rc = -EIO;
-		goto out;
-	}
 
 	memcpy(wdev->wiphy->perm_addr, priv->mac_addr, ETH_ALEN);
···
 	set_wiphy_dev(wdev->wiphy, &priv->pci_dev->dev);
 
 	/* With that information in place, we can now register the wiphy... */
-	if (wiphy_register(wdev->wiphy)) {
+	if (wiphy_register(wdev->wiphy))
 		rc = -EIO;
-		goto out;
-	}
-
 out:
-	mutex_unlock(&priv->mutex);
 	return rc;
 }
···
 		goto out_remove_sysfs;
 	}
 
+	err = ipw_wdev_init(net_dev);
+	if (err) {
+		IPW_ERROR("failed to register wireless device\n");
+		goto out_unregister_netdev;
+	}
+
 #ifdef CONFIG_IPW2200_PROMISCUOUS
 	if (rtap_iface) {
 		err = ipw_prom_alloc(priv);
 		if (err) {
 			IPW_ERROR("Failed to register promiscuous network "
 				  "device (error %d).\n", err);
-			unregister_netdev(priv->net_dev);
-			goto out_remove_sysfs;
+			wiphy_unregister(priv->ieee->wdev.wiphy);
+			kfree(priv->ieee->a_band.channels);
+			kfree(priv->ieee->bg_band.channels);
+			goto out_unregister_netdev;
 		}
 	}
 #endif
···
 
 	return 0;
 
+      out_unregister_netdev:
+	unregister_netdev(priv->net_dev);
       out_remove_sysfs:
 	sysfs_remove_group(&pdev->dev.kobj, &ipw_attribute_group);
       out_release_irq:
+8-5
drivers/net/wireless/iwlegacy/iwl-3945-rs.c
···
 
  out:
 
-	rs_sta->last_txrate_idx = index;
-	if (sband->band == IEEE80211_BAND_5GHZ)
-		info->control.rates[0].idx = rs_sta->last_txrate_idx -
-				IWL_FIRST_OFDM_RATE;
-	else
+	if (sband->band == IEEE80211_BAND_5GHZ) {
+		if (WARN_ON_ONCE(index < IWL_FIRST_OFDM_RATE))
+			index = IWL_FIRST_OFDM_RATE;
+		rs_sta->last_txrate_idx = index;
+		info->control.rates[0].idx = index - IWL_FIRST_OFDM_RATE;
+	} else {
+		rs_sta->last_txrate_idx = index;
 		info->control.rates[0].idx = rs_sta->last_txrate_idx;
+	}
 
 	IWL_DEBUG_RATE(priv, "leave: %d\n", index);
 }
+2-2
drivers/net/wireless/iwlegacy/iwl-core.c
···
 			&priv->contexts[IWL_RXON_CTX_BSS]);
 #endif
 
-	wake_up_interruptible(&priv->wait_command_queue);
+	wake_up(&priv->wait_command_queue);
 
 	/* Keep the restart process from trying to send host
 	 * commands by clearing the INIT status bit */
···
 
 	/* Set the FW error flag -- cleared on iwl_down */
 	set_bit(STATUS_FW_ERROR, &priv->status);
-	wake_up_interruptible(&priv->wait_command_queue);
+	wake_up(&priv->wait_command_queue);
 	/*
 	 * Keep the restart process from trying to send host
 	 * commands by clearing the INIT status bit
+1-1
drivers/net/wireless/iwlegacy/iwl-hcmd.c
···
 		goto out;
 	}
 
-	ret = wait_event_interruptible_timeout(priv->wait_command_queue,
+	ret = wait_event_timeout(priv->wait_command_queue,
 			!test_bit(STATUS_HCMD_ACTIVE, &priv->status),
 			HOST_COMPLETE_TIMEOUT);
 	if (!ret) {
+3-1
drivers/net/wireless/iwlegacy/iwl-tx.c
···
 	cmd = txq->cmd[cmd_index];
 	meta = &txq->meta[cmd_index];
 
+	txq->time_stamp = jiffies;
+
 	pci_unmap_single(priv->pci_dev,
 			 dma_unmap_addr(meta, mapping),
 			 dma_unmap_len(meta, len),
···
 		clear_bit(STATUS_HCMD_ACTIVE, &priv->status);
 		IWL_DEBUG_INFO(priv, "Clearing HCMD_ACTIVE for command %s\n",
 			       iwl_legacy_get_cmd_string(cmd->hdr.cmd));
-		wake_up_interruptible(&priv->wait_command_queue);
+		wake_up(&priv->wait_command_queue);
 	}
 
 	/* Mark as unmapped */
+4-4
drivers/net/wireless/iwlegacy/iwl3945-base.c
···
 		wiphy_rfkill_set_hw_state(priv->hw->wiphy,
 				test_bit(STATUS_RF_KILL_HW, &priv->status));
 	else
-		wake_up_interruptible(&priv->wait_command_queue);
+		wake_up(&priv->wait_command_queue);
 }
 
 /**
···
 	iwl3945_reg_txpower_periodic(priv);
 
 	IWL_DEBUG_INFO(priv, "ALIVE processing complete.\n");
-	wake_up_interruptible(&priv->wait_command_queue);
+	wake_up(&priv->wait_command_queue);
 
 	return;
 
···
 	iwl_legacy_clear_driver_stations(priv);
 
 	/* Unblock any waiting calls */
-	wake_up_interruptible_all(&priv->wait_command_queue);
+	wake_up_all(&priv->wait_command_queue);
 
 	/* Wipe out the EXIT_PENDING status bit if we are not actually
 	 * exiting the module */
···
 
 	/* Wait for START_ALIVE from ucode. Otherwise callbacks from
 	 * mac80211 will not be run successfully. */
-	ret = wait_event_interruptible_timeout(priv->wait_command_queue,
+	ret = wait_event_timeout(priv->wait_command_queue,
 			test_bit(STATUS_READY, &priv->status),
 			UCODE_READY_TIMEOUT);
 	if (!ret) {
+5-5
drivers/net/wireless/iwlegacy/iwl4965-base.c
···
 		wiphy_rfkill_set_hw_state(priv->hw->wiphy,
 			test_bit(STATUS_RF_KILL_HW, &priv->status));
 	else
-		wake_up_interruptible(&priv->wait_command_queue);
+		wake_up(&priv->wait_command_queue);
 }
 
 /**
···
 		handled |= CSR_INT_BIT_FH_TX;
 		/* Wake up uCode load routine, now that load is complete */
 		priv->ucode_write_complete = 1;
-		wake_up_interruptible(&priv->wait_command_queue);
+		wake_up(&priv->wait_command_queue);
 	}
 
 	if (inta & ~handled) {
···
 		iwl4965_rf_kill_ct_config(priv);
 
 	IWL_DEBUG_INFO(priv, "ALIVE processing complete.\n");
-	wake_up_interruptible(&priv->wait_command_queue);
+	wake_up(&priv->wait_command_queue);
 
 	iwl_legacy_power_update_mode(priv, true);
 	IWL_DEBUG_INFO(priv, "Updated power mode\n");
···
 	iwl_legacy_clear_driver_stations(priv);
 
 	/* Unblock any waiting calls */
-	wake_up_interruptible_all(&priv->wait_command_queue);
+	wake_up_all(&priv->wait_command_queue);
 
 	/* Wipe out the EXIT_PENDING status bit if we are not actually
 	 * exiting the module */
···
 
 	/* Wait for START_ALIVE from Run Time ucode. Otherwise callbacks from
 	 * mac80211 will not be run successfully. */
-	ret = wait_event_interruptible_timeout(priv->wait_command_queue,
+	ret = wait_event_timeout(priv->wait_command_queue,
 			test_bit(STATUS_READY, &priv->status),
 			UCODE_READY_TIMEOUT);
 	if (!ret) {
···
 		    IEEE80211_HW_SPECTRUM_MGMT |
 		    IEEE80211_HW_REPORTS_TX_ACK_STATUS;
 
+	/*
+	 * Including the following line will crash some AP's.  This
+	 * workaround removes the stimulus which causes the crash until
+	 * the AP software can be fixed.
 	hw->max_tx_aggregation_subframes = LINK_QUAL_AGG_FRAME_LIMIT_DEF;
+	 */
 
 	hw->flags |= IEEE80211_HW_SUPPORTS_PS |
 		     IEEE80211_HW_SUPPORTS_DYNAMIC_PS;
+16-14
drivers/net/wireless/iwlwifi/iwl-scan.c
···
 
 	mutex_lock(&priv->mutex);
 
-	if (test_bit(STATUS_SCANNING, &priv->status) &&
-	    priv->scan_type != IWL_SCAN_NORMAL) {
-		IWL_DEBUG_SCAN(priv, "Scan already in progress.\n");
-		ret = -EAGAIN;
-		goto out_unlock;
-	}
-
-	/* mac80211 will only ask for one band at a time */
-	priv->scan_request = req;
-	priv->scan_vif = vif;
-
 	/*
 	 * If an internal scan is in progress, just set
 	 * up the scan_request as per above.
 	 */
 	if (priv->scan_type != IWL_SCAN_NORMAL) {
-		IWL_DEBUG_SCAN(priv, "SCAN request during internal scan\n");
+		IWL_DEBUG_SCAN(priv,
+			       "SCAN request during internal scan - defer\n");
+		priv->scan_request = req;
+		priv->scan_vif = vif;
 		ret = 0;
-	} else
+	} else {
+		priv->scan_request = req;
+		priv->scan_vif = vif;
+		/*
+		 * mac80211 will only ask for one band at a time
+		 * so using channels[0] here is ok
+		 */
 		ret = iwl_scan_initiate(priv, vif, IWL_SCAN_NORMAL,
 					req->channels[0]->band);
+		if (ret) {
+			priv->scan_request = NULL;
+			priv->scan_vif = NULL;
+		}
+	}
 
 	IWL_DEBUG_MAC80211(priv, "leave\n");
 
-out_unlock:
 	mutex_unlock(&priv->mutex);
 
 	return ret;
+2
drivers/net/wireless/iwlwifi/iwl-trans-tx-pcie.c
···
 	cmd = txq->cmd[cmd_index];
 	meta = &txq->meta[cmd_index];
 
+	txq->time_stamp = jiffies;
+
 	iwlagn_unmap_tfd(priv, meta, &txq->tfds[index], DMA_BIDIRECTIONAL);
 
 	/* Input error checking is done when commands are added to queue. */
+26-21
drivers/net/wireless/rt2x00/rt2800lib.c
···
 	rt2800_regbusy_read(rt2x00dev, EFUSE_CTRL, EFUSE_CTRL_KICK, &reg);
 
 	/* Apparently the data is read from end to start */
-	rt2800_register_read_lock(rt2x00dev, EFUSE_DATA3,
-				  (u32 *)&rt2x00dev->eeprom[i]);
-	rt2800_register_read_lock(rt2x00dev, EFUSE_DATA2,
-				  (u32 *)&rt2x00dev->eeprom[i + 2]);
-	rt2800_register_read_lock(rt2x00dev, EFUSE_DATA1,
-				  (u32 *)&rt2x00dev->eeprom[i + 4]);
-	rt2800_register_read_lock(rt2x00dev, EFUSE_DATA0,
-				  (u32 *)&rt2x00dev->eeprom[i + 6]);
+	rt2800_register_read_lock(rt2x00dev, EFUSE_DATA3, &reg);
+	/* The returned value is in CPU order, but eeprom is le */
+	rt2x00dev->eeprom[i] = cpu_to_le32(reg);
+	rt2800_register_read_lock(rt2x00dev, EFUSE_DATA2, &reg);
+	*(u32 *)&rt2x00dev->eeprom[i + 2] = cpu_to_le32(reg);
+	rt2800_register_read_lock(rt2x00dev, EFUSE_DATA1, &reg);
+	*(u32 *)&rt2x00dev->eeprom[i + 4] = cpu_to_le32(reg);
+	rt2800_register_read_lock(rt2x00dev, EFUSE_DATA0, &reg);
+	*(u32 *)&rt2x00dev->eeprom[i + 6] = cpu_to_le32(reg);
 
 	mutex_unlock(&rt2x00dev->csr_mutex);
 }
···
 		return -ENODEV;
 	}
 
-	if (!rt2x00_rf(rt2x00dev, RF2820) &&
-	    !rt2x00_rf(rt2x00dev, RF2850) &&
-	    !rt2x00_rf(rt2x00dev, RF2720) &&
-	    !rt2x00_rf(rt2x00dev, RF2750) &&
-	    !rt2x00_rf(rt2x00dev, RF3020) &&
-	    !rt2x00_rf(rt2x00dev, RF2020) &&
-	    !rt2x00_rf(rt2x00dev, RF3021) &&
-	    !rt2x00_rf(rt2x00dev, RF3022) &&
-	    !rt2x00_rf(rt2x00dev, RF3052) &&
-	    !rt2x00_rf(rt2x00dev, RF3320) &&
-	    !rt2x00_rf(rt2x00dev, RF5370) &&
-	    !rt2x00_rf(rt2x00dev, RF5390)) {
-		ERROR(rt2x00dev, "Invalid RF chipset detected.\n");
+	switch (rt2x00dev->chip.rf) {
+	case RF2820:
+	case RF2850:
+	case RF2720:
+	case RF2750:
+	case RF3020:
+	case RF2020:
+	case RF3021:
+	case RF3022:
+	case RF3052:
+	case RF3320:
+	case RF5370:
+	case RF5390:
+		break;
+	default:
+		ERROR(rt2x00dev, "Invalid RF chipset 0x%x detected.\n",
+		      rt2x00dev->chip.rf);
 		return -ENODEV;
 	}
 
···
 	xenvif_get(vif);

 	rtnl_lock();
-	if (netif_running(vif->dev))
-		xenvif_up(vif);
 	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
 		dev_set_mtu(vif->dev, ETH_DATA_LEN);
 	netdev_update_features(vif->dev);
 	netif_carrier_on(vif->dev);
+	if (netif_running(vif->dev))
+		xenvif_up(vif);
 	rtnl_unlock();

 	return 0;
+5-1
drivers/pci/pci.c
···
 unsigned long pci_hotplug_io_size  = DEFAULT_HOTPLUG_IO_SIZE;
 unsigned long pci_hotplug_mem_size = DEFAULT_HOTPLUG_MEM_SIZE;

-enum pcie_bus_config_types pcie_bus_config = PCIE_BUS_SAFE;
+enum pcie_bus_config_types pcie_bus_config = PCIE_BUS_TUNE_OFF;

 /*
  * The default CLS is used if arch didn't set CLS explicitly and not
···
 		pci_hotplug_io_size = memparse(str + 9, &str);
 	} else if (!strncmp(str, "hpmemsize=", 10)) {
 		pci_hotplug_mem_size = memparse(str + 10, &str);
+	} else if (!strncmp(str, "pcie_bus_tune_off", 17)) {
+		pcie_bus_config = PCIE_BUS_TUNE_OFF;
 	} else if (!strncmp(str, "pcie_bus_safe", 13)) {
 		pcie_bus_config = PCIE_BUS_SAFE;
 	} else if (!strncmp(str, "pcie_bus_perf", 13)) {
 		pcie_bus_config = PCIE_BUS_PERFORMANCE;
+	} else if (!strncmp(str, "pcie_bus_peer2peer", 18)) {
+		pcie_bus_config = PCIE_BUS_PEER2PEER;
 	} else {
 		printk(KERN_ERR "PCI: Unknown option `%s'\n",
 				str);
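The new `pcie_bus_tune_off` and `pcie_bus_peer2peer` options slot into the `strncmp()` prefix-dispatch chain used for `pci=` boot parameters. A minimal userspace sketch of that dispatch pattern follows; the enum names here are illustrative stand-ins, not the kernel's exact `pcie_bus_config_types` definition:

```c
#include <assert.h>
#include <string.h>

/* Stand-in for pcie_bus_config_types; names are illustrative only. */
enum bus_cfg { CFG_UNKNOWN, CFG_TUNE_OFF, CFG_SAFE, CFG_PERF, CFG_PEER2PEER };

/* Dispatch one boot-parameter token the way pci_setup() does: match each
 * literal option by strncmp() against its exact length. */
static enum bus_cfg parse_pcie_bus_opt(const char *str)
{
	if (!strncmp(str, "pcie_bus_tune_off", 17))
		return CFG_TUNE_OFF;
	else if (!strncmp(str, "pcie_bus_safe", 13))
		return CFG_SAFE;
	else if (!strncmp(str, "pcie_bus_perf", 13))
		return CFG_PERF;
	else if (!strncmp(str, "pcie_bus_peer2peer", 18))
		return CFG_PEER2PEER;
	return CFG_UNKNOWN;
}
```

Because each comparison uses the option's full length, `pcie_bus_perf` and `pcie_bus_peer2peer` cannot shadow one another even though they share a long common prefix.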
+15-2
drivers/pci/probe.c
···
 	 * will occur as normal.
 	 */
 	if (dev->is_hotplug_bridge && (!list_is_singular(&dev->bus->devices) ||
-	    dev->bus->self->pcie_type != PCI_EXP_TYPE_ROOT_PORT))
+	    (dev->bus->self &&
+	     dev->bus->self->pcie_type != PCI_EXP_TYPE_ROOT_PORT)))
 		*smpss = 0;

 	if (*smpss > dev->pcie_mpss)
···
  */
 void pcie_bus_configure_settings(struct pci_bus *bus, u8 mpss)
 {
-	u8 smpss = mpss;
+	u8 smpss;

 	if (!pci_is_pcie(bus->self))
 		return;

+	if (pcie_bus_config == PCIE_BUS_TUNE_OFF)
+		return;
+
+	/* FIXME - Peer to peer DMA is possible, though the endpoint would need
+	 * to be aware to the MPS of the destination.  To work around this,
+	 * simply force the MPS of the entire system to the smallest possible.
+	 */
+	if (pcie_bus_config == PCIE_BUS_PEER2PEER)
+		smpss = 0;
+
 	if (pcie_bus_config == PCIE_BUS_SAFE) {
+		smpss = mpss;
+
 		pcie_find_smpss(bus->self, &smpss);
 		pci_walk_bus(bus, pcie_find_smpss, &smpss);
 	}
···
 static int console_subchannel_in_use;

 /*
- * Use tpi to get a pending interrupt, call the interrupt handler and
- * return a pointer to the subchannel structure.
+ * Use cio_tpi to get a pending interrupt and call the interrupt handler.
+ * Return non-zero if an interrupt was processed, zero otherwise.
  */
 static int cio_tpi(void)
 {
···
 	tpi_info = (struct tpi_info *)&S390_lowcore.subchannel_id;
 	if (tpi(NULL) != 1)
 		return 0;
+	if (tpi_info->adapter_IO) {
+		do_adapter_IO(tpi_info->isc);
+		return 1;
+	}
 	irb = (struct irb *)&S390_lowcore.irb;
 	/* Store interrupt response block to lowcore. */
 	if (tsch(tpi_info->schid, irb) != 0)
···
 	  # (temporary): known alpha quality driver
 	depends on EXPERIMENTAL
 	select SCSI_SAS_LIBSAS
+	select SCSI_SAS_HOST_SMP
 	---help---
 	  This driver supports the 6Gb/s SAS capabilities of the storage
 	  control unit found in the Intel(R) C600 series chipset.
···
 	u8 flogi_maddr[ETH_ALEN];
 	const struct net_device_ops *ops;

+	rtnl_lock();
+
 	/*
 	 * Don't listen for Ethernet packets anymore.
 	 * synchronize_net() ensures that the packet handlers are not running
···
 			FCOE_NETDEV_DBG(netdev, "Failed to disable FCoE"
 					" specific feature for LLD.\n");
 	}
+
+	rtnl_unlock();

 	/* Release the self-reference taken during fcoe_interface_create() */
 	fcoe_interface_put(fcoe);
···
 	fcoe_if_destroy(port->lport);

 	/* Do not tear down the fcoe interface for NPIV port */
-	if (!npiv) {
-		rtnl_lock();
+	if (!npiv)
 		fcoe_interface_cleanup(fcoe);
-		rtnl_unlock();
-	}

 	mutex_unlock(&fcoe_config_mutex);
 }
···
 		printk(KERN_ERR "fcoe: Failed to create interface (%s)\n",
 		       netdev->name);
 		rc = -EIO;
+		rtnl_unlock();
 		fcoe_interface_cleanup(fcoe);
-		goto out_nodev;
+		goto out_nortnl;
 	}

 	/* Make this the "master" N_Port */
···
 out_nodev:
 	rtnl_unlock();
+out_nortnl:
 	mutex_unlock(&fcoe_config_mutex);
 	return rc;
 }
+37-20
drivers/scsi/hpsa.c
···
 	BUG_ON(entry < 0 || entry >= HPSA_MAX_SCSI_DEVS_PER_HBA);
 	removed[*nremoved] = h->dev[entry];
 	(*nremoved)++;
+
+	/*
+	 * New physical devices won't have target/lun assigned yet
+	 * so we need to preserve the values in the slot we are replacing.
+	 */
+	if (new_entry->target == -1) {
+		new_entry->target = h->dev[entry]->target;
+		new_entry->lun = h->dev[entry]->lun;
+	}
+
 	h->dev[entry] = new_entry;
 	added[*nadded] = new_entry;
 	(*nadded)++;
···
 }

 static int hpsa_update_device_info(struct ctlr_info *h,
-	unsigned char scsi3addr[], struct hpsa_scsi_dev_t *this_device)
+	unsigned char scsi3addr[], struct hpsa_scsi_dev_t *this_device,
+	unsigned char *is_OBDR_device)
 {
-#define OBDR_TAPE_INQ_SIZE 49
+
+#define OBDR_SIG_OFFSET 43
+#define OBDR_TAPE_SIG "$DR-10"
+#define OBDR_SIG_LEN (sizeof(OBDR_TAPE_SIG) - 1)
+#define OBDR_TAPE_INQ_SIZE (OBDR_SIG_OFFSET + OBDR_SIG_LEN)
+
 	unsigned char *inq_buff;
+	unsigned char *obdr_sig;

 	inq_buff = kzalloc(OBDR_TAPE_INQ_SIZE, GFP_KERNEL);
 	if (!inq_buff)
···
 		hpsa_get_raid_level(h, scsi3addr, &this_device->raid_level);
 	else
 		this_device->raid_level = RAID_UNKNOWN;
+
+	if (is_OBDR_device) {
+		/* See if this is a One-Button-Disaster-Recovery device
+		 * by looking for "$DR-10" at offset 43 in inquiry data.
+		 */
+		obdr_sig = &inq_buff[OBDR_SIG_OFFSET];
+		*is_OBDR_device = (this_device->devtype == TYPE_ROM &&
+					strncmp(obdr_sig, OBDR_TAPE_SIG,
+						OBDR_SIG_LEN) == 0);
+	}

 	kfree(inq_buff);
 	return 0;
···
 		return 0;
 	}

-	if (hpsa_update_device_info(h, scsi3addr, this_device))
+	if (hpsa_update_device_info(h, scsi3addr, this_device, NULL))
 		return 0;
 	(*nmsa2xxx_enclosures)++;
 	hpsa_set_bus_target_lun(this_device, bus, target, 0);
···
 	 */
 	struct ReportLUNdata *physdev_list = NULL;
 	struct ReportLUNdata *logdev_list = NULL;
-	unsigned char *inq_buff = NULL;
 	u32 nphysicals = 0;
 	u32 nlogicals = 0;
 	u32 ndev_allocated = 0;
···
 		GFP_KERNEL);
 	physdev_list = kzalloc(reportlunsize, GFP_KERNEL);
 	logdev_list = kzalloc(reportlunsize, GFP_KERNEL);
-	inq_buff = kmalloc(OBDR_TAPE_INQ_SIZE, GFP_KERNEL);
 	tmpdevice = kzalloc(sizeof(*tmpdevice), GFP_KERNEL);

-	if (!currentsd || !physdev_list || !logdev_list ||
-		!inq_buff || !tmpdevice) {
+	if (!currentsd || !physdev_list || !logdev_list || !tmpdevice) {
 		dev_err(&h->pdev->dev, "out of memory\n");
 		goto out;
 	}
···
 	/* adjust our table of devices */
 	nmsa2xxx_enclosures = 0;
 	for (i = 0; i < nphysicals + nlogicals + 1; i++) {
-		u8 *lunaddrbytes;
+		u8 *lunaddrbytes, is_OBDR = 0;

 		/* Figure out where the LUN ID info is coming from */
 		lunaddrbytes = figure_lunaddrbytes(h, raid_ctlr_position,
···
 			continue;

 		/* Get device type, vendor, model, device id */
-		if (hpsa_update_device_info(h, lunaddrbytes, tmpdevice))
+		if (hpsa_update_device_info(h, lunaddrbytes, tmpdevice,
+							&is_OBDR))
 			continue; /* skip it if we can't talk to it. */
 		figure_bus_target_lun(h, lunaddrbytes, &bus, &target, &lun,
 			tmpdevice);
···
 		hpsa_set_bus_target_lun(this_device, bus, target, lun);

 		switch (this_device->devtype) {
-		case TYPE_ROM: {
+		case TYPE_ROM:
 			/* We don't *really* support actual CD-ROM devices,
 			 * just "One Button Disaster Recovery" tape drive
 			 * which temporarily pretends to be a CD-ROM drive.
···
 			 * device by checking for "$DR-10" in bytes 43-48 of
 			 * the inquiry data.
 			 */
-			char obdr_sig[7];
-#define OBDR_TAPE_SIG "$DR-10"
-			strncpy(obdr_sig, &inq_buff[43], 6);
-			obdr_sig[6] = '\0';
-			if (strncmp(obdr_sig, OBDR_TAPE_SIG, 6) != 0)
-				/* Not OBDR device, ignore it. */
-				break;
-			}
-			ncurrent++;
+			if (is_OBDR)
+				ncurrent++;
 			break;
 		case TYPE_DISK:
 			if (i < nphysicals)
···
 	for (i = 0; i < ndev_allocated; i++)
 		kfree(currentsd[i]);
 	kfree(currentsd);
-	kfree(inq_buff);
 	kfree(physdev_list);
 	kfree(logdev_list);
 }
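The hpsa change centralizes the One-Button-Disaster-Recovery detection: the "$DR-10" signature is now checked once inside `hpsa_update_device_info()` at a fixed offset in the inquiry data. A small userspace sketch of just the signature test (the `is_obdr_sig()` and `demo_obdr()` helpers are hypothetical, for illustration):

```c
#include <assert.h>
#include <string.h>

#define OBDR_SIG_OFFSET 43
#define OBDR_TAPE_SIG "$DR-10"
#define OBDR_SIG_LEN (sizeof(OBDR_TAPE_SIG) - 1)
#define OBDR_TAPE_INQ_SIZE (OBDR_SIG_OFFSET + OBDR_SIG_LEN)

/* Return 1 if the inquiry buffer carries the "$DR-10" OBDR signature
 * at byte offset 43, mirroring the check the patch adds. */
static int is_obdr_sig(const unsigned char *inq_buff)
{
	return memcmp(inq_buff + OBDR_SIG_OFFSET, OBDR_TAPE_SIG,
		      OBDR_SIG_LEN) == 0;
}

/* An all-zero inquiry buffer: no signature present. */
static const unsigned char blank_inq[OBDR_TAPE_INQ_SIZE];

/* Build a buffer with the signature planted at offset 43 and test it. */
static int demo_obdr(void)
{
	unsigned char inq[OBDR_TAPE_INQ_SIZE] = {0};

	memcpy(inq + OBDR_SIG_OFFSET, OBDR_TAPE_SIG, OBDR_SIG_LEN);
	return is_obdr_sig(inq);
}
```

Note how deriving `OBDR_TAPE_INQ_SIZE` from offset plus signature length (as the patch does) keeps the inquiry read just large enough to cover the signature.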
+12-1
drivers/scsi/isci/host.c
···
 		break;

 	case SCU_COMPLETION_TYPE_EVENT:
+		sci_controller_event_completion(ihost, ent);
+		break;
+
 	case SCU_COMPLETION_TYPE_NOTIFY: {
 		event_cycle ^= ((event_get+1) & SCU_MAX_EVENTS) <<
 			       (SMU_COMPLETION_QUEUE_GET_EVENT_CYCLE_BIT_SHIFT - SCU_MAX_EVENTS_SHIFT);
···
 	struct isci_request *request;
 	struct isci_request *next_request;
 	struct sas_task *task;
+	u16 active;

 	INIT_LIST_HEAD(&completed_request_list);
 	INIT_LIST_HEAD(&errored_request_list);
···
 		}
 	}

+	/* the coalesence timeout doubles at each encoding step, so
+	 * update it based on the ilog2 value of the outstanding requests
+	 */
+	active = isci_tci_active(ihost);
+	writel(SMU_ICC_GEN_VAL(NUMBER, active) |
+	       SMU_ICC_GEN_VAL(TIMER, ISCI_COALESCE_BASE + ilog2(active)),
+	       &ihost->smu_registers->interrupt_coalesce_control);
 }

 /**
···
 	struct isci_host *ihost = container_of(sm, typeof(*ihost), sm);

 	/* set the default interrupt coalescence number and timeout value. */
-	sci_controller_set_interrupt_coalescence(ihost, 0x10, 250);
+	sci_controller_set_interrupt_coalescence(ihost, 0, 0);
 }

 static void sci_controller_ready_state_exit(struct sci_base_state_machine *sm)
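The coalescence update computes a timer encoding from the number of outstanding requests: since the hardware doubles the timeout at each encoding step, the driver adds `ilog2(active)` to a fixed baseline. A minimal userspace sketch of that arithmetic, with a stand-in for the kernel's `ilog2()`:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's ilog2(): floor(log2(n)), n > 0. */
static unsigned int ilog2_u32(uint32_t n)
{
	unsigned int r = 0;

	while (n >>= 1)
		r++;
	return r;
}

/* Baseline from the isci patch: 9 == 3 to 5us interrupt delay per command. */
#define ISCI_COALESCE_BASE 9

/* Timer encoding to program: each +1 step doubles the hardware timeout,
 * so scale logarithmically with the count of active requests. */
static unsigned int coalesce_timer(uint32_t active)
{
	return ISCI_COALESCE_BASE + ilog2_u32(active);
}
```

The effect is that the coalescence window grows only logarithmically as the queue deepens, rather than linearly.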
+3
drivers/scsi/isci/host.h
···
 #define ISCI_TAG_SEQ(tag) (((tag) >> 12) & (SCI_MAX_SEQ-1))
 #define ISCI_TAG_TCI(tag) ((tag) & (SCI_MAX_IO_REQUESTS-1))

+/* interrupt coalescing baseline: 9 == 3 to 5us interrupt delay per command */
+#define ISCI_COALESCE_BASE 9
+
 /* expander attached sata devices require 3 rnc slots */
 static inline int sci_remote_device_node_count(struct isci_remote_device *idev)
 {
···
 	u32 parity_count = 0;
 	u32 llctl, link_rate;
 	u32 clksm_value = 0;
+	u32 sp_timeouts = 0;

 	iphy->link_layer_registers = reg;
···
 	}
 	llctl |= SCU_SAS_LLCTL_GEN_VAL(MAX_LINK_RATE, link_rate);
 	writel(llctl, &iphy->link_layer_registers->link_layer_control);
+
+	sp_timeouts = readl(&iphy->link_layer_registers->sas_phy_timeouts);
+
+	/* Clear the default 0x36 (54us) RATE_CHANGE timeout value. */
+	sp_timeouts &= ~SCU_SAS_PHYTOV_GEN_VAL(RATE_CHANGE, 0xFF);
+
+	/* Set RATE_CHANGE timeout value to 0x3B (59us).  This ensures SCU can
+	 * lock with 3Gb drive when SCU max rate is set to 1.5Gb.
+	 */
+	sp_timeouts |= SCU_SAS_PHYTOV_GEN_VAL(RATE_CHANGE, 0x3B);
+
+	writel(sp_timeouts, &iphy->link_layer_registers->sas_phy_timeouts);

 	if (is_a2(ihost->pdev)) {
 		/* Program the max ARB time for the PHY to 700us so we inter-operate with
···
 		sci_change_state(&ireq->sm, SCI_REQ_ABORTING);
 		return SCI_SUCCESS;
 	case SCI_REQ_TASK_WAIT_TC_RESP:
+		/* The task frame was already confirmed to have been
+		 * sent by the SCU HW.  Since the state machine is
+		 * now only waiting for the task response itself,
+		 * abort the request and complete it immediately
+		 * and don't wait for the task response.
+		 */
 		sci_change_state(&ireq->sm, SCI_REQ_ABORTING);
 		sci_change_state(&ireq->sm, SCI_REQ_COMPLETED);
 		return SCI_SUCCESS;
 	case SCI_REQ_ABORTING:
-		sci_change_state(&ireq->sm, SCI_REQ_COMPLETED);
-		return SCI_SUCCESS;
+		/* If a request has a termination requested twice, return
+		 * a failure indication, since HW confirmation of the first
+		 * abort is still outstanding.
+		 */
 	case SCI_REQ_COMPLETED:
 	default:
 		dev_warn(&ireq->owning_controller->pdev->dev,
···
 	}
 }

-static void isci_request_process_stp_response(struct sas_task *task,
-	void *response_buffer)
+static void isci_process_stp_response(struct sas_task *task, struct dev_to_host_fis *fis)
 {
-	struct dev_to_host_fis *d2h_reg_fis = response_buffer;
 	struct task_status_struct *ts = &task->task_status;
 	struct ata_task_resp *resp = (void *)&ts->buf[0];

-	resp->frame_len = le16_to_cpu(*(__le16 *)(response_buffer + 6));
-	memcpy(&resp->ending_fis[0], response_buffer + 16, 24);
+	resp->frame_len = sizeof(*fis);
+	memcpy(resp->ending_fis, fis, sizeof(*fis));
 	ts->buf_valid_size = sizeof(*resp);

-	/**
-	 * If the device fault bit is set in the status register, then
+	/* If the device fault bit is set in the status register, then
 	 * set the sense data and return.
 	 */
-	if (d2h_reg_fis->status & ATA_DF)
+	if (fis->status & ATA_DF)
 		ts->stat = SAS_PROTO_RESPONSE;
 	else
 		ts->stat = SAM_STAT_GOOD;
···
 {
 	struct sas_task *task = isci_request_access_task(request);
 	struct ssp_response_iu *resp_iu;
-	void *resp_buf;
 	unsigned long task_flags;
 	struct isci_remote_device *idev = isci_lookup_device(task->dev);
 	enum service_response response = SAS_TASK_UNDELIVERED;
···
 			task);

 	if (sas_protocol_ata(task->task_proto)) {
-		resp_buf = &request->stp.rsp;
-		isci_request_process_stp_response(task,
-						  resp_buf);
+		isci_process_stp_response(task, &request->stp.rsp);
 	} else if (SAS_PROTOCOL_SSP == task->task_proto) {

 		/* crack the iu response buffer. */
+1-1
drivers/scsi/isci/unsolicited_frame_control.c
···
 	 */
 	buf_len = SCU_MAX_UNSOLICITED_FRAMES * SCU_UNSOLICITED_FRAME_BUFFER_SIZE;
 	header_len = SCU_MAX_UNSOLICITED_FRAMES * sizeof(struct scu_unsolicited_frame_header);
-	size = buf_len + header_len + SCU_MAX_UNSOLICITED_FRAMES * sizeof(dma_addr_t);
+	size = buf_len + header_len + SCU_MAX_UNSOLICITED_FRAMES * sizeof(uf_control->address_table.array[0]);

 	/*
 	 * The Unsolicited Frame buffers are set at the start of the UF
+1-1
drivers/scsi/isci/unsolicited_frame_control.h
···
 	 * starting address of the UF address table.
 	 * 64-bit pointers are required by the hardware.
 	 */
-	dma_addr_t *array;
+	u64 *array;

 	/**
 	 * This field specifies the physical address location for the UF
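The pair of `unsolicited_frame_control` changes fixes a sizing mismatch: the hardware requires 64-bit address-table entries, but `dma_addr_t` is only 32 bits on some 32-bit builds, so the table type becomes `u64` and the allocation is sized from the element itself rather than a hand-written type. A userspace sketch of that sizing idiom (the struct and helper here are illustrative, not the driver's real layout):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The hardware wants 64-bit entries regardless of the build's pointer or
 * dma_addr_t width, so the element type is fixed at uint64_t. */
struct uf_address_table {
	uint64_t *array;	/* was dma_addr_t *array */
};

/* Size the allocation from the element expression itself: if the element
 * type ever changes, the byte count follows automatically. */
static size_t table_bytes(const struct uf_address_table *t,
			  unsigned int nframes)
{
	return nframes * sizeof(t->array[0]);
}

static struct uf_address_table demo_table;
```

`sizeof(t->array[0])` does not dereference the pointer at run time, so it is safe even before the table is allocated.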
+41-18
drivers/scsi/libfc/fc_exch.c
···
 	 */
 	error = lport->tt.frame_send(lport, fp);

+	if (fh->fh_type == FC_TYPE_BLS)
+		return error;
+
 	/*
 	 * Update the exchange and sequence flags,
 	 * assuming all frames for the sequence have been sent.
···
 }

 /**
- * fc_seq_exch_abort() - Abort an exchange and sequence
- * @req_sp:	The sequence to be aborted
+ * fc_exch_abort_locked() - Abort an exchange
+ * @ep:	The exchange to be aborted
  * @timer_msec: The period of time to wait before aborting
  *
- * Generally called because of a timeout or an abort from the upper layer.
+ * Locking notes:  Called with exch lock held
+ *
+ * Return value: 0 on success else error code
  */
-static int fc_seq_exch_abort(const struct fc_seq *req_sp,
-			     unsigned int timer_msec)
+static int fc_exch_abort_locked(struct fc_exch *ep,
+				unsigned int timer_msec)
 {
 	struct fc_seq *sp;
-	struct fc_exch *ep;
 	struct fc_frame *fp;
 	int error;

-	ep = fc_seq_exch(req_sp);
-
-	spin_lock_bh(&ep->ex_lock);
 	if (ep->esb_stat & (ESB_ST_COMPLETE | ESB_ST_ABNORMAL) ||
-	    ep->state & (FC_EX_DONE | FC_EX_RST_CLEANUP)) {
-		spin_unlock_bh(&ep->ex_lock);
+	    ep->state & (FC_EX_DONE | FC_EX_RST_CLEANUP))
 		return -ENXIO;
-	}

 	/*
 	 * Send the abort on a new sequence if possible.
 	 */
 	sp = fc_seq_start_next_locked(&ep->seq);
-	if (!sp) {
-		spin_unlock_bh(&ep->ex_lock);
+	if (!sp)
 		return -ENOMEM;
-	}

 	ep->esb_stat |= ESB_ST_SEQ_INIT | ESB_ST_ABNORMAL;
 	if (timer_msec)
 		fc_exch_timer_set_locked(ep, timer_msec);
-	spin_unlock_bh(&ep->ex_lock);

 	/*
 	 * If not logged into the fabric, don't send ABTS but leave
···
 		error = fc_seq_send(ep->lp, sp, fp);
 	} else
 		error = -ENOBUFS;
+	return error;
+}
+
+/**
+ * fc_seq_exch_abort() - Abort an exchange and sequence
+ * @req_sp:	The sequence to be aborted
+ * @timer_msec: The period of time to wait before aborting
+ *
+ * Generally called because of a timeout or an abort from the upper layer.
+ *
+ * Return value: 0 on success else error code
+ */
+static int fc_seq_exch_abort(const struct fc_seq *req_sp,
+			     unsigned int timer_msec)
+{
+	struct fc_exch *ep;
+	int error;
+
+	ep = fc_seq_exch(req_sp);
+	spin_lock_bh(&ep->ex_lock);
+	error = fc_exch_abort_locked(ep, timer_msec);
+	spin_unlock_bh(&ep->ex_lock);
 	return error;
 }
···
 	int rc = 1;

 	spin_lock_bh(&ep->ex_lock);
+	fc_exch_abort_locked(ep, 0);
 	ep->state |= FC_EX_RST_CLEANUP;
 	if (cancel_delayed_work(&ep->timeout_work))
 		atomic_dec(&ep->ex_refcnt);	/* drop hold for timer */
···
 	struct fc_exch *ep;
 	struct fc_seq *sp = NULL;
 	struct fc_frame_header *fh;
+	struct fc_fcp_pkt *fsp = NULL;
 	int rc = 1;

 	ep = fc_exch_alloc(lport, fp);
···
 	fc_exch_setup_hdr(ep, fp, ep->f_ctl);
 	sp->cnt++;

-	if (ep->xid <= lport->lro_xid && fh->fh_r_ctl == FC_RCTL_DD_UNSOL_CMD)
+	if (ep->xid <= lport->lro_xid && fh->fh_r_ctl == FC_RCTL_DD_UNSOL_CMD) {
+		fsp = fr_fsp(fp);
 		fc_fcp_ddp_setup(fr_fsp(fp), ep->xid);
+	}

 	if (unlikely(lport->tt.frame_send(lport, fp)))
 		goto err;
···
 	spin_unlock_bh(&ep->ex_lock);
 	return sp;
 err:
-	fc_fcp_ddp_done(fr_fsp(fp));
+	if (fsp)
+		fc_fcp_ddp_done(fsp);
 	rc = fc_exch_done_locked(ep);
 	spin_unlock_bh(&ep->ex_lock);
 	if (!rc)
+9-2
drivers/scsi/libfc/fc_fcp.c
···
 	struct fc_fcp_internal *si;
 	int rc = FAILED;
 	unsigned long flags;
+	int rval;
+
+	rval = fc_block_scsi_eh(sc_cmd);
+	if (rval)
+		return rval;

 	lport = shost_priv(sc_cmd->device->host);
 	if (lport->state != LPORT_ST_READY)
···
 	int rc = FAILED;
 	int rval;

-	rval = fc_remote_port_chkready(rport);
+	rval = fc_block_scsi_eh(sc_cmd);
 	if (rval)
-		goto out;
+		return rval;

 	lport = shost_priv(sc_cmd->device->host);
···
 	unsigned long wait_tmo;

 	FC_SCSI_DBG(lport, "Resetting host\n");
+
+	fc_block_scsi_eh(sc_cmd);

 	lport->tt.lport_reset(lport);
 	wait_tmo = jiffies + FC_HOST_RESET_TIMEOUT;
+10-1
drivers/scsi/libfc/fc_lport.c
···
  */

 #include <linux/timer.h>
+#include <linux/delay.h>
 #include <linux/slab.h>
 #include <asm/unaligned.h>

···
 			       FCH_EVT_LIPRESET, 0);
 	fc_vports_linkchange(lport);
 	fc_lport_reset_locked(lport);
-	if (lport->link_up)
+	if (lport->link_up) {
+		/*
+		 * Wait upto resource allocation time out before
+		 * doing re-login since incomplete FIP exchanged
+		 * from last session may collide with exchanges
+		 * in new session.
+		 */
+		msleep(lport->r_a_tov);
 		fc_lport_enter_flogi(lport);
+	}
 }

 /**
+1-1
drivers/scsi/libsas/sas_expander.c
···
 	list_for_each_entry(ch, &ex->children, siblings) {
 		if (ch->dev_type == EDGE_DEV || ch->dev_type == FANOUT_DEV) {
 			res = sas_find_bcast_dev(ch, src_dev);
-			if (src_dev)
+			if (*src_dev)
 				return res;
 		}
 	}
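The sas_expander one-liner fixes a classic out-parameter bug: with `src_dev` passed as a pointer-to-pointer, `if (src_dev)` is always true, so the walk returned after the very first child instead of continuing until a device was actually found; the test must dereference, `if (*src_dev)`. A minimal sketch of the bug class with a hypothetical search over an int array:

```c
#include <assert.h>
#include <stddef.h>

/* Out-parameter search: on a hit, *dst points at the element; when
 * nothing matches, *dst is left untouched (caller initializes to NULL).
 * Note the caller must test *dst, not dst - dst itself is never NULL. */
static int find_even(const int *v, size_t n, const int **dst)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (v[i] % 2 == 0) {
			*dst = &v[i];
			return 0;
		}
	}
	return 0;
}

static const int demo_odd[] = {1, 3, 5};
static const int demo_mix[] = {1, 4, 5};

/* Run one search and hand back the populated (or still-NULL) result. */
static const int *demo_find(const int *v)
{
	const int *hit = NULL;

	find_even(v, 3, &hit);
	return hit;
}
```

In the first call below `&hit` is perfectly valid the whole time, which is exactly why testing the outer pointer (as the old libsas code did) can never detect "not found".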
+5-2
drivers/scsi/qla2xxx/qla_attr.c
···
 		fc_vport_set_state(fc_vport, FC_VPORT_LINKDOWN);
 	}

-	if ((IS_QLA25XX(ha) || IS_QLA81XX(ha)) && ql2xenabledif) {
+	if (IS_T10_PI_CAPABLE(ha) && ql2xenabledif) {
 		if (ha->fw_attributes & BIT_4) {
+			int prot = 0;
 			vha->flags.difdix_supported = 1;
 			ql_dbg(ql_dbg_user, vha, 0x7082,
 			    "Registered for DIF/DIX type 1 and 3 protection.\n");
+			if (ql2xenabledif == 1)
+				prot = SHOST_DIX_TYPE0_PROTECTION;
 			scsi_host_set_prot(vha->host,
-			    SHOST_DIF_TYPE1_PROTECTION
+			    prot | SHOST_DIF_TYPE1_PROTECTION
 			    | SHOST_DIF_TYPE2_PROTECTION
 			    | SHOST_DIF_TYPE3_PROTECTION
 			    | SHOST_DIX_TYPE1_PROTECTION
+18-18
drivers/scsi/qla2xxx/qla_dbg.c
···
 /*
  * Table for showing the current message id in use for particular level
  * Change this table for addition of log/debug messages.
- * -----------------------------------------------------
- * |             Level            |   Last Value Used   |
- * -----------------------------------------------------
- * | Module Init and Probe        |       0x0116        |
- * | Mailbox commands             |       0x111e        |
- * | Device Discovery             |       0x2083        |
- * | Queue Command and IO tracing |       0x302e        |
- * | DPC Thread                   |       0x401c        |
- * | Async Events                 |       0x5059        |
- * | Timer Routines               |       0x600d        |
- * | User Space Interactions      |       0x709c        |
- * | Task Management              |       0x8043        |
- * | AER/EEH                      |       0x900f        |
- * | Virtual Port                 |       0xa007        |
- * | ISP82XX Specific             |       0xb027        |
- * | MultiQ                       |       0xc00b        |
- * | Misc                         |       0xd00b        |
- * -----------------------------------------------------
+ * ----------------------------------------------------------------------
+ * |             Level            |   Last Value Used   |     Holes     |
+ * ----------------------------------------------------------------------
+ * | Module Init and Probe        |       0x0116        |               |
+ * | Mailbox commands             |       0x1126        |               |
+ * | Device Discovery             |       0x2083        |               |
+ * | Queue Command and IO tracing |       0x302e        |    0x3008     |
+ * | DPC Thread                   |       0x401c        |               |
+ * | Async Events                 |       0x5059        |               |
+ * | Timer Routines               |       0x600d        |               |
+ * | User Space Interactions      |       0x709d        |               |
+ * | Task Management              |       0x8041        |               |
+ * | AER/EEH                      |       0x900f        |               |
+ * | Virtual Port                 |       0xa007        |               |
+ * | ISP82XX Specific             |       0xb04f        |               |
+ * | MultiQ                       |       0xc00b        |               |
+ * | Misc                         |       0xd00b        |               |
+ * ----------------------------------------------------------------------
  */

 #include "qla_def.h"
···
 	/*
 	 * If DIF Error is set in comp_status, these additional fields are
 	 * defined:
+	 *
+	 * !!! NOTE: Firmware sends expected/actual DIF data in big endian
+	 * format; but all of the "data" field gets swab32-d in the beginning
+	 * of qla2x00_status_entry().
+	 *
 	 * &data[10] : uint8_t report_runt_bg[2];	- computed guard
 	 * &data[12] : uint8_t actual_dif[8];		- DIF Data received
 	 * &data[20] : uint8_t expected_dif[8];		- DIF Data computed
-3
drivers/scsi/qla2xxx/qla_init.c
···
 	req = vha->req;
 	rsp = req->rsp;

-	atomic_set(&vha->loop_state, LOOP_UPDATE);
 	clear_bit(ISP_ABORT_RETRY, &vha->dpc_flags);
 	if (vha->flags.online) {
 		if (!(rval = qla2x00_fw_ready(vha))) {
 			/* Wait at most MAX_TARGET RSCNs for a stable link. */
 			wait_time = 256;
 			do {
-				atomic_set(&vha->loop_state, LOOP_UPDATE);
-
 				/* Issue a marker after FW becomes ready. */
 				qla2x00_marker(vha, req, rsp, 0, 0,
 					MK_SYNC_ALL);
+29
drivers/scsi/qla2xxx/qla_inline.h
···
 		    fcport->d_id.b.al_pa);
 	}
 }
+
+static inline int
+qla2x00_hba_err_chk_enabled(srb_t *sp)
+{
+	/*
+	 * Uncomment when corresponding SCSI changes are done.
+	 *
+	if (!sp->cmd->prot_chk)
+		return 0;
+	 *
+	 */
+
+	switch (scsi_get_prot_op(sp->cmd)) {
+	case SCSI_PROT_READ_STRIP:
+	case SCSI_PROT_WRITE_INSERT:
+		if (ql2xenablehba_err_chk >= 1)
+			return 1;
+		break;
+	case SCSI_PROT_READ_PASS:
+	case SCSI_PROT_WRITE_PASS:
+		if (ql2xenablehba_err_chk >= 2)
+			return 1;
+		break;
+	case SCSI_PROT_READ_INSERT:
+	case SCSI_PROT_WRITE_STRIP:
+		return 1;
+	}
+	return 0;
+}
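`qla2x00_hba_err_chk_enabled()` turns the single module parameter into a tiered policy: level 1 enables checking where the HBA strips or inserts tags toward the wire, level 2 also covers pass-through ops, and host-side insert/strip is always checked. A userspace mirror of just that mapping, with illustrative stand-in enum values in place of the real SCSI protection-op constants:

```c
#include <assert.h>

/* Illustrative stand-ins for the SCSI protection ops; values arbitrary. */
enum prot_op {
	PROT_READ_STRIP, PROT_WRITE_INSERT,
	PROT_READ_PASS, PROT_WRITE_PASS,
	PROT_READ_INSERT, PROT_WRITE_STRIP,
};

/* Mirror of the patch's policy switch:
 *   level >= 1: HBA-generated tags (strip on read / insert on write)
 *   level >= 2: additionally pass-through (tags flow end to end)
 *   host-side insert/strip: always checked, regardless of level. */
static int hba_err_chk_enabled(enum prot_op op, int level)
{
	switch (op) {
	case PROT_READ_STRIP:
	case PROT_WRITE_INSERT:
		return level >= 1;
	case PROT_READ_PASS:
	case PROT_WRITE_PASS:
		return level >= 2;
	case PROT_READ_INSERT:
	case PROT_WRITE_STRIP:
		return 1;
	}
	return 0;
}
```

The fall-through `break; ... return 0;` shape of the original is flattened here into direct boolean returns; the decision table is the same.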
+235-47
drivers/scsi/qla2xxx/qla_iocb.c
···
  *
  */
 static inline void
-qla24xx_set_t10dif_tags(struct scsi_cmnd *cmd, struct fw_dif_context *pkt,
+qla24xx_set_t10dif_tags(srb_t *sp, struct fw_dif_context *pkt,
     unsigned int protcnt)
 {
-	struct sd_dif_tuple *spt;
+	struct scsi_cmnd *cmd = sp->cmd;
 	scsi_qla_host_t *vha = shost_priv(cmd->device->host);
-	unsigned char op = scsi_get_prot_op(cmd);

 	switch (scsi_get_prot_type(cmd)) {
-	/* For TYPE 0 protection: no checking */
 	case SCSI_PROT_DIF_TYPE0:
-		pkt->ref_tag_mask[0] = 0x00;
-		pkt->ref_tag_mask[1] = 0x00;
-		pkt->ref_tag_mask[2] = 0x00;
-		pkt->ref_tag_mask[3] = 0x00;
+		/*
+		 * No check for ql2xenablehba_err_chk, as it would be an
+		 * I/O error if hba tag generation is not done.
+		 */
+		pkt->ref_tag = cpu_to_le32((uint32_t)
+		    (0xffffffff & scsi_get_lba(cmd)));
+
+		if (!qla2x00_hba_err_chk_enabled(sp))
+			break;
+
+		pkt->ref_tag_mask[0] = 0xff;
+		pkt->ref_tag_mask[1] = 0xff;
+		pkt->ref_tag_mask[2] = 0xff;
+		pkt->ref_tag_mask[3] = 0xff;
 		break;

 	/*
···
 	 * match LBA in CDB + N
 	 */
 	case SCSI_PROT_DIF_TYPE2:
-		if (!ql2xenablehba_err_chk)
-			break;
-
-		if (scsi_prot_sg_count(cmd)) {
-			spt = page_address(sg_page(scsi_prot_sglist(cmd))) +
-			    scsi_prot_sglist(cmd)[0].offset;
-			pkt->app_tag = swab32(spt->app_tag);
-			pkt->app_tag_mask[0] = 0xff;
-			pkt->app_tag_mask[1] = 0xff;
-		}
+		pkt->app_tag = __constant_cpu_to_le16(0);
+		pkt->app_tag_mask[0] = 0x0;
+		pkt->app_tag_mask[1] = 0x0;

 		pkt->ref_tag = cpu_to_le32((uint32_t)
 		    (0xffffffff & scsi_get_lba(cmd)));
+
+		if (!qla2x00_hba_err_chk_enabled(sp))
+			break;

 		/* enable ALL bytes of the ref tag */
 		pkt->ref_tag_mask[0] = 0xff;
···
 	 * 16 bit app tag.
 	 */
 	case SCSI_PROT_DIF_TYPE1:
-		if (!ql2xenablehba_err_chk)
+		pkt->ref_tag = cpu_to_le32((uint32_t)
+		    (0xffffffff & scsi_get_lba(cmd)));
+		pkt->app_tag = __constant_cpu_to_le16(0);
+		pkt->app_tag_mask[0] = 0x0;
+		pkt->app_tag_mask[1] = 0x0;
+
+		if (!qla2x00_hba_err_chk_enabled(sp))
 			break;

-		if (protcnt && (op == SCSI_PROT_WRITE_STRIP ||
-		    op == SCSI_PROT_WRITE_PASS)) {
-			spt = page_address(sg_page(scsi_prot_sglist(cmd))) +
-			    scsi_prot_sglist(cmd)[0].offset;
-			ql_dbg(ql_dbg_io, vha, 0x3008,
-			    "LBA from user %p, lba = 0x%x for cmd=%p.\n",
-			    spt, (int)spt->ref_tag, cmd);
-			pkt->ref_tag = swab32(spt->ref_tag);
-			pkt->app_tag_mask[0] = 0x0;
-			pkt->app_tag_mask[1] = 0x0;
-		} else {
-			pkt->ref_tag = cpu_to_le32((uint32_t)
-			    (0xffffffff & scsi_get_lba(cmd)));
-			pkt->app_tag = __constant_cpu_to_le16(0);
-			pkt->app_tag_mask[0] = 0x0;
-			pkt->app_tag_mask[1] = 0x0;
-		}
 		/* enable ALL bytes of the ref tag */
 		pkt->ref_tag_mask[0] = 0xff;
 		pkt->ref_tag_mask[1] = 0xff;
···
 	    scsi_get_prot_type(cmd), cmd);
 }

+struct qla2_sgx {
+	dma_addr_t		dma_addr;	/* OUT */
+	uint32_t		dma_len;	/* OUT */
+
+	uint32_t		tot_bytes;	/* IN */
+	struct scatterlist	*cur_sg;	/* IN */
+
+	/* for book keeping, bzero on initial invocation */
+	uint32_t		bytes_consumed;
+	uint32_t		num_bytes;
+	uint32_t		tot_partial;
+
+	/* for debugging */
+	uint32_t		num_sg;
+	srb_t			*sp;
+};
+
+static int
+qla24xx_get_one_block_sg(uint32_t blk_sz, struct qla2_sgx *sgx,
+	uint32_t *partial)
+{
+	struct scatterlist *sg;
+	uint32_t cumulative_partial, sg_len;
+	dma_addr_t sg_dma_addr;
+
+	if (sgx->num_bytes == sgx->tot_bytes)
+		return 0;
+
+	sg = sgx->cur_sg;
+	cumulative_partial = sgx->tot_partial;
+
+	sg_dma_addr = sg_dma_address(sg);
+	sg_len = sg_dma_len(sg);
+
+	sgx->dma_addr = sg_dma_addr + sgx->bytes_consumed;
+
+	if ((cumulative_partial + (sg_len - sgx->bytes_consumed)) >= blk_sz) {
+		sgx->dma_len = (blk_sz - cumulative_partial);
+		sgx->tot_partial = 0;
+		sgx->num_bytes += blk_sz;
+		*partial = 0;
+	} else {
+		sgx->dma_len = sg_len - sgx->bytes_consumed;
+		sgx->tot_partial += sgx->dma_len;
+		*partial = 1;
+	}
+
+	sgx->bytes_consumed += sgx->dma_len;
+
+	if (sg_len == sgx->bytes_consumed) {
+		sg = sg_next(sg);
+		sgx->num_sg++;
+		sgx->cur_sg = sg;
+		sgx->bytes_consumed = 0;
+	}
+
+	return 1;
+}
+
+static int
+qla24xx_walk_and_build_sglist_no_difb(struct qla_hw_data *ha, srb_t *sp,
+	uint32_t *dsd, uint16_t tot_dsds)
+{
+	void *next_dsd;
+	uint8_t avail_dsds = 0;
+	uint32_t dsd_list_len;
+	struct dsd_dma *dsd_ptr;
+	struct scatterlist *sg_prot;
+	uint32_t *cur_dsd = dsd;
+	uint16_t	used_dsds = tot_dsds;
+
+	uint32_t	prot_int;
+	uint32_t	partial;
+	struct qla2_sgx sgx;
+	dma_addr_t	sle_dma;
+	uint32_t	sle_dma_len, tot_prot_dma_len = 0;
+	struct scsi_cmnd *cmd = sp->cmd;
+
+	prot_int = cmd->device->sector_size;
+
+	memset(&sgx, 0, sizeof(struct qla2_sgx));
+	sgx.tot_bytes = scsi_bufflen(sp->cmd);
+	sgx.cur_sg = scsi_sglist(sp->cmd);
+	sgx.sp = sp;
+
+	sg_prot = scsi_prot_sglist(sp->cmd);
+
+	while (qla24xx_get_one_block_sg(prot_int, &sgx, &partial)) {
+
+		sle_dma = sgx.dma_addr;
+		sle_dma_len = sgx.dma_len;
+alloc_and_fill:
+		/* Allocate additional continuation packets? */
+		if (avail_dsds == 0) {
+			avail_dsds = (used_dsds > QLA_DSDS_PER_IOCB) ?
+					QLA_DSDS_PER_IOCB : used_dsds;
+			dsd_list_len = (avail_dsds + 1) * 12;
+			used_dsds -= avail_dsds;
+
+			/* allocate tracking DS */
+			dsd_ptr = kzalloc(sizeof(struct dsd_dma), GFP_ATOMIC);
+			if (!dsd_ptr)
+				return 1;
+
+			/* allocate new list */
+			dsd_ptr->dsd_addr = next_dsd =
+			    dma_pool_alloc(ha->dl_dma_pool, GFP_ATOMIC,
+				&dsd_ptr->dsd_list_dma);
+
+			if (!next_dsd) {
+				/*
+				 * Need to cleanup only this dsd_ptr, rest
+				 * will be done by sp_free_dma()
+				 */
+				kfree(dsd_ptr);
+				return 1;
+			}
+
+			list_add_tail(&dsd_ptr->list,
+			    &((struct crc_context *)sp->ctx)->dsd_list);
+
+			sp->flags |= SRB_CRC_CTX_DSD_VALID;
+
+			/* add new list to cmd iocb or last list */
+			*cur_dsd++ = cpu_to_le32(LSD(dsd_ptr->dsd_list_dma));
+			*cur_dsd++ = cpu_to_le32(MSD(dsd_ptr->dsd_list_dma));
+			*cur_dsd++ = dsd_list_len;
+			cur_dsd = (uint32_t *)next_dsd;
+		}
+		*cur_dsd++ = cpu_to_le32(LSD(sle_dma));
+		*cur_dsd++ = cpu_to_le32(MSD(sle_dma));
+		*cur_dsd++ = cpu_to_le32(sle_dma_len);
+		avail_dsds--;
+
+		if (partial == 0) {
+			/* Got a full protection interval */
+			sle_dma = sg_dma_address(sg_prot) + tot_prot_dma_len;
+			sle_dma_len = 8;
+
+			tot_prot_dma_len += sle_dma_len;
+			if (tot_prot_dma_len == sg_dma_len(sg_prot)) {
+				tot_prot_dma_len = 0;
+				sg_prot = sg_next(sg_prot);
+			}
+
+			partial = 1; /* So as to not re-enter this block */
+			goto alloc_and_fill;
+		}
+	}
+	/* Null termination */
+	*cur_dsd++ = 0;
+	*cur_dsd++ = 0;
+	*cur_dsd++ = 0;
+	return 0;
+}
+
 static int
 qla24xx_walk_and_build_sglist(struct qla_hw_data *ha, srb_t *sp, uint32_t *dsd,
 	uint16_t tot_dsds)
···
 	struct scsi_cmnd *cmd;
 	struct scatterlist *cur_seg;
 	int sgc;
-	uint32_t total_bytes;
+	uint32_t total_bytes = 0;
 	uint32_t data_bytes;
 	uint32_t dif_bytes;
 	uint8_t bundling = 1;
···
 		    __constant_cpu_to_le16(CF_READ_DATA);
 	}

-	tot_prot_dsds = scsi_prot_sg_count(cmd);
-	if (!tot_prot_dsds)
+	if ((scsi_get_prot_op(sp->cmd) == SCSI_PROT_READ_INSERT) ||
+	    (scsi_get_prot_op(sp->cmd) == SCSI_PROT_WRITE_STRIP) ||
+	    (scsi_get_prot_op(sp->cmd) == SCSI_PROT_READ_STRIP) ||
+	    (scsi_get_prot_op(sp->cmd) == SCSI_PROT_WRITE_INSERT))
 		bundling = 0;

 	/* Allocate CRC context from global pool */
···
 	INIT_LIST_HEAD(&crc_ctx_pkt->dsd_list);

-	qla24xx_set_t10dif_tags(cmd, (struct fw_dif_context *)
+	qla24xx_set_t10dif_tags(sp, (struct fw_dif_context *)
 	    &crc_ctx_pkt->ref_tag, tot_prot_dsds);

 	cmd_pkt->crc_context_address[0] = cpu_to_le32(LSD(crc_ctx_dma));
···
 		fcp_cmnd->additional_cdb_len |= 2;

 	int_to_scsilun(sp->cmd->device->lun, &fcp_cmnd->lun);
-	host_to_fcp_swap((uint8_t *)&fcp_cmnd->lun, sizeof(fcp_cmnd->lun));
 	memcpy(fcp_cmnd->cdb, cmd->cmnd, cmd->cmd_len);
 	cmd_pkt->fcp_cmnd_dseg_len = cpu_to_le16(fcp_cmnd_len);
 	cmd_pkt->fcp_cmnd_dseg_address[0] = cpu_to_le32(
···
 	cmd_pkt->fcp_rsp_dseg_len = 0; /* Let response come in status iocb */

 	/* Compute dif len and adjust data len to incude protection */
-	total_bytes = data_bytes;
 	dif_bytes = 0;
 	blk_size = cmd->device->sector_size;
-	if (scsi_get_prot_op(cmd) != SCSI_PROT_NORMAL) {
-		dif_bytes = (data_bytes / blk_size) * 8;
-		total_bytes += dif_bytes;
+	dif_bytes = (data_bytes / blk_size) * 8;
+
+	switch (scsi_get_prot_op(sp->cmd)) {
+	case SCSI_PROT_READ_INSERT:
+	case SCSI_PROT_WRITE_STRIP:
+		total_bytes = data_bytes;
+		data_bytes += dif_bytes;
+		break;
+
+	case SCSI_PROT_READ_STRIP:
+	case SCSI_PROT_WRITE_INSERT:
+	case SCSI_PROT_READ_PASS:
+	case SCSI_PROT_WRITE_PASS:
+		total_bytes = data_bytes + dif_bytes;
+		break;
+	default:
+		BUG();
 	}

-	if (!ql2xenablehba_err_chk)
+	if (!qla2x00_hba_err_chk_enabled(sp))
 		fw_prot_opts |= 0x10; /* Disable Guard tag checking */

 	if (!bundling) {
···
 	cmd_pkt->control_flags |=
 	    __constant_cpu_to_le16(CF_DATA_SEG_DESCR_ENABLE);
-	if (qla24xx_walk_and_build_sglist(ha, sp, cur_dsd,
+
+	if (!bundling && tot_prot_dsds) {
+		if (qla24xx_walk_and_build_sglist_no_difb(ha, sp,
+		    cur_dsd, tot_dsds))
+			goto crc_queuing_error;
+	} else if (qla24xx_walk_and_build_sglist(ha, sp, cur_dsd,
 	    (tot_dsds - tot_prot_dsds)))
 		goto crc_queuing_error;
···
 			goto queuing_error;
 		else
 			sp->flags |= SRB_DMA_VALID;
+
+		if ((scsi_get_prot_op(cmd) == SCSI_PROT_READ_INSERT) ||
+		    (scsi_get_prot_op(cmd) == SCSI_PROT_WRITE_STRIP)) {
+			struct qla2_sgx sgx;
+			uint32_t	partial;
+
+			memset(&sgx, 0, sizeof(struct qla2_sgx));
+			sgx.tot_bytes = scsi_bufflen(cmd);
+			sgx.cur_sg = scsi_sglist(cmd);
+			sgx.sp = sp;
+
+			nseg = 0;
+			while (qla24xx_get_one_block_sg(
+			    cmd->device->sector_size, &sgx, &partial))
+				nseg++;
+		}
 	} else
 		nseg = 0;
···
 			goto queuing_error;
 		else
 			sp->flags |= SRB_CRC_PROT_DMA_VALID;
+
+		if ((scsi_get_prot_op(cmd) == SCSI_PROT_READ_INSERT) ||
+		    (scsi_get_prot_op(cmd) == SCSI_PROT_WRITE_STRIP)) {
+			nseg = scsi_bufflen(cmd) / cmd->device->sector_size;
+		}
 	} else {
 		nseg = 0;
 	}
···
 	/* Build header part of
command packet (excluding the OPCODE). */16421455 req->current_outstanding_cmd = handle;16431456 req->outstanding_cmds[handle] = sp;14571457+ sp->handle = handle;16441458 sp->cmd->host_scribble = (unsigned char *)(unsigned long)handle;16451459 req->cnt -= req_cnt;16461460
+87-28
drivers/scsi/qla2xxx/qla_isr.c
@@ -719 +719 @@
 		vha->flags.rscn_queue_overflow = 1;
 	}
 
-	atomic_set(&vha->loop_state, LOOP_UPDATE);
 	atomic_set(&vha->loop_down_timer, 0);
 	vha->flags.management_server_logged_in = 0;
 
@@ -1434 +1435 @@
 * ASC/ASCQ fields in the sense buffer with ILLEGAL_REQUEST
 * to indicate to the kernel that the HBA detected error.
 */
-static inline void
+static inline int
 qla2x00_handle_dif_error(srb_t *sp, struct sts_entry_24xx *sts24)
 {
 	struct scsi_qla_host *vha = sp->fcport->vha;
 	struct scsi_cmnd *cmd = sp->cmd;
-	struct scsi_dif_tuple	*ep =
-			(struct scsi_dif_tuple *)&sts24->data[20];
-	struct scsi_dif_tuple	*ap =
-			(struct scsi_dif_tuple *)&sts24->data[12];
+	uint8_t		*ap = &sts24->data[12];
+	uint8_t		*ep = &sts24->data[20];
 	uint32_t	e_ref_tag, a_ref_tag;
 	uint16_t	e_app_tag, a_app_tag;
 	uint16_t	e_guard, a_guard;
 
-	e_ref_tag = be32_to_cpu(ep->ref_tag);
-	a_ref_tag = be32_to_cpu(ap->ref_tag);
-	e_app_tag = be16_to_cpu(ep->app_tag);
-	a_app_tag = be16_to_cpu(ap->app_tag);
-	e_guard = be16_to_cpu(ep->guard);
-	a_guard = be16_to_cpu(ap->guard);
+	/*
+	 * swab32 of the "data" field in the beginning of qla2x00_status_entry()
+	 * would make guard field appear at offset 2
+	 */
+	a_guard   = le16_to_cpu(*(uint16_t *)(ap + 2));
+	a_app_tag = le16_to_cpu(*(uint16_t *)(ap + 0));
+	a_ref_tag = le32_to_cpu(*(uint32_t *)(ap + 4));
+	e_guard   = le16_to_cpu(*(uint16_t *)(ep + 2));
+	e_app_tag = le16_to_cpu(*(uint16_t *)(ep + 0));
+	e_ref_tag = le32_to_cpu(*(uint32_t *)(ep + 4));
 
 	ql_dbg(ql_dbg_io, vha, 0x3023,
 	    "iocb(s) %p Returned STATUS.\n", sts24);
 
@@ -1466 +1465 @@
 	    cmd->cmnd[0], (u64)scsi_get_lba(cmd), a_ref_tag, e_ref_tag,
 	    a_app_tag, e_app_tag, a_guard, e_guard);
 
+	/*
+	 * Ignore sector if:
+	 * For type     3: ref & app tag is all 'f's
+	 * For type 0,1,2: app tag is all 'f's
+	 */
+	if ((a_app_tag == 0xffff) &&
+	    ((scsi_get_prot_type(cmd) != SCSI_PROT_DIF_TYPE3) ||
+	     (a_ref_tag == 0xffffffff))) {
+		uint32_t blocks_done, resid;
+		sector_t lba_s = scsi_get_lba(cmd);
+
+		/* 2TB boundary case covered automatically with this */
+		blocks_done = e_ref_tag - (uint32_t)lba_s + 1;
+
+		resid = scsi_bufflen(cmd) - (blocks_done *
+		    cmd->device->sector_size);
+
+		scsi_set_resid(cmd, resid);
+		cmd->result = DID_OK << 16;
+
+		/* Update protection tag */
+		if (scsi_prot_sg_count(cmd)) {
+			uint32_t i, j = 0, k = 0, num_ent;
+			struct scatterlist *sg;
+			struct sd_dif_tuple *spt;
+
+			/* Patch the corresponding protection tags */
+			scsi_for_each_prot_sg(cmd, sg,
+			    scsi_prot_sg_count(cmd), i) {
+				num_ent = sg_dma_len(sg) / 8;
+				if (k + num_ent < blocks_done) {
+					k += num_ent;
+					continue;
+				}
+				j = blocks_done - k - 1;
+				k = blocks_done;
+				break;
+			}
+
+			if (k != blocks_done) {
+				qla_printk(KERN_WARNING, sp->fcport->vha->hw,
+				    "unexpected tag values tag:lba=%x:%llx)\n",
+				    e_ref_tag, (unsigned long long)lba_s);
+				return 1;
+			}
+
+			spt = page_address(sg_page(sg)) + sg->offset;
+			spt += j;
+
+			spt->app_tag = 0xffff;
+			if (scsi_get_prot_type(cmd) == SCSI_PROT_DIF_TYPE3)
+				spt->ref_tag = 0xffffffff;
+		}
+
+		return 0;
+	}
+
 	/* check guard */
 	if (e_guard != a_guard) {
 		scsi_build_sense_buffer(1, cmd->sense_buffer, ILLEGAL_REQUEST,
@@ -1530 +1472 @@
 		set_driver_byte(cmd, DRIVER_SENSE);
 		set_host_byte(cmd, DID_ABORT);
 		cmd->result |= SAM_STAT_CHECK_CONDITION << 1;
-		return;
-	}
-
-	/* check appl tag */
-	if (e_app_tag != a_app_tag) {
-		scsi_build_sense_buffer(1, cmd->sense_buffer, ILLEGAL_REQUEST,
-		    0x10, 0x2);
-		set_driver_byte(cmd, DRIVER_SENSE);
-		set_host_byte(cmd, DID_ABORT);
-		cmd->result |= SAM_STAT_CHECK_CONDITION << 1;
-		return;
+		return 1;
 	}
 
 	/* check ref tag */
@@ -1540 +1492 @@
 		set_driver_byte(cmd, DRIVER_SENSE);
 		set_host_byte(cmd, DID_ABORT);
 		cmd->result |= SAM_STAT_CHECK_CONDITION << 1;
-		return;
+		return 1;
 	}
+
+	/* check appl tag */
+	if (e_app_tag != a_app_tag) {
+		scsi_build_sense_buffer(1, cmd->sense_buffer, ILLEGAL_REQUEST,
+		    0x10, 0x2);
+		set_driver_byte(cmd, DRIVER_SENSE);
+		set_host_byte(cmd, DID_ABORT);
+		cmd->result |= SAM_STAT_CHECK_CONDITION << 1;
+		return 1;
+	}
+
+	return 1;
 }
 
 /**
@@ -1827 +1767 @@
 		break;
 
 	case CS_DIF_ERROR:
-		qla2x00_handle_dif_error(sp, sts24);
+		logit = qla2x00_handle_dif_error(sp, sts24);
 		break;
 	default:
 		cp->result = DID_ERROR << 16;
@@ -2528 +2468 @@
 		goto skip_msi;
 	}
 
-	if (IS_QLA2432(ha) && (ha->pdev->revision < QLA_MSIX_CHIP_REV_24XX ||
-	    !QLA_MSIX_FW_MODE_1(ha->fw_attributes))) {
+	if (IS_QLA2432(ha) && (ha->pdev->revision < QLA_MSIX_CHIP_REV_24XX)) {
 		ql_log(ql_log_warn, vha, 0x0035,
 		    "MSI-X; Unsupported ISP2432 (0x%X, 0x%X).\n",
-		    ha->pdev->revision, ha->fw_attributes);
+		    ha->pdev->revision, QLA_MSIX_CHIP_REV_24XX);
 		goto skip_msix;
 	}
 
@@ -786 +786 @@
 		int cs_gpio = of_get_named_gpio(np, "cs-gpios", i);
 		if (cs_gpio < 0)
 			cs_gpio = mxc_platform_info->chipselect[i];
+
+		spi_imx->chipselect[i] = cs_gpio;
 		if (cs_gpio < 0)
 			continue;
-		spi_imx->chipselect[i] = cs_gpio;
+
 		ret = gpio_request(spi_imx->chipselect[i], DRIVER_NAME);
 		if (ret) {
 			while (i > 0) {
+66-27
drivers/spi/spi-topcliff-pch.c
@@ -50 +50 @@
 #define PCH_RX_THOLD		7
 #define PCH_RX_THOLD_MAX	15
 
+#define PCH_TX_THOLD		2
+
 #define PCH_MAX_BAUDRATE	5000000
 #define PCH_MAX_FIFO_DEPTH	16
 
@@ -60 +58 @@
 #define PCH_SLEEP_TIME		10
 
 #define SSN_LOW			0x02U
+#define SSN_HIGH		0x03U
 #define SSN_NO_CONTROL		0x00U
 #define PCH_MAX_CS		0xFF
 #define PCI_DEVICE_ID_GE_SPI	0x8816
@@ -319 +316 @@
 
 	/* if transfer complete interrupt */
 	if (reg_spsr_val & SPSR_FI_BIT) {
-		if (tx_index < bpw_len)
+		if ((tx_index == bpw_len) && (rx_index == tx_index)) {
+			/* disable interrupts */
+			pch_spi_setclr_reg(data->master, PCH_SPCR, 0, PCH_ALL);
+
+			/* transfer is completed;
+			   inform pch_spi_process_messages */
+			data->transfer_complete = true;
+			data->transfer_active = false;
+			wake_up(&data->wait);
+		} else {
 			dev_err(&data->master->dev,
 				"%s : Transfer is not completed", __func__);
-		/* disable interrupts */
-		pch_spi_setclr_reg(data->master, PCH_SPCR, 0, PCH_ALL);
-
-		/* transfer is completed;inform pch_spi_process_messages */
-		data->transfer_complete = true;
-		data->transfer_active = false;
-		wake_up(&data->wait);
+		}
 	}
 }
 
@@ -354 +348 @@
 			"%s returning due to suspend\n", __func__);
 		return IRQ_NONE;
 	}
-	if (data->use_dma)
-		return IRQ_NONE;
 
 	io_remap_addr = data->io_remap_addr;
 	spsr = io_remap_addr + PCH_SPSR;
 
 	reg_spsr_val = ioread32(spsr);
 
-	if (reg_spsr_val & SPSR_ORF_BIT)
-		dev_err(&board_dat->pdev->dev, "%s Over run error", __func__);
+	if (reg_spsr_val & SPSR_ORF_BIT) {
+		dev_err(&board_dat->pdev->dev, "%s Over run error\n", __func__);
+		if (data->current_msg->complete != 0) {
+			data->transfer_complete = true;
+			data->current_msg->status = -EIO;
+			data->current_msg->complete(data->current_msg->context);
+			data->bcurrent_msg_processing = false;
+			data->current_msg = NULL;
+			data->cur_trans = NULL;
+		}
+	}
+
+	if (data->use_dma)
+		return IRQ_NONE;
 
 	/* Check if the interrupt is for SPI device */
 	if (reg_spsr_val & (SPSR_FI_BIT | SPSR_RFI_BIT)) {
@@ -772 +756 @@
 
 	wait_event_interruptible(data->wait, data->transfer_complete);
 
-	pch_spi_writereg(data->master, PCH_SSNXCR, SSN_NO_CONTROL);
-	dev_dbg(&data->master->dev,
-		"%s:no more control over SSN-writing 0 to SSNXCR.", __func__);
-
 	/* clear all interrupts */
 	pch_spi_writereg(data->master, PCH_SPSR,
 			 pch_spi_readreg(data->master, PCH_SPSR));
@@ -827 +815 @@
 	}
 }
 
-static void pch_spi_start_transfer(struct pch_spi_data *data)
+static int pch_spi_start_transfer(struct pch_spi_data *data)
 {
 	struct pch_spi_dma_ctrl *dma;
 	unsigned long flags;
+	int rtn;
 
 	dma = &data->dma;
 
@@ -846 +833 @@
 	   initiating the transfer. */
 	dev_dbg(&data->master->dev,
 		"%s:waiting for transfer to get over\n", __func__);
-	wait_event_interruptible(data->wait, data->transfer_complete);
+	rtn = wait_event_interruptible_timeout(data->wait,
+					       data->transfer_complete,
+					       msecs_to_jiffies(2 * HZ));
 
 	dma_sync_sg_for_cpu(&data->master->dev, dma->sg_rx_p, dma->nent,
 			    DMA_FROM_DEVICE);
+
+	dma_sync_sg_for_cpu(&data->master->dev, dma->sg_tx_p, dma->nent,
+			    DMA_FROM_DEVICE);
+	memset(data->dma.tx_buf_virt, 0, PAGE_SIZE);
+
 	async_tx_ack(dma->desc_rx);
 	async_tx_ack(dma->desc_tx);
 	kfree(dma->sg_tx_p);
 	kfree(dma->sg_rx_p);
 
 	spin_lock_irqsave(&data->lock, flags);
-	pch_spi_writereg(data->master, PCH_SSNXCR, SSN_NO_CONTROL);
-	dev_dbg(&data->master->dev,
-		"%s:no more control over SSN-writing 0 to SSNXCR.", __func__);
 
 	/* clear fifo threshold, disable interrupts, disable SPI transfer */
 	pch_spi_setclr_reg(data->master, PCH_SPCR, 0,
@@ -875 +858 @@
 	pch_spi_clear_fifo(data->master);
 
 	spin_unlock_irqrestore(&data->lock, flags);
+
+	return rtn;
 }
 
 static void pch_dma_rx_complete(void *arg)
@@ -1042 +1023 @@
 	/* set receive fifo threshold and transmit fifo threshold */
 	pch_spi_setclr_reg(data->master, PCH_SPCR,
 			   ((size - 1) << SPCR_RFIC_FIELD) |
-			   ((PCH_MAX_FIFO_DEPTH - PCH_DMA_TRANS_SIZE) <<
-			    SPCR_TFIC_FIELD),
+			   (PCH_TX_THOLD << SPCR_TFIC_FIELD),
 			   MASK_RFIC_SPCR_BITS | MASK_TFIC_SPCR_BITS);
 
 	spin_unlock_irqrestore(&data->lock, flags);
 
@@ -1053 +1035 @@
 	/* offset, length setting */
 	sg = dma->sg_rx_p;
 	for (i = 0; i < num; i++, sg++) {
-		if (i == 0) {
-			sg->offset = 0;
+		if (i == (num - 2)) {
+			sg->offset = size * i;
+			sg->offset = sg->offset * (*bpw / 8);
 			sg_set_page(sg, virt_to_page(dma->rx_buf_virt), rem,
 				    sg->offset);
 			sg_dma_len(sg) = rem;
+		} else if (i == (num - 1)) {
+			sg->offset = size * (i - 1) + rem;
+			sg->offset = sg->offset * (*bpw / 8);
+			sg_set_page(sg, virt_to_page(dma->rx_buf_virt), size,
+				    sg->offset);
+			sg_dma_len(sg) = size;
 		} else {
-			sg->offset = rem + size * (i - 1);
+			sg->offset = size * i;
 			sg->offset = sg->offset * (*bpw / 8);
 			sg_set_page(sg, virt_to_page(dma->rx_buf_virt), size,
 				    sg->offset);
@@ -1090 +1065 @@
 	dma->desc_rx = desc_rx;
 
 	/* TX */
+	if (data->bpw_len > PCH_DMA_TRANS_SIZE) {
+		num = data->bpw_len / PCH_DMA_TRANS_SIZE;
+		size = PCH_DMA_TRANS_SIZE;
+		rem = 16;
+	} else {
+		num = 1;
+		size = data->bpw_len;
+		rem = data->bpw_len;
+	}
+
 	dma->sg_tx_p = kzalloc(sizeof(struct scatterlist)*num, GFP_ATOMIC);
 	sg_init_table(dma->sg_tx_p, num); /* Initialize SG table */
 	/* offset, length setting */
@@ -1197 +1162 @@
 	if (data->use_dma)
 		pch_spi_request_dma(data,
 				    data->current_msg->spi->bits_per_word);
+	pch_spi_writereg(data->master, PCH_SSNXCR, SSN_NO_CONTROL);
 	do {
 		/* If we are already processing a message get the next
 		transfer structure from the message otherwise retrieve
@@ -1220 +1184 @@
 
 		if (data->use_dma) {
 			pch_spi_handle_dma(data, &bpw);
-			pch_spi_start_transfer(data);
+			if (!pch_spi_start_transfer(data))
+				goto out;
 			pch_spi_copy_rx_data_for_dma(data, bpw);
 		} else {
 			pch_spi_set_tx(data, &bpw);
@@ -1259 +1222 @@
 
 	} while (data->cur_trans != NULL);
 
+out:
+	pch_spi_writereg(data->master, PCH_SSNXCR, SSN_HIGH);
 	if (data->use_dma)
 		pch_spi_release_dma(data);
 }
+3-1
drivers/staging/comedi/drivers/ni_labpc.c
@@ -241 +241 @@
 	struct comedi_insn *insn,
 	unsigned int *data);
 static void labpc_adc_timing(struct comedi_device *dev, struct comedi_cmd *cmd);
-#ifdef CONFIG_COMEDI_PCI
+#ifdef CONFIG_ISA_DMA_API
 static unsigned int labpc_suggest_transfer_size(struct comedi_cmd cmd);
+#endif
+#ifdef CONFIG_COMEDI_PCI
 static int labpc_find_device(struct comedi_device *dev, int bus, int slot);
 #endif
 static int labpc_dio_mem_callback(int dir, int port, int data,
+1-1
drivers/staging/zcache/zcache-main.c
@@ -1242 +1242 @@
 	int ret = 0;
 
 	BUG_ON(!is_ephemeral(pool));
-	zbud_decompress(virt_to_page(data), pampd);
+	zbud_decompress((struct page *)(data), pampd);
 	zbud_free_and_delist((struct zbud_hdr *)pampd);
 	atomic_dec(&zcache_curr_eph_pampd_count);
 	return ret;
@@ -24 +24 @@
 */
 
 #include <linux/kernel.h>
+#include <linux/ctype.h>
 #include <asm/unaligned.h>
 #include <scsi/scsi.h>
 
@@ -155 +154 @@
 	return 0;
 }
 
+static void
+target_parse_naa_6h_vendor_specific(struct se_device *dev, unsigned char *buf_off)
+{
+	unsigned char *p = &dev->se_sub_dev->t10_wwn.unit_serial[0];
+	unsigned char *buf = buf_off;
+	int cnt = 0, next = 1;
+	/*
+	 * Generate up to 36 bits of VENDOR SPECIFIC IDENTIFIER starting on
+	 * byte 3 bit 3-0 for NAA IEEE Registered Extended DESIGNATOR field
+	 * format, followed by 64 bits of VENDOR SPECIFIC IDENTIFIER EXTENSION
+	 * to complete the payload.  These are based from VPD=0x80 PRODUCT SERIAL
+	 * NUMBER set via vpd_unit_serial in target_core_configfs.c to ensure
+	 * per device uniqeness.
+	 */
+	while (*p != '\0') {
+		if (cnt >= 13)
+			break;
+		if (!isxdigit(*p)) {
+			p++;
+			continue;
+		}
+		if (next != 0) {
+			buf[cnt++] |= hex_to_bin(*p++);
+			next = 0;
+		} else {
+			buf[cnt] = hex_to_bin(*p++) << 4;
+			next = 1;
+		}
+	}
+}
+
 /*
 * Device identification VPD, for a complete list of
 * DESIGNATOR TYPEs see spc4r17 Table 459.
@@ -251 +219 @@
 	 * VENDOR_SPECIFIC_IDENTIFIER and
 	 * VENDOR_SPECIFIC_IDENTIFIER_EXTENTION
 	 */
-	buf[off++] |= hex_to_bin(dev->se_sub_dev->t10_wwn.unit_serial[0]);
-	hex2bin(&buf[off], &dev->se_sub_dev->t10_wwn.unit_serial[1], 12);
+	target_parse_naa_6h_vendor_specific(dev, &buf[off]);
 
 	len = 20;
 	off = (len + 4);
+4-5
drivers/target/target_core_transport.c
@@ -977 +977 @@
 {
 	struct se_device *dev = container_of(work, struct se_device,
 					qf_work_queue);
+	LIST_HEAD(qf_cmd_list);
 	struct se_cmd *cmd, *cmd_tmp;
 
 	spin_lock_irq(&dev->qf_cmd_lock);
-	list_for_each_entry_safe(cmd, cmd_tmp, &dev->qf_cmd_list, se_qf_node) {
+	list_splice_init(&dev->qf_cmd_list, &qf_cmd_list);
+	spin_unlock_irq(&dev->qf_cmd_lock);
 
+	list_for_each_entry_safe(cmd, cmd_tmp, &qf_cmd_list, se_qf_node) {
 		list_del(&cmd->se_qf_node);
 		atomic_dec(&dev->dev_qf_count);
 		smp_mb__after_atomic_dec();
-		spin_unlock_irq(&dev->qf_cmd_lock);
 
 		pr_debug("Processing %s cmd: %p QUEUE_FULL in work queue"
			" context: %s\n", cmd->se_tfo->get_fabric_name(), cmd,
@@ -999 +997 @@
 		 * has been added to head of queue
 		 */
 		transport_add_cmd_to_queue(cmd, cmd->t_state);
-
-		spin_lock_irq(&dev->qf_cmd_lock);
 	}
-	spin_unlock_irq(&dev->qf_cmd_lock);
 }
 
 unsigned char *transport_dump_cmd_direction(struct se_cmd *cmd)
+2-10
drivers/target/tcm_fc/tcm_fc.h
@@ -98 +98 @@
 	struct list_head list;		/* linkage in ft_lport_acl tpg_list */
 	struct list_head lun_list;	/* head of LUNs */
 	struct se_portal_group se_tpg;
-	struct task_struct *thread;	/* processing thread */
-	struct se_queue_obj qobj;	/* queue for processing thread */
+	struct workqueue_struct *workqueue;
 };
 
 struct ft_lport_acl {
@@ -109 +110 @@
 	struct se_wwn fc_lport_wwn;
 };
 
-enum ft_cmd_state {
-	FC_CMD_ST_NEW = 0,
-	FC_CMD_ST_REJ
-};
-
 /*
 * Commands
 */
 struct ft_cmd {
-	enum ft_cmd_state state;
 	u32 lun;			/* LUN from request */
 	struct ft_sess *sess;		/* session held for cmd */
 	struct fc_seq *seq;		/* sequence in exchange mgr */
@@ -120 +127 @@
 	struct fc_frame *req_frame;
 	unsigned char *cdb;		/* pointer to CDB inside frame */
 	u32 write_data_len;		/* data received on writes */
-	struct se_queue_req se_req;
+	struct work_struct work;
 	/* Local sense buffer */
 	unsigned char ft_sense_buffer[TRANSPORT_SENSE_BUFFER];
 	u32 was_ddp_setup:1;		/* Set only if ddp is setup */
@@ -170 +177 @@
 /*
 * other internal functions.
 */
-int ft_thread(void *);
 void ft_recv_req(struct ft_sess *, struct fc_frame *);
 struct ft_tpg *ft_lport_find_tpg(struct fc_lport *);
 struct ft_node_acl *ft_acl_get(struct ft_tpg *, struct fc_rport_priv *);
@@ -327 +327 @@
 	tpg->index = index;
 	tpg->lport_acl = lacl;
 	INIT_LIST_HEAD(&tpg->lun_list);
-	transport_init_queue_obj(&tpg->qobj);
 
 	ret = core_tpg_register(&ft_configfs->tf_ops, wwn, &tpg->se_tpg,
 				tpg, TRANSPORT_TPG_TYPE_NORMAL);
@@ -335 +336 @@
 		return NULL;
 	}
 
-	tpg->thread = kthread_run(ft_thread, tpg, "ft_tpg%lu", index);
-	if (IS_ERR(tpg->thread)) {
+	tpg->workqueue = alloc_workqueue("tcm_fc", 0, 1);
+	if (!tpg->workqueue) {
 		kfree(tpg);
 		return NULL;
 	}
@@ -355 +356 @@
 	pr_debug("del tpg %s\n",
 		 config_item_name(&tpg->se_tpg.tpg_group.cg_item));
 
-	kthread_stop(tpg->thread);
+	destroy_workqueue(tpg->workqueue);
 
 	/* Wait for sessions to be freed thru RCU, for BUG_ON below */
 	synchronize_rcu();
+30-32
drivers/target/tcm_fc/tfc_io.c
@@ -219 +219 @@
 	if (cmd->was_ddp_setup) {
 		BUG_ON(!ep);
 		BUG_ON(!lport);
-	}
-
-	/*
-	 * Doesn't expect payload if DDP is setup. Payload
-	 * is expected to be copied directly to user buffers
-	 * due to DDP (Large Rx offload),
-	 */
-	buf = fc_frame_payload_get(fp, 1);
-	if (buf)
-		pr_err("%s: xid 0x%x, f_ctl 0x%x, cmd->sg %p, "
+		/*
+		 * Since DDP (Large Rx offload) was setup for this request,
+		 * payload is expected to be copied directly to user buffers.
+		 */
+		buf = fc_frame_payload_get(fp, 1);
+		if (buf)
+			pr_err("%s: xid 0x%x, f_ctl 0x%x, cmd->sg %p, "
				"cmd->sg_cnt 0x%x. DDP was setup"
				" hence not expected to receive frame with "
-				"payload, Frame will be dropped if "
-				"'Sequence Initiative' bit in f_ctl is "
+				"payload, Frame will be dropped if"
+				"'Sequence Initiative' bit in f_ctl is"
				"not set\n", __func__, ep->xid, f_ctl,
				cmd->sg, cmd->sg_cnt);
-	/*
-	 * Invalidate HW DDP context if it was setup for respective
-	 * command. Invalidation of HW DDP context is requited in both
-	 * situation (success and error).
-	 */
-	ft_invl_hw_context(cmd);
+		/*
+		 * Invalidate HW DDP context if it was setup for respective
+		 * command. Invalidation of HW DDP context is requited in both
+		 * situation (success and error).
+		 */
+		ft_invl_hw_context(cmd);
 
-	/*
-	 * If "Sequence Initiative (TSI)" bit set in f_ctl, means last
-	 * write data frame is received successfully where payload is
-	 * posted directly to user buffer and only the last frame's
-	 * header is posted in receive queue.
-	 *
-	 * If "Sequence Initiative (TSI)" bit is not set, means error
-	 * condition w.r.t. DDP, hence drop the packet and let explict
-	 * ABORTS from other end of exchange timer trigger the recovery.
-	 */
-	if (f_ctl & FC_FC_SEQ_INIT)
-		goto last_frame;
-	else
-		goto drop;
+		/*
+		 * If "Sequence Initiative (TSI)" bit set in f_ctl, means last
+		 * write data frame is received successfully where payload is
+		 * posted directly to user buffer and only the last frame's
+		 * header is posted in receive queue.
+		 *
+		 * If "Sequence Initiative (TSI)" bit is not set, means error
+		 * condition w.r.t. DDP, hence drop the packet and let explict
+		 * ABORTS from other end of exchange timer trigger the recovery.
+		 */
+		if (f_ctl & FC_FC_SEQ_INIT)
+			goto last_frame;
+		else
+			goto drop;
+	}
 
 	rel_off = ntohl(fh->fh_parm_offset);
 	frame_len = fr_len(fp);
+2-2
drivers/tty/serial/crisv10.c
@@ -4450 +4450 @@
 
 #if defined(CONFIG_ETRAX_RS485)
 #if defined(CONFIG_ETRAX_RS485_ON_PA)
-	if (cris_io_interface_allocate_pins(if_ser0, 'a', rs485_pa_bit,
+	if (cris_io_interface_allocate_pins(if_serial_0, 'a', rs485_pa_bit,
					    rs485_pa_bit)) {
 		printk(KERN_CRIT "ETRAX100LX serial: Could not allocate "
			"RS485 pin\n");
@@ -4459 +4459 @@
 	}
 #endif
 #if defined(CONFIG_ETRAX_RS485_ON_PORT_G)
-	if (cris_io_interface_allocate_pins(if_ser0, 'g', rs485_pa_bit,
+	if (cris_io_interface_allocate_pins(if_serial_0, 'g', rs485_pa_bit,
					    rs485_port_g_bit)) {
 		printk(KERN_CRIT "ETRAX100LX serial: Could not allocate "
			"RS485 pin\n");
+1-1
drivers/usb/host/xhci-hub.c
@@ -761 +761 @@
 	memset(buf, 0, retval);
 	status = 0;
 
-	mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC;
+	mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC;
 
 	spin_lock_irqsave(&xhci->lock, flags);
 	/* For each port, did anything change?  If so, set that bit in buf. */
+19
drivers/usb/host/xhci-ring.c
@@ -1934 +1934 @@
 	int status = -EINPROGRESS;
 	struct urb_priv *urb_priv;
 	struct xhci_ep_ctx *ep_ctx;
+	struct list_head *tmp;
 	u32 trb_comp_code;
 	int ret = 0;
+	int td_num = 0;
 
 	slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags));
 	xdev = xhci->devs[slot_id];
@@ -1957 +1955 @@
 		xhci_err(xhci, "ERROR Transfer event for disabled endpoint "
				"or incorrect stream ring\n");
 		return -ENODEV;
+	}
+
+	/* Count current td numbers if ep->skip is set */
+	if (ep->skip) {
+		list_for_each(tmp, &ep_ring->td_list)
+			td_num++;
 	}
 
 	event_dma = le64_to_cpu(event->buffer);
@@ -2076 +2068 @@
 		goto cleanup;
 	}
 
+	/* We've skipped all the TDs on the ep ring when ep->skip set */
+	if (ep->skip && td_num == 0) {
+		ep->skip = false;
+		xhci_dbg(xhci, "All tds on the ep_ring skipped. "
+				"Clear skip flag.\n");
+		ret = 0;
+		goto cleanup;
+	}
+
 	td = list_entry(ep_ring->td_list.next, struct xhci_td, td_list);
+	if (ep->skip)
+		td_num--;
 
 	/* Is this a TRB in the currently executing TD? */
 	event_seg = trb_in_td(ep_ring->deq_seg, ep_ring->dequeue,
+7-6
drivers/watchdog/hpwdt.c
@@ -494 +494 @@
 	asminline_call(&cmn_regs, cru_rom_addr);
 	die_nmi_called = 1;
 	spin_unlock_irqrestore(&rom_lock, rom_pl);
-	if (!is_icru) {
-		if (cmn_regs.u1.ral == 0) {
-			printk(KERN_WARNING "hpwdt: An NMI occurred, "
-				"but unable to determine source.\n");
-		}
-	}
 
 	if (allow_kdump)
 		hpwdt_stop();
+
+	if (!is_icru) {
+		if (cmn_regs.u1.ral == 0) {
+			panic("An NMI occurred, "
+				"but unable to determine source.\n");
+		}
+	}
 	panic("An NMI occurred, please see the Integrated "
		"Management Log for details.\n");
 
+4-4
drivers/watchdog/lantiq_wdt.c
@@ -51 +51 @@
 static void
 ltq_wdt_enable(void)
 {
-	ltq_wdt_timeout = ltq_wdt_timeout *
+	unsigned long int timeout = ltq_wdt_timeout *
		(ltq_io_region_clk_rate / LTQ_WDT_DIVIDER) + 0x1000;
-	if (ltq_wdt_timeout > LTQ_MAX_TIMEOUT)
-		ltq_wdt_timeout = LTQ_MAX_TIMEOUT;
+	if (timeout > LTQ_MAX_TIMEOUT)
+		timeout = LTQ_MAX_TIMEOUT;
 
 	/* write the first password magic */
 	ltq_w32(LTQ_WDT_PW1, ltq_wdt_membase + LTQ_WDT_CR);
 	/* write the second magic plus the configuration and new timeout */
 	ltq_w32(LTQ_WDT_SR_EN | LTQ_WDT_SR_PWD | LTQ_WDT_SR_CLKDIV |
-		LTQ_WDT_PW2 | ltq_wdt_timeout, ltq_wdt_membase + LTQ_WDT_CR);
+		LTQ_WDT_PW2 | timeout, ltq_wdt_membase + LTQ_WDT_CR);
 }
 
 static void
@@ -4079 +4079 @@
 	T2_FNEXT_RSP_PARMS *parms;
 	char *response_data;
 	int rc = 0;
-	int bytes_returned, name_len;
+	int bytes_returned;
+	unsigned int name_len;
 	__u16 params, byte_count;
 
 	cFYI(1, "In FindNext");
@@ -721 +721 @@
 	if (!path->dentry->d_op || !path->dentry->d_op->d_automount)
 		return -EREMOTE;
 
-	/* We don't want to mount if someone supplied AT_NO_AUTOMOUNT
-	 * and this is the terminal part of the path.
-	 */
-	if ((flags & LOOKUP_NO_AUTOMOUNT) && !(flags & LOOKUP_PARENT))
-		return -EISDIR; /* we actually want to stop here */
-
 	/* We don't want to mount if someone's just doing a stat -
	 * unless they're stat'ing a directory and appended a '/' to
	 * the name.
@@ -733 +739 @@
	 * of the daemon to instantiate them before they can be used.
	 */
 	if (!(flags & (LOOKUP_PARENT | LOOKUP_DIRECTORY |
-		     LOOKUP_OPEN | LOOKUP_CREATE)) &&
+		     LOOKUP_OPEN | LOOKUP_CREATE | LOOKUP_AUTOMOUNT)) &&
	    path->dentry->d_inode)
 		return -EISDIR;
 
@@ -2610 +2616 @@
 	if (!dir->i_op->rmdir)
 		return -EPERM;
 
+	dget(dentry);
 	mutex_lock(&dentry->d_inode->i_mutex);
 
 	error = -EBUSY;
@@ -2631 +2636 @@
 
 out:
 	mutex_unlock(&dentry->d_inode->i_mutex);
+	dput(dentry);
 	if (!error)
 		d_delete(dentry);
 	return error;
@@ -3021 +3025 @@
 	if (error)
 		return error;
 
+	dget(new_dentry);
 	if (target)
 		mutex_lock(&target->i_mutex);
 
@@ -3042 +3045 @@
 out:
 	if (target)
 		mutex_unlock(&target->i_mutex);
+	dput(new_dentry);
 	if (!error)
 		if (!(old_dir->i_sb->s_type->fs_flags & FS_RENAME_DOES_D_MOVE))
 			d_move(old_dentry,new_dentry);
@@ -3374 +3374 @@
 
 	if (task->tk_status < 0) {
 		/* Unless we're shutting down, schedule state recovery! */
-		if (test_bit(NFS_CS_RENEWD, &clp->cl_res_state) != 0)
+		if (test_bit(NFS_CS_RENEWD, &clp->cl_res_state) == 0)
+			return;
+		if (task->tk_status != NFS4ERR_CB_PATH_DOWN) {
 			nfs4_schedule_lease_recovery(clp);
-		return;
+			return;
+		}
+		nfs4_schedule_path_down_recovery(clp);
 	}
 	do_renew_lease(clp, timestamp);
 }
@@ -3390 +3386 @@
	.rpc_release = nfs4_renew_release,
 };
 
-int nfs4_proc_async_renew(struct nfs_client *clp, struct rpc_cred *cred)
+static int nfs4_proc_async_renew(struct nfs_client *clp, struct rpc_cred *cred, unsigned renew_flags)
 {
 	struct rpc_message msg = {
		.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_RENEW],
@@ -3399 +3395 @@
 	};
 	struct nfs4_renewdata *data;
 
+	if (renew_flags == 0)
+		return 0;
 	if (!atomic_inc_not_zero(&clp->cl_count))
 		return -EIO;
-	data = kmalloc(sizeof(*data), GFP_KERNEL);
+	data = kmalloc(sizeof(*data), GFP_NOFS);
 	if (data == NULL)
 		return -ENOMEM;
 	data->client = clp;
@@ -3412 +3406 @@
			&nfs4_renew_ops, data);
 }
 
-int nfs4_proc_renew(struct nfs_client *clp, struct rpc_cred *cred)
+static int nfs4_proc_renew(struct nfs_client *clp, struct rpc_cred *cred)
 {
 	struct rpc_message msg = {
		.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_RENEW],
@@ -5510 +5504 @@
 	return rpc_run_task(&task_setup_data);
 }
 
-static int nfs41_proc_async_sequence(struct nfs_client *clp, struct rpc_cred *cred)
+static int nfs41_proc_async_sequence(struct nfs_client *clp, struct rpc_cred *cred, unsigned renew_flags)
 {
 	struct rpc_task *task;
 	int ret = 0;
 
+	if ((renew_flags & NFS4_RENEW_TIMEOUT) == 0)
+		return 0;
 	task = _nfs41_proc_sequence(clp, cred);
 	if (IS_ERR(task))
 		ret = PTR_ERR(task);
fs/nfs/nfs4renewd.c (+9 -3)

···
 	struct rpc_cred *cred;
 	long lease;
 	unsigned long last, now;
+	unsigned renew_flags = 0;
 
 	ops = clp->cl_mvops->state_renewal_ops;
 	dprintk("%s: start\n", __func__);
···
 	last = clp->cl_last_renewal;
 	now = jiffies;
 	/* Are we close to a lease timeout? */
-	if (time_after(now, last + lease/3)) {
+	if (time_after(now, last + lease/3))
+		renew_flags |= NFS4_RENEW_TIMEOUT;
+	if (nfs_delegations_present(clp))
+		renew_flags |= NFS4_RENEW_DELEGATION_CB;
+
+	if (renew_flags != 0) {
 		cred = ops->get_state_renewal_cred_locked(clp);
 		spin_unlock(&clp->cl_lock);
 		if (cred == NULL) {
-			if (!nfs_delegations_present(clp)) {
+			if (!(renew_flags & NFS4_RENEW_DELEGATION_CB)) {
 				set_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state);
 				goto out;
 			}
 			nfs_expire_all_delegations(clp);
 		} else {
 			/* Queue an asynchronous RENEW. */
-			ops->sched_state_renewal(clp, cred);
+			ops->sched_state_renewal(clp, cred, renew_flags);
 			put_rpccred(cred);
 			goto out_exp;
 		}
fs/nfs/super.c

···
 	sb->s_blocksize = nfs_block_bits(server->wsize,
 					 &sb->s_blocksize_bits);
 
-	if (server->flags & NFS_MOUNT_NOAC)
-		sb->s_flags |= MS_SYNCHRONOUS;
-
 	sb->s_bdi = &server->backing_dev_info;
 
 	nfs_super_set_maxbytes(sb, server->maxfilesize);
···
 	if (server->flags & NFS_MOUNT_UNSHARED)
 		compare_super = NULL;
 
+	/* -o noac implies -o sync */
+	if (server->flags & NFS_MOUNT_NOAC)
+		sb_mntdata.mntflags |= MS_SYNCHRONOUS;
+
 	/* Get a superblock - note that we may end up sharing one that already exists */
 	s = sget(fs_type, compare_super, nfs_set_super, &sb_mntdata);
 	if (IS_ERR(s)) {
···
 
 	if (server->flags & NFS_MOUNT_UNSHARED)
 		compare_super = NULL;
+
+	/* -o noac implies -o sync */
+	if (server->flags & NFS_MOUNT_NOAC)
+		sb_mntdata.mntflags |= MS_SYNCHRONOUS;
 
 	/* Get a superblock - note that we may end up sharing one that already exists */
 	s = sget(&nfs_fs_type, compare_super, nfs_set_super, &sb_mntdata);
···
 	if (server->flags & NFS4_MOUNT_UNSHARED)
 		compare_super = NULL;
 
+	/* -o noac implies -o sync */
+	if (server->flags & NFS_MOUNT_NOAC)
+		sb_mntdata.mntflags |= MS_SYNCHRONOUS;
+
 	/* Get a superblock - note that we may end up sharing one that already exists */
 	s = sget(&nfs4_fs_type, compare_super, nfs_set_super, &sb_mntdata);
 	if (IS_ERR(s)) {
···
 		goto out_put_mnt_ns;
 
 	ret = vfs_path_lookup(root_mnt->mnt_root, root_mnt,
-			export_path, LOOKUP_FOLLOW, &path);
+			export_path, LOOKUP_FOLLOW|LOOKUP_AUTOMOUNT, &path);
 
 	nfs_referral_loop_unprotect();
 	put_mnt_ns(ns_private);
···
 	if (server->flags & NFS4_MOUNT_UNSHARED)
 		compare_super = NULL;
 
+	/* -o noac implies -o sync */
+	if (server->flags & NFS_MOUNT_NOAC)
+		sb_mntdata.mntflags |= MS_SYNCHRONOUS;
+
 	/* Get a superblock - note that we may end up sharing one that already exists */
 	s = sget(&nfs4_fs_type, compare_super, nfs_set_super, &sb_mntdata);
 	if (IS_ERR(s)) {
···
 	if (server->flags & NFS4_MOUNT_UNSHARED)
 		compare_super = NULL;
+
+	/* -o noac implies -o sync */
+	if (server->flags & NFS_MOUNT_NOAC)
+		sb_mntdata.mntflags |= MS_SYNCHRONOUS;
 
 	/* Get a superblock - note that we may end up sharing one that already exists */
 	s = sget(&nfs4_fs_type, compare_super, nfs_set_super, &sb_mntdata);
fs/proc/task_mmu.c

···
 	struct numa_maps md;
 };
 
-static void gather_stats(struct page *page, struct numa_maps *md, int pte_dirty)
+static void gather_stats(struct page *page, struct numa_maps *md, int pte_dirty,
+		unsigned long nr_pages)
 {
 	int count = page_mapcount(page);
 
-	md->pages++;
+	md->pages += nr_pages;
 	if (pte_dirty || PageDirty(page))
-		md->dirty++;
+		md->dirty += nr_pages;
 
 	if (PageSwapCache(page))
-		md->swapcache++;
+		md->swapcache += nr_pages;
 
 	if (PageActive(page) || PageUnevictable(page))
-		md->active++;
+		md->active += nr_pages;
 
 	if (PageWriteback(page))
-		md->writeback++;
+		md->writeback += nr_pages;
 
 	if (PageAnon(page))
-		md->anon++;
+		md->anon += nr_pages;
 
 	if (count > md->mapcount_max)
 		md->mapcount_max = count;
 
-	md->node[page_to_nid(page)]++;
+	md->node[page_to_nid(page)] += nr_pages;
+}
+
+static struct page *can_gather_numa_stats(pte_t pte, struct vm_area_struct *vma,
+		unsigned long addr)
+{
+	struct page *page;
+	int nid;
+
+	if (!pte_present(pte))
+		return NULL;
+
+	page = vm_normal_page(vma, addr, pte);
+	if (!page)
+		return NULL;
+
+	if (PageReserved(page))
+		return NULL;
+
+	nid = page_to_nid(page);
+	if (!node_isset(nid, node_states[N_HIGH_MEMORY]))
+		return NULL;
+
+	return page;
 }
 
 static int gather_pte_stats(pmd_t *pmd, unsigned long addr,
···
 	pte_t *pte;
 
 	md = walk->private;
+	spin_lock(&walk->mm->page_table_lock);
+	if (pmd_trans_huge(*pmd)) {
+		if (pmd_trans_splitting(*pmd)) {
+			spin_unlock(&walk->mm->page_table_lock);
+			wait_split_huge_page(md->vma->anon_vma, pmd);
+		} else {
+			pte_t huge_pte = *(pte_t *)pmd;
+			struct page *page;
+
+			page = can_gather_numa_stats(huge_pte, md->vma, addr);
+			if (page)
+				gather_stats(page, md, pte_dirty(huge_pte),
+						HPAGE_PMD_SIZE/PAGE_SIZE);
+			spin_unlock(&walk->mm->page_table_lock);
+			return 0;
+		}
+	} else {
+		spin_unlock(&walk->mm->page_table_lock);
+	}
+
 	orig_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	do {
-		struct page *page;
-		int nid;
-
-		if (!pte_present(*pte))
-			continue;
-
-		page = vm_normal_page(md->vma, addr, *pte);
+		struct page *page = can_gather_numa_stats(*pte, md->vma, addr);
 		if (!page)
 			continue;
-
-		if (PageReserved(page))
-			continue;
-
-		nid = page_to_nid(page);
-		if (!node_isset(nid, node_states[N_HIGH_MEMORY]))
-			continue;
-
-		gather_stats(page, md, pte_dirty(*pte));
+		gather_stats(page, md, pte_dirty(*pte), 1);
 
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 	pte_unmap_unlock(orig_pte, ptl);
···
 		return 0;
 
 	md = walk->private;
-	gather_stats(page, md, pte_dirty(*pte));
+	gather_stats(page, md, pte_dirty(*pte), 1);
 	return 0;
 }
fs/quota/quota.c (+1 -1)

···
 	 * resolution (think about autofs) and thus deadlocks could arise.
 	 */
 	if (cmds == Q_QUOTAON) {
-		ret = user_path_at(AT_FDCWD, addr, LOOKUP_FOLLOW, &path);
+		ret = user_path_at(AT_FDCWD, addr, LOOKUP_FOLLOW|LOOKUP_AUTOMOUNT, &path);
 		if (ret)
 			pathp = ERR_PTR(ret);
 		else
fs/stat.c (-2)

···
 
 	if (!(flag & AT_SYMLINK_NOFOLLOW))
 		lookup_flags |= LOOKUP_FOLLOW;
-	if (flag & AT_NO_AUTOMOUNT)
-		lookup_flags |= LOOKUP_NO_AUTOMOUNT;
 	if (flag & AT_EMPTY_PATH)
 		lookup_flags |= LOOKUP_EMPTY;
include/linux/memcontrol.h

···
 					struct mem_cgroup *mem_cont,
 					int active, int file);
 
-struct memcg_scanrecord {
-	struct mem_cgroup *mem; /* scanend memory cgroup */
-	struct mem_cgroup *root; /* scan target hierarchy root */
-	int context; /* scanning context (see memcontrol.c) */
-	unsigned long nr_scanned[2]; /* the number of scanned pages */
-	unsigned long nr_rotated[2]; /* the number of rotated pages */
-	unsigned long nr_freed[2]; /* the number of freed pages */
-	unsigned long elapsed; /* nsec of time elapsed while scanning */
-};
-
 #ifdef CONFIG_CGROUP_MEM_RES_CTLR
 /*
  * All "charge" functions with gfp_mask should use GFP_KERNEL or
···
 mem_cgroup_get_reclaim_stat_from_page(struct page *page);
 extern void mem_cgroup_print_oom_info(struct mem_cgroup *memcg,
 					struct task_struct *p);
-
-extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem,
-						  gfp_t gfp_mask, bool noswap,
-						  struct memcg_scanrecord *rec);
-extern unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *mem,
-						gfp_t gfp_mask, bool noswap,
-						struct zone *zone,
-						struct memcg_scanrecord *rec,
-						unsigned long *nr_scanned);
 
 #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
 extern int do_swap_account;
include/linux/sched.h

···
 
 extern unsigned long long
 task_sched_runtime(struct task_struct *task);
-extern unsigned long long thread_group_sched_runtime(struct task_struct *task);
 
 /* sched_exec is called by processes performing an exec */
 #ifdef CONFIG_SMP
include/linux/swap.h

···
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 					gfp_t gfp_mask, nodemask_t *mask);
 extern int __isolate_lru_page(struct page *page, int mode, int file);
+extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem,
+						  gfp_t gfp_mask, bool noswap);
+extern unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *mem,
+						gfp_t gfp_mask, bool noswap,
+						struct zone *zone,
+						unsigned long *nr_scanned);
 extern unsigned long shrink_all_memory(unsigned long nr_pages);
 extern int vm_swappiness;
 extern int remove_mapping(struct address_space *mapping, struct page *page);
include/net/request_sock.h

···
  */
 struct listen_sock {
 	u8			max_qlen_log;
-	/* 3 bytes hole, try to use */
+	u8			synflood_warned;
+	/* 2 bytes hole, try to use */
 	int			qlen;
 	int			qlen_young;
 	int			clock_hand;
include/net/sctp/command.h (+1)

···
 	SCTP_CMD_SEND_MSG,	 /* Send the whole use message */
 	SCTP_CMD_SEND_NEXT_ASCONF, /* Send the next ASCONF after ACK */
 	SCTP_CMD_PURGE_ASCONF_QUEUE, /* Purge all asconf queues.*/
+	SCTP_CMD_SET_ASOC,	 /* Restore association context */
 	SCTP_CMD_LAST
 } sctp_verb_t;
init/main.c

···
 
 static int __init loglevel(char *str)
 {
-	get_option(&str, &console_loglevel);
-	return 0;
+	int newlevel;
+
+	/*
+	 * Only update loglevel value when a correct setting was passed,
+	 * to prevent blind crashes (when loglevel being set to 0) that
+	 * are quite hard to debug
+	 */
+	if (get_option(&str, &newlevel)) {
+		console_loglevel = newlevel;
+		return 0;
+	}
+
+	return -EINVAL;
 }
 
 early_param("loglevel", loglevel);
···
 	init_idle_bootup_task(current);
 	preempt_enable_no_resched();
 	schedule();
-
-	/* At this point, we can enable user mode helper functionality */
-	usermodehelper_enable();
 
 	/* Call into cpu_idle with preempt disabled */
 	preempt_disable();
···
 	driver_init();
 	init_irq_proc();
 	do_ctors();
+	usermodehelper_enable();
 	do_initcalls();
 }
kernel/irq/chip.c (+1 -1)

···
 	desc->depth = 1;
 	if (desc->irq_data.chip->irq_shutdown)
 		desc->irq_data.chip->irq_shutdown(&desc->irq_data);
-	if (desc->irq_data.chip->irq_disable)
+	else if (desc->irq_data.chip->irq_disable)
 		desc->irq_data.chip->irq_disable(&desc->irq_data);
 	else
 		desc->irq_data.chip->irq_mask(&desc->irq_data);
kernel/irq/irqdomain.c (+5 -1)

···
 	 */
 	for (hwirq = 0; hwirq < domain->nr_irq; hwirq++) {
 		d = irq_get_irq_data(irq_domain_to_irq(domain, hwirq));
-		if (d || d->domain) {
+		if (!d) {
+			WARN(1, "error: assigning domain to non existant irq_desc");
+			return;
+		}
+		if (d->domain) {
 			/* things are broken; just report, don't clean up */
 			WARN(1, "error: irq_desc already assigned to a domain");
 			return;
kernel/ptrace.c

···
 			break;
 
 		si = child->last_siginfo;
-		if (unlikely(!si || si->si_code >> 8 != PTRACE_EVENT_STOP))
-			break;
-
-		child->jobctl |= JOBCTL_LISTENING;
-
-		/*
-		 * If NOTIFY is set, it means event happened between start
-		 * of this trap and now.  Trigger re-trap immediately.
-		 */
-		if (child->jobctl & JOBCTL_TRAP_NOTIFY)
-			signal_wake_up(child, true);
-
+		if (likely(si && (si->si_code >> 8) == PTRACE_EVENT_STOP)) {
+			child->jobctl |= JOBCTL_LISTENING;
+			/*
+			 * If NOTIFY is set, it means event happened between
+			 * start of this trap and now.  Trigger re-trap.
+			 */
+			if (child->jobctl & JOBCTL_TRAP_NOTIFY)
+				signal_wake_up(child, true);
+			ret = 0;
+		}
 		unlock_task_sighand(child, &flags);
-		ret = 0;
 		break;
 
 	case PTRACE_DETACH:	 /* detach a process that was attached. */
kernel/resource.c (+6 -1)

···
 	else
 		tmp.end = root->end;
 
+	if (tmp.end < tmp.start)
+		goto next;
+
 	resource_clip(&tmp, constraint->min, constraint->max);
 	arch_remove_reservations(&tmp);
···
 			return 0;
 		}
 	}
-	if (!this)
+
+next:	if (!this || this->end == root->end)
 		break;
+
 	if (this != old)
 		tmp.start = this->end + 1;
 	this = this->sibling;
kernel/sched.c (+1 -25)

···
 }
 
 /*
- * Return sum_exec_runtime for the thread group.
- * In case the task is currently running, return the sum plus current's
- * pending runtime that have not been accounted yet.
- *
- * Note that the thread group might have other running tasks as well,
- * so the return value not includes other pending runtime that other
- * running tasks might have.
- */
-unsigned long long thread_group_sched_runtime(struct task_struct *p)
-{
-	struct task_cputime totals;
-	unsigned long flags;
-	struct rq *rq;
-	u64 ns;
-
-	rq = task_rq_lock(p, &flags);
-	thread_group_cputime(p, &totals);
-	ns = totals.sum_exec_runtime + do_task_delta_exec(p, rq);
-	task_rq_unlock(rq, p, &flags);
-
-	return ns;
-}
-
-/*
  * Account user cpu time to a process.
  * @p: the process that the cpu time gets accounted to
  * @cputime: the cpu time spent in user space since the last update
···
 		blk_schedule_flush_plug(tsk);
 }
 
-asmlinkage void schedule(void)
+asmlinkage void __sched schedule(void)
 {
 	struct task_struct *tsk = current;
lib/xz/xz_dec_bcj.c

···
 	 * next filter in the chain. Apply the BCJ filter on the new data
 	 * in the output buffer. If everything cannot be filtered, copy it
 	 * to temp and rewind the output buffer position accordingly.
+	 *
+	 * This needs to be always run when temp.size == 0 to handle a special
+	 * case where the output buffer is full and the next filter has no
+	 * more output coming but hasn't returned XZ_STREAM_END yet.
 	 */
-	if (s->temp.size < b->out_size - b->out_pos) {
+	if (s->temp.size < b->out_size - b->out_pos || s->temp.size == 0) {
 		out_start = b->out_pos;
 		memcpy(b->out + b->out_pos, s->temp.buf, s->temp.size);
 		b->out_pos += s->temp.size;
···
 		s->temp.size = b->out_pos - out_start;
 		b->out_pos -= s->temp.size;
 		memcpy(s->temp.buf, b->out + b->out_pos, s->temp.size);
+
+		/*
+		 * If there wasn't enough input to the next filter to fill
+		 * the output buffer with unfiltered data, there's no point
+		 * to try decoding more data to temp.
+		 */
+		if (b->out_pos + s->temp.size < b->out_size)
+			return XZ_OK;
 	}
 
 	/*
-	 * If we have unfiltered data in temp, try to fill by decoding more
-	 * data from the next filter. Apply the BCJ filter on temp. Then we
-	 * hopefully can fill the actual output buffer by copying filtered
-	 * data from temp. A mix of filtered and unfiltered data may be left
-	 * in temp; it will be taken care on the next call to this function.
+	 * We have unfiltered data in temp. If the output buffer isn't full
+	 * yet, try to fill the temp buffer by decoding more data from the
+	 * next filter. Apply the BCJ filter on temp. Then we hopefully can
+	 * fill the actual output buffer by copying filtered data from temp.
+	 * A mix of filtered and unfiltered data may be left in temp; it will
+	 * be taken care on the next call to this function.
 	 */
-	if (s->temp.size > 0) {
+	if (b->out_pos < b->out_size) {
 		/* Make b->out{,_pos,_size} temporarily point to s->temp. */
 		s->out = b->out;
 		s->out_pos = b->out_pos;
mm/backing-dev.c (+21 -9)

···
 	return max(5UL * 60 * HZ, interval);
 }
 
+/*
+ * Clear pending bit and wakeup anybody waiting for flusher thread creation or
+ * shutdown
+ */
+static void bdi_clear_pending(struct backing_dev_info *bdi)
+{
+	clear_bit(BDI_pending, &bdi->state);
+	smp_mb__after_clear_bit();
+	wake_up_bit(&bdi->state, BDI_pending);
+}
+
 static int bdi_forker_thread(void *ptr)
 {
 	struct bdi_writeback *me = ptr;
···
 		}
 
 		spin_lock_bh(&bdi_lock);
+		/*
+		 * In the following loop we are going to check whether we have
+		 * some work to do without any synchronization with tasks
+		 * waking us up to do work for them. So we have to set task
+		 * state already here so that we don't miss wakeups coming
+		 * after we verify some condition.
+		 */
 		set_current_state(TASK_INTERRUPTIBLE);
 
 		list_for_each_entry(bdi, &bdi_list, bdi_list) {
···
 				spin_unlock_bh(&bdi->wb_lock);
 				wake_up_process(task);
 			}
+			bdi_clear_pending(bdi);
 			break;
 
 		case KILL_THREAD:
 			__set_current_state(TASK_RUNNING);
 			kthread_stop(task);
+			bdi_clear_pending(bdi);
 			break;
 
 		case NO_ACTION:
···
 			else
 				schedule_timeout(msecs_to_jiffies(dirty_writeback_interval * 10));
 			try_to_freeze();
-			/* Back to the main loop */
-			continue;
+			break;
 		}
-
-		/*
-		 * Clear pending bit and wakeup anybody waiting to tear us down.
-		 */
-		clear_bit(BDI_pending, &bdi->state);
-		smp_mb__after_clear_bit();
-		wake_up_bit(&bdi->state, BDI_pending);
 	}
 
 	return 0;
mm/filemap.c (+4 -2)

···
 {
 	unsigned int i;
 	unsigned int ret;
-	unsigned int nr_found;
+	unsigned int nr_found, nr_skip;
 
 	rcu_read_lock();
 restart:
 	nr_found = radix_tree_gang_lookup_slot(&mapping->page_tree,
 				(void ***)pages, NULL, start, nr_pages);
 	ret = 0;
+	nr_skip = 0;
 	for (i = 0; i < nr_found; i++) {
 		struct page *page;
 repeat:
···
 			 * here as an exceptional entry: so skip over it -
 			 * we only reach this from invalidate_mapping_pages().
 			 */
+			nr_skip++;
 			continue;
 		}
 
···
 	 * If all entries were removed before we could secure them,
 	 * try again, because callers stop trying once 0 is returned.
 	 */
-	if (unlikely(!ret && nr_found))
+	if (unlikely(!ret && nr_found > nr_skip))
 		goto restart;
 	rcu_read_unlock();
 	return ret;
mm/memcontrol.c (+6 -166)

···
 static void mem_cgroup_threshold(struct mem_cgroup *mem);
 static void mem_cgroup_oom_notify(struct mem_cgroup *mem);
 
-enum {
-	SCAN_BY_LIMIT,
-	SCAN_BY_SYSTEM,
-	NR_SCAN_CONTEXT,
-	SCAN_BY_SHRINK,	/* not recorded now */
-};
-
-enum {
-	SCAN,
-	SCAN_ANON,
-	SCAN_FILE,
-	ROTATE,
-	ROTATE_ANON,
-	ROTATE_FILE,
-	FREED,
-	FREED_ANON,
-	FREED_FILE,
-	ELAPSED,
-	NR_SCANSTATS,
-};
-
-struct scanstat {
-	spinlock_t	lock;
-	unsigned long	stats[NR_SCAN_CONTEXT][NR_SCANSTATS];
-	unsigned long	rootstats[NR_SCAN_CONTEXT][NR_SCANSTATS];
-};
-
-const char *scanstat_string[NR_SCANSTATS] = {
-	"scanned_pages",
-	"scanned_anon_pages",
-	"scanned_file_pages",
-	"rotated_pages",
-	"rotated_anon_pages",
-	"rotated_file_pages",
-	"freed_pages",
-	"freed_anon_pages",
-	"freed_file_pages",
-	"elapsed_ns",
-};
-#define SCANSTAT_WORD_LIMIT	"_by_limit"
-#define SCANSTAT_WORD_SYSTEM	"_by_system"
-#define SCANSTAT_WORD_HIERARCHY	"_under_hierarchy"
-
-
 /*
  * The memory controller data structure. The memory controller controls both
  * page cache and RSS per cgroup. We would eventually like to provide
···
 
 	/* For oom notifier event fd */
 	struct list_head oom_notify;
-	/* For recording LRU-scan statistics */
-	struct scanstat scanstat;
+
 	/*
 	 * Should we move charges of a task when a task is moved into this
 	 * mem_cgroup ? And what type of charges should we move ?
···
 }
 #endif
 
-static void __mem_cgroup_record_scanstat(unsigned long *stats,
-			   struct memcg_scanrecord *rec)
-{
-
-	stats[SCAN] += rec->nr_scanned[0] + rec->nr_scanned[1];
-	stats[SCAN_ANON] += rec->nr_scanned[0];
-	stats[SCAN_FILE] += rec->nr_scanned[1];
-
-	stats[ROTATE] += rec->nr_rotated[0] + rec->nr_rotated[1];
-	stats[ROTATE_ANON] += rec->nr_rotated[0];
-	stats[ROTATE_FILE] += rec->nr_rotated[1];
-
-	stats[FREED] += rec->nr_freed[0] + rec->nr_freed[1];
-	stats[FREED_ANON] += rec->nr_freed[0];
-	stats[FREED_FILE] += rec->nr_freed[1];
-
-	stats[ELAPSED] += rec->elapsed;
-}
-
-static void mem_cgroup_record_scanstat(struct memcg_scanrecord *rec)
-{
-	struct mem_cgroup *mem;
-	int context = rec->context;
-
-	if (context >= NR_SCAN_CONTEXT)
-		return;
-
-	mem = rec->mem;
-	spin_lock(&mem->scanstat.lock);
-	__mem_cgroup_record_scanstat(mem->scanstat.stats[context], rec);
-	spin_unlock(&mem->scanstat.lock);
-
-	mem = rec->root;
-	spin_lock(&mem->scanstat.lock);
-	__mem_cgroup_record_scanstat(mem->scanstat.rootstats[context], rec);
-	spin_unlock(&mem->scanstat.lock);
-}
-
 /*
  * Scan the hierarchy if needed to reclaim memory. We remember the last child
  * we reclaimed from, so that we don't end up penalizing one child extensively
···
 	bool noswap = reclaim_options & MEM_CGROUP_RECLAIM_NOSWAP;
 	bool shrink = reclaim_options & MEM_CGROUP_RECLAIM_SHRINK;
 	bool check_soft = reclaim_options & MEM_CGROUP_RECLAIM_SOFT;
-	struct memcg_scanrecord rec;
 	unsigned long excess;
-	unsigned long scanned;
+	unsigned long nr_scanned;
 
 	excess = res_counter_soft_limit_excess(&root_mem->res) >> PAGE_SHIFT;
 
 	/* If memsw_is_minimum==1, swap-out is of-no-use. */
 	if (!check_soft && !shrink && root_mem->memsw_is_minimum)
 		noswap = true;
-
-	if (shrink)
-		rec.context = SCAN_BY_SHRINK;
-	else if (check_soft)
-		rec.context = SCAN_BY_SYSTEM;
-	else
-		rec.context = SCAN_BY_LIMIT;
-
-	rec.root = root_mem;
 
 	while (1) {
 		victim = mem_cgroup_select_victim(root_mem);
···
 			css_put(&victim->css);
 			continue;
 		}
-		rec.mem = victim;
-		rec.nr_scanned[0] = 0;
-		rec.nr_scanned[1] = 0;
-		rec.nr_rotated[0] = 0;
-		rec.nr_rotated[1] = 0;
-		rec.nr_freed[0] = 0;
-		rec.nr_freed[1] = 0;
-		rec.elapsed = 0;
 		/* we use swappiness of local cgroup */
 		if (check_soft) {
 			ret = mem_cgroup_shrink_node_zone(victim, gfp_mask,
-				noswap, zone, &rec, &scanned);
-			*total_scanned += scanned;
+				noswap, zone, &nr_scanned);
+			*total_scanned += nr_scanned;
 		} else
 			ret = try_to_free_mem_cgroup_pages(victim, gfp_mask,
-						noswap, &rec);
-		mem_cgroup_record_scanstat(&rec);
+						noswap);
 		css_put(&victim->css);
 		/*
 		 * At shrinking usage, we can't check we should stop here or
···
 	/* try to free all pages in this cgroup */
 	shrink = 1;
 	while (nr_retries && mem->res.usage > 0) {
-		struct memcg_scanrecord rec;
 		int progress;
 
 		if (signal_pending(current)) {
 			ret = -EINTR;
 			goto out;
 		}
-		rec.context = SCAN_BY_SHRINK;
-		rec.mem = mem;
-		rec.root = mem;
 		progress = try_to_free_mem_cgroup_pages(mem, GFP_KERNEL,
-						false, &rec);
+						false);
 		if (!progress) {
 			nr_retries--;
 			/* maybe some writeback is necessary */
···
 }
 #endif /* CONFIG_NUMA */
 
-static int mem_cgroup_vmscan_stat_read(struct cgroup *cgrp,
-				struct cftype *cft,
-				struct cgroup_map_cb *cb)
-{
-	struct mem_cgroup *mem = mem_cgroup_from_cont(cgrp);
-	char string[64];
-	int i;
-
-	for (i = 0; i < NR_SCANSTATS; i++) {
-		strcpy(string, scanstat_string[i]);
-		strcat(string, SCANSTAT_WORD_LIMIT);
-		cb->fill(cb, string, mem->scanstat.stats[SCAN_BY_LIMIT][i]);
-	}
-
-	for (i = 0; i < NR_SCANSTATS; i++) {
-		strcpy(string, scanstat_string[i]);
-		strcat(string, SCANSTAT_WORD_SYSTEM);
-		cb->fill(cb, string, mem->scanstat.stats[SCAN_BY_SYSTEM][i]);
-	}
-
-	for (i = 0; i < NR_SCANSTATS; i++) {
-		strcpy(string, scanstat_string[i]);
-		strcat(string, SCANSTAT_WORD_LIMIT);
-		strcat(string, SCANSTAT_WORD_HIERARCHY);
-		cb->fill(cb, string, mem->scanstat.rootstats[SCAN_BY_LIMIT][i]);
-	}
-	for (i = 0; i < NR_SCANSTATS; i++) {
-		strcpy(string, scanstat_string[i]);
-		strcat(string, SCANSTAT_WORD_SYSTEM);
-		strcat(string, SCANSTAT_WORD_HIERARCHY);
-		cb->fill(cb, string, mem->scanstat.rootstats[SCAN_BY_SYSTEM][i]);
-	}
-	return 0;
-}
-
-static int mem_cgroup_reset_vmscan_stat(struct cgroup *cgrp,
-				unsigned int event)
-{
-	struct mem_cgroup *mem = mem_cgroup_from_cont(cgrp);
-
-	spin_lock(&mem->scanstat.lock);
-	memset(&mem->scanstat.stats, 0, sizeof(mem->scanstat.stats));
-	memset(&mem->scanstat.rootstats, 0, sizeof(mem->scanstat.rootstats));
-	spin_unlock(&mem->scanstat.lock);
-	return 0;
-}
-
-
 static struct cftype mem_cgroup_files[] = {
 	{
 		.name = "usage_in_bytes",
···
 		.mode = S_IRUGO,
 	},
 #endif
-	{
-		.name = "vmscan_stat",
-		.read_map = mem_cgroup_vmscan_stat_read,
-		.trigger = mem_cgroup_reset_vmscan_stat,
-	},
 };
 
 #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
···
 	atomic_set(&mem->refcnt, 1);
 	mem->move_charge_at_immigrate = 0;
 	mutex_init(&mem->thresholds_lock);
-	spin_lock_init(&mem->scanstat.lock);
 	return &mem->css;
 free_out:
 	__mem_cgroup_free(mem);
mm/vmalloc.c

···
 		return NULL;
 	}
 
+	/*
+	 * If the allocated address space is passed to a hypercall
+	 * before being used then we cannot rely on a page fault to
+	 * trigger an update of the page tables. So sync all the page
+	 * tables here.
+	 */
+	vmalloc_sync_all();
+
 	return area;
 }
 EXPORT_SYMBOL_GPL(alloc_vm_area);
mm/vmscan.c (+17 -49)

···
 
 	/* Which cgroup do we reclaim from */
 	struct mem_cgroup *mem_cgroup;
-	struct memcg_scanrecord *memcg_record;
 
 	/*
 	 * Nodemask of nodes allowed by the caller. If NULL, all nodes
···
 			int file = is_file_lru(lru);
 			int numpages = hpage_nr_pages(page);
 			reclaim_stat->recent_rotated[file] += numpages;
-			if (!scanning_global_lru(sc))
-				sc->memcg_record->nr_rotated[file] += numpages;
 		}
 		if (!pagevec_add(&pvec, page)) {
 			spin_unlock_irq(&zone->lru_lock);
···
 
 	reclaim_stat->recent_scanned[0] += *nr_anon;
 	reclaim_stat->recent_scanned[1] += *nr_file;
-	if (!scanning_global_lru(sc)) {
-		sc->memcg_record->nr_scanned[0] += *nr_anon;
-		sc->memcg_record->nr_scanned[1] += *nr_file;
-	}
 }
 
 /*
···
 		nr_reclaimed += shrink_page_list(&page_list, zone, sc);
 	}
 
-	if (!scanning_global_lru(sc))
-		sc->memcg_record->nr_freed[file] += nr_reclaimed;
-
 	local_irq_disable();
 	if (current_is_kswapd())
 		__count_vm_events(KSWAPD_STEAL, nr_reclaimed);
···
 	}
 
 	reclaim_stat->recent_scanned[file] += nr_taken;
-	if (!scanning_global_lru(sc))
-		sc->memcg_record->nr_scanned[file] += nr_taken;
 
 	__count_zone_vm_events(PGREFILL, zone, pgscanned);
 	if (file)
···
 	 * get_scan_ratio.
 	 */
 	reclaim_stat->recent_rotated[file] += nr_rotated;
-	if (!scanning_global_lru(sc))
-		sc->memcg_record->nr_rotated[file] += nr_rotated;
 
 	move_active_pages_to_lru(zone, &l_active,
 						LRU_ACTIVE + file * LRU_FILE);
···
 	u64 fraction[2], denominator;
 	enum lru_list l;
 	int noswap = 0;
-	int force_scan = 0;
+	bool force_scan = false;
 	unsigned long nr_force_scan[2];
 
-
-	anon  = zone_nr_lru_pages(zone, sc, LRU_ACTIVE_ANON) +
-		zone_nr_lru_pages(zone, sc, LRU_INACTIVE_ANON);
-	file  = zone_nr_lru_pages(zone, sc, LRU_ACTIVE_FILE) +
-		zone_nr_lru_pages(zone, sc, LRU_INACTIVE_FILE);
-
-	if (((anon + file) >> priority) < SWAP_CLUSTER_MAX) {
-		/* kswapd does zone balancing and need to scan this zone */
-		if (scanning_global_lru(sc) && current_is_kswapd())
-			force_scan = 1;
-		/* memcg may have small limit and need to avoid priority drop */
-		if (!scanning_global_lru(sc))
-			force_scan = 1;
-	}
+	/* kswapd does zone balancing and needs to scan this zone */
+	if (scanning_global_lru(sc) && current_is_kswapd())
+		force_scan = true;
+	/* memcg may have small limit and need to avoid priority drop */
+	if (!scanning_global_lru(sc))
+		force_scan = true;
 
 	/* If we have no swap space, do not bother scanning anon pages. */
 	if (!sc->may_swap || (nr_swap_pages <= 0)) {
···
 		nr_force_scan[1] = SWAP_CLUSTER_MAX;
 		goto out;
 	}
+
+	anon  = zone_nr_lru_pages(zone, sc, LRU_ACTIVE_ANON) +
+		zone_nr_lru_pages(zone, sc, LRU_INACTIVE_ANON);
+	file  = zone_nr_lru_pages(zone, sc, LRU_ACTIVE_FILE) +
+		zone_nr_lru_pages(zone, sc, LRU_INACTIVE_FILE);
 
 	if (scanning_global_lru(sc)) {
 		free  = zone_page_state(zone, NR_FREE_PAGES);
···
 #ifdef CONFIG_CGROUP_MEM_RES_CTLR
 
 unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *mem,
-					gfp_t gfp_mask, bool noswap,
-					struct zone *zone,
-					struct memcg_scanrecord *rec,
-					unsigned long *scanned)
+						gfp_t gfp_mask, bool noswap,
+						struct zone *zone,
+						unsigned long *nr_scanned)
 {
 	struct scan_control sc = {
 		.nr_scanned = 0,
···
 		.may_swap = !noswap,
 		.order = 0,
 		.mem_cgroup = mem,
-		.memcg_record = rec,
 	};
-	ktime_t start, end;
 
 	sc.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
 			(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK);
···
 						      sc.may_writepage,
 						      sc.gfp_mask);
 
-	start = ktime_get();
 	/*
 	 * NOTE: Although we can get the priority field, using it
 	 * here is not a good idea, since it limits the pages we can scan.
···
 	 * the priority and make it zero.
 	 */
 	shrink_zone(0, zone, &sc);
-	end = ktime_get();
-
-	if (rec)
-		rec->elapsed += ktime_to_ns(ktime_sub(end, start));
-	*scanned = sc.nr_scanned;
 
 	trace_mm_vmscan_memcg_softlimit_reclaim_end(sc.nr_reclaimed);
 
+	*nr_scanned = sc.nr_scanned;
 	return sc.nr_reclaimed;
 }
 
 unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem_cont,
 					   gfp_t gfp_mask,
-					   bool noswap,
-					   struct memcg_scanrecord *rec)
+					   bool noswap)
 {
 	struct zonelist *zonelist;
 	unsigned long nr_reclaimed;
-	ktime_t start, end;
 	int nid;
 	struct scan_control sc = {
 		.may_writepage = !laptop_mode,
···
 		.nr_to_reclaim = SWAP_CLUSTER_MAX,
 		.order = 0,
 		.mem_cgroup = mem_cont,
-		.memcg_record = rec,
 		.nodemask = NULL, /* we don't care the placement */
 		.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
 				(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK),
···
 		.gfp_mask = sc.gfp_mask,
 	};
 
-	start = ktime_get();
 	/*
 	 * Unlike direct reclaim via alloc_pages(), memcg's reclaim doesn't
 	 * take care of from where we get pages. So the node where we start the
···
 					    sc.gfp_mask);
 
 	nr_reclaimed = do_try_to_free_pages(zonelist, &sc, &shrink);
-	end = ktime_get();
-	if (rec)
-		rec->elapsed += ktime_to_ns(ktime_sub(end, start));
 
 	trace_mm_vmscan_memcg_reclaim_end(nr_reclaimed);
net/core/flow.c
···
 		struct hlist_node hlist;
 		struct list_head gc_list;
 	} u;
+	struct net *net;
 	u16 family;
 	u8 dir;
 	u32 genid;
···
 static u32 flow_hash_code(struct flow_cache *fc,
 			  struct flow_cache_percpu *fcp,
-			  const struct flowi *key)
+			  const struct flowi *key,
+			  size_t keysize)
 {
 	const u32 *k = (const u32 *) key;
+	const u32 length = keysize * sizeof(flow_compare_t) / sizeof(u32);

-	return jhash2(k, (sizeof(*key) / sizeof(u32)), fcp->hash_rnd)
+	return jhash2(k, length, fcp->hash_rnd)
 		& (flow_cache_hash_size(fc) - 1);
 }

-typedef unsigned long flow_compare_t;
-
 /* I hear what you're saying, use memcmp.  But memcmp cannot make
- * important assumptions that we can here, such as alignment and
- * constant size.
+ * important assumptions that we can here, such as alignment.
  */
-static int flow_key_compare(const struct flowi *key1, const struct flowi *key2)
+static int flow_key_compare(const struct flowi *key1, const struct flowi *key2,
+			    size_t keysize)
 {
 	const flow_compare_t *k1, *k1_lim, *k2;
-	const int n_elem = sizeof(struct flowi) / sizeof(flow_compare_t);
-
-	BUILD_BUG_ON(sizeof(struct flowi) % sizeof(flow_compare_t));

 	k1 = (const flow_compare_t *) key1;
-	k1_lim = k1 + n_elem;
+	k1_lim = k1 + keysize;

 	k2 = (const flow_compare_t *) key2;
···
 	struct flow_cache_entry *fle, *tfle;
 	struct hlist_node *entry;
 	struct flow_cache_object *flo;
+	size_t keysize;
 	unsigned int hash;

 	local_bh_disable();
···
 	fle = NULL;
 	flo = NULL;
+
+	keysize = flow_key_size(family);
+	if (!keysize)
+		goto nocache;
+
 	/* Packet really early in init?  Making flow_cache_init a
 	 * pre-smp initcall would solve this.  --RR */
 	if (!fcp->hash_table)
···
 	if (fcp->hash_rnd_recalc)
 		flow_new_hash_rnd(fc, fcp);

-	hash = flow_hash_code(fc, fcp, key);
+	hash = flow_hash_code(fc, fcp, key, keysize);
 	hlist_for_each_entry(tfle, entry, &fcp->hash_table[hash], u.hlist) {
-		if (tfle->family == family &&
+		if (tfle->net == net &&
+		    tfle->family == family &&
 		    tfle->dir == dir &&
-		    flow_key_compare(key, &tfle->key) == 0) {
+		    flow_key_compare(key, &tfle->key, keysize) == 0) {
 			fle = tfle;
 			break;
 		}
···
 	fle = kmem_cache_alloc(flow_cachep, GFP_ATOMIC);
 	if (fle) {
+		fle->net = net;
 		fle->family = family;
 		fle->dir = dir;
-		memcpy(&fle->key, key, sizeof(*key));
+		memcpy(&fle->key, key, keysize * sizeof(flow_compare_t));
 		fle->object = NULL;
 		hlist_add_head(&fle->u.hlist, &fcp->hash_table[hash]);
 		fcp->hash_count++;
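The reworked flow_key_compare() walks the key word-by-word instead of using memcmp, relying on the key being word-aligned and the caller passing the length in flow_compare_t units. A minimal userspace sketch of the same word-wise comparison (key_compare is an illustrative name, not a kernel function):

```c
#include <stddef.h>

typedef unsigned long flow_compare_t;

/* Word-wise key comparison in the spirit of flow_key_compare():
 * keysize is counted in flow_compare_t units and both keys are
 * assumed to be word-aligned. Returns 0 when the keys match. */
static int key_compare(const void *key1, const void *key2, size_t keysize)
{
	const flow_compare_t *k1 = key1;
	const flow_compare_t *k1_lim = k1 + keysize;
	const flow_compare_t *k2 = key2;

	while (k1 < k1_lim) {
		if (*k1 != *k2)
			return 1;
		k1++;
		k2++;
	}
	return 0;
}
```

Comparing whole machine words lets the compiler avoid the byte loop that a generic memcmp must be prepared for.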
net/core/skbuff.c | +17 -5
···
 }
 EXPORT_SYMBOL_GPL(skb_morph);

-/* skb frags copy userspace buffers to kernel */
-static int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
+/*	skb_copy_ubufs	-	copy userspace skb frags buffers to kernel
+ *	@skb: the skb to modify
+ *	@gfp_mask: allocation priority
+ *
+ *	This must be called on SKBTX_DEV_ZEROCOPY skb.
+ *	It will copy all frags into kernel and drop the reference
+ *	to userspace pages.
+ *
+ *	If this function is called from an interrupt gfp_mask() must be
+ *	%GFP_ATOMIC.
+ *
+ *	Returns 0 on success or a negative error code on failure
+ *	to allocate kernel memory to copy to.
+ */
+int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
 {
 	int i;
 	int num_frags = skb_shinfo(skb)->nr_frags;
···
 		skb_shinfo(skb)->frags[i - 1].page = head;
 		head = (struct page *)head->private;
 	}
+
+	skb_shinfo(skb)->tx_flags &= ~SKBTX_DEV_ZEROCOPY;
 	return 0;
 }
···
 	if (skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) {
 		if (skb_copy_ubufs(skb, gfp_mask))
 			return NULL;
-		skb_shinfo(skb)->tx_flags &= ~SKBTX_DEV_ZEROCOPY;
 	}

 	n = skb + 1;
···
 			n = NULL;
 			goto out;
 		}
-		skb_shinfo(skb)->tx_flags &= ~SKBTX_DEV_ZEROCOPY;
 	}
 	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
 		skb_shinfo(n)->frags[i] = skb_shinfo(skb)->frags[i];
···
 	if (skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) {
 		if (skb_copy_ubufs(skb, gfp_mask))
 			goto nofrags;
-		skb_shinfo(skb)->tx_flags &= ~SKBTX_DEV_ZEROCOPY;
 	}
 	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
 		get_page(skb_shinfo(skb)->frags[i].page);
net/ipv4/tcp_input.c
···
 		return 0;

 	/* ...Then it's D-SACK, and must reside below snd_una completely */
-	if (!after(end_seq, tp->snd_una))
+	if (after(end_seq, tp->snd_una))
 		return 0;

 	if (!before(start_seq, tp->undo_marker))
net/ipv4/tcp_ipv4.c | +28 -21
···
 	kfree(inet_rsk(req)->opt);
 }

-static void syn_flood_warning(const struct sk_buff *skb)
+/*
+ * Return 1 if a syncookie should be sent
+ */
+int tcp_syn_flood_action(struct sock *sk,
+			 const struct sk_buff *skb,
+			 const char *proto)
 {
-	const char *msg;
+	const char *msg = "Dropping request";
+	int want_cookie = 0;
+	struct listen_sock *lopt;
+

 #ifdef CONFIG_SYN_COOKIES
-	if (sysctl_tcp_syncookies)
+	if (sysctl_tcp_syncookies) {
 		msg = "Sending cookies";
-	else
+		want_cookie = 1;
+		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPREQQFULLDOCOOKIES);
+	} else
 #endif
-		msg = "Dropping request";
+		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPREQQFULLDROP);

-	pr_info("TCP: Possible SYN flooding on port %d. %s.\n",
-		ntohs(tcp_hdr(skb)->dest), msg);
+	lopt = inet_csk(sk)->icsk_accept_queue.listen_opt;
+	if (!lopt->synflood_warned) {
+		lopt->synflood_warned = 1;
+		pr_info("%s: Possible SYN flooding on port %d. %s. "
+			" Check SNMP counters.\n",
+			proto, ntohs(tcp_hdr(skb)->dest), msg);
+	}
+	return want_cookie;
 }
+EXPORT_SYMBOL(tcp_syn_flood_action);

 /*
  * Save and compile IPv4 options into the request_sock if needed.
···
 	__be32 saddr = ip_hdr(skb)->saddr;
 	__be32 daddr = ip_hdr(skb)->daddr;
 	__u32 isn = TCP_SKB_CB(skb)->when;
-#ifdef CONFIG_SYN_COOKIES
 	int want_cookie = 0;
-#else
-#define want_cookie 0 /* Argh, why doesn't gcc optimize this :( */
-#endif

 	/* Never answer to SYNs send to broadcast or multicast */
 	if (skb_rtable(skb)->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST))
···
 	 * evidently real one.
 	 */
 	if (inet_csk_reqsk_queue_is_full(sk) && !isn) {
-		if (net_ratelimit())
-			syn_flood_warning(skb);
-#ifdef CONFIG_SYN_COOKIES
-		if (sysctl_tcp_syncookies) {
-			want_cookie = 1;
-		} else
-#endif
-			goto drop;
+		want_cookie = tcp_syn_flood_action(sk, skb, "TCP");
+		if (!want_cookie)
+			goto drop;
 	}

 	/* Accept backlog is full. If we have already queued enough
···
 		while (l-- > 0)
 			*c++ ^= *hash_location++;

-#ifdef CONFIG_SYN_COOKIES
 		want_cookie = 0;	/* not our kind of cookie */
-#endif
 		tmp_ext.cookie_out_never = 0; /* false */
 		tmp_ext.cookie_plus = tmp_opt.cookie_plus;
 	} else if (!tp->rx_opt.cookie_in_always) {
···
 extern int  sysctl_fast_poll_increase;
 extern char sysctl_devname[];
 extern int  sysctl_max_baud_rate;
-extern int  sysctl_min_tx_turn_time;
-extern int  sysctl_max_tx_data_size;
-extern int  sysctl_max_tx_window;
+extern unsigned int sysctl_min_tx_turn_time;
+extern unsigned int sysctl_max_tx_data_size;
+extern unsigned int sysctl_max_tx_window;
 extern int  sysctl_max_noreply_time;
 extern int  sysctl_warn_noreply_time;
 extern int  sysctl_lap_keepalive_time;
net/irda/qos.c | +3 -3
···
  * Default is 10us which means using the unmodified value given by the
  * peer except if it's 0 (0 is likely a bug in the other stack).
  */
-unsigned sysctl_min_tx_turn_time = 10;
+unsigned int sysctl_min_tx_turn_time = 10;
 /*
  * Maximum data size to be used in transmission in payload of LAP frame.
  * There is a bit of confusion in the IrDA spec :
···
  * bytes frames or all negotiated frame sizes, but you can use the sysctl
  * to play with this value anyway.
  * Jean II */
-unsigned sysctl_max_tx_data_size = 2042;
+unsigned int sysctl_max_tx_data_size = 2042;
 /*
  * Maximum transmit window, i.e. number of LAP frames between turn-around.
  * This allow to override what the peer told us. Some peers are buggy and
  * don't always support what they tell us.
  * Jean II */
-unsigned sysctl_max_tx_window = 7;
+unsigned int sysctl_max_tx_window = 7;

 static int irlap_param_baud_rate(void *instance, irda_param_t *param, int get);
 static int irlap_param_link_disconnect(void *instance, irda_param_t *parm,
···
 		break;

 	case PPTP_WAN_ERROR_NOTIFY:
+	case PPTP_SET_LINK_INFO:
 	case PPTP_ECHO_REQUEST:
 	case PPTP_ECHO_REPLY:
 		/* I don't have to explain these ;) */
net/sctp/sm_sideeffect.c
···
 	case SCTP_CMD_PURGE_ASCONF_QUEUE:
 		sctp_asconf_queue_teardown(asoc);
 		break;
+
+	case SCTP_CMD_SET_ASOC:
+		asoc = cmd->obj.asoc;
+		break;
+
 	default:
 		pr_warn("Impossible command: %u, %p\n",
 			cmd->verb, cmd->obj.ptr);
net/sctp/sm_statefuns.c | +6
···
 	sctp_add_cmd_sf(commands, SCTP_CMD_NEW_ASOC, SCTP_ASOC(new_asoc));
 	sctp_add_cmd_sf(commands, SCTP_CMD_DELETE_TCB, SCTP_NULL());

+	/* Restore association pointer to provide SCTP command interpreter
+	 * with a valid context in case it needs to manipulate
+	 * the queues */
+	sctp_add_cmd_sf(commands, SCTP_CMD_SET_ASOC,
+			 SCTP_ASOC((struct sctp_association *)asoc));
+
 	return retval;

 nomem:
net/wireless/nl80211.c | +4 -1
···
 		if (len % sizeof(u32))
 			return -EINVAL;

+		if (settings->n_akm_suites > NL80211_MAX_NR_AKM_SUITES)
+			return -EINVAL;
+
 		memcpy(settings->akm_suites, data, len);

-		for (i = 0; i < settings->n_ciphers_pairwise; i++)
+		for (i = 0; i < settings->n_akm_suites; i++)
 			if (!nl80211_valid_akm_suite(settings->akm_suites[i]))
 				return -EINVAL;
 	}
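The nl80211 fix above is an instance of a general pattern: validate an attacker-controlled element count before memcpy into a fixed-size array. A hedged userspace sketch (struct crypto_settings, MAX_NR_AKM_SUITES and set_akm_suites are illustrative stand-ins, not the real nl80211 definitions):

```c
#include <stdint.h>
#include <string.h>

#define MAX_NR_AKM_SUITES 2	/* stand-in for NL80211_MAX_NR_AKM_SUITES */
#define ERR_INVAL (-1)		/* stand-in for -EINVAL */

struct crypto_settings {
	uint32_t akm_suites[MAX_NR_AKM_SUITES];
	int n_akm_suites;
};

/* Reject a byte length that is not a whole number of u32s, then
 * bound the derived element count *before* copying into the
 * fixed-size array, as the patch does. */
static int set_akm_suites(struct crypto_settings *s,
			  const uint32_t *data, size_t len)
{
	if (len % sizeof(uint32_t))
		return ERR_INVAL;
	s->n_akm_suites = len / sizeof(uint32_t);
	if (s->n_akm_suites > MAX_NR_AKM_SUITES)
		return ERR_INVAL;
	memcpy(s->akm_suites, data, len);
	return 0;
}
```

Without the count check the memcpy writes past the end of akm_suites, which is exactly the overflow the one added `if` closes.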
sound/core/pcm_lib.c
···
 	snd_pcm_uframes_t avail = 0;
 	long wait_time, tout;

+	init_waitqueue_entry(&wait, current);
+	set_current_state(TASK_INTERRUPTIBLE);
+	add_wait_queue(&runtime->tsleep, &wait);
+
 	if (runtime->no_period_wakeup)
 		wait_time = MAX_SCHEDULE_TIMEOUT;
 	else {
···
 		}
 		wait_time = msecs_to_jiffies(wait_time * 1000);
 	}
-	init_waitqueue_entry(&wait, current);
-	add_wait_queue(&runtime->tsleep, &wait);
+
 	for (;;) {
 		if (signal_pending(current)) {
 			err = -ERESTARTSYS;
 			break;
 		}
+
+		/*
+		 * We need to check if space became available already
+		 * (and thus the wakeup happened already) first to close
+		 * the race of space already having become available.
+		 * This check must happen after being added to the waitqueue
+		 * and having current state be INTERRUPTIBLE.
+		 */
+		if (is_playback)
+			avail = snd_pcm_playback_avail(runtime);
+		else
+			avail = snd_pcm_capture_avail(runtime);
+		if (avail >= runtime->twake)
+			break;
 		snd_pcm_stream_unlock_irq(substream);
-		tout = schedule_timeout_interruptible(wait_time);
+
+		tout = schedule_timeout(wait_time);
+
 		snd_pcm_stream_lock_irq(substream);
+		set_current_state(TASK_INTERRUPTIBLE);
 		switch (runtime->status->state) {
 		case SNDRV_PCM_STATE_SUSPENDED:
 			err = -ESTRPIPE;
···
 			err = -EIO;
 			break;
 		}
-		if (is_playback)
-			avail = snd_pcm_playback_avail(runtime);
-		else
-			avail = snd_pcm_capture_avail(runtime);
-		if (avail >= runtime->twake)
-			break;
 	}
  _endloop:
+	set_current_state(TASK_RUNNING);
 	remove_wait_queue(&runtime->tsleep, &wait);
 	*availp = avail;
 	return err;
sound/pci/hda/hda_codec.c
···
 		return -1;
 	}
 	recursive++;
-	for (i = 0; i < nums; i++)
+	for (i = 0; i < nums; i++) {
+		unsigned int type = get_wcaps_type(get_wcaps(codec, conn[i]));
+		if (type == AC_WID_PIN || type == AC_WID_AUD_OUT)
+			continue;
 		if (snd_hda_get_conn_index(codec, conn[i], nid, recursive) >= 0)
 			return i;
+	}
 	return -1;
 }
 EXPORT_SYMBOL_HDA(snd_hda_get_conn_index);
sound/pci/hda/hda_intel.c | +5 -4
···
 }

 static unsigned int azx_get_position(struct azx *chip,
-				     struct azx_dev *azx_dev)
+				     struct azx_dev *azx_dev,
+				     bool with_check)
 {
 	unsigned int pos;
 	int stream = azx_dev->substream->stream;
···
 	default:
 		/* use the position buffer */
 		pos = le32_to_cpu(*azx_dev->posbuf);
-		if (chip->position_fix[stream] == POS_FIX_AUTO) {
+		if (with_check && chip->position_fix[stream] == POS_FIX_AUTO) {
 			if (!pos || pos == (u32)-1) {
 				printk(KERN_WARNING
 				       "hda-intel: Invalid position buffer, "
···
 	struct azx *chip = apcm->chip;
 	struct azx_dev *azx_dev = get_azx_dev(substream);
 	return bytes_to_frames(substream->runtime,
-			       azx_get_position(chip, azx_dev));
+			       azx_get_position(chip, azx_dev, false));
 }

 /*
···
 		return -1;	/* bogus (too early) interrupt */

 	stream = azx_dev->substream->stream;
-	pos = azx_get_position(chip, azx_dev);
+	pos = azx_get_position(chip, azx_dev, true);

 	if (WARN_ONCE(!azx_dev->period_bytes,
 		      "hda-intel: zero azx_dev->period_bytes"))
sound/pci/hda/patch_cirrus.c | +1 -1
···
 			 int index, unsigned int pval, int dir,
 			 struct snd_kcontrol **kctlp)
 {
-	char tmp[32];
+	char tmp[44];
 	struct snd_kcontrol_new knew =
 		HDA_CODEC_VOLUME_IDX(tmp, index, 0, 0, HDA_OUTPUT);
 	knew.private_value = pval;
sound/pci/hda/patch_realtek.c | +12 -5
···
 	unsigned int auto_mic_valid_imux:1;	/* valid imux for auto-mic */
 	unsigned int automute:1;		/* HP automute enabled */
 	unsigned int detect_line:1;		/* Line-out detection enabled */
-	unsigned int automute_lines:1;	/* automute line-out as well */
+	unsigned int automute_lines:1;	/* automute line-out as well; NOP when automute_hp_lo isn't set */
 	unsigned int automute_hp_lo:1;	/* both HP and LO available */

 	/* other flags */
···
 	if (spec->autocfg.line_out_pins[0] == spec->autocfg.hp_pins[0] ||
 	    spec->autocfg.line_out_pins[0] == spec->autocfg.speaker_pins[0])
 		return;
-	if (!spec->automute_lines || !spec->automute)
+	if (!spec->automute || (spec->automute_hp_lo && !spec->automute_lines))
 		on = 0;
 	else
 		on = spec->jack_present;
···
 static void alc_line_automute(struct hda_codec *codec)
 {
 	struct alc_spec *spec = codec->spec;
+
+	/* check LO jack only when it's different from HP */
+	if (spec->autocfg.line_out_pins[0] == spec->autocfg.hp_pins[0])
+		return;

 	spec->line_jack_present =
 		detect_jacks(codec, ARRAY_SIZE(spec->autocfg.line_out_pins),
···
 	unsigned int val;
 	if (!spec->automute)
 		val = 0;
-	else if (!spec->automute_lines)
+	else if (!spec->automute_hp_lo || !spec->automute_lines)
 		val = 1;
 	else
 		val = 2;
···
 		spec->automute = 0;
 		break;
 	case 1:
-		if (spec->automute && !spec->automute_lines)
+		if (spec->automute &&
+		    (!spec->automute_hp_lo || !spec->automute_lines))
 			return 0;
 		spec->automute = 1;
 		spec->automute_lines = 0;
···
 	 * 15   : 1 --> enable the function "Mute internal speaker
 	 *	        when the external headphone out jack is plugged"
 	 */
-	if (!spec->autocfg.hp_pins[0]) {
+	if (!spec->autocfg.hp_pins[0] &&
+	    !(spec->autocfg.line_out_pins[0] &&
+	      spec->autocfg.line_out_type == AUTO_PIN_HP_OUT)) {
 		hda_nid_t nid;
 		tmp = (ass >> 11) & 0x3;	/* HP to chassis */
 		if (tmp == 0)
···
 	return 0;
 }

-static int bf5xx_probe(struct platform_device *pdev)
+static int bf5xx_probe(struct snd_soc_card *card)
 {
 	int err;
 	if (gpio_request(GPIO_SE, "AD73311_SE")) {
sound/soc/omap/omap-mcpdm.h
···
 extern void omap_mcpdm_free(void);
 extern int omap_mcpdm_set_offset(int offset1, int offset2);
 int __devinit omap_mcpdm_probe(struct platform_device *pdev);
-int __devexit omap_mcpdm_remove(struct platform_device *pdev);
+int omap_mcpdm_remove(struct platform_device *pdev);
sound/soc/omap/omap-mcbsp.c | +6
···
 	struct omap_mcbsp_reg_cfg *regs = &mcbsp_data->regs;
 	int err = 0;

+	if (mcbsp_data->active)
+		if (freq == mcbsp_data->in_freq)
+			return 0;
+		else
+			return -EBUSY;
+
 	/* The McBSP signal muxing functions are only available on McBSP1 */
 	if (clk_id == OMAP_MCBSP_CLKR_SRC_CLKR ||
 	    clk_id == OMAP_MCBSP_CLKR_SRC_CLKX ||
sound/soc/pxa/zylonite.c | +4 -4
···
 	if (clk_pout) {
 		pout = clk_get(NULL, "CLK_POUT");
 		if (IS_ERR(pout)) {
-			dev_err(&pdev->dev, "Unable to obtain CLK_POUT: %ld\n",
+			dev_err(card->dev, "Unable to obtain CLK_POUT: %ld\n",
 				PTR_ERR(pout));
 			return PTR_ERR(pout);
 		}

 		ret = clk_enable(pout);
 		if (ret != 0) {
-			dev_err(&pdev->dev, "Unable to enable CLK_POUT: %d\n",
+			dev_err(card->dev, "Unable to enable CLK_POUT: %d\n",
 				ret);
 			clk_put(pout);
 			return ret;
 		}

-		dev_dbg(&pdev->dev, "MCLK enabled at %luHz\n",
+		dev_dbg(card->dev, "MCLK enabled at %luHz\n",
 			clk_get_rate(pout));
 	}
···
 	if (clk_pout) {
 		ret = clk_enable(pout);
 		if (ret != 0)
-			dev_err(&pdev->dev, "Unable to enable CLK_POUT: %d\n",
+			dev_err(card->dev, "Unable to enable CLK_POUT: %d\n",
 				ret);
 	}
sound/soc/soc-cache.c | +6 -6
···
 	rbnode = rb_entry(node, struct snd_soc_rbtree_node, node);
 	for (i = 0; i < rbnode->blklen; ++i) {
 		regtmp = rbnode->base_reg + i;
-		WARN_ON(codec->writable_register &&
-			codec->writable_register(codec, regtmp));
 		val = snd_soc_rbtree_get_register(rbnode, i);
 		def = snd_soc_get_cache_val(codec->reg_def_copy, i,
 					    rbnode->word_size);
 		if (val == def)
 			continue;
+
+		WARN_ON(!snd_soc_codec_writable_register(codec, regtmp));

 		codec->cache_bypass = 1;
 		ret = snd_soc_write(codec, regtmp, val);
···
 	lzo_blocks = codec->reg_cache;
 	for_each_set_bit(i, lzo_blocks[0]->sync_bmp, lzo_blocks[0]->sync_bmp_nbits) {
-		WARN_ON(codec->writable_register &&
-			codec->writable_register(codec, i));
+		WARN_ON(!snd_soc_codec_writable_register(codec, i));
 		ret = snd_soc_cache_read(codec, i, &val);
 		if (ret)
 			return ret;
···
 	codec_drv = codec->driver;
 	for (i = 0; i < codec_drv->reg_cache_size; ++i) {
-		WARN_ON(codec->writable_register &&
-			codec->writable_register(codec, i));
 		ret = snd_soc_cache_read(codec, i, &val);
 		if (ret)
 			return ret;
···
 		if (snd_soc_get_cache_val(codec->reg_def_copy,
 					  i, codec_drv->reg_word_size) == val)
 			continue;
+
+		WARN_ON(!snd_soc_codec_writable_register(codec, i));
+
 		ret = snd_soc_write(codec, i, val);
 		if (ret)
 			return ret;
sound/soc/soc-core.c | +17 -5
···
 #include <linux/bitops.h>
 #include <linux/debugfs.h>
 #include <linux/platform_device.h>
+#include <linux/ctype.h>
 #include <linux/slab.h>
 #include <sound/ac97_codec.h>
 #include <sound/core.h>
···
 		 "%s", card->name);
 	snprintf(card->snd_card->longname, sizeof(card->snd_card->longname),
 		 "%s", card->long_name ? card->long_name : card->name);
-	if (card->driver_name)
-		strlcpy(card->snd_card->driver, card->driver_name,
-			sizeof(card->snd_card->driver));
+	snprintf(card->snd_card->driver, sizeof(card->snd_card->driver),
+		 "%s", card->driver_name ? card->driver_name : card->name);
+	for (i = 0; i < ARRAY_SIZE(card->snd_card->driver); i++) {
+		switch (card->snd_card->driver[i]) {
+		case '_':
+		case '-':
+		case '\0':
+			break;
+		default:
+			if (!isalnum(card->snd_card->driver[i]))
+				card->snd_card->driver[i] = '_';
+			break;
+		}
+	}

 	if (card->late_probe) {
 		ret = card->late_probe(card);
···
 	if (codec->readable_register)
 		return codec->readable_register(codec, reg);
 	else
-		return 0;
+		return 1;
 }
 EXPORT_SYMBOL_GPL(snd_soc_codec_readable_register);
···
 	if (codec->writable_register)
 		return codec->writable_register(codec, reg);
 	else
-		return 0;
+		return 1;
 }
 EXPORT_SYMBOL_GPL(snd_soc_codec_writable_register);
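The soc-core change above sanitizes the ALSA driver-name string so it contains only alphanumerics, '_' and '-'. A minimal userspace sketch of the same clean-up (sanitize_driver_name is an illustrative helper; the kernel loop walks the whole fixed-size buffer and skips embedded '\0', while this sketch stops at the first terminator):

```c
#include <ctype.h>

/* Replace every character other than alphanumerics, '_' and '-'
 * with '_', in place, mirroring the loop added to soc-core. */
static void sanitize_driver_name(char *name)
{
	char *p;

	for (p = name; *p; p++) {
		switch (*p) {
		case '_':
		case '-':
			break;
		default:
			if (!isalnum((unsigned char)*p))
				*p = '_';
			break;
		}
	}
}
```

This keeps card->snd_card->driver usable as an identifier (e.g. for udev and config file matching) even when the card name contains spaces or punctuation.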
tools/perf/util/evsel.c
···
  * Released under the GPL v2. (and only v2, not any later version)
  */

+#include <byteswap.h>
+#include "asm/bug.h"
 #include "evsel.h"
 #include "evlist.h"
 #include "util.h"
···
 int perf_event__parse_sample(const union perf_event *event, u64 type,
 			     int sample_size, bool sample_id_all,
-			     struct perf_sample *data)
+			     struct perf_sample *data, bool swapped)
 {
 	const u64 *array;

+	/*
+	 * used for cross-endian analysis. See git commit 65014ab3
+	 * for why this goofiness is needed.
+	 */
+	union {
+		u64 val64;
+		u32 val32[2];
+	} u;

 	data->cpu = data->pid = data->tid = -1;
 	data->stream_id = data->id = data->time = -1ULL;
···
 	}

 	if (type & PERF_SAMPLE_TID) {
-		u32 *p = (u32 *)array;
-		data->pid = p[0];
-		data->tid = p[1];
+		u.val64 = *array;
+		if (swapped) {
+			/* undo swap of u64, then swap on individual u32s */
+			u.val64 = bswap_64(u.val64);
+			u.val32[0] = bswap_32(u.val32[0]);
+			u.val32[1] = bswap_32(u.val32[1]);
+		}
+
+		data->pid = u.val32[0];
+		data->tid = u.val32[1];
 		array++;
 	}
···
 	}

 	if (type & PERF_SAMPLE_CPU) {
-		u32 *p = (u32 *)array;
-		data->cpu = *p;
+
+		u.val64 = *array;
+		if (swapped) {
+			/* undo swap of u64, then swap on individual u32s */
+			u.val64 = bswap_64(u.val64);
+			u.val32[0] = bswap_32(u.val32[0]);
+		}
+
+		data->cpu = u.val32[0];
 		array++;
 	}
···
 	}

 	if (type & PERF_SAMPLE_RAW) {
-		u32 *p = (u32 *)array;
+		const u64 *pdata;
+
+		u.val64 = *array;
+		if (WARN_ONCE(swapped,
+			      "Endianness of raw data not corrected!\n")) {
+			/* undo swap of u64, then swap on individual u32s */
+			u.val64 = bswap_64(u.val64);
+			u.val32[0] = bswap_32(u.val32[0]);
+			u.val32[1] = bswap_32(u.val32[1]);
+		}

 		if (sample_overlap(event, array, sizeof(u32)))
 			return -EFAULT;

-		data->raw_size = *p;
-		p++;
+		data->raw_size = u.val32[0];
+		pdata = (void *) array + sizeof(u32);

-		if (sample_overlap(event, p, data->raw_size))
+		if (sample_overlap(event, pdata, data->raw_size))
 			return -EFAULT;

-		data->raw_data = p;
+		data->raw_data = (void *) pdata;
 	}

 	return 0;
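The union trick in the evsel.c change can be exercised in isolation: when a perf.data file comes from an opposite-endian machine, the generic sample swap runs bswap_64 over every u64, which scrambles fields that are really two packed u32s; the fix undoes the 64-bit swap and then swaps each 32-bit half. A hedged sketch (split_swapped_u64 is an illustrative helper, not a perf function):

```c
#include <byteswap.h>
#include <stdint.h>

/* Illustrative helper (not part of perf): recover two packed u32
 * fields (e.g. pid/tid) from a u64 that a blanket bswap_64 already
 * passed over. If swapped, undo the 64-bit swap to restore the
 * original byte order, then byte-swap each 32-bit half. */
static void split_swapped_u64(uint64_t val, int swapped,
			      uint32_t *first, uint32_t *second)
{
	union {
		uint64_t val64;
		uint32_t val32[2];
	} u;

	u.val64 = val;
	if (swapped) {
		/* undo swap of u64, then swap on individual u32s */
		u.val64 = bswap_64(u.val64);
		u.val32[0] = bswap_32(u.val32[0]);
		u.val32[1] = bswap_32(u.val32[1]);
	}
	*first = u.val32[0];
	*second = u.val32[1];
}
```

A plain cast through `u32 *`, as in the removed code, silently returns the scrambled halves in the cross-endian case, which is exactly what the patch corrects.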
tools/perf/util/probe-finder.c | +1 -1
···
 		if (!die_find_variable_at(&pf->cu_die, pf->pvar->var, 0, &vr_die))
 			ret = -ENOENT;
 	}
-	if (ret == 0)
+	if (ret >= 0)
 		ret = convert_variable(&vr_die, pf);

 	if (ret < 0)