···
-What:           /sys/class/usb_host/usb_hostN/wusb_chid
+What:           /sys/class/uwb_rc/uwbN/wusbhc/wusb_chid
 Date:           July 2008
 KernelVersion:  2.6.27
 Contact:        David Vrabel <david.vrabel@csr.com>
···
                 Set an all zero CHID to stop the host controller.

-What:           /sys/class/usb_host/usb_hostN/wusb_trust_timeout
+What:           /sys/class/uwb_rc/uwbN/wusbhc/wusb_trust_timeout
 Date:           July 2008
 KernelVersion:  2.6.27
 Contact:        David Vrabel <david.vrabel@csr.com>
+4-4
Documentation/debugging-via-ohci1394.txt
···
 Bernhard Kaindl enhanced firescope to support accessing 64-bit machines
 from 32-bit firescope and vice versa:
-- ftp://ftp.suse.de/private/bk/firewire/tools/firescope-0.2.2.tar.bz2
+- http://halobates.de/firewire/firescope-0.2.2.tar.bz2

 and he implemented fast system dump (alpha version - read README.txt):
-- ftp://ftp.suse.de/private/bk/firewire/tools/firedump-0.1.tar.bz2
+- http://halobates.de/firewire/firedump-0.1.tar.bz2

 There is also a gdb proxy for firewire which allows to use gdb to access
 data which can be referenced from symbols found by gdb in vmlinux:
-- ftp://ftp.suse.de/private/bk/firewire/tools/fireproxy-0.33.tar.bz2
+- http://halobates.de/firewire/fireproxy-0.33.tar.bz2

 The latest version of this gdb proxy (fireproxy-0.34) can communicate (not
 yet stable) with kgdb over a memory-based communication module (kgdbom).
···
 Notes
 -----
-Documentation and specifications: ftp://ftp.suse.de/private/bk/firewire/docs
+Documentation and specifications: http://halobates.de/firewire/

 FireWire is a trademark of Apple Inc. - for more information please refer to:
 http://en.wikipedia.org/wiki/FireWire
+30
Documentation/feature-removal-schedule.txt
···
 	  will also allow making ALSA OSS emulation independent of
 	  sound_core.  The dependency will be broken then too.
 Who:	Tejun Heo <tj@kernel.org>
+
+----------------------------
+
+What:	Support for VMware's guest paravirtualization technique [VMI] will be
+	dropped.
+When:	2.6.37 or earlier.
+Why:	With the recent innovations in CPU hardware acceleration technologies
+	from Intel and AMD, VMware ran a few experiments to compare these
+	techniques to guest paravirtualization technique on VMware's platform.
+	These hardware assisted virtualization techniques have outperformed the
+	performance benefits provided by VMI in most of the workloads. VMware
+	expects that these hardware features will be ubiquitous in a couple of
+	years, as a result, VMware has started a phased retirement of this
+	feature from the hypervisor. We will be removing this feature from the
+	kernel too. Right now we are targeting 2.6.37 but can retire earlier if
+	technical reasons (read opportunity to remove major chunk of pvops)
+	arise.
+
+	Please note that VMI has always been an optimization and non-VMI kernels
+	still work fine on VMware's platform.
+	Latest versions of VMware's product which support VMI are
+	Workstation 7.0 and vSphere 4.0 on the ESX side; future maintenance
+	releases for these products will continue supporting VMI.
+
+	For more details about VMI retirement take a look at
+	http://blogs.vmware.com/guestosguide/2009/09/vmi-retirement.html
+
+Who:	Alok N Kataria <akataria@vmware.com>
+
+----------------------------
+12-4
Documentation/filesystems/ext3.txt
···

 sb=n				Use alternate superblock at this location.

-quota
-noquota
-grpquota
-usrquota
+quota				These options are ignored by the filesystem. They
+noquota				are used only by quota tools to recognize volumes
+grpquota			where quota should be turned on. See documentation
+usrquota			in the quota-tools package for more details
+				(http://sourceforge.net/projects/linuxquota).
+
+jqfmt=<quota type>		These options tell filesystem details about quota
+usrjquota=<file>		so that quota information can be properly updated
+grpjquota=<file>		during journal replay. They replace the above
+				quota options. See documentation in the quota-tools
+				package for more details
+				(http://sourceforge.net/projects/linuxquota).

 bh		(*)	ext3 associates buffer heads to data pages to
 nobh		(a)	cache disk block mapping information
+31-10
Documentation/flexible-arrays.txt
···
 Using flexible arrays in the kernel
-Last updated for 2.6.31
+Last updated for 2.6.32
 Jonathan Corbet <corbet@lwn.net>

 Large contiguous memory allocations can be unreliable in the Linux kernel.
···
 the current code, using flags to ask for high memory is likely to lead to
 notably unpleasant side effects.

+It is also possible to define flexible arrays at compile time with:
+
+    DEFINE_FLEX_ARRAY(name, element_size, total);
+
+This macro will result in a definition of an array with the given name; the
+element size and total will be checked for validity at compile time.
+
 Storing data into a flexible array is accomplished with a call to:

     int flex_array_put(struct flex_array *array, unsigned int element_nr,
···
 Note that it is possible to get back a valid pointer for an element which
 has never been stored in the array.  Memory for array elements is allocated
 one page at a time; a single allocation could provide memory for several
-adjacent elements.  The flexible array code does not know if a specific
-element has been written; it only knows if the associated memory is
-present.  So a flex_array_get() call on an element which was never stored
-in the array has the potential to return a pointer to random data.  If the
-caller does not have a separate way to know which elements were actually
-stored, it might be wise, at least, to add GFP_ZERO to the flags argument
-to ensure that all elements are zeroed.
+adjacent elements.  Flexible array elements are normally initialized to the
+value FLEX_ARRAY_FREE (defined as 0x6c in <linux/poison.h>), so errors
+involving that number probably result from use of unstored array entries.
+Note that, if array elements are allocated with __GFP_ZERO, they will be
+initialized to zero and this poisoning will not happen.

-There is no way to remove a single element from the array.  It is possible,
-though, to remove all elements with a call to:
+Individual elements in the array can be cleared with:
+
+    int flex_array_clear(struct flex_array *array, unsigned int element_nr);
+
+This function will set the given element to FLEX_ARRAY_FREE and return
+zero.  If storage for the indicated element is not allocated for the array,
+flex_array_clear() will return -EINVAL instead.  Note that clearing an
+element does not release the storage associated with it; to reduce the
+allocated size of an array, call:
+
+    int flex_array_shrink(struct flex_array *array);
+
+The return value will be the number of pages of memory actually freed.
+This function works by scanning the array for pages containing nothing but
+FLEX_ARRAY_FREE bytes, so (1) it can be expensive, and (2) it will not work
+if the array's pages are allocated with __GFP_ZERO.
+
+It is possible to remove all elements of an array with a call to:

     void flex_array_free_parts(struct flex_array *array);

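The FLEX_ARRAY_FREE semantics documented in the hunk above (unstored elements read back as 0x6c poison, clearing re-poisons, __GFP_ZERO suppresses poisoning) can be modeled outside the kernel. This is a hypothetical userspace sketch, not the real <linux/flex_array.h> implementation; the fixed-size `pool` backing store and the `model_*` names are illustrative stand-ins for the page-at-a-time allocator:

```c
/* Userspace model (NOT kernel code) of the FLEX_ARRAY_FREE poisoning
 * behavior described in Documentation/flexible-arrays.txt. */
#include <assert.h>
#include <string.h>

#define FLEX_ARRAY_FREE	0x6c	/* same poison byte as <linux/poison.h> */
#define NR_ELEMENTS	16
#define ELEMENT_SIZE	4

static unsigned char pool[NR_ELEMENTS * ELEMENT_SIZE];

static void model_init(void)
{
	/* freshly allocated parts come back filled with the poison value */
	memset(pool, FLEX_ARRAY_FREE, sizeof(pool));
}

static void model_put(unsigned int nr, const void *src)
{
	memcpy(pool + nr * ELEMENT_SIZE, src, ELEMENT_SIZE);
}

static void model_clear(unsigned int nr)
{
	/* flex_array_clear() re-poisons the element's bytes */
	memset(pool + nr * ELEMENT_SIZE, FLEX_ARRAY_FREE, ELEMENT_SIZE);
}

static int model_is_free(unsigned int nr)
{
	unsigned int i;

	/* this page-scan is the same test flex_array_shrink() relies on */
	for (i = 0; i < ELEMENT_SIZE; i++)
		if (pool[nr * ELEMENT_SIZE + i] != FLEX_ARRAY_FREE)
			return 0;
	return 1;
}
```

Storing a value makes the element look "used"; clearing it restores the all-poison pattern, which is exactly why shrinking cannot work on __GFP_ZERO arrays (zeroed elements never match the poison).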
+1
Documentation/sound/alsa/HD-Audio-Models.txt
···
 	  5stack-no-fp	D965 5stack without front panel
 	  dell-3stack	Dell Dimension E520
 	  dell-bios	Fixes with Dell BIOS setup
+	  volknob	Fixes with volume-knob widget 0x24
 	  auto		BIOS setup (default)

 STAC92HD71B*
+16
MAINTAINERS
···
 W:	http://www.linux1394.org/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6.git
 S:	Maintained
+F:	Documentation/debugging-via-ohci1394.txt
 F:	drivers/ieee1394/

 IEEE 1394 RAW I/O DRIVER
···
 M:	"David S. Miller" <davem@davemloft.net>
 L:	netdev@vger.kernel.org
 W:	http://www.linuxfoundation.org/en/Net
+W:	http://patchwork.ozlabs.org/project/netdev/list/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6.git
 S:	Maintained
 F:	net/
···
 M:	Paul Mackerras <paulus@samba.org>
 M:	Ingo Molnar <mingo@elte.hu>
 S:	Supported
+F:	kernel/perf_event.c
+F:	include/linux/perf_event.h
+F:	arch/*/*/kernel/perf_event.c
+F:	arch/*/include/asm/perf_event.h
+F:	arch/*/lib/perf_event.c
+F:	arch/*/kernel/perf_callchain.c
+F:	tools/perf/

 PERSONALITY HANDLING
 M:	Christoph Hellwig <hch@infradead.org>
···
 S:	Maintained
 F:	drivers/vlynq/vlynq.c
 F:	include/linux/vlynq.h
+
+VMWARE VMXNET3 ETHERNET DRIVER
+M:	Shreyas Bhatewara <sbhatewara@vmware.com>
+M:	VMware, Inc. <pv-drivers@vmware.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	drivers/net/vmxnet3/

 VOLTAGE AND CURRENT REGULATOR FRAMEWORK
 M:	Liam Girdwood <lrg@slimlogic.co.uk>
+3-45
Makefile
···
 VERSION = 2
 PATCHLEVEL = 6
 SUBLEVEL = 32
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc5
 NAME = Man-Eating Seals of Antiquity

 # *DOCUMENTATION*
···
 # Alternatively CROSS_COMPILE can be set in the environment.
 # Default value for CROSS_COMPILE is not to prefix executables
 # Note: Some architectures assign CROSS_COMPILE in their arch/*/Makefile
-#
-# To force ARCH and CROSS_COMPILE settings include kernel.* files
-# in the kernel tree - do not patch this file.
 export KBUILD_BUILDHOST := $(SUBARCH)
-
-# Kbuild save the ARCH and CROSS_COMPILE setting in kernel.* files.
-# Restore these settings and check that user did not specify
-# conflicting values.
-
-saved_arch := $(shell cat include/generated/kernel.arch 2> /dev/null)
-saved_cross := $(shell cat include/generated/kernel.cross 2> /dev/null)
-
-ifneq ($(CROSS_COMPILE),)
-        ifneq ($(saved_cross),)
-                ifneq ($(CROSS_COMPILE),$(saved_cross))
-                        $(error CROSS_COMPILE changed from \
-                                "$(saved_cross)" to \
-                                 to "$(CROSS_COMPILE)". \
-                                Use "make mrproper" to fix it up)
-                endif
-        endif
-else
-        CROSS_COMPILE := $(saved_cross)
-endif
-
-ifneq ($(ARCH),)
-        ifneq ($(saved_arch),)
-                ifneq ($(saved_arch),$(ARCH))
-                        $(error ARCH changed from \
-                                "$(saved_arch)" to "$(ARCH)". \
-                                Use "make mrproper" to fix it up)
-                endif
-        endif
-else
-        ifneq ($(saved_arch),)
-                ARCH := $(saved_arch)
-        else
-                ARCH := $(SUBARCH)
-        endif
-endif
+ARCH		?= $(SUBARCH)
+CROSS_COMPILE	?=

 # Architecture as present in compile.h
 UTS_MACHINE := $(ARCH)
···
 # used for 'make defconfig'
 include $(srctree)/arch/$(SRCARCH)/Makefile
 export KBUILD_DEFCONFIG KBUILD_KCONFIG
-
-# save ARCH & CROSS_COMPILE settings
-$(shell mkdir -p include/generated && \
-        echo $(ARCH) > include/generated/kernel.arch && \
-        echo $(CROSS_COMPILE) > include/generated/kernel.cross)

 config: scripts_basic outputmakefile FORCE
 	$(Q)mkdir -p include/linux include/config
-1
arch/arm/configs/omap3_beagle_defconfig
···
 #
 CONFIG_USB_OTG_UTILS=y
 # CONFIG_USB_GPIO_VBUS is not set
-# CONFIG_ISP1301_OMAP is not set
 CONFIG_TWL4030_USB=y
 # CONFIG_NOP_USB_XCEIV is not set
 CONFIG_MMC=y
···
 		if (c->cpu & cpu_mask) {
 			clkdev_add(&c->lk);
 			clk_register(c->lk.clk);
+			omap2_init_clk_clkdm(c->lk.clk);
 		}

 	/* Check the MPU rate set by bootloader */
+44-30
arch/arm/mach-omap2/clockdomain.c
···
 	}
 }

+/*
+ * _omap2_clkdm_set_hwsup - set the hwsup idle transition bit
+ * @clkdm: struct clockdomain *
+ * @enable: int 0 to disable, 1 to enable
+ *
+ * Internal helper for actually switching the bit that controls hwsup
+ * idle transitions for clkdm.
+ */
+static void _omap2_clkdm_set_hwsup(struct clockdomain *clkdm, int enable)
+{
+	u32 v;
+
+	if (cpu_is_omap24xx()) {
+		if (enable)
+			v = OMAP24XX_CLKSTCTRL_ENABLE_AUTO;
+		else
+			v = OMAP24XX_CLKSTCTRL_DISABLE_AUTO;
+	} else if (cpu_is_omap34xx()) {
+		if (enable)
+			v = OMAP34XX_CLKSTCTRL_ENABLE_AUTO;
+		else
+			v = OMAP34XX_CLKSTCTRL_DISABLE_AUTO;
+	} else {
+		BUG();
+	}
+
+	cm_rmw_mod_reg_bits(clkdm->clktrctrl_mask,
+			    v << __ffs(clkdm->clktrctrl_mask),
+			    clkdm->pwrdm.ptr->prcm_offs, CM_CLKSTCTRL);
+}

 static struct clockdomain *_clkdm_lookup(const char *name)
 {
···
  */
 void omap2_clkdm_allow_idle(struct clockdomain *clkdm)
 {
-	u32 v;
-
 	if (!clkdm)
 		return;
···
 	if (atomic_read(&clkdm->usecount) > 0)
 		_clkdm_add_autodeps(clkdm);

-	if (cpu_is_omap24xx())
-		v = OMAP24XX_CLKSTCTRL_ENABLE_AUTO;
-	else if (cpu_is_omap34xx())
-		v = OMAP34XX_CLKSTCTRL_ENABLE_AUTO;
-	else
-		BUG();
-
-
-	cm_rmw_mod_reg_bits(clkdm->clktrctrl_mask,
-			    v << __ffs(clkdm->clktrctrl_mask),
-			    clkdm->pwrdm.ptr->prcm_offs,
-			    CM_CLKSTCTRL);
+	_omap2_clkdm_set_hwsup(clkdm, 1);

 	pwrdm_clkdm_state_switch(clkdm);
 }
···
  */
 void omap2_clkdm_deny_idle(struct clockdomain *clkdm)
 {
-	u32 v;
-
 	if (!clkdm)
 		return;
···
 	pr_debug("clockdomain: disabling automatic idle transitions for %s\n",
 		 clkdm->name);

-	if (cpu_is_omap24xx())
-		v = OMAP24XX_CLKSTCTRL_DISABLE_AUTO;
-	else if (cpu_is_omap34xx())
-		v = OMAP34XX_CLKSTCTRL_DISABLE_AUTO;
-	else
-		BUG();
-
-	cm_rmw_mod_reg_bits(clkdm->clktrctrl_mask,
-			    v << __ffs(clkdm->clktrctrl_mask),
-			    clkdm->pwrdm.ptr->prcm_offs, CM_CLKSTCTRL);
+	_omap2_clkdm_set_hwsup(clkdm, 0);

 	if (atomic_read(&clkdm->usecount) > 0)
 		_clkdm_del_autodeps(clkdm);
···
 	v = omap2_clkdm_clktrctrl_read(clkdm);

 	if ((cpu_is_omap34xx() && v == OMAP34XX_CLKSTCTRL_ENABLE_AUTO) ||
-	    (cpu_is_omap24xx() && v == OMAP24XX_CLKSTCTRL_ENABLE_AUTO))
+	    (cpu_is_omap24xx() && v == OMAP24XX_CLKSTCTRL_ENABLE_AUTO)) {
+		/* Disable HW transitions when we are changing deps */
+		_omap2_clkdm_set_hwsup(clkdm, 0);
 		_clkdm_add_autodeps(clkdm);
-	else
+		_omap2_clkdm_set_hwsup(clkdm, 1);
+	} else {
 		omap2_clkdm_wakeup(clkdm);
+	}

 	pwrdm_wait_transition(clkdm->pwrdm.ptr);
 	pwrdm_clkdm_state_switch(clkdm);
···
 	v = omap2_clkdm_clktrctrl_read(clkdm);

 	if ((cpu_is_omap34xx() && v == OMAP34XX_CLKSTCTRL_ENABLE_AUTO) ||
-	    (cpu_is_omap24xx() && v == OMAP24XX_CLKSTCTRL_ENABLE_AUTO))
+	    (cpu_is_omap24xx() && v == OMAP24XX_CLKSTCTRL_ENABLE_AUTO)) {
+		/* Disable HW transitions when we are changing deps */
+		_omap2_clkdm_set_hwsup(clkdm, 0);
 		_clkdm_del_autodeps(clkdm);
-	else
+		_omap2_clkdm_set_hwsup(clkdm, 1);
+	} else {
 		omap2_clkdm_sleep(clkdm);
+	}

 	pwrdm_clkdm_state_switch(clkdm);
+9-6
arch/arm/plat-omap/dma.c
···
  *
  * @param arb_rate
  * @param max_fifo_depth
- * @param tparams - Number of thereads to reserve : DMA_THREAD_RESERVE_NORM
- *                                                  DMA_THREAD_RESERVE_ONET
- *                                                  DMA_THREAD_RESERVE_TWOT
- *                                                  DMA_THREAD_RESERVE_THREET
+ * @param tparams - Number of threads to reserve : DMA_THREAD_RESERVE_NORM
+ *                                                 DMA_THREAD_RESERVE_ONET
+ *                                                 DMA_THREAD_RESERVE_TWOT
+ *                                                 DMA_THREAD_RESERVE_THREET
  */
 void
 omap_dma_set_global_params(int arb_rate, int max_fifo_depth, int tparams)
···
 		return;
 	}

+	if (max_fifo_depth == 0)
+		max_fifo_depth = 1;
 	if (arb_rate == 0)
 		arb_rate = 1;

-	reg = (arb_rate & 0xff) << 16;
-	reg |= (0xff & max_fifo_depth);
+	reg = 0xff & max_fifo_depth;
+	reg |= (0x3 & tparams) << 12;
+	reg |= (arb_rate & 0xff) << 16;

 	dma_write(reg, GCR);
 }
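The GCR register layout written by the patched omap_dma_set_global_params() above can be sketched as a plain bit-packing function: max_fifo_depth in bits 0-7, tparams in bits 12-13, arb_rate in bits 16-23. This is a userspace model for illustration only; `pack_gcr` is a made-up name, and the field positions are taken from the `+` lines of the hunk:

```c
/* Userspace sketch of the GCR bitfield packing done by the hunk above. */
#include <assert.h>
#include <stdint.h>

static uint32_t pack_gcr(int arb_rate, int max_fifo_depth, int tparams)
{
	uint32_t reg;

	/* same zero-input fallbacks as the patched function */
	if (max_fifo_depth == 0)
		max_fifo_depth = 1;
	if (arb_rate == 0)
		arb_rate = 1;

	reg = 0xff & max_fifo_depth;		/* bits 0-7  */
	reg |= (0x3 & tparams) << 12;		/* bits 12-13 */
	reg |= (arb_rate & 0xff) << 16;		/* bits 16-23 */
	return reg;
}
```

For example, arb_rate=2, max_fifo_depth=16, tparams=DMA_THREAD_RESERVE_ONET (1) packs to 0x00021010; the original code ignored tparams entirely, which is what the hunk fixes.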
+1-1
arch/arm/plat-omap/mcbsp.c
···
 	rx &= 1;
 	if (cpu_is_omap2430() || cpu_is_omap34xx()) {
 		w = OMAP_MCBSP_READ(io_base, RCCR);
-		w |= (tx ? RDISABLE : 0);
+		w |= (rx ? RDISABLE : 0);
 		OMAP_MCBSP_WRITE(io_base, RCCR, w);
 	}
 	w = OMAP_MCBSP_READ(io_base, SPCR1);
···
 	 * We are in a module using the module's TOC.
 	 * Switch to our TOC to run inside the core kernel.
 	 */
-	LOAD_REG_IMMEDIATE(r4,ftrace_return_to_handler)
-	ld	r2, 8(r4)
+	ld	r2, PACATOC(r13)

 	bl	.ftrace_return_to_handler
 	nop
-6
arch/powerpc/kernel/kgdb.c
···
 {
 	unsigned long *ptr = gdb_regs;
 	int reg;
-#ifdef CONFIG_SPE
-	union {
-		u32 v32[2];
-		u64 v64;
-	} acc;
-#endif

 	for (reg = 0; reg < 32; reg++)
 		UNPACK64(regs->gpr[reg], ptr);
+1-1
arch/powerpc/kernel/pci-common.c
···
  * Reparent resource children of pr that conflict with res
  * under res, and make res replace those children.
  */
-static int __init reparent_resources(struct resource *parent,
+static int reparent_resources(struct resource *parent,
 				     struct resource *res)
 {
 	struct resource *p, **pp;
+7-3
arch/powerpc/kernel/process.c
···
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	int curr_frame = current->curr_ret_stack;
 	extern void return_to_handler(void);
-	unsigned long addr = (unsigned long)return_to_handler;
+	unsigned long rth = (unsigned long)return_to_handler;
+	unsigned long mrth = -1;
 #ifdef CONFIG_PPC64
-	addr = *(unsigned long*)addr;
+	extern void mod_return_to_handler(void);
+	rth = *(unsigned long *)rth;
+	mrth = (unsigned long)mod_return_to_handler;
+	mrth = *(unsigned long *)mrth;
 #endif
 #endif
···
 		if (!firstframe || ip != lr) {
 			printk("["REG"] ["REG"] %pS", sp, ip, (void *)ip);
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-			if (ip == addr && curr_frame >= 0) {
+			if ((ip == rth || ip == mrth) && curr_frame >= 0) {
 				printk(" (%pS)",
 				       (void *)current->ret_stack[curr_frame].ret);
 				curr_frame--;
···
 1:
 #endif /* CONFIG_SPARSEMEM_VMEMMAP */

-	/* vmalloc/ioremap mapping encoding bits, the "li" instructions below
-	 * will be patched by the kernel at boot
+	/* vmalloc mapping gets the encoding from the PACA as the mapping
+	 * can be demoted from 64K -> 4K dynamically on some machines
 	 */
-BEGIN_FTR_SECTION
-	/* check whether this is in vmalloc or ioremap space */
 	clrldi	r11,r10,48
 	cmpldi	r11,(VMALLOC_SIZE >> 28) - 1
 	bgt	5f
 	lhz	r11,PACAVMALLOCSLLP(r13)
 	b	6f
 5:
-END_FTR_SECTION_IFCLR(CPU_FTR_CI_LARGE_PAGE)
-_GLOBAL(slb_miss_kernel_load_io)
+	/* IO mapping */
+	_GLOBAL(slb_miss_kernel_load_io)
 	li	r11,0
 6:
 BEGIN_FTR_SECTION
···
 	/* Make sure IRQ is disabled */
 	kw_write_reg(reg_ier, 0);

-	/* Request chip interrupt */
-	if (request_irq(host->irq, kw_i2c_irq, 0, "keywest i2c", host))
+	/* Request chip interrupt. We set IRQF_TIMER because we don't
+	 * want that interrupt disabled between the 2 passes of driver
+	 * suspend or we'll have issues running the pfuncs
+	 */
+	if (request_irq(host->irq, kw_i2c_irq, IRQF_TIMER, "keywest i2c", host))
 		host->irq = NO_IRQ;

 	printk(KERN_INFO "KeyWest i2c @0x%08x irq %d %s\n",
+1-2
arch/powerpc/platforms/pseries/firmware.c
···
 	{FW_FEATURE_VIO,		"hcall-vio"},
 	{FW_FEATURE_RDMA,		"hcall-rdma"},
 	{FW_FEATURE_LLAN,		"hcall-lLAN"},
-	{FW_FEATURE_BULK,		"hcall-bulk"},
+	{FW_FEATURE_BULK_REMOVE,	"hcall-bulk"},
 	{FW_FEATURE_XDABR,		"hcall-xdabr"},
 	{FW_FEATURE_MULTITCE,		"hcall-multi-tce"},
 	{FW_FEATURE_SPLPAR,		"hcall-splpar"},
-	{FW_FEATURE_BULK_REMOVE,	"hcall-bulk"},
 };

 /* Build up the firmware features bitmask using the contents of
···

 static int show_cpuinfo(struct seq_file *m, void *v)
 {
-	static const char *hwcap_str[9] = {
+	static const char *hwcap_str[10] = {
 		"esan3", "zarch", "stfle", "msa", "ldisp", "eimm", "dfp",
-		"edat", "etf3eh"
+		"edat", "etf3eh", "highgprs"
 	};
 	struct _lowcore *lc;
 	unsigned long n = (unsigned long) v - 1;
···
 			   num_online_cpus(), loops_per_jiffy/(500000/HZ),
 			   (loops_per_jiffy/(5000/HZ))%100);
 		seq_puts(m, "features\t: ");
-		for (i = 0; i < 9; i++)
+		for (i = 0; i < 10; i++)
 			if (hwcap_str[i] && (elf_hwcap & (1UL << i)))
 				seq_printf(m, "%s ", hwcap_str[i]);
 		seq_puts(m, "\n");
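The hunk above has to bump the array size and two hardcoded loop bounds from 9 to 10 in lockstep, which is exactly the kind of change that is easy to get wrong. A hedged aside: the usual C idiom for keeping such bounds in sync is ARRAY_SIZE(); this userspace sketch (names illustrative, not the actual s390 code) shows the shape:

```c
/* Userspace sketch: deriving the loop bound from the array itself
 * instead of hardcoding it, so adding "highgprs" needs one edit. */
#include <assert.h>
#include <stddef.h>

#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

static const char *hwcap_str[] = {
	"esan3", "zarch", "stfle", "msa", "ldisp", "eimm", "dfp",
	"edat", "etf3eh", "highgprs"
};

static size_t count_set_features(unsigned long hwcap)
{
	size_t i, n = 0;

	/* loop bound tracks the array length automatically */
	for (i = 0; i < ARRAY_SIZE(hwcap_str); i++)
		if (hwcap & (1UL << i))
			n++;
	return n;
}
```

With this shape, appending an eleventh capability string would grow the loop bound automatically; the 9-vs-10 mismatch class the patch fixes cannot occur.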
+4-6
arch/sh/boards/mach-landisk/gio.c
···
  */
 #include <linux/module.h>
 #include <linux/init.h>
-#include <linux/smp_lock.h>
 #include <linux/kdev_t.h>
 #include <linux/cdev.h>
 #include <linux/fs.h>
···
 	int minor;
 	int ret = -ENOENT;

-	lock_kernel();
+	preempt_disable();
 	minor = MINOR(inode->i_rdev);
 	if (minor < DEVCOUNT) {
 		if (openCnt > 0) {
···
 			ret = 0;
 		}
 	}
-	unlock_kernel();
+	preempt_enable();
 	return ret;
 }
···
 	return 0;
 }

-static int gio_ioctl(struct inode *inode, struct file *filp,
-		     unsigned int cmd, unsigned long arg)
+static long gio_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 {
 	unsigned int data;
 	static unsigned int addr = 0;
···
 	.owner = THIS_MODULE,
 	.open = gio_open,	/* open */
 	.release = gio_close,	/* release */
-	.ioctl = gio_ioctl,	/* ioctl */
+	.unlocked_ioctl = gio_ioctl,
 };

 static int __init gio_init(void)
+12-14
arch/sh/mm/cache-sh4.c
···
  */
 #define MAX_ICACHE_PAGES	32

-static void __flush_cache_4096(unsigned long addr, unsigned long phys,
+static void __flush_cache_one(unsigned long addr, unsigned long phys,
 			       unsigned long exec_offset);

 /*
···
 	local_irq_restore(flags);
 }

-static inline void flush_cache_4096(unsigned long start,
-				    unsigned long phys)
+static inline void flush_cache_one(unsigned long start, unsigned long phys)
 {
 	unsigned long flags, exec_offset = 0;

···
 		exec_offset = cached_to_uncached;

 	local_irq_save(flags);
-	__flush_cache_4096(start | SH_CACHE_ASSOC,
-			   virt_to_phys(phys), exec_offset);
+	__flush_cache_one(start | SH_CACHE_ASSOC,
+			  virt_to_phys(phys), exec_offset);
 	local_irq_restore(flags);
 }
···
 		int i, n;

 		/* Loop all the D-cache */
-		n = boot_cpu_data.dcache.way_incr >> 12;
-		for (i = 0; i < n; i++, addr += 4096)
-			flush_cache_4096(addr, phys);
+		n = boot_cpu_data.dcache.n_aliases;
+		for (i = 0; i <= n; i++, addr += PAGE_SIZE)
+			flush_cache_one(addr, phys);
 	}

 	wmb();
···
 	void *vaddr;

 	vma = data->vma;
-	address = data->addr1;
+	address = data->addr1 & PAGE_MASK;
 	pfn = data->addr2;
 	phys = pfn << PAGE_SHIFT;
 	page = pfn_to_page(pfn);
···
 	if (cpu_context(smp_processor_id(), vma->vm_mm) == NO_CONTEXT)
 		return;

-	address &= PAGE_MASK;
 	pgd = pgd_offset(vma->vm_mm, address);
 	pud = pud_offset(pgd, address);
 	pmd = pmd_offset(pud, address);
···
 	}

 	if (pages_do_alias(address, phys))
-		flush_cache_4096(CACHE_OC_ADDRESS_ARRAY |
+		flush_cache_one(CACHE_OC_ADDRESS_ARRAY |
 			(address & shm_align_mask), phys);

 	if (vma->vm_flags & VM_EXEC)
···
 }

 /**
- * __flush_cache_4096
+ * __flush_cache_one
  *
  * @addr:  address in memory mapped cache array
  * @phys:  P1 address to flush (has to match tags if addr has 'A' bit
···
  * operation (purge/write-back) is selected by the lower 2 bits of
  * 'phys'.
  */
-static void __flush_cache_4096(unsigned long addr, unsigned long phys,
+static void __flush_cache_one(unsigned long addr, unsigned long phys,
 			       unsigned long exec_offset)
 {
 	int way_count;
···
 	 * pointless nead-of-loop check for 0 iterations.
 	 */
 	do {
-		ea = base_addr + 4096;
+		ea = base_addr + PAGE_SIZE;
 		a = base_addr;
 		p = phys;
+10
arch/sh/mm/cache.c
···

 void __init cpu_cache_init(void)
 {
+	unsigned int cache_disabled = !(__raw_readl(CCR) & CCR_CACHE_ENABLE);
+
 	compute_alias(&boot_cpu_data.icache);
 	compute_alias(&boot_cpu_data.dcache);
 	compute_alias(&boot_cpu_data.scache);
···
 	__flush_wback_region		= noop__flush_region;
 	__flush_purge_region		= noop__flush_region;
 	__flush_invalidate_region	= noop__flush_region;
+
+	/*
+	 * No flushing is necessary in the disabled cache case so we can
+	 * just keep the noop functions in local_flush_..() and __flush_..()
+	 */
+	if (unlikely(cache_disabled))
+		goto skip;

 	if (boot_cpu_data.family == CPU_FAMILY_SH2) {
 		extern void __weak sh2_cache_init(void);
···
 		sh5_cache_init();
 	}

+skip:
 	emit_cache_params();
 }
···
 		struct page *page;

 		page = pfn_to_page(pfn);
-		if (page && page_mapping(page)) {
+		if (page) {
 			unsigned long pg_flags;

 			pg_flags = page->flags;
+10-1
arch/x86/Kconfig
···
 source "arch/x86/xen/Kconfig"

 config VMI
-	bool "VMI Guest support"
+	bool "VMI Guest support (DEPRECATED)"
 	select PARAVIRT
 	depends on X86_32
 	---help---
···
 	  (it could be used by other hypervisors in theory too, but is not
 	  at the moment), by linking the kernel to a GPL-ed ROM module
 	  provided by the hypervisor.
+
+	  As of September 2009, VMware has started a phased retirement
+	  of this feature from VMware's products. Please see
+	  feature-removal-schedule.txt for details. If you are
+	  planning to enable this option, please note that you cannot
+	  live migrate a VMI enabled VM to a future VMware product,
+	  which doesn't support VMI. So if you expect your kernel to
+	  seamlessly migrate to newer VMware products, keep this
+	  disabled.

 config KVM_CLOCK
 	bool "KVM paravirtualized clock"
···
 		amd_iommu_shutdown();
 }
 /* Must execute after PCI subsystem */
-fs_initcall(pci_iommu_init);
+rootfs_initcall(pci_iommu_init);

 #ifdef CONFIG_PCI
 /* Many VIA bridges seem to corrupt data for DAC.  Disable it here */
-1
arch/x86/kernel/smp.c
···
 {
 	ack_APIC_irq();
 	inc_irq_stat(irq_resched_count);
-	run_local_timers();
 	/*
 	 * KVM uses this interrupt to force a cpu out of guest mode
 	 */
+2-1
arch/x86/kernel/time.c
···
 #ifdef CONFIG_FRAME_POINTER
 	return *(unsigned long *)(regs->bp + sizeof(long));
 #else
-	unsigned long *sp = (unsigned long *)regs->sp;
+	unsigned long *sp =
+		(unsigned long *)kernel_stack_pointer(regs);
 	/*
 	 * Return address is either directly at stack pointer
 	 * or above a saved flags. Eflags has bits 22-31 zero,
+10-2
arch/x86/kernel/trampoline.c
···
 #include <asm/trampoline.h>
 #include <asm/e820.h>

+#if defined(CONFIG_X86_64) && defined(CONFIG_ACPI_SLEEP)
+#define __trampinit
+#define __trampinitdata
+#else
+#define __trampinit __cpuinit
+#define __trampinitdata __cpuinitdata
+#endif
+
 /* ready for x86_64 and x86 */
-unsigned char *__cpuinitdata trampoline_base = __va(TRAMPOLINE_BASE);
+unsigned char *__trampinitdata trampoline_base = __va(TRAMPOLINE_BASE);

 void __init reserve_trampoline_memory(void)
 {
···
  * bootstrap into the page concerned. The caller
  * has made sure it's suitably aligned.
  */
-unsigned long __cpuinit setup_trampoline(void)
+unsigned long __trampinit setup_trampoline(void)
 {
 	memcpy(trampoline_base, trampoline_data, TRAMPOLINE_SIZE);
 	return virt_to_phys(trampoline_base);
+4
arch/x86/kernel/trampoline_64.S
···
 #include <asm/segment.h>
 #include <asm/processor-flags.h>

+#ifdef CONFIG_ACPI_SLEEP
+.section .rodata, "a", @progbits
+#else
 /* We can free up the trampoline after bootup if cpu hotplug is not supported. */
 __CPUINITRODATA
+#endif
 .code16

 ENTRY(trampoline_data)
···
 /**
  * blk_queue_max_discard_sectors - set max sectors for a single discard
  * @q:  the request queue for the device
- * @max_discard:  maximum number of sectors to discard
+ * @max_discard_sectors: maximum number of sectors to discard
  **/
 void blk_queue_max_discard_sectors(struct request_queue *q,
 		unsigned int max_discard_sectors)
+1-1
block/blk-tag.c
···
 		max_depth -= 2;
 		if (!max_depth)
 			max_depth = 1;
-		if (q->in_flight[0] > max_depth)
+		if (q->in_flight[BLK_RW_ASYNC] > max_depth)
 			return 1;
 	}

+142-117
block/cfq-iosched.c
···
 	 * idle window management
 	 */
 	struct timer_list idle_slice_timer;
-	struct delayed_work unplug_work;
+	struct work_struct unplug_work;

 	struct cfq_queue *active_queue;
 	struct cfq_io_context *active_cic;
···
 	blk_add_trace_msg((cfqd)->queue, "cfq " fmt, ##args)

 static void cfq_dispatch_insert(struct request_queue *, struct request *);
-static struct cfq_queue *cfq_get_queue(struct cfq_data *, int,
+static struct cfq_queue *cfq_get_queue(struct cfq_data *, bool,
 				       struct io_context *, gfp_t);
 static struct cfq_io_context *cfq_cic_lookup(struct cfq_data *,
 					     struct io_context *);
···
 }

 static inline struct cfq_queue *cic_to_cfqq(struct cfq_io_context *cic,
-					    int is_sync)
+					    bool is_sync)
 {
-	return cic->cfqq[!!is_sync];
+	return cic->cfqq[is_sync];
 }

 static inline void cic_set_cfqq(struct cfq_io_context *cic,
-				struct cfq_queue *cfqq, int is_sync)
+				struct cfq_queue *cfqq, bool is_sync)
 {
-	cic->cfqq[!!is_sync] = cfqq;
+	cic->cfqq[is_sync] = cfqq;
 }

 /*
  * We regard a request as SYNC, if it's either a read or has the SYNC bit
  * set (in which case it could also be direct WRITE).
  */
-static inline int cfq_bio_sync(struct bio *bio)
+static inline bool cfq_bio_sync(struct bio *bio)
 {
-	if (bio_data_dir(bio) == READ || bio_rw_flagged(bio, BIO_RW_SYNCIO))
-		return 1;
-
-	return 0;
+	return bio_data_dir(bio) == READ || bio_rw_flagged(bio, BIO_RW_SYNCIO);
 }

 /*
  * scheduler run of queue, if there are requests pending and no one in the
  * driver that will restart queueing
  */
-static inline void cfq_schedule_dispatch(struct cfq_data *cfqd,
-					 unsigned long delay)
+static inline void cfq_schedule_dispatch(struct cfq_data *cfqd)
 {
 	if (cfqd->busy_queues) {
 		cfq_log(cfqd, "schedule dispatch");
-		kblockd_schedule_delayed_work(cfqd->queue, &cfqd->unplug_work,
-						delay);
+		kblockd_schedule_work(cfqd->queue, &cfqd->unplug_work);
 	}
 }
···
  * if a queue is marked sync and has sync io queued. A sync queue with async
  * io only, should not get full sync slice length.
  */
-static inline int cfq_prio_slice(struct cfq_data *cfqd, int sync,
+static inline int cfq_prio_slice(struct cfq_data *cfqd, bool sync,
 				 unsigned short prio)
 {
 	const int base_slice = cfqd->cfq_slice[sync];
···
  * isn't valid until the first request from the dispatch is activated
  * and the slice time set.
  */
-static inline int cfq_slice_used(struct cfq_queue *cfqq)
+static inline bool cfq_slice_used(struct cfq_queue *cfqq)
 {
 	if (cfq_cfqq_slice_new(cfqq))
 		return 0;
···
  * we will service the queues.
  */
 static void cfq_service_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq,
-				 int add_front)
+				 bool add_front)
 {
 	struct rb_node **p, *parent;
 	struct cfq_queue *__cfqq;
···
 		} else
 			rb_key += jiffies;
 	} else if (!add_front) {
+		/*
+		 * Get our rb key offset. Subtract any residual slice
+		 * value carried from last service. A negative resid
+		 * count indicates slice overrun, and this should position
+		 * the next service time further away in the tree.
+		 */
 		rb_key = cfq_slice_offset(cfqd, cfqq) + jiffies;
-		rb_key += cfqq->slice_resid;
+		rb_key -= cfqq->slice_resid;
 		cfqq->slice_resid = 0;
-	} else
-		rb_key = 0;
+	} else {
+		rb_key = -HZ;
+		__cfqq = cfq_rb_first(&cfqd->service_tree);
+		rb_key += __cfqq ? __cfqq->rb_key : jiffies;
+	}

 	if (!RB_EMPTY_NODE(&cfqq->rb_node)) {
 		/*
···
 			n = &(*p)->rb_left;
 		else if (cfq_class_idle(cfqq) > cfq_class_idle(__cfqq))
 			n = &(*p)->rb_right;
-		else if (rb_key < __cfqq->rb_key)
+		else if (time_before(rb_key, __cfqq->rb_key))
 			n = &(*p)->rb_left;
 		else
 			n = &(*p)->rb_right;
···
 	 * reposition in fifo if next is older than rq
 	 */
 	if (!list_empty(&rq->queuelist) && !list_empty(&next->queuelist) &&
-	    time_before(next->start_time, rq->start_time))
+	    time_before(rq_fifo_time(next), rq_fifo_time(rq))) {
 		list_move(&rq->queuelist, &next->queuelist);
+		rq_set_fifo_time(rq, rq_fifo_time(next));
+	}

 	cfq_remove_request(next);
 }
···
 	 * Disallow merge of a sync bio into an async request.
 	 */
 	if (cfq_bio_sync(bio) && !rq_is_sync(rq))
-		return 0;
+		return false;

 	/*
 	 * Lookup the cfqq that this bio will be queued with. Allow
···
 	 */
 	cic = cfq_cic_lookup(cfqd, current->io_context);
 	if (!cic)
-		return 0;
+		return false;

 	cfqq = cic_to_cfqq(cic, cfq_bio_sync(bio));
-	if (cfqq == RQ_CFQQ(rq))
-		return 1;
-
-	return 0;
+	return cfqq == RQ_CFQQ(rq);
 }

 static void __cfq_set_active_queue(struct cfq_data *cfqd,
···
  */
 static void
 __cfq_slice_expired(struct cfq_data *cfqd, struct cfq_queue *cfqq,
-		    int timed_out)
+		    bool timed_out)
 {
 	cfq_log_cfqq(cfqd, cfqq, "slice expired t=%d", timed_out);
···
 	}
 }

-static inline void cfq_slice_expired(struct cfq_data *cfqd, int timed_out)
+static inline void cfq_slice_expired(struct cfq_data *cfqd, bool timed_out)
 {
 	struct cfq_queue *cfqq = cfqd->active_queue;
···
  */
 static struct cfq_queue *cfq_close_cooperator(struct cfq_data *cfqd,
 					      struct cfq_queue *cur_cfqq,
-					      int probe)
+					      bool probe)
 {
 	struct cfq_queue *cfqq;
···
 	if (!cic || !atomic_read(&cic->ioc->nr_tasks))
 		return;

+	/*
+	 * If our average think time is larger than the remaining time
+	 * slice, then don't idle.
This avoids overrunning the allotted10961096+ * time slice.10971097+ */10981098+ if (sample_valid(cic->ttime_samples) &&10991099+ (cfqq->slice_end - jiffies < cic->ttime_mean))11001100+ return;11011101+10961102 cfq_mark_cfqq_wait_request(cfqq);1097110310981104 /*···11411129 */11421130static struct request *cfq_check_fifo(struct cfq_queue *cfqq)11431131{11441144- struct cfq_data *cfqd = cfqq->cfqd;11451145- struct request *rq;11461146- int fifo;11321132+ struct request *rq = NULL;1147113311481134 if (cfq_cfqq_fifo_expire(cfqq))11491135 return NULL;···11511141 if (list_empty(&cfqq->fifo))11521142 return NULL;1153114311541154- fifo = cfq_cfqq_sync(cfqq);11551144 rq = rq_entry_fifo(cfqq->fifo.next);11561156-11571157- if (time_before(jiffies, rq->start_time + cfqd->cfq_fifo_expire[fifo]))11451145+ if (time_before(jiffies, rq_fifo_time(rq)))11581146 rq = NULL;1159114711601160- cfq_log_cfqq(cfqd, cfqq, "fifo=%p", rq);11481148+ cfq_log_cfqq(cfqq->cfqd, cfqq, "fifo=%p", rq);11611149 return rq;11621150}11631151···12561248 return dispatched;12571249}1258125012591259-/*12601260- * Dispatch a request from cfqq, moving them to the request queue12611261- * dispatch list.12621262- */12631263-static void cfq_dispatch_request(struct cfq_data *cfqd, struct cfq_queue *cfqq)12511251+static bool cfq_may_dispatch(struct cfq_data *cfqd, struct cfq_queue *cfqq)12641252{12651265- struct request *rq;12661266-12671267- BUG_ON(RB_EMPTY_ROOT(&cfqq->sort_list));12681268-12691269- /*12701270- * follow expired path, else get first next available12711271- */12721272- rq = cfq_check_fifo(cfqq);12731273- if (!rq)12741274- rq = cfqq->next_rq;12751275-12761276- /*12771277- * insert request into driver dispatch list12781278- */12791279- cfq_dispatch_insert(cfqd->queue, rq);12801280-12811281- if (!cfqd->active_cic) {12821282- struct cfq_io_context *cic = RQ_CIC(rq);12831283-12841284- atomic_long_inc(&cic->ioc->refcount);12851285- cfqd->active_cic = cic;12861286- }12871287-}12881288-12891289-/*12901290- * 
Find the cfqq that we need to service and move a request from that to the12911291- * dispatch list12921292- */12931293-static int cfq_dispatch_requests(struct request_queue *q, int force)12941294-{12951295- struct cfq_data *cfqd = q->elevator->elevator_data;12961296- struct cfq_queue *cfqq;12971253 unsigned int max_dispatch;12981298-12991299- if (!cfqd->busy_queues)13001300- return 0;13011301-13021302- if (unlikely(force))13031303- return cfq_forced_dispatch(cfqd);13041304-13051305- cfqq = cfq_select_queue(cfqd);13061306- if (!cfqq)13071307- return 0;1308125413091255 /*13101256 * Drain async requests before we start sync IO13111257 */13121258 if (cfq_cfqq_idle_window(cfqq) && cfqd->rq_in_driver[BLK_RW_ASYNC])13131313- return 0;12591259+ return false;1314126013151261 /*13161262 * If this is an async queue and we have sync IO in flight, let it wait13171263 */13181264 if (cfqd->sync_flight && !cfq_cfqq_sync(cfqq))13191319- return 0;12651265+ return false;1320126613211267 max_dispatch = cfqd->cfq_quantum;13221268 if (cfq_class_idle(cfqq))···12841322 * idle queue must always only have a single IO in flight12851323 */12861324 if (cfq_class_idle(cfqq))12871287- return 0;13251325+ return false;1288132612891327 /*12901328 * We have other queues, don't allow more IO from this one12911329 */12921330 if (cfqd->busy_queues > 1)12931293- return 0;13311331+ return false;1294133212951333 /*12961334 * Sole queue user, allow bigger slice···13141352 max_dispatch = depth;13151353 }1316135413171317- if (cfqq->dispatched >= max_dispatch)13551355+ /*13561356+ * If we're below the current max, allow a dispatch13571357+ */13581358+ return cfqq->dispatched < max_dispatch;13591359+}13601360+13611361+/*13621362+ * Dispatch a request from cfqq, moving them to the request queue13631363+ * dispatch list.13641364+ */13651365+static bool cfq_dispatch_request(struct cfq_data *cfqd, struct cfq_queue *cfqq)13661366+{13671367+ struct request *rq;13681368+13691369+ 
BUG_ON(RB_EMPTY_ROOT(&cfqq->sort_list));13701370+13711371+ if (!cfq_may_dispatch(cfqd, cfqq))13721372+ return false;13731373+13741374+ /*13751375+ * follow expired path, else get first next available13761376+ */13771377+ rq = cfq_check_fifo(cfqq);13781378+ if (!rq)13791379+ rq = cfqq->next_rq;13801380+13811381+ /*13821382+ * insert request into driver dispatch list13831383+ */13841384+ cfq_dispatch_insert(cfqd->queue, rq);13851385+13861386+ if (!cfqd->active_cic) {13871387+ struct cfq_io_context *cic = RQ_CIC(rq);13881388+13891389+ atomic_long_inc(&cic->ioc->refcount);13901390+ cfqd->active_cic = cic;13911391+ }13921392+13931393+ return true;13941394+}13951395+13961396+/*13971397+ * Find the cfqq that we need to service and move a request from that to the13981398+ * dispatch list13991399+ */14001400+static int cfq_dispatch_requests(struct request_queue *q, int force)14011401+{14021402+ struct cfq_data *cfqd = q->elevator->elevator_data;14031403+ struct cfq_queue *cfqq;14041404+14051405+ if (!cfqd->busy_queues)14061406+ return 0;14071407+14081408+ if (unlikely(force))14091409+ return cfq_forced_dispatch(cfqd);14101410+14111411+ cfqq = cfq_select_queue(cfqd);14121412+ if (!cfqq)13181413 return 0;1319141413201415 /*13211321- * Dispatch a request from this cfqq14161416+ * Dispatch a request from this cfqq, if it is allowed13221417 */13231323- cfq_dispatch_request(cfqd, cfqq);14181418+ if (!cfq_dispatch_request(cfqd, cfqq))14191419+ return 0;14201420+13241421 cfqq->slice_dispatch++;13251422 cfq_clear_cfqq_must_dispatch(cfqq);13261423···1420139914211400 if (unlikely(cfqd->active_queue == cfqq)) {14221401 __cfq_slice_expired(cfqd, cfqq, 0);14231423- cfq_schedule_dispatch(cfqd, 0);14021402+ cfq_schedule_dispatch(cfqd);14241403 }1425140414261405 kmem_cache_free(cfq_pool, cfqq);···15151494{15161495 if (unlikely(cfqq == cfqd->active_queue)) {15171496 __cfq_slice_expired(cfqd, cfqq, 0);15181518- cfq_schedule_dispatch(cfqd, 0);14971497+ cfq_schedule_dispatch(cfqd);15191498 
}1520149915211500 cfq_put_queue(cfqq);···16791658}1680165916811660static void cfq_init_cfqq(struct cfq_data *cfqd, struct cfq_queue *cfqq,16821682- pid_t pid, int is_sync)16611661+ pid_t pid, bool is_sync)16831662{16841663 RB_CLEAR_NODE(&cfqq->rb_node);16851664 RB_CLEAR_NODE(&cfqq->p_node);···16991678}1700167917011680static struct cfq_queue *17021702-cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync,16811681+cfq_find_alloc_queue(struct cfq_data *cfqd, bool is_sync,17031682 struct io_context *ioc, gfp_t gfp_mask)17041683{17051684 struct cfq_queue *cfqq, *new_cfqq = NULL;···17631742}1764174317651744static struct cfq_queue *17661766-cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc,17451745+cfq_get_queue(struct cfq_data *cfqd, bool is_sync, struct io_context *ioc,17671746 gfp_t gfp_mask)17681747{17691748 const int ioprio = task_ioprio(ioc);···19981977 (!cfqd->cfq_latency && cfqd->hw_tag && CIC_SEEKY(cic)))19991978 enable_idle = 0;20001979 else if (sample_valid(cic->ttime_samples)) {20012001- if (cic->ttime_mean > cfqd->cfq_slice_idle)19801980+ unsigned int slice_idle = cfqd->cfq_slice_idle;19811981+ if (sample_valid(cic->seek_samples) && CIC_SEEKY(cic))19821982+ slice_idle = msecs_to_jiffies(CFQ_MIN_TT);19831983+ if (cic->ttime_mean > slice_idle)20021984 enable_idle = 0;20031985 else20041986 enable_idle = 1;···20201996 * Check if new_cfqq should preempt the currently active queue. 
Return 0 for20211997 no or if we aren't sure, a 1 will cause a preempt.20221998 */20232023-static int19991999+static bool20242000cfq_should_preempt(struct cfq_data *cfqd, struct cfq_queue *new_cfqq,20252001 struct request *rq)20262002{···2028200420292005 cfqq = cfqd->active_queue;20302006 if (!cfqq)20312031- return 0;20072007+ return false;2032200820332009 if (cfq_slice_used(cfqq))20342034- return 1;20102010+ return true;2035201120362012 if (cfq_class_idle(new_cfqq))20372037- return 0;20132013+ return false;2038201420392015 if (cfq_class_idle(cfqq))20402040- return 1;20162016+ return true;2041201720422018 /*20432019 * if the new request is sync, but the currently running queue is20442020 * not, let the sync request have priority.20452021 */20462022 if (rq_is_sync(rq) && !cfq_cfqq_sync(cfqq))20472047- return 1;20232023+ return true;2048202420492025 /*20502026 * So both queues are sync. Let the new request get disk time if20512027 * it's a metadata request and the current queue is doing regular IO.20522028 */20532029 if (rq_is_meta(rq) && !cfqq->meta_pending)20542054- return 1;20302030+ return true;2055203120562032 /*20572033 * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice.20582034 */20592035 if (cfq_class_rt(new_cfqq) && !cfq_class_rt(cfqq))20602060- return 1;20362036+ return true;2061203720622038 if (!cfqd->active_cic || !cfq_cfqq_wait_request(cfqq))20632063- return 0;20392039+ return false;2064204020652041 /*20662042 * if this request is as-good as one we would expect from the20672043 * current cfqq, let it preempt20682044 */20692045 if (cfq_rq_close(cfqd, rq))20702070- return 1;20462046+ return true;2071204720722072- return 0;20482048+ return false;20732049}2074205020752051/*···2154213021552131 cfq_add_rq_rb(rq);2156213221332133+ rq_set_fifo_time(rq, jiffies + cfqd->cfq_fifo_expire[rq_is_sync(rq)]);21572134 list_add_tail(&rq->queuelist, &cfqq->fifo);2158213521592136 cfq_rq_enqueued(cfqd, cfqq, rq);···22362211 }2237221222382213 if 
(!rq_in_driver(cfqd))22392239- cfq_schedule_dispatch(cfqd, 0);22142214+ cfq_schedule_dispatch(cfqd);22402215}2241221622422217/*···23342309 struct cfq_data *cfqd = q->elevator->elevator_data;23352310 struct cfq_io_context *cic;23362311 const int rw = rq_data_dir(rq);23372337- const int is_sync = rq_is_sync(rq);23122312+ const bool is_sync = rq_is_sync(rq);23382313 struct cfq_queue *cfqq;23392314 unsigned long flags;23402315···23662341 if (cic)23672342 put_io_context(cic->ioc);2368234323692369- cfq_schedule_dispatch(cfqd, 0);23442344+ cfq_schedule_dispatch(cfqd);23702345 spin_unlock_irqrestore(q->queue_lock, flags);23712346 cfq_log(cfqd, "set_request fail");23722347 return 1;···23752350static void cfq_kick_queue(struct work_struct *work)23762351{23772352 struct cfq_data *cfqd =23782378- container_of(work, struct cfq_data, unplug_work.work);23532353+ container_of(work, struct cfq_data, unplug_work);23792354 struct request_queue *q = cfqd->queue;2380235523812356 spin_lock_irq(q->queue_lock);···24292404expire:24302405 cfq_slice_expired(cfqd, timed_out);24312406out_kick:24322432- cfq_schedule_dispatch(cfqd, 0);24072407+ cfq_schedule_dispatch(cfqd);24332408out_cont:24342409 spin_unlock_irqrestore(cfqd->queue->queue_lock, flags);24352410}···24372412static void cfq_shutdown_timer_wq(struct cfq_data *cfqd)24382413{24392414 del_timer_sync(&cfqd->idle_slice_timer);24402440- cancel_delayed_work_sync(&cfqd->unplug_work);24152415+ cancel_work_sync(&cfqd->unplug_work);24412416}2442241724432418static void cfq_put_async_queues(struct cfq_data *cfqd)···25192494 cfqd->idle_slice_timer.function = cfq_idle_slice_timer;25202495 cfqd->idle_slice_timer.data = (unsigned long) cfqd;2521249625222522- INIT_DELAYED_WORK(&cfqd->unplug_work, cfq_kick_queue);24972497+ INIT_WORK(&cfqd->unplug_work, cfq_kick_queue);2523249825242499 cfqd->cfq_quantum = cfq_quantum;25252500 cfqd->cfq_fifo_expire[0] = cfq_fifo_expire[0];
+1-3
block/elevator.c
···10591059 return count;1060106010611061 strlcpy(elevator_name, name, sizeof(elevator_name));10621062- strstrip(elevator_name);10631063-10641064- e = elevator_get(elevator_name);10621062+ e = elevator_get(strstrip(elevator_name));10651063 if (!e) {10661064 printk(KERN_ERR "elevator: type %s not found\n", elevator_name);10671065 return -EINVAL;
+2-2
drivers/acpi/Kconfig
···218218 depends on X86219219 help220220 ACPI 4.0 defines processor Aggregator, which enables OS to perform221221- specfic processor configuration and control that applies to all221221+ specific processor configuration and control that applies to all222222 processors in the platform. Currently only logical processor idling223223 is defined, which is to reduce power consumption. This driver224224- support the new device.224224+ supports the new device.225225226226config ACPI_THERMAL227227 tristate "Thermal Zone"
+3
drivers/acpi/button.c
···251251 acpi_status status;252252 unsigned long long state;253253254254+ if (!lid_device)255255+ return -ENODEV;256256+254257 status = acpi_evaluate_integer(lid_device->handle, "_LID", NULL,255258 &state);256259 if (ACPI_FAILURE(status))
+11
drivers/acpi/pci_root.c
···389389390390 pbus = pdev->subordinate;391391 pci_dev_put(pdev);392392+393393+ /*394394+ * This function may be called for a non-PCI device that has a395395+ * PCI parent (eg. a disk under a PCI SATA controller). In that396396+ * case pdev->subordinate will be NULL for the parent.397397+ */398398+ if (!pbus) {399399+ dev_dbg(&pdev->dev, "Not a PCI-to-PCI bridge\n");400400+ pdev = NULL;401401+ break;402402+ }392403 }393404out:394405 list_for_each_entry_safe(node, tmp, &device_list, node)
+6-1
drivers/acpi/video.c
···11091109 */1110111011111111 /* Does this device support video switching? */11121112- if (video->cap._DOS) {11121112+ if (video->cap._DOS || video->cap._DOD) {11131113+ if (!video->cap._DOS) {11141114+ printk(KERN_WARNING FW_BUG11151115+ "ACPI(%s) defines _DOD but not _DOS\n",11161116+ acpi_device_bid(video->device));11171117+ }11131118 video->flags.multihead = 1;11141119 status = 0;11151120 }
+1-1
drivers/acpi/video_detect.c
···8484 return 0;85858686 /* Does this device able to support video switching ? */8787- if (ACPI_SUCCESS(acpi_get_handle(device->handle, "_DOD", &h_dummy)) &&8787+ if (ACPI_SUCCESS(acpi_get_handle(device->handle, "_DOD", &h_dummy)) ||8888 ACPI_SUCCESS(acpi_get_handle(device->handle, "_DOS", &h_dummy)))8989 video_caps |= ACPI_VIDEO_OUTPUT_SWITCHING;9090
+37-42
drivers/block/cciss.c
···6868MODULE_VERSION("3.6.20");6969MODULE_LICENSE("GPL");70707171+static int cciss_allow_hpsa;7272+module_param(cciss_allow_hpsa, int, S_IRUGO|S_IWUSR);7373+MODULE_PARM_DESC(cciss_allow_hpsa,7474+ "Prevent cciss driver from accessing hardware known to be "7575+ " supported by the hpsa driver");7676+7177#include "cciss_cmd.h"7278#include "cciss.h"7379#include <linux/cciss_ioctl.h>···107101 {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x3249},108102 {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x324A},109103 {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x324B},110110- {PCI_VENDOR_ID_HP, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,111111- PCI_CLASS_STORAGE_RAID << 8, 0xffff << 8, 0},112104 {0,}113105};114106···127123 {0x409D0E11, "Smart Array 6400 EM", &SA5_access},128124 {0x40910E11, "Smart Array 6i", &SA5_access},129125 {0x3225103C, "Smart Array P600", &SA5_access},130130- {0x3223103C, "Smart Array P800", &SA5_access},131131- {0x3234103C, "Smart Array P400", &SA5_access},132126 {0x3235103C, "Smart Array P400i", &SA5_access},133127 {0x3211103C, "Smart Array E200i", &SA5_access},134128 {0x3212103C, "Smart Array E200", &SA5_access},···134132 {0x3214103C, "Smart Array E200i", &SA5_access},135133 {0x3215103C, "Smart Array E200i", &SA5_access},136134 {0x3237103C, "Smart Array E500", &SA5_access},135135+/* controllers below this line are also supported by the hpsa driver. 
*/136136+#define HPSA_BOUNDARY 0x3223103C137137+ {0x3223103C, "Smart Array P800", &SA5_access},138138+ {0x3234103C, "Smart Array P400", &SA5_access},137139 {0x323D103C, "Smart Array P700m", &SA5_access},138140 {0x3241103C, "Smart Array P212", &SA5_access},139141 {0x3243103C, "Smart Array P410", &SA5_access},···146140 {0x3249103C, "Smart Array P812", &SA5_access},147141 {0x324A103C, "Smart Array P712m", &SA5_access},148142 {0x324B103C, "Smart Array P711m", &SA5_access},149149- {0xFFFF103C, "Unknown Smart Array", &SA5_access},150143};151144152145/* How long to wait (in milliseconds) for board to go into simple mode */···37593754 __u64 cfg_offset;37603755 __u32 cfg_base_addr;37613756 __u64 cfg_base_addr_index;37623762- int i, err;37573757+ int i, prod_index, err;37583758+37593759+ subsystem_vendor_id = pdev->subsystem_vendor;37603760+ subsystem_device_id = pdev->subsystem_device;37613761+ board_id = (((__u32) (subsystem_device_id << 16) & 0xffff0000) |37623762+ subsystem_vendor_id);37633763+37643764+ for (i = 0; i < ARRAY_SIZE(products); i++) {37653765+ /* Stand aside for hpsa driver on request */37663766+ if (cciss_allow_hpsa && products[i].board_id == HPSA_BOUNDARY)37673767+ return -ENODEV;37683768+ if (board_id == products[i].board_id)37693769+ break;37703770+ }37713771+ prod_index = i;37723772+ if (prod_index == ARRAY_SIZE(products)) {37733773+ dev_warn(&pdev->dev,37743774+ "unrecognized board ID: 0x%08lx, ignoring.\n",37753775+ (unsigned long) board_id);37763776+ return -ENODEV;37773777+ }3763377837643779 /* check to see if controller has been disabled */37653780 /* BEFORE trying to enable it */···38023777 "aborting\n");38033778 return err;38043779 }38053805-38063806- subsystem_vendor_id = pdev->subsystem_vendor;38073807- subsystem_device_id = pdev->subsystem_device;38083808- board_id = (((__u32) (subsystem_device_id << 16) & 0xffff0000) |38093809- subsystem_vendor_id);3810378038113781#ifdef CCISS_DEBUG38123782 printk("command = %x\n", command);···38883868 * 
leave a little room for ioctl calls.38893869 */38903870 c->max_commands = readl(&(c->cfgtable->CmdsOutMax));38913891- for (i = 0; i < ARRAY_SIZE(products); i++) {38923892- if (board_id == products[i].board_id) {38933893- c->product_name = products[i].product_name;38943894- c->access = *(products[i].access);38953895- c->nr_cmds = c->max_commands - 4;38963896- break;38973897- }38983898- }38713871+ c->product_name = products[prod_index].product_name;38723872+ c->access = *(products[prod_index].access);38733873+ c->nr_cmds = c->max_commands - 4;38993874 if ((readb(&c->cfgtable->Signature[0]) != 'C') ||39003875 (readb(&c->cfgtable->Signature[1]) != 'I') ||39013876 (readb(&c->cfgtable->Signature[2]) != 'S') ||···38983883 printk("Does not appear to be a valid CISS config table\n");38993884 err = -ENODEV;39003885 goto err_out_free_res;39013901- }39023902- /* We didn't find the controller in our list. We know the39033903- * signature is valid. If it's an HP device let's try to39043904- * bind to the device and fire it up. 
Otherwise we bail.39053905- */39063906- if (i == ARRAY_SIZE(products)) {39073907- if (subsystem_vendor_id == PCI_VENDOR_ID_HP) {39083908- c->product_name = products[i-1].product_name;39093909- c->access = *(products[i-1].access);39103910- c->nr_cmds = c->max_commands - 4;39113911- printk(KERN_WARNING "cciss: This is an unknown "39123912- "Smart Array controller.\n"39133913- "cciss: Please update to the latest driver "39143914- "available from www.hp.com.\n");39153915- } else {39163916- printk(KERN_WARNING "cciss: Sorry, I don't know how"39173917- " to access the Smart Array controller %08lx\n"39183918- , (unsigned long)board_id);39193919- err = -ENODEV;39203920- goto err_out_free_res;39213921- }39223886 }39233887#ifdef CONFIG_X8639243888 {···42484254 mutex_init(&hba[i]->busy_shutting_down);4249425542504256 if (cciss_pci_init(hba[i], pdev) != 0)42514251- goto clean0;42574257+ goto clean_no_release_regions;4252425842534259 sprintf(hba[i]->devname, "cciss%d", i);42544260 hba[i]->ctlr = i;···43854391clean1:43864392 cciss_destroy_hba_sysfs_entry(hba[i]);43874393clean0:43944394+ pci_release_regions(pdev);43954395+clean_no_release_regions:43884396 hba[i]->busy_initializing = 0;4389439743904398 /*43914399 * Deliberately omit pci_disable_device(): it does something nasty to43924400 * Smart Array controllers that pci_enable_device does not undo43934401 */43944394- pci_release_regions(pdev);43954402 pci_set_drvdata(pdev, NULL);43964403 free_hba(i);43974404 return -1;
···402402 container_of(work, struct tty_struct, buf.work.work);403403 unsigned long flags;404404 struct tty_ldisc *disc;405405- struct tty_buffer *tbuf, *head;406406- char *char_buf;407407- unsigned char *flag_buf;408405409406 disc = tty_ldisc_ref(tty);410407 if (disc == NULL) /* !TTY_LDISC */411408 return;412409413410 spin_lock_irqsave(&tty->buf.lock, flags);414414- /* So we know a flush is running */415415- set_bit(TTY_FLUSHING, &tty->flags);416416- head = tty->buf.head;417417- if (head != NULL) {418418- tty->buf.head = NULL;419419- for (;;) {420420- int count = head->commit - head->read;411411+412412+ if (!test_and_set_bit(TTY_FLUSHING, &tty->flags)) {413413+ struct tty_buffer *head;414414+ while ((head = tty->buf.head) != NULL) {415415+ int count;416416+ char *char_buf;417417+ unsigned char *flag_buf;418418+419419+ count = head->commit - head->read;421420 if (!count) {422421 if (head->next == NULL)423422 break;424424- tbuf = head;425425- head = head->next;426426- tty_buffer_free(tty, tbuf);423423+ tty->buf.head = head->next;424424+ tty_buffer_free(tty, head);427425 continue;428426 }429427 /* Ldisc or user is trying to flush the buffers···443445 flag_buf, count);444446 spin_lock_irqsave(&tty->buf.lock, flags);445447 }446446- /* Restore the queue head */447447- tty->buf.head = head;448448+ clear_bit(TTY_FLUSHING, &tty->flags);448449 }450450+449451 /* We may have a deferred request to flush the input buffer,450452 if so pull the chain under the lock and empty the queue */451453 if (test_bit(TTY_FLUSHPENDING, &tty->flags)) {···453455 clear_bit(TTY_FLUSHPENDING, &tty->flags);454456 wake_up(&tty->read_wait);455457 }456456- clear_bit(TTY_FLUSHING, &tty->flags);457458 spin_unlock_irqrestore(&tty->buf.lock, flags);458459459460 tty_ldisc_deref(disc);···468471 */469472void tty_flush_to_ldisc(struct tty_struct *tty)470473{471471- flush_to_ldisc(&tty->buf.work.work);474474+ flush_delayed_work(&tty->buf.work);472475}473476474477/**
···188188/* Impossible login_id, to detect logout attempt before successful login */189189#define INVALID_LOGIN_ID 0x10000190190191191-/*192192- * Per section 7.4.8 of the SBP-2 spec, a mgt_ORB_timeout value can be193193- * provided in the config rom. Most devices do provide a value, which194194- * we'll use for login management orbs, but with some sane limits.195195- */196196-#define SBP2_MIN_LOGIN_ORB_TIMEOUT 5000U /* Timeout in ms */197197-#define SBP2_MAX_LOGIN_ORB_TIMEOUT 40000U /* Timeout in ms */198198-#define SBP2_ORB_TIMEOUT 2000U /* Timeout in ms */191191+#define SBP2_ORB_TIMEOUT 2000U /* Timeout in ms */199192#define SBP2_ORB_NULL 0x80000000200193#define SBP2_RETRY_LIMIT 0xf /* 15 retries */201194#define SBP2_CYCLE_LIMIT (0xc8 << 12) /* 200 125us cycles */···10271034{10281035 struct fw_csr_iterator ci;10291036 int key, value;10301030- unsigned int timeout;1031103710321038 fw_csr_iterator_init(&ci, directory);10331039 while (fw_csr_iterator_next(&ci, &key, &value)) {···1051105910521060 case SBP2_CSR_UNIT_CHARACTERISTICS:10531061 /* the timeout value is stored in 500ms units */10541054- timeout = ((unsigned int) value >> 8 & 0xff) * 500;10551055- timeout = max(timeout, SBP2_MIN_LOGIN_ORB_TIMEOUT);10561056- tgt->mgt_orb_timeout =10571057- min(timeout, SBP2_MAX_LOGIN_ORB_TIMEOUT);10581058-10591059- if (timeout > tgt->mgt_orb_timeout)10601060- fw_notify("%s: config rom contains %ds "10611061- "management ORB timeout, limiting "10621062- "to %ds\n", tgt->bus_id,10631063- timeout / 1000,10641064- tgt->mgt_orb_timeout / 1000);10621062+ tgt->mgt_orb_timeout = (value >> 8 & 0xff) * 500;10651063 break;1066106410671065 case SBP2_CSR_LOGICAL_UNIT_NUMBER:···10671085 }10681086 }10691087 return 0;10881088+}10891089+10901090+/*10911091+ * Per section 7.4.8 of the SBP-2 spec, a mgt_ORB_timeout value can be10921092+ * provided in the config rom. 
Most devices do provide a value, which10931093+ * we'll use for login management orbs, but with some sane limits.10941094+ */10951095+static void sbp2_clamp_management_orb_timeout(struct sbp2_target *tgt)10961096+{10971097+ unsigned int timeout = tgt->mgt_orb_timeout;10981098+10991099+ if (timeout > 40000)11001100+ fw_notify("%s: %ds mgt_ORB_timeout limited to 40s\n",11011101+ tgt->bus_id, timeout / 1000);11021102+11031103+ tgt->mgt_orb_timeout = clamp_val(timeout, 5000, 40000);10701104}1071110510721106static void sbp2_init_workarounds(struct sbp2_target *tgt, u32 model,···11691171 &firmware_revision) < 0)11701172 goto fail_tgt_put;1171117311741174+ sbp2_clamp_management_orb_timeout(tgt);11721175 sbp2_init_workarounds(tgt, model, firmware_revision);1173117611741177 /*
+1-1
drivers/hid/hid-core.c
···10661066 * @type: HID report type (HID_*_REPORT)10671067 * @data: report contents10681068 * @size: size of data parameter10691069- * @interrupt: called from atomic?10691069+ * @interrupt: distinguish between interrupt and control transfers10701070 *10711071 * This is data entry for lower layers.10721072 */
+2-2
drivers/hid/hid-twinhan.c
···132132 .input_mapping = twinhan_input_mapping,133133};134134135135-static int twinhan_init(void)135135+static int __init twinhan_init(void)136136{137137 return hid_register_driver(&twinhan_driver);138138}139139140140-static void twinhan_exit(void)140140+static void __exit twinhan_exit(void)141141{142142 hid_unregister_driver(&twinhan_driver);143143}
+2-3
drivers/hid/hidraw.c
···4848 char *report;4949 DECLARE_WAITQUEUE(wait, current);50505151+ mutex_lock(&list->read_mutex);5252+5153 while (ret == 0) {5252-5353- mutex_lock(&list->read_mutex);5454-5554 if (list->head == list->tail) {5655 add_wait_queue(&list->hidraw->wait, &wait);5756 set_current_state(TASK_INTERRUPTIBLE);
+23-17
drivers/macintosh/via-pmu.c
···405405 printk(KERN_ERR "via-pmu: can't map interrupt\n");406406 return -ENODEV;407407 }408408- if (request_irq(irq, via_pmu_interrupt, 0, "VIA-PMU", (void *)0)) {408408+ /* We set IRQF_TIMER because we don't want the interrupt to be disabled409409+ * between the 2 passes of driver suspend, we control our own disabling410410+ * for that one411411+ */412412+ if (request_irq(irq, via_pmu_interrupt, IRQF_TIMER, "VIA-PMU", (void *)0)) {409413 printk(KERN_ERR "via-pmu: can't request irq %d\n", irq);410414 return -ENODEV;411415 }···423419 gpio_irq = irq_of_parse_and_map(gpio_node, 0);424420425421 if (gpio_irq != NO_IRQ) {426426- if (request_irq(gpio_irq, gpio1_interrupt, 0,422422+ if (request_irq(gpio_irq, gpio1_interrupt, IRQF_TIMER,427423 "GPIO1 ADB", (void *)0))428424 printk(KERN_ERR "pmu: can't get irq %d"429425 " (GPIO1)\n", gpio_irq);···929925930926#ifdef CONFIG_ADB931927/* Send an ADB command */932932-static int933933-pmu_send_request(struct adb_request *req, int sync)928928+static int pmu_send_request(struct adb_request *req, int sync)934929{935930 int i, ret;936931···10081005}1009100610101007/* Enable/disable autopolling */10111011-static int10121012-pmu_adb_autopoll(int devs)10081008+static int __pmu_adb_autopoll(int devs)10131009{10141010 struct adb_request req;1015101110161016- if ((vias == NULL) || (!pmu_fully_inited) || !pmu_has_adb)10171017- return -ENXIO;10181018-10191012 if (devs) {10201020- adb_dev_map = devs;10211013 pmu_request(&req, NULL, 5, PMU_ADB_CMD, 0, 0x86,10221014 adb_dev_map >> 8, adb_dev_map);10231015 pmu_adb_flags = 2;···10251027 return 0;10261028}1027102910301030+static int pmu_adb_autopoll(int devs)10311031+{10321032+ if ((vias == NULL) || (!pmu_fully_inited) || !pmu_has_adb)10331033+ return -ENXIO;10341034+10351035+ adb_dev_map = devs;10361036+ return __pmu_adb_autopoll(devs);10371037+}10381038+10281039/* Reset the ADB bus */10291029-static int10301030-pmu_adb_reset_bus(void)10401040+static int pmu_adb_reset_bus(void)10311041{10321042 
struct adb_request req;10331043 int save_autopoll = adb_dev_map;···10441038 return -ENXIO;1045103910461040 /* anyone got a better idea?? */10471047- pmu_adb_autopoll(0);10411041+ __pmu_adb_autopoll(0);1048104210491049- req.nbytes = 5;10431043+ req.nbytes = 4;10501044 req.done = NULL;10511045 req.data[0] = PMU_ADB_CMD;10521052- req.data[1] = 0;10531053- req.data[2] = ADB_BUSRESET;10461046+ req.data[1] = ADB_BUSRESET;10471047+ req.data[2] = 0;10541048 req.data[3] = 0;10551049 req.data[4] = 0;10561050 req.reply_len = 0;···10621056 pmu_wait_complete(&req);1063105710641058 if (save_autopoll != 0)10651065- pmu_adb_autopoll(save_autopoll);10591059+ __pmu_adb_autopoll(save_autopoll);1066106010671061 return 0;10681062}
+10-6
drivers/md/dm.c
···130130 /*131131 * A list of ios that arrived while we were suspended.132132 */133133- atomic_t pending;133133+ atomic_t pending[2];134134 wait_queue_head_t wait;135135 struct work_struct work;136136 struct bio_list deferred;···453453{454454 struct mapped_device *md = io->md;455455 int cpu;456456+ int rw = bio_data_dir(io->bio);456457457458 io->start_time = jiffies;458459459460 cpu = part_stat_lock();460461 part_round_stats(cpu, &dm_disk(md)->part0);461462 part_stat_unlock();462462- dm_disk(md)->part0.in_flight = atomic_inc_return(&md->pending);463463+ dm_disk(md)->part0.in_flight[rw] = atomic_inc_return(&md->pending[rw]);463464}464465465466static void end_io_acct(struct dm_io *io)···480479 * After this is decremented the bio must not be touched if it is481480 * a barrier.482481 */483483- dm_disk(md)->part0.in_flight = pending =484484- atomic_dec_return(&md->pending);482482+ dm_disk(md)->part0.in_flight[rw] = pending =483483+ atomic_dec_return(&md->pending[rw]);484484+ pending += atomic_read(&md->pending[rw^0x1]);485485486486 /* nudge anyone waiting on suspend queue */487487 if (!pending)···17871785 if (!md->disk)17881786 goto bad_disk;1789178717901790- atomic_set(&md->pending, 0);17881788+ atomic_set(&md->pending[0], 0);17891789+ atomic_set(&md->pending[1], 0);17911790 init_waitqueue_head(&md->wait);17921791 INIT_WORK(&md->work, dm_wq_work);17931792 init_waitqueue_head(&md->eventq);···20912088 break;20922089 }20932090 spin_unlock_irqrestore(q->queue_lock, flags);20942094- } else if (!atomic_read(&md->pending))20912091+ } else if (!atomic_read(&md->pending[0]) &&20922092+ !atomic_read(&md->pending[1]))20952093 break;2096209420972095 if (interruptible == TASK_INTERRUPTIBLE &&
+46-43
drivers/mfd/twl4030-core.c
···480480add_children(struct twl4030_platform_data *pdata, unsigned long features)481481{482482 struct device *child;483483- struct device *usb_transceiver = NULL;484483485484 if (twl_has_bci() && pdata->bci && !(features & TPS_SUBSET)) {486485 child = add_child(3, "twl4030_bci",···531532 }532533533534 if (twl_has_usb() && pdata->usb) {535535+536536+ static struct regulator_consumer_supply usb1v5 = {537537+ .supply = "usb1v5",538538+ };539539+ static struct regulator_consumer_supply usb1v8 = {540540+ .supply = "usb1v8",541541+ };542542+ static struct regulator_consumer_supply usb3v1 = {543543+ .supply = "usb3v1",544544+ };545545+546546+ /* First add the regulators so that they can be used by transceiver */547547+ if (twl_has_regulator()) {548548+ /* this is a template that gets copied */549549+ struct regulator_init_data usb_fixed = {550550+ .constraints.valid_modes_mask =551551+ REGULATOR_MODE_NORMAL552552+ | REGULATOR_MODE_STANDBY,553553+ .constraints.valid_ops_mask =554554+ REGULATOR_CHANGE_MODE555555+ | REGULATOR_CHANGE_STATUS,556556+ };557557+558558+ child = add_regulator_linked(TWL4030_REG_VUSB1V5,559559+ &usb_fixed, &usb1v5, 1);560560+ if (IS_ERR(child))561561+ return PTR_ERR(child);562562+563563+ child = add_regulator_linked(TWL4030_REG_VUSB1V8,564564+ &usb_fixed, &usb1v8, 1);565565+ if (IS_ERR(child))566566+ return PTR_ERR(child);567567+568568+ child = add_regulator_linked(TWL4030_REG_VUSB3V1,569569+ &usb_fixed, &usb3v1, 1);570570+ if (IS_ERR(child))571571+ return PTR_ERR(child);572572+573573+ }574574+534575 child = add_child(0, "twl4030_usb",535576 pdata->usb, sizeof(*pdata->usb),536577 true,537578 /* irq0 = USB_PRES, irq1 = USB */538579 pdata->irq_base + 8 + 2, pdata->irq_base + 4);580580+539581 if (IS_ERR(child))540582 return PTR_ERR(child);541583542584 /* we need to connect regulators to this transceiver */543543- usb_transceiver = child;585585+ if (twl_has_regulator() && child) {586586+ usb1v5.dev = child;587587+ usb1v8.dev = child;588588+ usb3v1.dev 
= child;589589+ }544590 }545591546592 if (twl_has_watchdog()) {···620576 ? TWL4030_REG_VAUX2_4030621577 : TWL4030_REG_VAUX2,622578 pdata->vaux2);623623- if (IS_ERR(child))624624- return PTR_ERR(child);625625- }626626-627627- if (twl_has_regulator() && usb_transceiver) {628628- static struct regulator_consumer_supply usb1v5 = {629629- .supply = "usb1v5",630630- };631631- static struct regulator_consumer_supply usb1v8 = {632632- .supply = "usb1v8",633633- };634634- static struct regulator_consumer_supply usb3v1 = {635635- .supply = "usb3v1",636636- };637637-638638- /* this is a template that gets copied */639639- struct regulator_init_data usb_fixed = {640640- .constraints.valid_modes_mask =641641- REGULATOR_MODE_NORMAL642642- | REGULATOR_MODE_STANDBY,643643- .constraints.valid_ops_mask =644644- REGULATOR_CHANGE_MODE645645- | REGULATOR_CHANGE_STATUS,646646- };647647-648648- usb1v5.dev = usb_transceiver;649649- usb1v8.dev = usb_transceiver;650650- usb3v1.dev = usb_transceiver;651651-652652- child = add_regulator_linked(TWL4030_REG_VUSB1V5, &usb_fixed,653653- &usb1v5, 1);654654- if (IS_ERR(child))655655- return PTR_ERR(child);656656-657657- child = add_regulator_linked(TWL4030_REG_VUSB1V8, &usb_fixed,658658- &usb1v8, 1);659659- if (IS_ERR(child))660660- return PTR_ERR(child);661661-662662- child = add_regulator_linked(TWL4030_REG_VUSB3V1, &usb_fixed,663663- &usb3v1, 1);664579 if (IS_ERR(child))665580 return PTR_ERR(child);666581 }
+11
drivers/net/Kconfig
···17411741config KS8851_MLL17421742 tristate "Micrel KS8851 MLL"17431743 depends on HAS_IOMEM17441744+ select MII17441745 help17451746 This platform driver is for Micrel KS8851 Address/data bus17461747 multiplexed network chip.···2482248124832482 To compile this driver as a module, choose M here. The module24842483 will be called s6gmac.24842484+24852485+source "drivers/net/stmmac/Kconfig"2485248624862487endif # NETDEV_100024872488···32323229 ---help---32333230 This is the virtual network driver for virtio. It can be used with32343231 lguest or QEMU based VMMs (like KVM or Xen). Say Y or M.32323232+32333233+config VMXNET332343234+ tristate "VMware VMXNET3 ethernet driver"32353235+ depends on PCI && X86 && INET32363236+ help32373237+ This driver supports VMware's vmxnet3 virtual ethernet NIC.32383238+ To compile this driver as a module, choose M here: the32393239+ module will be called vmxnet3.3235324032363241endif # NETDEVICES
+6-4
drivers/net/Makefile
···22# Makefile for the Linux network (ethercard) device drivers.33#4455+obj-$(CONFIG_MII) += mii.o66+obj-$(CONFIG_MDIO) += mdio.o77+obj-$(CONFIG_PHYLIB) += phy/88+59obj-$(CONFIG_TI_DAVINCI_EMAC) += davinci_emac.o610711obj-$(CONFIG_E1000) += e1000/···3026obj-$(CONFIG_ENIC) += enic/3127obj-$(CONFIG_JME) += jme.o3228obj-$(CONFIG_BE2NET) += benet/2929+obj-$(CONFIG_VMXNET3) += vmxnet3/33303431gianfar_driver-objs := gianfar.o \3532 gianfar_ethtool.o \···10095obj-$(CONFIG_ADAPTEC_STARFIRE) += starfire.o10196obj-$(CONFIG_RIONET) += rionet.o10297obj-$(CONFIG_SH_ETH) += sh_eth.o9898+obj-$(CONFIG_STMMAC_ETH) += stmmac/10399104100#105101# end link order section106102#107107-108108-obj-$(CONFIG_MII) += mii.o109109-obj-$(CONFIG_MDIO) += mdio.o110110-obj-$(CONFIG_PHYLIB) += phy/111103112104obj-$(CONFIG_SUNDANCE) += sundance.o113105obj-$(CONFIG_HAMACHI) += hamachi.o
···232232 /*233233 * Ensure that the ports for this device are setup correctly.234234 */235235- if (si->pdata->startup)236236- si->pdata->startup(si->dev);235235+ if (si->pdata->startup) {236236+ ret = si->pdata->startup(si->dev);237237+ if (ret)238238+ return ret;239239+ }237240238241 /*239242 * Configure PPC for IRDA - we want to drive TXD2 low.
+1-17
drivers/net/ixp2000/enp2611.c
···119119 }120120};121121122122-struct enp2611_ixpdev_priv123123-{124124- struct ixpdev_priv ixpdev_priv;125125- struct net_device_stats stats;126126-};127127-128122static struct net_device *nds[3];129123static struct timer_list link_check_timer;130130-131131-static struct net_device_stats *enp2611_get_stats(struct net_device *dev)132132-{133133- struct enp2611_ixpdev_priv *ip = netdev_priv(dev);134134-135135- pm3386_get_stats(ip->ixpdev_priv.channel, &(ip->stats));136136-137137- return &(ip->stats);138138-}139124140125/* @@@ Poll the SFP moddef0 line too. */141126/* @@@ Try to use the pm3386 DOOL interrupt as well. */···188203189204 ports = pm3386_port_count();190205 for (i = 0; i < ports; i++) {191191- nds[i] = ixpdev_alloc(i, sizeof(struct enp2611_ixpdev_priv));206206+ nds[i] = ixpdev_alloc(i, sizeof(struct ixpdev_priv));192207 if (nds[i] == NULL) {193208 while (--i >= 0)194209 free_netdev(nds[i]);195210 return -ENOMEM;196211 }197212198198- nds[i]->get_stats = enp2611_get_stats;199213 pm3386_init_port(i);200214 pm3386_get_mac(i, nds[i]->dev_addr);201215 }
11+config STMMAC_ETH22+ tristate "STMicroelectronics 10/100/1000 Ethernet driver"33+ select MII44+ select PHYLIB55+ depends on NETDEVICES && CPU_SUBTYPE_ST4066+ help77+ This is the driver for the ST MAC 10/100/1000 on-chip Ethernet88+ controllers. ST Ethernet IPs are built around a Synopsys IP Core.99+1010+if STMMAC_ETH1111+1212+config STMMAC_DA1313+ bool "STMMAC DMA arbitration scheme"1414+ default n1515+ help1616+ If you select this option, rx has priority over Tx (for Giga1717+ Ethernet devices only).1818+ By default, the DMA arbitration scheme is based on Round-robin1919+ (rx:tx priority is 1:1).2020+2121+config STMMAC_DUAL_MAC2222+ bool "STMMAC: dual mac support (EXPERIMENTAL)"2323+ default n2424+ depends on EXPERIMENTAL && STMMAC_ETH && !STMMAC_TIMER2525+ help2626+ Some ST SoCs (for example the stx7141 and stx7200c2) have two2727+ Ethernet Controllers. This option turns on the second Ethernet2828+ device on such platforms.2929+3030+config STMMAC_TIMER3131+ bool "STMMAC Timer optimisation"3232+ default n3333+ help3434+ Use an external timer for mitigating the number of network3535+ interrupts.3636+3737+choice3838+ prompt "Select Timer device"3939+ depends on STMMAC_TIMER4040+4141+config STMMAC_TMU_TIMER4242+ bool "TMU channel 2"4343+ depends on CPU_SH44444+ help4545+4646+config STMMAC_RTC_TIMER4747+ bool "Real time clock"4848+ depends on RTC_CLASS4949+ help5050+5151+endchoice5252+5353+endif
···11+/*******************************************************************************22+ STMMAC Common Header File33+44+ Copyright (C) 2007-2009 STMicroelectronics Ltd55+66+ This program is free software; you can redistribute it and/or modify it77+ under the terms and conditions of the GNU General Public License,88+ version 2, as published by the Free Software Foundation.99+1010+ This program is distributed in the hope it will be useful, but WITHOUT1111+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or1212+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for1313+ more details.1414+1515+ You should have received a copy of the GNU General Public License along with1616+ this program; if not, write to the Free Software Foundation, Inc.,1717+ 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.1818+1919+ The full GNU General Public License is included in this distribution in2020+ the file called "COPYING".2121+2222+ Author: Giuseppe Cavallaro <peppe.cavallaro@st.com>2323+*******************************************************************************/2424+2525+#include "descs.h"2626+#include <linux/io.h>2727+2828+/* *********************************************2929+ DMA CRS Control and Status Register Mapping3030+ * *********************************************/3131+#define DMA_BUS_MODE 0x00001000 /* Bus Mode */3232+#define DMA_XMT_POLL_DEMAND 0x00001004 /* Transmit Poll Demand */3333+#define DMA_RCV_POLL_DEMAND 0x00001008 /* Received Poll Demand */3434+#define DMA_RCV_BASE_ADDR 0x0000100c /* Receive List Base */3535+#define DMA_TX_BASE_ADDR 0x00001010 /* Transmit List Base */3636+#define DMA_STATUS 0x00001014 /* Status Register */3737+#define DMA_CONTROL 0x00001018 /* Ctrl (Operational Mode) */3838+#define DMA_INTR_ENA 0x0000101c /* Interrupt Enable */3939+#define DMA_MISSED_FRAME_CTR 0x00001020 /* Missed Frame Counter */4040+#define DMA_CUR_TX_BUF_ADDR 0x00001050 /* Current Host Tx Buffer */4141+#define DMA_CUR_RX_BUF_ADDR 
0x00001054 /* Current Host Rx Buffer */4242+4343+/* ********************************4444+ DMA Control register defines4545+ * ********************************/4646+#define DMA_CONTROL_ST 0x00002000 /* Start/Stop Transmission */4747+#define DMA_CONTROL_SR 0x00000002 /* Start/Stop Receive */4848+4949+/* **************************************5050+ DMA Interrupt Enable register defines5151+ * **************************************/5252+/**** NORMAL INTERRUPT ****/5353+#define DMA_INTR_ENA_NIE 0x00010000 /* Normal Summary */5454+#define DMA_INTR_ENA_TIE 0x00000001 /* Transmit Interrupt */5555+#define DMA_INTR_ENA_TUE 0x00000004 /* Transmit Buffer Unavailable */5656+#define DMA_INTR_ENA_RIE 0x00000040 /* Receive Interrupt */5757+#define DMA_INTR_ENA_ERE 0x00004000 /* Early Receive */5858+5959+#define DMA_INTR_NORMAL (DMA_INTR_ENA_NIE | DMA_INTR_ENA_RIE | \6060+ DMA_INTR_ENA_TIE)6161+6262+/**** ABNORMAL INTERRUPT ****/6363+#define DMA_INTR_ENA_AIE 0x00008000 /* Abnormal Summary */6464+#define DMA_INTR_ENA_FBE 0x00002000 /* Fatal Bus Error */6565+#define DMA_INTR_ENA_ETE 0x00000400 /* Early Transmit */6666+#define DMA_INTR_ENA_RWE 0x00000200 /* Receive Watchdog */6767+#define DMA_INTR_ENA_RSE 0x00000100 /* Receive Stopped */6868+#define DMA_INTR_ENA_RUE 0x00000080 /* Receive Buffer Unavailable */6969+#define DMA_INTR_ENA_UNE 0x00000020 /* Tx Underflow */7070+#define DMA_INTR_ENA_OVE 0x00000010 /* Receive Overflow */7171+#define DMA_INTR_ENA_TJE 0x00000008 /* Transmit Jabber */7272+#define DMA_INTR_ENA_TSE 0x00000002 /* Transmit Stopped */7373+7474+#define DMA_INTR_ABNORMAL (DMA_INTR_ENA_AIE | DMA_INTR_ENA_FBE | \7575+ DMA_INTR_ENA_UNE)7676+7777+/* DMA default interrupt mask */7878+#define DMA_INTR_DEFAULT_MASK (DMA_INTR_NORMAL | DMA_INTR_ABNORMAL)7979+8080+/* ****************************8181+ * DMA Status register defines8282+ * ****************************/8383+#define DMA_STATUS_GPI 0x10000000 /* PMT interrupt */8484+#define DMA_STATUS_GMI 0x08000000 /* MMC interrupt 
*/8585+#define DMA_STATUS_GLI 0x04000000 /* GMAC Line interface int. */8686+#define DMA_STATUS_GMI 0x080000008787+#define DMA_STATUS_GLI 0x040000008888+#define DMA_STATUS_EB_MASK 0x00380000 /* Error Bits Mask */8989+#define DMA_STATUS_EB_TX_ABORT 0x00080000 /* Error Bits - TX Abort */9090+#define DMA_STATUS_EB_RX_ABORT 0x00100000 /* Error Bits - RX Abort */9191+#define DMA_STATUS_TS_MASK 0x00700000 /* Transmit Process State */9292+#define DMA_STATUS_TS_SHIFT 209393+#define DMA_STATUS_RS_MASK 0x000e0000 /* Receive Process State */9494+#define DMA_STATUS_RS_SHIFT 179595+#define DMA_STATUS_NIS 0x00010000 /* Normal Interrupt Summary */9696+#define DMA_STATUS_AIS 0x00008000 /* Abnormal Interrupt Summary */9797+#define DMA_STATUS_ERI 0x00004000 /* Early Receive Interrupt */9898+#define DMA_STATUS_FBI 0x00002000 /* Fatal Bus Error Interrupt */9999+#define DMA_STATUS_ETI 0x00000400 /* Early Transmit Interrupt */100100+#define DMA_STATUS_RWT 0x00000200 /* Receive Watchdog Timeout */101101+#define DMA_STATUS_RPS 0x00000100 /* Receive Process Stopped */102102+#define DMA_STATUS_RU 0x00000080 /* Receive Buffer Unavailable */103103+#define DMA_STATUS_RI 0x00000040 /* Receive Interrupt */104104+#define DMA_STATUS_UNF 0x00000020 /* Transmit Underflow */105105+#define DMA_STATUS_OVF 0x00000010 /* Receive Overflow */106106+#define DMA_STATUS_TJT 0x00000008 /* Transmit Jabber Timeout */107107+#define DMA_STATUS_TU 0x00000004 /* Transmit Buffer Unavailable */108108+#define DMA_STATUS_TPS 0x00000002 /* Transmit Process Stopped */109109+#define DMA_STATUS_TI 0x00000001 /* Transmit Interrupt */110110+111111+/* Other defines */112112+#define HASH_TABLE_SIZE 64113113+#define PAUSE_TIME 0x200114114+115115+/* Flow Control defines */116116+#define FLOW_OFF 0117117+#define FLOW_RX 1118118+#define FLOW_TX 2119119+#define FLOW_AUTO (FLOW_TX | FLOW_RX)120120+121121+/* DMA STORE-AND-FORWARD Operation Mode */122122+#define SF_DMA_MODE 1123123+124124+#define HW_CSUM 1125125+#define NO_HW_CSUM 
0126126+127127+/* GMAC TX FIFO is 8K, Rx FIFO is 16K */128128+#define BUF_SIZE_16KiB 16384129129+#define BUF_SIZE_8KiB 8192130130+#define BUF_SIZE_4KiB 4096131131+#define BUF_SIZE_2KiB 2048132132+133133+/* Power Down and WOL */134134+#define PMT_NOT_SUPPORTED 0135135+#define PMT_SUPPORTED 1136136+137137+/* Common MAC defines */138138+#define MAC_CTRL_REG 0x00000000 /* MAC Control */139139+#define MAC_ENABLE_TX 0x00000008 /* Transmitter Enable */140140+#define MAC_RNABLE_RX 0x00000004 /* Receiver Enable */141141+142142+/* MAC Management Counters register */143143+#define MMC_CONTROL 0x00000100 /* MMC Control */144144+#define MMC_HIGH_INTR 0x00000104 /* MMC High Interrupt */145145+#define MMC_LOW_INTR 0x00000108 /* MMC Low Interrupt */146146+#define MMC_HIGH_INTR_MASK 0x0000010c /* MMC High Interrupt Mask */147147+#define MMC_LOW_INTR_MASK 0x00000110 /* MMC Low Interrupt Mask */148148+149149+#define MMC_CONTROL_MAX_FRM_MASK 0x0003ff8 /* Maximum Frame Size */150150+#define MMC_CONTROL_MAX_FRM_SHIFT 3151151+#define MMC_CONTROL_MAX_FRAME 0x7FF152152+153153+struct stmmac_extra_stats {154154+ /* Transmit errors */155155+ unsigned long tx_underflow ____cacheline_aligned;156156+ unsigned long tx_carrier;157157+ unsigned long tx_losscarrier;158158+ unsigned long tx_heartbeat;159159+ unsigned long tx_deferred;160160+ unsigned long tx_vlan;161161+ unsigned long tx_jabber;162162+ unsigned long tx_frame_flushed;163163+ unsigned long tx_payload_error;164164+ unsigned long tx_ip_header_error;165165+ /* Receive errors */166166+ unsigned long rx_desc;167167+ unsigned long rx_partial;168168+ unsigned long rx_runt;169169+ unsigned long rx_toolong;170170+ unsigned long rx_collision;171171+ unsigned long rx_crc;172172+ unsigned long rx_lenght;173173+ unsigned long rx_mii;174174+ unsigned long rx_multicast;175175+ unsigned long rx_gmac_overflow;176176+ unsigned long rx_watchdog;177177+ unsigned long da_rx_filter_fail;178178+ unsigned long sa_rx_filter_fail;179179+ unsigned long 
rx_missed_cntr;180180+ unsigned long rx_overflow_cntr;181181+ unsigned long rx_vlan;182182+ /* Tx/Rx IRQ errors */183183+ unsigned long tx_undeflow_irq;184184+ unsigned long tx_process_stopped_irq;185185+ unsigned long tx_jabber_irq;186186+ unsigned long rx_overflow_irq;187187+ unsigned long rx_buf_unav_irq;188188+ unsigned long rx_process_stopped_irq;189189+ unsigned long rx_watchdog_irq;190190+ unsigned long tx_early_irq;191191+ unsigned long fatal_bus_error_irq;192192+ /* Extra info */193193+ unsigned long threshold;194194+ unsigned long tx_pkt_n;195195+ unsigned long rx_pkt_n;196196+ unsigned long poll_n;197197+ unsigned long sched_timer_n;198198+ unsigned long normal_irq_n;199199+};200200+201201+/* GMAC core can compute the checksums in HW. */202202+enum rx_frame_status {203203+ good_frame = 0,204204+ discard_frame = 1,205205+ csum_none = 2,206206+};207207+208208+static inline void stmmac_set_mac_addr(unsigned long ioaddr, u8 addr[6],209209+ unsigned int high, unsigned int low)210210+{211211+ unsigned long data;212212+213213+ data = (addr[5] << 8) | addr[4];214214+ writel(data, ioaddr + high);215215+ data = (addr[3] << 24) | (addr[2] << 16) | (addr[1] << 8) | addr[0];216216+ writel(data, ioaddr + low);217217+218218+ return;219219+}220220+221221+static inline void stmmac_get_mac_addr(unsigned long ioaddr,222222+ unsigned char *addr, unsigned int high,223223+ unsigned int low)224224+{225225+ unsigned int hi_addr, lo_addr;226226+227227+ /* Read the MAC address from the hardware */228228+ hi_addr = readl(ioaddr + high);229229+ lo_addr = readl(ioaddr + low);230230+231231+ /* Extract the MAC address from the high and low words */232232+ addr[0] = lo_addr & 0xff;233233+ addr[1] = (lo_addr >> 8) & 0xff;234234+ addr[2] = (lo_addr >> 16) & 0xff;235235+ addr[3] = (lo_addr >> 24) & 0xff;236236+ addr[4] = hi_addr & 0xff;237237+ addr[5] = (hi_addr >> 8) & 0xff;238238+239239+ return;240240+}241241+242242+struct stmmac_ops {243243+ /* MAC core initialization */244244+ void 
(*core_init) (unsigned long ioaddr) ____cacheline_aligned;245245+ /* DMA core initialization */246246+ int (*dma_init) (unsigned long ioaddr, int pbl, u32 dma_tx, u32 dma_rx);247247+ /* Dump MAC registers */248248+ void (*dump_mac_regs) (unsigned long ioaddr);249249+ /* Dump DMA registers */250250+ void (*dump_dma_regs) (unsigned long ioaddr);251251+ /* Set tx/rx threshold in the csr6 register252252+ * An invalid value enables the store-and-forward mode */253253+ void (*dma_mode) (unsigned long ioaddr, int txmode, int rxmode);254254+ /* To track extra statistic (if supported) */255255+ void (*dma_diagnostic_fr) (void *data, struct stmmac_extra_stats *x,256256+ unsigned long ioaddr);257257+ /* RX descriptor ring initialization */258258+ void (*init_rx_desc) (struct dma_desc *p, unsigned int ring_size,259259+ int disable_rx_ic);260260+ /* TX descriptor ring initialization */261261+ void (*init_tx_desc) (struct dma_desc *p, unsigned int ring_size);262262+263263+ /* Invoked by the xmit function to prepare the tx descriptor */264264+ void (*prepare_tx_desc) (struct dma_desc *p, int is_fs, int len,265265+ int csum_flag);266266+ /* Set/get the owner of the descriptor */267267+ void (*set_tx_owner) (struct dma_desc *p);268268+ int (*get_tx_owner) (struct dma_desc *p);269269+ /* Invoked by the xmit function to close the tx descriptor */270270+ void (*close_tx_desc) (struct dma_desc *p);271271+ /* Clean the tx descriptor as soon as the tx irq is received */272272+ void (*release_tx_desc) (struct dma_desc *p);273273+ /* Clear interrupt on tx frame completion. 
When this bit is274274+ * set an interrupt happens as soon as the frame is transmitted */275275+ void (*clear_tx_ic) (struct dma_desc *p);276276+ /* Last tx segment reports the transmit status */277277+ int (*get_tx_ls) (struct dma_desc *p);278278+ /* Return the transmit status looking at the TDES1 */279279+ int (*tx_status) (void *data, struct stmmac_extra_stats *x,280280+ struct dma_desc *p, unsigned long ioaddr);281281+ /* Get the buffer size from the descriptor */282282+ int (*get_tx_len) (struct dma_desc *p);283283+ /* Handle extra events on specific interrupts hw dependent */284284+ void (*host_irq_status) (unsigned long ioaddr);285285+ int (*get_rx_owner) (struct dma_desc *p);286286+ void (*set_rx_owner) (struct dma_desc *p);287287+ /* Get the receive frame size */288288+ int (*get_rx_frame_len) (struct dma_desc *p);289289+ /* Return the reception status looking at the RDES1 */290290+ int (*rx_status) (void *data, struct stmmac_extra_stats *x,291291+ struct dma_desc *p);292292+ /* Multicast filter setting */293293+ void (*set_filter) (struct net_device *dev);294294+ /* Flow control setting */295295+ void (*flow_ctrl) (unsigned long ioaddr, unsigned int duplex,296296+ unsigned int fc, unsigned int pause_time);297297+ /* Set power management mode (e.g. 
magic frame) */298298+ void (*pmt) (unsigned long ioaddr, unsigned long mode);299299+ /* Set/Get Unicast MAC addresses */300300+ void (*set_umac_addr) (unsigned long ioaddr, unsigned char *addr,301301+ unsigned int reg_n);302302+ void (*get_umac_addr) (unsigned long ioaddr, unsigned char *addr,303303+ unsigned int reg_n);304304+};305305+306306+struct mac_link {307307+ int port;308308+ int duplex;309309+ int speed;310310+};311311+312312+struct mii_regs {313313+ unsigned int addr; /* MII Address */314314+ unsigned int data; /* MII Data */315315+};316316+317317+struct hw_cap {318318+ unsigned int version; /* Core Version register (GMAC) */319319+ unsigned int pmt; /* Power-Down mode (GMAC) */320320+ struct mac_link link;321321+ struct mii_regs mii;322322+};323323+324324+struct mac_device_info {325325+ struct hw_cap hw;326326+ struct stmmac_ops *ops;327327+};328328+329329+struct mac_device_info *gmac_setup(unsigned long addr);330330+struct mac_device_info *mac100_setup(unsigned long addr);
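stmmac_set_mac_addr()/stmmac_get_mac_addr() above pack the six address octets into two MMIO words: bytes 4-5 into the "high" register and bytes 0-3 into the "low" one. A standalone sketch of the same packing, with plain variables standing in for the registers (no MMIO involved):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Pack as stmmac_set_mac_addr() does: addr[5..4] -> high word,
 * addr[3..0] -> low word, least-significant octet first. */
static void mac_pack(const uint8_t addr[6], uint32_t *high, uint32_t *low)
{
	*high = ((uint32_t)addr[5] << 8) | addr[4];
	*low  = ((uint32_t)addr[3] << 24) | ((uint32_t)addr[2] << 16) |
		((uint32_t)addr[1] << 8)  | addr[0];
}

/* Unpack as stmmac_get_mac_addr() does, byte by byte. */
static void mac_unpack(uint32_t high, uint32_t low, uint8_t addr[6])
{
	addr[0] = low & 0xff;
	addr[1] = (low >> 8) & 0xff;
	addr[2] = (low >> 16) & 0xff;
	addr[3] = (low >> 24) & 0xff;
	addr[4] = high & 0xff;
	addr[5] = (high >> 8) & 0xff;
}
```

The round trip is lossless, which is what lets the driver read a bootloader-programmed address straight back out of the MAC registers at probe time.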
+163
drivers/net/stmmac/descs.h
···11+/*******************************************************************************22+ Header File to describe the DMA descriptors33+ Use enhanced descriptors in case of GMAC Cores.44+55+ This program is free software; you can redistribute it and/or modify it66+ under the terms and conditions of the GNU General Public License,77+ version 2, as published by the Free Software Foundation.88+99+ This program is distributed in the hope it will be useful, but WITHOUT1010+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or1111+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for1212+ more details.1313+1414+ You should have received a copy of the GNU General Public License along with1515+ this program; if not, write to the Free Software Foundation, Inc.,1616+ 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.1717+1818+ The full GNU General Public License is included in this distribution in1919+ the file called "COPYING".2020+2121+ Author: Giuseppe Cavallaro <peppe.cavallaro@st.com>2222+*******************************************************************************/2323+struct dma_desc {2424+ /* Receive descriptor */2525+ union {2626+ struct {2727+ /* RDES0 */2828+ u32 reserved1:1;2929+ u32 crc_error:1;3030+ u32 dribbling:1;3131+ u32 mii_error:1;3232+ u32 receive_watchdog:1;3333+ u32 frame_type:1;3434+ u32 collision:1;3535+ u32 frame_too_long:1;3636+ u32 last_descriptor:1;3737+ u32 first_descriptor:1;3838+ u32 multicast_frame:1;3939+ u32 run_frame:1;4040+ u32 length_error:1;4141+ u32 partial_frame_error:1;4242+ u32 descriptor_error:1;4343+ u32 error_summary:1;4444+ u32 frame_length:14;4545+ u32 filtering_fail:1;4646+ u32 own:1;4747+ /* RDES1 */4848+ u32 buffer1_size:11;4949+ u32 buffer2_size:11;5050+ u32 reserved2:2;5151+ u32 second_address_chained:1;5252+ u32 end_ring:1;5353+ u32 reserved3:5;5454+ u32 disable_ic:1;5555+ } rx;5656+ struct {5757+ /* RDES0 */5858+ u32 payload_csum_error:1;5959+ u32 crc_error:1;6060+ u32 
dribbling:1;6161+ u32 error_gmii:1;6262+ u32 receive_watchdog:1;6363+ u32 frame_type:1;6464+ u32 late_collision:1;6565+ u32 ipc_csum_error:1;6666+ u32 last_descriptor:1;6767+ u32 first_descriptor:1;6868+ u32 vlan_tag:1;6969+ u32 overflow_error:1;7070+ u32 length_error:1;7171+ u32 sa_filter_fail:1;7272+ u32 descriptor_error:1;7373+ u32 error_summary:1;7474+ u32 frame_length:14;7575+ u32 da_filter_fail:1;7676+ u32 own:1;7777+ /* RDES1 */7878+ u32 buffer1_size:13;7979+ u32 reserved1:1;8080+ u32 second_address_chained:1;8181+ u32 end_ring:1;8282+ u32 buffer2_size:13;8383+ u32 reserved2:2;8484+ u32 disable_ic:1;8585+ } erx; /* -- enhanced -- */8686+8787+ /* Transmit descriptor */8888+ struct {8989+ /* TDES0 */9090+ u32 deferred:1;9191+ u32 underflow_error:1;9292+ u32 excessive_deferral:1;9393+ u32 collision_count:4;9494+ u32 heartbeat_fail:1;9595+ u32 excessive_collisions:1;9696+ u32 late_collision:1;9797+ u32 no_carrier:1;9898+ u32 loss_carrier:1;9999+ u32 reserved1:3;100100+ u32 error_summary:1;101101+ u32 reserved2:15;102102+ u32 own:1;103103+ /* TDES1 */104104+ u32 buffer1_size:11;105105+ u32 buffer2_size:11;106106+ u32 reserved3:1;107107+ u32 disable_padding:1;108108+ u32 second_address_chained:1;109109+ u32 end_ring:1;110110+ u32 crc_disable:1;111111+ u32 reserved4:2;112112+ u32 first_segment:1;113113+ u32 last_segment:1;114114+ u32 interrupt:1;115115+ } tx;116116+ struct {117117+ /* TDES0 */118118+ u32 deferred:1;119119+ u32 underflow_error:1;120120+ u32 excessive_deferral:1;121121+ u32 collision_count:4;122122+ u32 vlan_frame:1;123123+ u32 excessive_collisions:1;124124+ u32 late_collision:1;125125+ u32 no_carrier:1;126126+ u32 loss_carrier:1;127127+ u32 payload_error:1;128128+ u32 frame_flushed:1;129129+ u32 jabber_timeout:1;130130+ u32 error_summary:1;131131+ u32 ip_header_error:1;132132+ u32 time_stamp_status:1;133133+ u32 reserved1:2;134134+ u32 second_address_chained:1;135135+ u32 end_ring:1;136136+ u32 checksum_insertion:2;137137+ u32 reserved2:1;138138+ 
u32 time_stamp_enable:1;139139+ u32 disable_padding:1;140140+ u32 crc_disable:1;141141+ u32 first_segment:1;142142+ u32 last_segment:1;143143+ u32 interrupt:1;144144+ u32 own:1;145145+ /* TDES1 */146146+ u32 buffer1_size:13;147147+ u32 reserved3:3;148148+ u32 buffer2_size:13;149149+ u32 reserved4:3;150150+ } etx; /* -- enhanced -- */151151+ } des01;152152+ unsigned int des2;153153+ unsigned int des3;154154+};155155+156156+/* Transmit checksum insertion control */157157+enum tdes_csum_insertion {158158+ cic_disabled = 0, /* Checksum Insertion Control */159159+ cic_only_ip = 1, /* Only IP header */160160+ cic_no_pseudoheader = 2, /* IP header but pseudoheader161161+ * is not calculated */162162+ cic_full = 3, /* IP header and pseudoheader */163163+};
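The bitfields in struct dma_desc above must tile exactly two 32-bit status words (RDES0/RDES1 or TDES0/TDES1), followed by the two buffer-address words des2/des3. Bit ordering within a word is compiler- and endianness-dependent, so the driver relies on the compiler's packing; the total width, however, can be checked portably. A reduced sketch of the classic rx view (the 16 single-bit RDES0 flags are collapsed into one field here, purely to keep the size check short):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t u32;

/* Reduced rx descriptor: bit counts per word must sum to 32. */
struct dma_desc_sketch {
	union {
		struct {
			/* RDES0: 16 status flags + 14-bit length + 2 bits */
			u32 status_flags:16;
			u32 frame_length:14;
			u32 filtering_fail:1;
			u32 own:1;
			/* RDES1: 11 + 11 + 2 + 1 + 1 + 5 + 1 = 32 */
			u32 buffer1_size:11;
			u32 buffer2_size:11;
			u32 reserved2:2;
			u32 second_address_chained:1;
			u32 end_ring:1;
			u32 reserved3:5;
			u32 disable_ic:1;
		} rx;
	} des01;
	u32 des2;	/* buffer 1 address */
	u32 des3;	/* buffer 2 address or next descriptor */
};
```

If any field width were off by one bit, the struct would silently grow past the 16-byte descriptor the DMA engine expects, so a size assertion like the one below is a cheap sanity check when editing these layouts.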
+693
drivers/net/stmmac/gmac.c
···11+/*******************************************************************************22+ This is the driver for the GMAC on-chip Ethernet controller for ST SoCs.33+ DWC Ether MAC 10/100/1000 Universal version 3.41a has been used for44+ developing this code.55+66+ Copyright (C) 2007-2009 STMicroelectronics Ltd77+88+ This program is free software; you can redistribute it and/or modify it99+ under the terms and conditions of the GNU General Public License,1010+ version 2, as published by the Free Software Foundation.1111+1212+ This program is distributed in the hope it will be useful, but WITHOUT1313+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or1414+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for1515+ more details.1616+1717+ You should have received a copy of the GNU General Public License along with1818+ this program; if not, write to the Free Software Foundation, Inc.,1919+ 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.2020+2121+ The full GNU General Public License is included in this distribution in2222+ the file called "COPYING".2323+2424+ Author: Giuseppe Cavallaro <peppe.cavallaro@st.com>2525+*******************************************************************************/2626+2727+#include <linux/netdevice.h>2828+#include <linux/crc32.h>2929+#include <linux/mii.h>3030+#include <linux/phy.h>3131+3232+#include "stmmac.h"3333+#include "gmac.h"3434+3535+#undef GMAC_DEBUG3636+/*#define GMAC_DEBUG*/3737+#undef FRAME_FILTER_DEBUG3838+/*#define FRAME_FILTER_DEBUG*/3939+#ifdef GMAC_DEBUG4040+#define DBG(fmt, args...) printk(fmt, ## args)4141+#else4242+#define DBG(fmt, args...) 
do { } while (0)4343+#endif4444+4545+static void gmac_dump_regs(unsigned long ioaddr)4646+{4747+ int i;4848+ pr_info("\t----------------------------------------------\n"4949+ "\t GMAC registers (base addr = 0x%8x)\n"5050+ "\t----------------------------------------------\n",5151+ (unsigned int)ioaddr);5252+5353+ for (i = 0; i < 55; i++) {5454+ int offset = i * 4;5555+ pr_info("\tReg No. %d (offset 0x%x): 0x%08x\n", i,5656+ offset, readl(ioaddr + offset));5757+ }5858+ return;5959+}6060+6161+static int gmac_dma_init(unsigned long ioaddr, int pbl, u32 dma_tx, u32 dma_rx)6262+{6363+ u32 value = readl(ioaddr + DMA_BUS_MODE);6464+ /* DMA SW reset */6565+ value |= DMA_BUS_MODE_SFT_RESET;6666+ writel(value, ioaddr + DMA_BUS_MODE);6767+ do {} while ((readl(ioaddr + DMA_BUS_MODE) & DMA_BUS_MODE_SFT_RESET));6868+6969+ value = /* DMA_BUS_MODE_FB | */ DMA_BUS_MODE_4PBL |7070+ ((pbl << DMA_BUS_MODE_PBL_SHIFT) |7171+ (pbl << DMA_BUS_MODE_RPBL_SHIFT));7272+7373+#ifdef CONFIG_STMMAC_DA7474+ value |= DMA_BUS_MODE_DA; /* Rx has priority over tx */7575+#endif7676+ writel(value, ioaddr + DMA_BUS_MODE);7777+7878+ /* Mask interrupts by writing to CSR7 */7979+ writel(DMA_INTR_DEFAULT_MASK, ioaddr + DMA_INTR_ENA);8080+8181+ /* The base address of the RX/TX descriptor lists must be written into8282+ * DMA CSR3 and CSR4, respectively. 
*/8383+ writel(dma_tx, ioaddr + DMA_TX_BASE_ADDR);8484+ writel(dma_rx, ioaddr + DMA_RCV_BASE_ADDR);8585+8686+ return 0;8787+}8888+8989+/* Transmit FIFO flush operation */9090+static void gmac_flush_tx_fifo(unsigned long ioaddr)9191+{9292+ u32 csr6 = readl(ioaddr + DMA_CONTROL);9393+ writel((csr6 | DMA_CONTROL_FTF), ioaddr + DMA_CONTROL);9494+9595+ do {} while ((readl(ioaddr + DMA_CONTROL) & DMA_CONTROL_FTF));9696+}9797+9898+static void gmac_dma_operation_mode(unsigned long ioaddr, int txmode,9999+ int rxmode)100100+{101101+ u32 csr6 = readl(ioaddr + DMA_CONTROL);102102+103103+ if (txmode == SF_DMA_MODE) {104104+ DBG(KERN_DEBUG "GMAC: enabling TX store and forward mode\n");105105+ /* Transmit COE type 2 cannot be done in cut-through mode. */106106+ csr6 |= DMA_CONTROL_TSF;107107+ /* Operating on the second frame increases performance,108108+ * especially when transmit store-and-forward is used.*/109109+ csr6 |= DMA_CONTROL_OSF;110110+ } else {111111+ DBG(KERN_DEBUG "GMAC: disabling TX store and forward mode"112112+ " (threshold = %d)\n", txmode);113113+ csr6 &= ~DMA_CONTROL_TSF;114114+ csr6 &= DMA_CONTROL_TC_TX_MASK;115115+ /* Set the transmit threshold */116116+ if (txmode <= 32)117117+ csr6 |= DMA_CONTROL_TTC_32;118118+ else if (txmode <= 64)119119+ csr6 |= DMA_CONTROL_TTC_64;120120+ else if (txmode <= 128)121121+ csr6 |= DMA_CONTROL_TTC_128;122122+ else if (txmode <= 192)123123+ csr6 |= DMA_CONTROL_TTC_192;124124+ else125125+ csr6 |= DMA_CONTROL_TTC_256;126126+ }127127+128128+ if (rxmode == SF_DMA_MODE) {129129+ DBG(KERN_DEBUG "GMAC: enabling RX store and forward mode\n");130130+ csr6 |= DMA_CONTROL_RSF;131131+ } else {132132+ DBG(KERN_DEBUG "GMAC: disabling RX store and forward mode"133133+ " (threshold = %d)\n", rxmode);134134+ csr6 &= ~DMA_CONTROL_RSF;135135+ csr6 &= DMA_CONTROL_TC_RX_MASK;136136+ if (rxmode <= 32)137137+ csr6 |= DMA_CONTROL_RTC_32;138138+ else if (rxmode <= 64)139139+ csr6 |= DMA_CONTROL_RTC_64;140140+ else if (rxmode <= 96)141141+ csr6 |= 
DMA_CONTROL_RTC_96;142142+ else143143+ csr6 |= DMA_CONTROL_RTC_128;144144+ }145145+146146+ writel(csr6, ioaddr + DMA_CONTROL);147147+ return;148148+}149149+150150+/* Not yet implemented --- no RMON module */151151+static void gmac_dma_diagnostic_fr(void *data, struct stmmac_extra_stats *x,152152+ unsigned long ioaddr)153153+{154154+ return;155155+}156156+157157+static void gmac_dump_dma_regs(unsigned long ioaddr)158158+{159159+ int i;160160+ pr_info(" DMA registers\n");161161+ for (i = 0; i < 22; i++) {162162+ if ((i < 9) || (i > 17)) {163163+ int offset = i * 4;164164+ pr_err("\t Reg No. %d (offset 0x%x): 0x%08x\n", i,165165+ (DMA_BUS_MODE + offset),166166+ readl(ioaddr + DMA_BUS_MODE + offset));167167+ }168168+ }169169+ return;170170+}171171+172172+static int gmac_get_tx_frame_status(void *data, struct stmmac_extra_stats *x,173173+ struct dma_desc *p, unsigned long ioaddr)174174+{175175+ int ret = 0;176176+ struct net_device_stats *stats = (struct net_device_stats *)data;177177+178178+ if (unlikely(p->des01.etx.error_summary)) {179179+ DBG(KERN_ERR "GMAC TX error... 
0x%08x\n", p->des01.etx);180180+ if (unlikely(p->des01.etx.jabber_timeout)) {181181+ DBG(KERN_ERR "\tjabber_timeout error\n");182182+ x->tx_jabber++;183183+ }184184+185185+ if (unlikely(p->des01.etx.frame_flushed)) {186186+ DBG(KERN_ERR "\tframe_flushed error\n");187187+ x->tx_frame_flushed++;188188+ gmac_flush_tx_fifo(ioaddr);189189+ }190190+191191+ if (unlikely(p->des01.etx.loss_carrier)) {192192+ DBG(KERN_ERR "\tloss_carrier error\n");193193+ x->tx_losscarrier++;194194+ stats->tx_carrier_errors++;195195+ }196196+ if (unlikely(p->des01.etx.no_carrier)) {197197+ DBG(KERN_ERR "\tno_carrier error\n");198198+ x->tx_carrier++;199199+ stats->tx_carrier_errors++;200200+ }201201+ if (unlikely(p->des01.etx.late_collision)) {202202+ DBG(KERN_ERR "\tlate_collision error\n");203203+ stats->collisions += p->des01.etx.collision_count;204204+ }205205+ if (unlikely(p->des01.etx.excessive_collisions)) {206206+ DBG(KERN_ERR "\texcessive_collisions\n");207207+ stats->collisions += p->des01.etx.collision_count;208208+ }209209+ if (unlikely(p->des01.etx.excessive_deferral)) {210210+ DBG(KERN_INFO "\texcessive tx_deferral\n");211211+ x->tx_deferred++;212212+ }213213+214214+ if (unlikely(p->des01.etx.underflow_error)) {215215+ DBG(KERN_ERR "\tunderflow error\n");216216+ gmac_flush_tx_fifo(ioaddr);217217+ x->tx_underflow++;218218+ }219219+220220+ if (unlikely(p->des01.etx.ip_header_error)) {221221+ DBG(KERN_ERR "\tTX IP header csum error\n");222222+ x->tx_ip_header_error++;223223+ }224224+225225+ if (unlikely(p->des01.etx.payload_error)) {226226+ DBG(KERN_ERR "\tAddr/Payload csum error\n");227227+ x->tx_payload_error++;228228+ gmac_flush_tx_fifo(ioaddr);229229+ }230230+231231+ ret = -1;232232+ }233233+234234+ if (unlikely(p->des01.etx.deferred)) {235235+ DBG(KERN_INFO "GMAC TX status: tx deferred\n");236236+ x->tx_deferred++;237237+ }238238+#ifdef STMMAC_VLAN_TAG_USED239239+ if (p->des01.etx.vlan_frame) {240240+ DBG(KERN_INFO "GMAC TX status: VLAN frame\n");241241+ x->tx_vlan++;242242+ 
}243243+#endif244244+245245+ return ret;246246+}247247+248248+static int gmac_get_tx_len(struct dma_desc *p)249249+{250250+ return p->des01.etx.buffer1_size;251251+}252252+253253+static int gmac_coe_rdes0(int ipc_err, int type, int payload_err)254254+{255255+ int ret = good_frame;256256+ u32 status = (type << 2 | ipc_err << 1 | payload_err) & 0x7;257257+258258+ /* bits 5 7 0 | Frame status259259+ * ----------------------------------------------------------260260+ * 0 0 0 | IEEE 802.3 Type frame (length < 1536 octets)261261+ * 1 0 0 | IPv4/6 No CSUM errors.262262+ * 1 0 1 | IPv4/6 CSUM PAYLOAD error263263+ * 1 1 0 | IPv4/6 CSUM IP HR error264264+ * 1 1 1 | IPv4/6 IP PAYLOAD AND HEADER errors265265+ * 0 0 1 | IPv4/6 unsupported IP PAYLOAD266266+ * 0 1 1 | COE bypassed.. no IPv4/6 frame267267+ * 0 1 0 | Reserved.268268+ */269269+ if (status == 0x0) {270270+ DBG(KERN_INFO "RX Des0 status: IEEE 802.3 Type frame.\n");271271+ ret = good_frame;272272+ } else if (status == 0x4) {273273+ DBG(KERN_INFO "RX Des0 status: IPv4/6 No CSUM errors.\n");274274+ ret = good_frame;275275+ } else if (status == 0x5) {276276+ DBG(KERN_ERR "RX Des0 status: IPv4/6 Payload Error.\n");277277+ ret = csum_none;278278+ } else if (status == 0x6) {279279+ DBG(KERN_ERR "RX Des0 status: IPv4/6 Header Error.\n");280280+ ret = csum_none;281281+ } else if (status == 0x7) {282282+ DBG(KERN_ERR283283+ "RX Des0 status: IPv4/6 Header and Payload Error.\n");284284+ ret = csum_none;285285+ } else if (status == 0x1) {286286+ DBG(KERN_ERR287287+ "RX Des0 status: IPv4/6 unsupported IP PAYLOAD.\n");288288+ ret = discard_frame;289289+ } else if (status == 0x3) {290290+ DBG(KERN_ERR "RX Des0 status: No IPv4, IPv6 frame.\n");291291+ ret = discard_frame;292292+ }293293+ return ret;294294+}295295+296296+static int gmac_get_rx_frame_status(void *data, struct stmmac_extra_stats *x,297297+ struct dma_desc *p)298298+{299299+ int ret = good_frame;300300+ struct net_device_stats *stats = (struct net_device_stats 
*)data;301301+302302+ if (unlikely(p->des01.erx.error_summary)) {303303+ DBG(KERN_ERR "GMAC RX Error Summary... 0x%08x\n", p->des01.erx);304304+ if (unlikely(p->des01.erx.descriptor_error)) {305305+ DBG(KERN_ERR "\tdescriptor error\n");306306+ x->rx_desc++;307307+ stats->rx_length_errors++;308308+ }309309+ if (unlikely(p->des01.erx.overflow_error)) {310310+ DBG(KERN_ERR "\toverflow error\n");311311+ x->rx_gmac_overflow++;312312+ }313313+314314+ if (unlikely(p->des01.erx.ipc_csum_error))315315+ DBG(KERN_ERR "\tIPC Csum Error/Giant frame\n");316316+317317+ if (unlikely(p->des01.erx.late_collision)) {318318+ DBG(KERN_ERR "\tlate_collision error\n");319319+ stats->collisions++;320320+ stats->collisions++;321321+ }322322+ if (unlikely(p->des01.erx.receive_watchdog)) {323323+ DBG(KERN_ERR "\treceive_watchdog error\n");324324+ x->rx_watchdog++;325325+ }326326+ if (unlikely(p->des01.erx.error_gmii)) {327327+ DBG(KERN_ERR "\tReceive Error\n");328328+ x->rx_mii++;329329+ }330330+ if (unlikely(p->des01.erx.crc_error)) {331331+ DBG(KERN_ERR "\tCRC error\n");332332+ x->rx_crc++;333333+ stats->rx_crc_errors++;334334+ }335335+ ret = discard_frame;336336+ }337337+338338+ /* After a payload csum error, the ES bit is set.339339+ * This does not match the information reported in the databook.340340+ * At any rate, we need to understand if the CSUM hw computation is ok341341+ * and report this info to the upper layers. 
*/342342+ ret = gmac_coe_rdes0(p->des01.erx.ipc_csum_error,343343+ p->des01.erx.frame_type, p->des01.erx.payload_csum_error);344344+345345+ if (unlikely(p->des01.erx.dribbling)) {346346+ DBG(KERN_ERR "GMAC RX: dribbling error\n");347347+ ret = discard_frame;348348+ }349349+ if (unlikely(p->des01.erx.sa_filter_fail)) {350350+ DBG(KERN_ERR "GMAC RX : Source Address filter fail\n");351351+ x->sa_rx_filter_fail++;352352+ ret = discard_frame;353353+ }354354+ if (unlikely(p->des01.erx.da_filter_fail)) {355355+ DBG(KERN_ERR "GMAC RX : Destination Address filter fail\n");356356+ x->da_rx_filter_fail++;357357+ ret = discard_frame;358358+ }359359+ if (unlikely(p->des01.erx.length_error)) {360360+ DBG(KERN_ERR "GMAC RX: length_error error\n");361361+ x->rx_lenght++;362362+ ret = discard_frame;363363+ }364364+#ifdef STMMAC_VLAN_TAG_USED365365+ if (p->des01.erx.vlan_tag) {366366+ DBG(KERN_INFO "GMAC RX: VLAN frame tagged\n");367367+ x->rx_vlan++;368368+ }369369+#endif370370+ return ret;371371+}372372+373373+static void gmac_irq_status(unsigned long ioaddr)374374+{375375+ u32 intr_status = readl(ioaddr + GMAC_INT_STATUS);376376+377377+ /* Not used events (e.g. MMC interrupts) are not handled. */378378+ if ((intr_status & mmc_tx_irq))379379+ DBG(KERN_DEBUG "GMAC: MMC tx interrupt: 0x%08x\n",380380+ readl(ioaddr + GMAC_MMC_TX_INTR));381381+ if (unlikely(intr_status & mmc_rx_irq))382382+ DBG(KERN_DEBUG "GMAC: MMC rx interrupt: 0x%08x\n",383383+ readl(ioaddr + GMAC_MMC_RX_INTR));384384+ if (unlikely(intr_status & mmc_rx_csum_offload_irq))385385+ DBG(KERN_DEBUG "GMAC: MMC rx csum offload: 0x%08x\n",386386+ readl(ioaddr + GMAC_MMC_RX_CSUM_OFFLOAD));387387+ if (unlikely(intr_status & pmt_irq)) {388388+ DBG(KERN_DEBUG "GMAC: received Magic frame\n");389389+ /* clear the PMT bits 5 and 6 by reading the PMT390390+ * status register. 
*/391391+ readl(ioaddr + GMAC_PMT);392392+ }393393+394394+ return;395395+}396396+397397+static void gmac_core_init(unsigned long ioaddr)398398+{399399+ u32 value = readl(ioaddr + GMAC_CONTROL);400400+ value |= GMAC_CORE_INIT;401401+ writel(value, ioaddr + GMAC_CONTROL);402402+403403+ /* STBus Bridge Configuration */404404+ /*writel(0xc5608, ioaddr + 0x00007000);*/405405+406406+ /* Freeze MMC counters */407407+ writel(0x8, ioaddr + GMAC_MMC_CTRL);408408+ /* Mask GMAC interrupts */409409+ writel(0x207, ioaddr + GMAC_INT_MASK);410410+411411+#ifdef STMMAC_VLAN_TAG_USED412412+ /* Tag detection without filtering */413413+ writel(0x0, ioaddr + GMAC_VLAN_TAG);414414+#endif415415+ return;416416+}417417+418418+static void gmac_set_umac_addr(unsigned long ioaddr, unsigned char *addr,419419+ unsigned int reg_n)420420+{421421+ stmmac_set_mac_addr(ioaddr, addr, GMAC_ADDR_HIGH(reg_n),422422+ GMAC_ADDR_LOW(reg_n));423423+}424424+425425+static void gmac_get_umac_addr(unsigned long ioaddr, unsigned char *addr,426426+ unsigned int reg_n)427427+{428428+ stmmac_get_mac_addr(ioaddr, addr, GMAC_ADDR_HIGH(reg_n),429429+ GMAC_ADDR_LOW(reg_n));430430+}431431+432432+static void gmac_set_filter(struct net_device *dev)433433+{434434+ unsigned long ioaddr = dev->base_addr;435435+ unsigned int value = 0;436436+437437+ DBG(KERN_INFO "%s: # mcasts %d, # unicast %d\n",438438+ __func__, dev->mc_count, dev->uc_count);439439+440440+ if (dev->flags & IFF_PROMISC)441441+ value = GMAC_FRAME_FILTER_PR;442442+ else if ((dev->mc_count > HASH_TABLE_SIZE)443443+ || (dev->flags & IFF_ALLMULTI)) {444444+ value = GMAC_FRAME_FILTER_PM; /* pass all multi */445445+ writel(0xffffffff, ioaddr + GMAC_HASH_HIGH);446446+ writel(0xffffffff, ioaddr + GMAC_HASH_LOW);447447+ } else if (dev->mc_count > 0) {448448+ int i;449449+ u32 mc_filter[2];450450+ struct dev_mc_list *mclist;451451+452452+ /* Hash filter for multicast */453453+ value = GMAC_FRAME_FILTER_HMC;454454+455455+ memset(mc_filter, 0, sizeof(mc_filter));456456+ 
for (i = 0, mclist = dev->mc_list;457457+ mclist && i < dev->mc_count; i++, mclist = mclist->next) {458458+ /* The upper 6 bits of the calculated CRC are used to459459+ index the contents of the hash table */460460+ int bit_nr =461461+ bitrev32(~crc32_le(~0, mclist->dmi_addr, 6)) >> 26;462462+ /* The most significant bit determines the register to463463+ * use (H/L) while the other 5 bits determine the bit464464+ * within the register. */465465+ mc_filter[bit_nr >> 5] |= 1 << (bit_nr & 31);466466+ }467467+ writel(mc_filter[0], ioaddr + GMAC_HASH_LOW);468468+ writel(mc_filter[1], ioaddr + GMAC_HASH_HIGH);469469+ }470470+471471+ /* Handle multiple unicast addresses (perfect filtering)*/472472+ if (dev->uc_count > GMAC_MAX_UNICAST_ADDRESSES)473473+ /* Switch to promiscuous mode if more than 16 addrs474474+ are required */475475+ value |= GMAC_FRAME_FILTER_PR;476476+ else {477477+ int i;478478+ struct dev_addr_list *uc_ptr = dev->uc_list;479479+480480+ for (i = 0; i < dev->uc_count; i++) {481481+ gmac_set_umac_addr(ioaddr, uc_ptr->da_addr,482482+ i + 1);483483+484484+ DBG(KERN_INFO "\t%d "485485+ "- Unicast addr %02x:%02x:%02x:%02x:%02x:"486486+ "%02x\n", i + 1,487487+ uc_ptr->da_addr[0], uc_ptr->da_addr[1],488488+ uc_ptr->da_addr[2], uc_ptr->da_addr[3],489489+ uc_ptr->da_addr[4], uc_ptr->da_addr[5]);490490+ uc_ptr = uc_ptr->next;491491+ }492492+ }493493+494494+#ifdef FRAME_FILTER_DEBUG495495+ /* Enable Receive all mode (to debug filtering_fail errors) */496496+ value |= GMAC_FRAME_FILTER_RA;497497+#endif498498+ writel(value, ioaddr + GMAC_FRAME_FILTER);499499+500500+ DBG(KERN_INFO "\tFrame Filter reg: 0x%08x\n\tHash regs: "501501+ "HI 0x%08x, LO 0x%08x\n", readl(ioaddr + GMAC_FRAME_FILTER),502502+ readl(ioaddr + GMAC_HASH_HIGH), readl(ioaddr + GMAC_HASH_LOW));503503+504504+ return;505505+}506506+507507+static void gmac_flow_ctrl(unsigned long ioaddr, unsigned int duplex,508508+ unsigned int fc, unsigned int pause_time)509509+{510510+ unsigned int flow = 
0;511511+512512+ DBG(KERN_DEBUG "GMAC Flow-Control:\n");513513+ if (fc & FLOW_RX) {514514+ DBG(KERN_DEBUG "\tReceive Flow-Control ON\n");515515+ flow |= GMAC_FLOW_CTRL_RFE;516516+ }517517+ if (fc & FLOW_TX) {518518+ DBG(KERN_DEBUG "\tTransmit Flow-Control ON\n");519519+ flow |= GMAC_FLOW_CTRL_TFE;520520+ }521521+522522+ if (duplex) {523523+ DBG(KERN_DEBUG "\tduplex mode: pause time: %d\n", pause_time);524524+ flow |= (pause_time << GMAC_FLOW_CTRL_PT_SHIFT);525525+ }526526+527527+ writel(flow, ioaddr + GMAC_FLOW_CTRL);528528+ return;529529+}530530+531531+static void gmac_pmt(unsigned long ioaddr, unsigned long mode)532532+{533533+ unsigned int pmt = 0;534534+535535+ if (mode == WAKE_MAGIC) {536536+ DBG(KERN_DEBUG "GMAC: WOL Magic frame\n");537537+ pmt |= power_down | magic_pkt_en;538538+ } else if (mode == WAKE_UCAST) {539539+ DBG(KERN_DEBUG "GMAC: WOL on global unicast\n");540540+ pmt |= global_unicast;541541+ }542542+543543+ writel(pmt, ioaddr + GMAC_PMT);544544+ return;545545+}546546+547547+static void gmac_init_rx_desc(struct dma_desc *p, unsigned int ring_size,548548+ int disable_rx_ic)549549+{550550+ int i;551551+ for (i = 0; i < ring_size; i++) {552552+ p->des01.erx.own = 1;553553+ p->des01.erx.buffer1_size = BUF_SIZE_8KiB - 1;554554+ /* To support jumbo frames */555555+ p->des01.erx.buffer2_size = BUF_SIZE_8KiB - 1;556556+ if (i == ring_size - 1)557557+ p->des01.erx.end_ring = 1;558558+ if (disable_rx_ic)559559+ p->des01.erx.disable_ic = 1;560560+ p++;561561+ }562562+ return;563563+}564564+565565+static void gmac_init_tx_desc(struct dma_desc *p, unsigned int ring_size)566566+{567567+ int i;568568+569569+ for (i = 0; i < ring_size; i++) {570570+ p->des01.etx.own = 0;571571+ if (i == ring_size - 1)572572+ p->des01.etx.end_ring = 1;573573+ p++;574574+ }575575+576576+ return;577577+}578578+579579+static int gmac_get_tx_owner(struct dma_desc *p)580580+{581581+ return p->des01.etx.own;582582+}583583+584584+static int gmac_get_rx_owner(struct dma_desc 
*p)585585+{586586+ return p->des01.erx.own;587587+}588588+589589+static void gmac_set_tx_owner(struct dma_desc *p)590590+{591591+ p->des01.etx.own = 1;592592+}593593+594594+static void gmac_set_rx_owner(struct dma_desc *p)595595+{596596+ p->des01.erx.own = 1;597597+}598598+599599+static int gmac_get_tx_ls(struct dma_desc *p)600600+{601601+ return p->des01.etx.last_segment;602602+}603603+604604+static void gmac_release_tx_desc(struct dma_desc *p)605605+{606606+ int ter = p->des01.etx.end_ring;607607+608608+ memset(p, 0, sizeof(struct dma_desc));609609+ p->des01.etx.end_ring = ter;610610+611611+ return;612612+}613613+614614+static void gmac_prepare_tx_desc(struct dma_desc *p, int is_fs, int len,615615+ int csum_flag)616616+{617617+ p->des01.etx.first_segment = is_fs;618618+ if (unlikely(len > BUF_SIZE_4KiB)) {619619+ p->des01.etx.buffer1_size = BUF_SIZE_4KiB;620620+ p->des01.etx.buffer2_size = len - BUF_SIZE_4KiB;621621+ } else {622622+ p->des01.etx.buffer1_size = len;623623+ }624624+ if (likely(csum_flag))625625+ p->des01.etx.checksum_insertion = cic_full;626626+}627627+628628+static void gmac_clear_tx_ic(struct dma_desc *p)629629+{630630+ p->des01.etx.interrupt = 0;631631+}632632+633633+static void gmac_close_tx_desc(struct dma_desc *p)634634+{635635+ p->des01.etx.last_segment = 1;636636+ p->des01.etx.interrupt = 1;637637+}638638+639639+static int gmac_get_rx_frame_len(struct dma_desc *p)640640+{641641+ return p->des01.erx.frame_length;642642+}643643+644644+struct stmmac_ops gmac_driver = {645645+ .core_init = gmac_core_init,646646+ .dump_mac_regs = gmac_dump_regs,647647+ .dma_init = gmac_dma_init,648648+ .dump_dma_regs = gmac_dump_dma_regs,649649+ .dma_mode = gmac_dma_operation_mode,650650+ .dma_diagnostic_fr = gmac_dma_diagnostic_fr,651651+ .tx_status = gmac_get_tx_frame_status,652652+ .rx_status = gmac_get_rx_frame_status,653653+ .get_tx_len = gmac_get_tx_len,654654+ .set_filter = gmac_set_filter,655655+ .flow_ctrl = gmac_flow_ctrl,656656+ .pmt = 
gmac_pmt,657657+ .init_rx_desc = gmac_init_rx_desc,658658+ .init_tx_desc = gmac_init_tx_desc,659659+ .get_tx_owner = gmac_get_tx_owner,660660+ .get_rx_owner = gmac_get_rx_owner,661661+ .release_tx_desc = gmac_release_tx_desc,662662+ .prepare_tx_desc = gmac_prepare_tx_desc,663663+ .clear_tx_ic = gmac_clear_tx_ic,664664+ .close_tx_desc = gmac_close_tx_desc,665665+ .get_tx_ls = gmac_get_tx_ls,666666+ .set_tx_owner = gmac_set_tx_owner,667667+ .set_rx_owner = gmac_set_rx_owner,668668+ .get_rx_frame_len = gmac_get_rx_frame_len,669669+ .host_irq_status = gmac_irq_status,670670+ .set_umac_addr = gmac_set_umac_addr,671671+ .get_umac_addr = gmac_get_umac_addr,672672+};673673+674674+struct mac_device_info *gmac_setup(unsigned long ioaddr)675675+{676676+ struct mac_device_info *mac;677677+ u32 uid = readl(ioaddr + GMAC_VERSION);678678+679679+ pr_info("\tGMAC - user ID: 0x%x, Synopsys ID: 0x%x\n",680680+ ((uid & 0x0000ff00) >> 8), (uid & 0x000000ff));681681+682682+ mac = kzalloc(sizeof(const struct mac_device_info), GFP_KERNEL);683683+684684+ mac->ops = &gmac_driver;685685+ mac->hw.pmt = PMT_SUPPORTED;686686+ mac->hw.link.port = GMAC_CONTROL_PS;687687+ mac->hw.link.duplex = GMAC_CONTROL_DM;688688+ mac->hw.link.speed = GMAC_CONTROL_FES;689689+ mac->hw.mii.addr = GMAC_MII_ADDR;690690+ mac->hw.mii.data = GMAC_MII_DATA;691691+692692+ return mac;693693+}
+204
drivers/net/stmmac/gmac.h
···11+/*******************************************************************************22+ Copyright (C) 2007-2009 STMicroelectronics Ltd33+44+ This program is free software; you can redistribute it and/or modify it55+ under the terms and conditions of the GNU General Public License,66+ version 2, as published by the Free Software Foundation.77+88+ This program is distributed in the hope it will be useful, but WITHOUT99+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or1010+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for1111+ more details.1212+1313+ You should have received a copy of the GNU General Public License along with1414+ this program; if not, write to the Free Software Foundation, Inc.,1515+ 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.1616+1717+ The full GNU General Public License is included in this distribution in1818+ the file called "COPYING".1919+2020+ Author: Giuseppe Cavallaro <peppe.cavallaro@st.com>2121+*******************************************************************************/2222+2323+#define GMAC_CONTROL 0x00000000 /* Configuration */2424+#define GMAC_FRAME_FILTER 0x00000004 /* Frame Filter */2525+#define GMAC_HASH_HIGH 0x00000008 /* Multicast Hash Table High */2626+#define GMAC_HASH_LOW 0x0000000c /* Multicast Hash Table Low */2727+#define GMAC_MII_ADDR 0x00000010 /* MII Address */2828+#define GMAC_MII_DATA 0x00000014 /* MII Data */2929+#define GMAC_FLOW_CTRL 0x00000018 /* Flow Control */3030+#define GMAC_VLAN_TAG 0x0000001c /* VLAN Tag */3131+#define GMAC_VERSION 0x00000020 /* GMAC CORE Version */3232+#define GMAC_WAKEUP_FILTER 0x00000028 /* Wake-up Frame Filter */3333+3434+#define GMAC_INT_STATUS 0x00000038 /* interrupt status register */3535+enum gmac_irq_status {3636+ time_stamp_irq = 0x0200,3737+ mmc_rx_csum_offload_irq = 0x0080,3838+ mmc_tx_irq = 0x0040,3939+ mmc_rx_irq = 0x0020,4040+ mmc_irq = 0x0010,4141+ pmt_irq = 0x0008,4242+ pcs_ane_irq = 0x0004,4343+ pcs_link_irq = 
0x0002,4444+ rgmii_irq = 0x0001,4545+};4646+#define GMAC_INT_MASK 0x0000003c /* interrupt mask register */4747+4848+/* PMT Control and Status */4949+#define GMAC_PMT 0x0000002c5050+enum power_event {5151+ pointer_reset = 0x80000000,5252+ global_unicast = 0x00000200,5353+ wake_up_rx_frame = 0x00000040,5454+ magic_frame = 0x00000020,5555+ wake_up_frame_en = 0x00000004,5656+ magic_pkt_en = 0x00000002,5757+ power_down = 0x00000001,5858+};5959+6060+/* GMAC HW ADDR regs */6161+#define GMAC_ADDR_HIGH(reg) (0x00000040+(reg * 8))6262+#define GMAC_ADDR_LOW(reg) (0x00000044+(reg * 8))6363+#define GMAC_MAX_UNICAST_ADDRESSES 166464+6565+#define GMAC_AN_CTRL 0x000000c0 /* AN control */6666+#define GMAC_AN_STATUS 0x000000c4 /* AN status */6767+#define GMAC_ANE_ADV 0x000000c8 /* Auto-Neg. Advertisement */6868+#define GMAC_ANE_LINK 0x000000cc /* Auto-Neg. link partner ability */6969+#define GMAC_ANE_EXP 0x000000d0 /* ANE expansion */7070+#define GMAC_TBI 0x000000d4 /* TBI extend status */7171+#define GMAC_GMII_STATUS 0x000000d8 /* S/R-GMII status */7272+7373+/* GMAC Configuration defines */7474+#define GMAC_CONTROL_TC 0x01000000 /* Transmit Conf. 
in RGMII/SGMII */7575+#define GMAC_CONTROL_WD 0x00800000 /* Disable Watchdog on receive */7676+#define GMAC_CONTROL_JD 0x00400000 /* Jabber disable */7777+#define GMAC_CONTROL_BE 0x00200000 /* Frame Burst Enable */7878+#define GMAC_CONTROL_JE 0x00100000 /* Jumbo frame */7979+enum inter_frame_gap {8080+ GMAC_CONTROL_IFG_88 = 0x00040000,8181+ GMAC_CONTROL_IFG_80 = 0x00020000,8282+ GMAC_CONTROL_IFG_40 = 0x000e0000,8383+};8484+#define GMAC_CONTROL_DCRS 0x00010000 /* Disable carrier sense during tx */8585+#define GMAC_CONTROL_PS 0x00008000 /* Port Select 0:GMII 1:MII */8686+#define GMAC_CONTROL_FES 0x00004000 /* Speed 0:10 1:100 */8787+#define GMAC_CONTROL_DO 0x00002000 /* Disable Rx Own */8888+#define GMAC_CONTROL_LM 0x00001000 /* Loop-back mode */8989+#define GMAC_CONTROL_DM 0x00000800 /* Duplex Mode */9090+#define GMAC_CONTROL_IPC 0x00000400 /* Checksum Offload */9191+#define GMAC_CONTROL_DR 0x00000200 /* Disable Retry */9292+#define GMAC_CONTROL_LUD 0x00000100 /* Link up/down */9393+#define GMAC_CONTROL_ACS 0x00000080 /* Automatic Pad Stripping */9494+#define GMAC_CONTROL_DC 0x00000010 /* Deferral Check */9595+#define GMAC_CONTROL_TE 0x00000008 /* Transmitter Enable */9696+#define GMAC_CONTROL_RE 0x00000004 /* Receiver Enable */9797+9898+#define GMAC_CORE_INIT (GMAC_CONTROL_JD | GMAC_CONTROL_PS | GMAC_CONTROL_ACS | \9999+ GMAC_CONTROL_IPC | GMAC_CONTROL_JE | GMAC_CONTROL_BE)100100+101101+/* GMAC Frame Filter defines */102102+#define GMAC_FRAME_FILTER_PR 0x00000001 /* Promiscuous Mode */103103+#define GMAC_FRAME_FILTER_HUC 0x00000002 /* Hash Unicast */104104+#define GMAC_FRAME_FILTER_HMC 0x00000004 /* Hash Multicast */105105+#define GMAC_FRAME_FILTER_DAIF 0x00000008 /* DA Inverse Filtering */106106+#define GMAC_FRAME_FILTER_PM 0x00000010 /* Pass all multicast */107107+#define GMAC_FRAME_FILTER_DBF 0x00000020 /* Disable Broadcast frames */108108+#define GMAC_FRAME_FILTER_SAIF 0x00000100 /* Inverse Filtering */109109+#define GMAC_FRAME_FILTER_SAF 0x00000200 /* Source 
Address Filter */110110+#define GMAC_FRAME_FILTER_HPF 0x00000400 /* Hash or perfect Filter */111111+#define GMAC_FRAME_FILTER_RA 0x80000000 /* Receive all mode */112112+/* GMII ADDR defines */113113+#define GMAC_MII_ADDR_WRITE 0x00000002 /* MII Write */114114+#define GMAC_MII_ADDR_BUSY 0x00000001 /* MII Busy */115115+/* GMAC FLOW CTRL defines */116116+#define GMAC_FLOW_CTRL_PT_MASK 0xffff0000 /* Pause Time Mask */117117+#define GMAC_FLOW_CTRL_PT_SHIFT 16118118+#define GMAC_FLOW_CTRL_RFE 0x00000004 /* Rx Flow Control Enable */119119+#define GMAC_FLOW_CTRL_TFE 0x00000002 /* Tx Flow Control Enable */120120+#define GMAC_FLOW_CTRL_FCB_BPA 0x00000001 /* Flow Control Busy ... */121121+122122+/*--- DMA BLOCK defines ---*/123123+/* DMA Bus Mode register defines */124124+#define DMA_BUS_MODE_SFT_RESET 0x00000001 /* Software Reset */125125+#define DMA_BUS_MODE_DA 0x00000002 /* Arbitration scheme */126126+#define DMA_BUS_MODE_DSL_MASK 0x0000007c /* Descriptor Skip Length */127127+#define DMA_BUS_MODE_DSL_SHIFT 2 /* (in DWORDS) */128128+/* Programmable burst length (passed through platform)*/129129+#define DMA_BUS_MODE_PBL_MASK 0x00003f00 /* Programmable Burst Len */130130+#define DMA_BUS_MODE_PBL_SHIFT 8131131+132132+enum rx_tx_priority_ratio {133133+ double_ratio = 0x00004000, /*2:1 */134134+ triple_ratio = 0x00008000, /*3:1 */135135+ quadruple_ratio = 0x0000c000, /*4:1 */136136+};137137+138138+#define DMA_BUS_MODE_FB 0x00010000 /* Fixed burst */139139+#define DMA_BUS_MODE_RPBL_MASK 0x003e0000 /* Rx-Programmable Burst Len */140140+#define DMA_BUS_MODE_RPBL_SHIFT 17141141+#define DMA_BUS_MODE_USP 0x00800000142142+#define DMA_BUS_MODE_4PBL 0x01000000143143+#define DMA_BUS_MODE_AAL 0x02000000144144+145145+/* DMA CRS Control and Status Register Mapping */146146+#define DMA_HOST_TX_DESC 0x00001048 /* Current Host Tx descriptor */147147+#define DMA_HOST_RX_DESC 0x0000104c /* Current Host Rx descriptor */148148+/* DMA Bus Mode register defines */149149+#define DMA_BUS_PR_RATIO_MASK 
0x0000c000 /* Rx/Tx priority ratio */150150+#define DMA_BUS_PR_RATIO_SHIFT 14151151+#define DMA_BUS_FB 0x00010000 /* Fixed Burst */152152+153153+/* DMA operation mode defines (start/stop tx/rx are placed in common header)*/154154+#define DMA_CONTROL_DT 0x04000000 /* Disable Drop TCP/IP csum error */155155+#define DMA_CONTROL_RSF 0x02000000 /* Receive Store and Forward */156156+#define DMA_CONTROL_DFF 0x01000000 /* Disable flushing */157157+/* Threshold for Activating the FC */158158+enum rfa {159159+ act_full_minus_1 = 0x00800000,160160+ act_full_minus_2 = 0x00800200,161161+ act_full_minus_3 = 0x00800400,162162+ act_full_minus_4 = 0x00800600,163163+};164164+/* Threshold for Deactivating the FC */165165+enum rfd {166166+ deac_full_minus_1 = 0x00400000,167167+ deac_full_minus_2 = 0x00400800,168168+ deac_full_minus_3 = 0x00401000,169169+ deac_full_minus_4 = 0x00401800,170170+};171171+#define DMA_CONTROL_TSF 0x00200000 /* Transmit Store and Forward */172172+#define DMA_CONTROL_FTF 0x00100000 /* Flush transmit FIFO */173173+174174+enum ttc_control {175175+ DMA_CONTROL_TTC_64 = 0x00000000,176176+ DMA_CONTROL_TTC_128 = 0x00004000,177177+ DMA_CONTROL_TTC_192 = 0x00008000,178178+ DMA_CONTROL_TTC_256 = 0x0000c000,179179+ DMA_CONTROL_TTC_40 = 0x00010000,180180+ DMA_CONTROL_TTC_32 = 0x00014000,181181+ DMA_CONTROL_TTC_24 = 0x00018000,182182+ DMA_CONTROL_TTC_16 = 0x0001c000,183183+};184184+#define DMA_CONTROL_TC_TX_MASK 0xfffe3fff185185+186186+#define DMA_CONTROL_EFC 0x00000100187187+#define DMA_CONTROL_FEF 0x00000080188188+#define DMA_CONTROL_FUF 0x00000040189189+190190+enum rtc_control {191191+ DMA_CONTROL_RTC_64 = 0x00000000,192192+ DMA_CONTROL_RTC_32 = 0x00000008,193193+ DMA_CONTROL_RTC_96 = 0x00000010,194194+ DMA_CONTROL_RTC_128 = 0x00000018,195195+};196196+#define DMA_CONTROL_TC_RX_MASK 0xffffffe7197197+198198+#define DMA_CONTROL_OSF 0x00000004 /* Operate on second frame */199199+200200+/* MMC registers offset */201201+#define GMAC_MMC_CTRL 0x100202202+#define 
GMAC_MMC_RX_INTR 0x104203203+#define GMAC_MMC_TX_INTR 0x108204204+#define GMAC_MMC_RX_CSUM_OFFLOAD 0x208
+517
drivers/net/stmmac/mac100.c
···11+/*******************************************************************************22+ This is the driver for the MAC 10/100 on-chip Ethernet controller33+ currently tested on all the ST boards based on STb7109 and stx7200 SoCs.44+55+ DWC Ether MAC 10/100 Universal version 4.0 has been used for developing66+ this code.77+88+ Copyright (C) 2007-2009 STMicroelectronics Ltd99+1010+ This program is free software; you can redistribute it and/or modify it1111+ under the terms and conditions of the GNU General Public License,1212+ version 2, as published by the Free Software Foundation.1313+1414+ This program is distributed in the hope it will be useful, but WITHOUT1515+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or1616+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for1717+ more details.1818+1919+ You should have received a copy of the GNU General Public License along with2020+ this program; if not, write to the Free Software Foundation, Inc.,2121+ 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.2222+2323+ The full GNU General Public License is included in this distribution in2424+ the file called "COPYING".2525+2626+ Author: Giuseppe Cavallaro <peppe.cavallaro@st.com>2727+*******************************************************************************/2828+2929+#include <linux/netdevice.h>3030+#include <linux/crc32.h>3131+#include <linux/mii.h>3232+#include <linux/phy.h>3333+3434+#include "common.h"3535+#include "mac100.h"3636+3737+#undef MAC100_DEBUG3838+/*#define MAC100_DEBUG*/3939+#ifdef MAC100_DEBUG4040+#define DBG(fmt, args...) printk(fmt, ## args)4141+#else4242+#define DBG(fmt, args...) 
do { } while (0)4343+#endif4444+4545+static void mac100_core_init(unsigned long ioaddr)4646+{4747+ u32 value = readl(ioaddr + MAC_CONTROL);4848+4949+ writel((value | MAC_CORE_INIT), ioaddr + MAC_CONTROL);5050+5151+#ifdef STMMAC_VLAN_TAG_USED5252+ writel(ETH_P_8021Q, ioaddr + MAC_VLAN1);5353+#endif5454+ return;5555+}5656+5757+static void mac100_dump_mac_regs(unsigned long ioaddr)5858+{5959+ pr_info("\t----------------------------------------------\n"6060+ "\t MAC100 CSR (base addr = 0x%8x)\n"6161+ "\t----------------------------------------------\n",6262+ (unsigned int)ioaddr);6363+ pr_info("\tcontrol reg (offset 0x%x): 0x%08x\n", MAC_CONTROL,6464+ readl(ioaddr + MAC_CONTROL));6565+ pr_info("\taddr HI (offset 0x%x): 0x%08x\n ", MAC_ADDR_HIGH,6666+ readl(ioaddr + MAC_ADDR_HIGH));6767+ pr_info("\taddr LO (offset 0x%x): 0x%08x\n", MAC_ADDR_LOW,6868+ readl(ioaddr + MAC_ADDR_LOW));6969+ pr_info("\tmulticast hash HI (offset 0x%x): 0x%08x\n",7070+ MAC_HASH_HIGH, readl(ioaddr + MAC_HASH_HIGH));7171+ pr_info("\tmulticast hash LO (offset 0x%x): 0x%08x\n",7272+ MAC_HASH_LOW, readl(ioaddr + MAC_HASH_LOW));7373+ pr_info("\tflow control (offset 0x%x): 0x%08x\n",7474+ MAC_FLOW_CTRL, readl(ioaddr + MAC_FLOW_CTRL));7575+ pr_info("\tVLAN1 tag (offset 0x%x): 0x%08x\n", MAC_VLAN1,7676+ readl(ioaddr + MAC_VLAN1));7777+ pr_info("\tVLAN2 tag (offset 0x%x): 0x%08x\n", MAC_VLAN2,7878+ readl(ioaddr + MAC_VLAN2));7979+ pr_info("\n\tMAC management counter registers\n");8080+ pr_info("\t MMC ctrl (offset 0x%x): 0x%08x\n",8181+ MMC_CONTROL, readl(ioaddr + MMC_CONTROL));8282+ pr_info("\t MMC High Interrupt (offset 0x%x): 0x%08x\n",8383+ MMC_HIGH_INTR, readl(ioaddr + MMC_HIGH_INTR));8484+ pr_info("\t MMC Low Interrupt (offset 0x%x): 0x%08x\n",8585+ MMC_LOW_INTR, readl(ioaddr + MMC_LOW_INTR));8686+ pr_info("\t MMC High Interrupt Mask (offset 0x%x): 0x%08x\n",8787+ MMC_HIGH_INTR_MASK, readl(ioaddr + MMC_HIGH_INTR_MASK));8888+ pr_info("\t MMC Low Interrupt Mask (offset 0x%x): 0x%08x\n",8989+ 
+	    MMC_LOW_INTR_MASK, readl(ioaddr + MMC_LOW_INTR_MASK));
+	return;
+}
+
+static int mac100_dma_init(unsigned long ioaddr, int pbl, u32 dma_tx,
+			   u32 dma_rx)
+{
+	u32 value = readl(ioaddr + DMA_BUS_MODE);
+	/* DMA SW reset */
+	value |= DMA_BUS_MODE_SFT_RESET;
+	writel(value, ioaddr + DMA_BUS_MODE);
+	do {} while ((readl(ioaddr + DMA_BUS_MODE) & DMA_BUS_MODE_SFT_RESET));
+
+	/* Enable Application Access by writing to DMA CSR0 */
+	writel(DMA_BUS_MODE_DEFAULT | (pbl << DMA_BUS_MODE_PBL_SHIFT),
+	       ioaddr + DMA_BUS_MODE);
+
+	/* Mask interrupts by writing to CSR7 */
+	writel(DMA_INTR_DEFAULT_MASK, ioaddr + DMA_INTR_ENA);
+
+	/* The base address of the RX/TX descriptor lists must be written into
+	 * DMA CSR3 and CSR4, respectively. */
+	writel(dma_tx, ioaddr + DMA_TX_BASE_ADDR);
+	writel(dma_rx, ioaddr + DMA_RCV_BASE_ADDR);
+
+	return 0;
+}
+
+/* Store and Forward capability is not used at all.
+ * The transmit threshold can be programmed by
+ * setting the TTC bits in the DMA control register. */
+static void mac100_dma_operation_mode(unsigned long ioaddr, int txmode,
+				      int rxmode)
+{
+	u32 csr6 = readl(ioaddr + DMA_CONTROL);
+
+	if (txmode <= 32)
+		csr6 |= DMA_CONTROL_TTC_32;
+	else if (txmode <= 64)
+		csr6 |= DMA_CONTROL_TTC_64;
+	else
+		csr6 |= DMA_CONTROL_TTC_128;
+
+	writel(csr6, ioaddr + DMA_CONTROL);
+
+	return;
+}
+
+static void mac100_dump_dma_regs(unsigned long ioaddr)
+{
+	int i;
+
+	DBG(KERN_DEBUG "MAC100 DMA CSR\n");
+	for (i = 0; i < 9; i++)
+		pr_debug("\t CSR%d (offset 0x%x): 0x%08x\n", i,
+			 (DMA_BUS_MODE + i * 4),
+			 readl(ioaddr + DMA_BUS_MODE + i * 4));
+	DBG(KERN_DEBUG "\t CSR20 (offset 0x%x): 0x%08x\n",
+	    DMA_CUR_TX_BUF_ADDR, readl(ioaddr + DMA_CUR_TX_BUF_ADDR));
+	DBG(KERN_DEBUG "\t CSR21 (offset 0x%x): 0x%08x\n",
+	    DMA_CUR_RX_BUF_ADDR, readl(ioaddr + DMA_CUR_RX_BUF_ADDR));
+	return;
+}
+
+/* The DMA controller has two counters that track the number of
+ * receive missed frames. */
+static void mac100_dma_diagnostic_fr(void *data, struct stmmac_extra_stats *x,
+				     unsigned long ioaddr)
+{
+	struct net_device_stats *stats = (struct net_device_stats *)data;
+	u32 csr8 = readl(ioaddr + DMA_MISSED_FRAME_CTR);
+
+	if (unlikely(csr8)) {
+		if (csr8 & DMA_MISSED_FRAME_OVE) {
+			stats->rx_over_errors += 0x800;
+			x->rx_overflow_cntr += 0x800;
+		} else {
+			unsigned int ove_cntr;
+			ove_cntr = ((csr8 & DMA_MISSED_FRAME_OVE_CNTR) >> 17);
+			stats->rx_over_errors += ove_cntr;
+			x->rx_overflow_cntr += ove_cntr;
+		}
+
+		if (csr8 & DMA_MISSED_FRAME_OVE_M) {
+			stats->rx_missed_errors += 0xffff;
+			x->rx_missed_cntr += 0xffff;
+		} else {
+			unsigned int miss_f = (csr8 & DMA_MISSED_FRAME_M_CNTR);
+			stats->rx_missed_errors += miss_f;
+			x->rx_missed_cntr += miss_f;
+		}
+	}
+	return;
+}
+
+static int mac100_get_tx_frame_status(void *data, struct stmmac_extra_stats *x,
+				      struct dma_desc *p, unsigned long ioaddr)
+{
+	int ret = 0;
+	struct net_device_stats *stats = (struct net_device_stats *)data;
+
+	if (unlikely(p->des01.tx.error_summary)) {
+		if (unlikely(p->des01.tx.underflow_error)) {
+			x->tx_underflow++;
+			stats->tx_fifo_errors++;
+		}
+		if (unlikely(p->des01.tx.no_carrier)) {
+			x->tx_carrier++;
+			stats->tx_carrier_errors++;
+		}
+		if (unlikely(p->des01.tx.loss_carrier)) {
+			x->tx_losscarrier++;
+			stats->tx_carrier_errors++;
+		}
+		if (unlikely((p->des01.tx.excessive_deferral) ||
+			     (p->des01.tx.excessive_collisions) ||
+			     (p->des01.tx.late_collision)))
+			stats->collisions += p->des01.tx.collision_count;
+		ret = -1;
+	}
+	if (unlikely(p->des01.tx.heartbeat_fail)) {
+		x->tx_heartbeat++;
+		stats->tx_heartbeat_errors++;
+		ret = -1;
+	}
+	if (unlikely(p->des01.tx.deferred))
+		x->tx_deferred++;
+
+	return ret;
+}
+
+static int mac100_get_tx_len(struct dma_desc *p)
+{
+	return p->des01.tx.buffer1_size;
+}
+
+/* This function verifies if each incoming frame has some errors
+ * and, if required, updates the multicast statistics.
+ * In case of success, it returns csum_none because the device
+ * is not able to compute the csum in HW. */
+static int mac100_get_rx_frame_status(void *data, struct stmmac_extra_stats *x,
+				      struct dma_desc *p)
+{
+	int ret = csum_none;
+	struct net_device_stats *stats = (struct net_device_stats *)data;
+
+	if (unlikely(p->des01.rx.last_descriptor == 0)) {
+		pr_warning("mac100 Error: Oversized Ethernet "
+			   "frame spanned multiple buffers\n");
+		stats->rx_length_errors++;
+		return discard_frame;
+	}
+
+	if (unlikely(p->des01.rx.error_summary)) {
+		if (unlikely(p->des01.rx.descriptor_error))
+			x->rx_desc++;
+		if (unlikely(p->des01.rx.partial_frame_error))
+			x->rx_partial++;
+		if (unlikely(p->des01.rx.run_frame))
+			x->rx_runt++;
+		if (unlikely(p->des01.rx.frame_too_long))
+			x->rx_toolong++;
+		if (unlikely(p->des01.rx.collision)) {
+			x->rx_collision++;
+			stats->collisions++;
+		}
+		if (unlikely(p->des01.rx.crc_error)) {
+			x->rx_crc++;
+			stats->rx_crc_errors++;
+		}
+		ret = discard_frame;
+	}
+	if (unlikely(p->des01.rx.dribbling))
+		ret = discard_frame;
+
+	if (unlikely(p->des01.rx.length_error)) {
+		x->rx_lenght++;
+		ret = discard_frame;
+	}
+	if (unlikely(p->des01.rx.mii_error)) {
+		x->rx_mii++;
+		ret = discard_frame;
+	}
+	if (p->des01.rx.multicast_frame) {
+		x->rx_multicast++;
+		stats->multicast++;
+	}
+	return ret;
+}
+
+static void mac100_irq_status(unsigned long ioaddr)
+{
+	return;
+}
+
+static void mac100_set_umac_addr(unsigned long ioaddr, unsigned char *addr,
+				 unsigned int reg_n)
+{
+	stmmac_set_mac_addr(ioaddr, addr, MAC_ADDR_HIGH, MAC_ADDR_LOW);
+}
+
+static void mac100_get_umac_addr(unsigned long ioaddr, unsigned char *addr,
+				 unsigned int reg_n)
+{
+	stmmac_get_mac_addr(ioaddr, addr, MAC_ADDR_HIGH, MAC_ADDR_LOW);
+}
+
+static void mac100_set_filter(struct net_device *dev)
+{
+	unsigned long ioaddr = dev->base_addr;
+	u32 value = readl(ioaddr + MAC_CONTROL);
+
+	if (dev->flags & IFF_PROMISC) {
+		value |= MAC_CONTROL_PR;
+		value &= ~(MAC_CONTROL_PM | MAC_CONTROL_IF | MAC_CONTROL_HO |
+			   MAC_CONTROL_HP);
+	} else if ((dev->mc_count > HASH_TABLE_SIZE)
+		   || (dev->flags & IFF_ALLMULTI)) {
+		value |= MAC_CONTROL_PM;
+		value &= ~(MAC_CONTROL_PR | MAC_CONTROL_IF | MAC_CONTROL_HO);
+		writel(0xffffffff, ioaddr + MAC_HASH_HIGH);
+		writel(0xffffffff, ioaddr + MAC_HASH_LOW);
+	} else if (dev->mc_count == 0) {	/* no multicast */
+		value &= ~(MAC_CONTROL_PM | MAC_CONTROL_PR | MAC_CONTROL_IF |
+			   MAC_CONTROL_HO | MAC_CONTROL_HP);
+	} else {
+		int i;
+		u32 mc_filter[2];
+		struct dev_mc_list *mclist;
+
+		/* Perfect filter mode for physical address and Hash
+		   filter for multicast */
+		value |= MAC_CONTROL_HP;
+		value &= ~(MAC_CONTROL_PM | MAC_CONTROL_PR | MAC_CONTROL_IF
+			   | MAC_CONTROL_HO);
+
+		memset(mc_filter, 0, sizeof(mc_filter));
+		for (i = 0, mclist = dev->mc_list;
+		     mclist && i < dev->mc_count; i++, mclist = mclist->next) {
+			/* The upper 6 bits of the calculated CRC are used to
+			 * index the contents of the hash table */
+			int bit_nr =
+			    ether_crc(ETH_ALEN, mclist->dmi_addr) >> 26;
+			/* The most significant bit determines the register to
+			 * use (H/L) while the other 5 bits determine the bit
+			 * within the register. */
+			mc_filter[bit_nr >> 5] |= 1 << (bit_nr & 31);
+		}
+		writel(mc_filter[0], ioaddr + MAC_HASH_LOW);
+		writel(mc_filter[1], ioaddr + MAC_HASH_HIGH);
+	}
+
+	writel(value, ioaddr + MAC_CONTROL);
+
+	DBG(KERN_INFO "%s: CTRL reg: 0x%08x Hash regs: "
+	    "HI 0x%08x, LO 0x%08x\n",
+	    __func__, readl(ioaddr + MAC_CONTROL),
+	    readl(ioaddr + MAC_HASH_HIGH), readl(ioaddr + MAC_HASH_LOW));
+	return;
+}
+
+static void mac100_flow_ctrl(unsigned long ioaddr, unsigned int duplex,
+			     unsigned int fc, unsigned int pause_time)
+{
+	unsigned int flow = MAC_FLOW_CTRL_ENABLE;
+
+	if (duplex)
+		flow |= (pause_time << MAC_FLOW_CTRL_PT_SHIFT);
+	writel(flow, ioaddr + MAC_FLOW_CTRL);
+
+	return;
+}
+
+/* No PMT module supported in our SoC for the Ethernet Controller. */
+static void mac100_pmt(unsigned long ioaddr, unsigned long mode)
+{
+	return;
+}
+
+static void mac100_init_rx_desc(struct dma_desc *p, unsigned int ring_size,
+				int disable_rx_ic)
+{
+	int i;
+	for (i = 0; i < ring_size; i++) {
+		p->des01.rx.own = 1;
+		p->des01.rx.buffer1_size = BUF_SIZE_2KiB - 1;
+		if (i == ring_size - 1)
+			p->des01.rx.end_ring = 1;
+		if (disable_rx_ic)
+			p->des01.rx.disable_ic = 1;
+		p++;
+	}
+	return;
+}
+
+static void mac100_init_tx_desc(struct dma_desc *p, unsigned int ring_size)
+{
+	int i;
+	for (i = 0; i < ring_size; i++) {
+		p->des01.tx.own = 0;
+		if (i == ring_size - 1)
+			p->des01.tx.end_ring = 1;
+		p++;
+	}
+	return;
+}
+
+static int mac100_get_tx_owner(struct dma_desc *p)
+{
+	return p->des01.tx.own;
+}
+
+static int mac100_get_rx_owner(struct dma_desc *p)
+{
+	return p->des01.rx.own;
+}
+
+static void mac100_set_tx_owner(struct dma_desc *p)
+{
+	p->des01.tx.own = 1;
+}
+
+static void mac100_set_rx_owner(struct dma_desc *p)
+{
+	p->des01.rx.own = 1;
+}
+
+static int mac100_get_tx_ls(struct dma_desc *p)
+{
+	return p->des01.tx.last_segment;
+}
+
+static void mac100_release_tx_desc(struct dma_desc *p)
+{
+	int ter = p->des01.tx.end_ring;
+
+	/* clean fields used within the xmit */
+	p->des01.tx.first_segment = 0;
+	p->des01.tx.last_segment = 0;
+	p->des01.tx.buffer1_size = 0;
+
+	/* clean status reported */
+	p->des01.tx.error_summary = 0;
+	p->des01.tx.underflow_error = 0;
+	p->des01.tx.no_carrier = 0;
+	p->des01.tx.loss_carrier = 0;
+	p->des01.tx.excessive_deferral = 0;
+	p->des01.tx.excessive_collisions = 0;
+	p->des01.tx.late_collision = 0;
+	p->des01.tx.heartbeat_fail = 0;
+	p->des01.tx.deferred = 0;
+
+	/* set termination field */
+	p->des01.tx.end_ring = ter;
+
+	return;
+}
+
+static void mac100_prepare_tx_desc(struct dma_desc *p, int is_fs, int len,
+				   int csum_flag)
+{
+	p->des01.tx.first_segment = is_fs;
+	p->des01.tx.buffer1_size = len;
+}
+
+static void mac100_clear_tx_ic(struct dma_desc *p)
+{
+	p->des01.tx.interrupt = 0;
+}
+
+static void mac100_close_tx_desc(struct dma_desc *p)
+{
+	p->des01.tx.last_segment = 1;
+	p->des01.tx.interrupt = 1;
+}
+
+static int mac100_get_rx_frame_len(struct dma_desc *p)
+{
+	return p->des01.rx.frame_length;
+}
+
+struct stmmac_ops mac100_driver = {
+	.core_init = mac100_core_init,
+	.dump_mac_regs = mac100_dump_mac_regs,
+	.dma_init = mac100_dma_init,
+	.dump_dma_regs = mac100_dump_dma_regs,
+	.dma_mode = mac100_dma_operation_mode,
+	.dma_diagnostic_fr = mac100_dma_diagnostic_fr,
+	.tx_status = mac100_get_tx_frame_status,
+	.rx_status = mac100_get_rx_frame_status,
+	.get_tx_len = mac100_get_tx_len,
+	.set_filter = mac100_set_filter,
+	.flow_ctrl = mac100_flow_ctrl,
+	.pmt = mac100_pmt,
+	.init_rx_desc = mac100_init_rx_desc,
+	.init_tx_desc = mac100_init_tx_desc,
+	.get_tx_owner = mac100_get_tx_owner,
+	.get_rx_owner = mac100_get_rx_owner,
+	.release_tx_desc = mac100_release_tx_desc,
+	.prepare_tx_desc = mac100_prepare_tx_desc,
+	.clear_tx_ic = mac100_clear_tx_ic,
+	.close_tx_desc = mac100_close_tx_desc,
+	.get_tx_ls = mac100_get_tx_ls,
+	.set_tx_owner = mac100_set_tx_owner,
+	.set_rx_owner = mac100_set_rx_owner,
+	.get_rx_frame_len = mac100_get_rx_frame_len,
+	.host_irq_status = mac100_irq_status,
+	.set_umac_addr = mac100_set_umac_addr,
+	.get_umac_addr = mac100_get_umac_addr,
+};
+
+struct mac_device_info *mac100_setup(unsigned long ioaddr)
+{
+	struct mac_device_info *mac;
+
+	mac = kzalloc(sizeof(const struct mac_device_info), GFP_KERNEL);
+	if (!mac)
+		return NULL;
+
+	pr_info("\tMAC 10/100\n");
+
+	mac->ops = &mac100_driver;
+	mac->hw.pmt = PMT_NOT_SUPPORTED;
+	mac->hw.link.port = MAC_CONTROL_PS;
+	mac->hw.link.duplex = MAC_CONTROL_F;
+	mac->hw.link.speed = 0;
+	mac->hw.mii.addr = MAC_MII_ADDR;
+	mac->hw.mii.data = MAC_MII_DATA;
+
+	return mac;
+}
+116
drivers/net/stmmac/mac100.h
···
+/*******************************************************************************
+  MAC 10/100 Header File
+
+  Copyright (C) 2007-2009  STMicroelectronics Ltd
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms and conditions of the GNU General Public License,
+  version 2, as published by the Free Software Foundation.
+
+  This program is distributed in the hope it will be useful, but WITHOUT
+  ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+  FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+  more details.
+
+  You should have received a copy of the GNU General Public License along with
+  this program; if not, write to the Free Software Foundation, Inc.,
+  51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+
+  The full GNU General Public License is included in this distribution in
+  the file called "COPYING".
+
+  Author: Giuseppe Cavallaro <peppe.cavallaro@st.com>
+*******************************************************************************/
+
+/*----------------------------------------------------------------------------
+ *				MAC BLOCK defines
+ *---------------------------------------------------------------------------*/
+/* MAC CSR offset */
+#define MAC_CONTROL		0x00000000	/* MAC Control */
+#define MAC_ADDR_HIGH		0x00000004	/* MAC Address High */
+#define MAC_ADDR_LOW		0x00000008	/* MAC Address Low */
+#define MAC_HASH_HIGH		0x0000000c	/* Multicast Hash Table High */
+#define MAC_HASH_LOW		0x00000010	/* Multicast Hash Table Low */
+#define MAC_MII_ADDR		0x00000014	/* MII Address */
+#define MAC_MII_DATA		0x00000018	/* MII Data */
+#define MAC_FLOW_CTRL		0x0000001c	/* Flow Control */
+#define MAC_VLAN1		0x00000020	/* VLAN1 Tag */
+#define MAC_VLAN2		0x00000024	/* VLAN2 Tag */
+
+/* MAC CTRL defines */
+#define MAC_CONTROL_RA		0x80000000	/* Receive All Mode */
+#define MAC_CONTROL_BLE		0x40000000	/* Endian Mode */
+#define MAC_CONTROL_HBD		0x10000000	/* Heartbeat Disable */
+#define MAC_CONTROL_PS		0x08000000	/* Port Select */
+#define MAC_CONTROL_DRO		0x00800000	/* Disable Receive Own */
+#define MAC_CONTROL_EXT_LOOPBACK 0x00400000	/* Reserved (ext loopback?) */
+#define MAC_CONTROL_OM		0x00200000	/* Loopback Operating Mode */
+#define MAC_CONTROL_F		0x00100000	/* Full Duplex Mode */
+#define MAC_CONTROL_PM		0x00080000	/* Pass All Multicast */
+#define MAC_CONTROL_PR		0x00040000	/* Promiscuous Mode */
+#define MAC_CONTROL_IF		0x00020000	/* Inverse Filtering */
+#define MAC_CONTROL_PB		0x00010000	/* Pass Bad Frames */
+#define MAC_CONTROL_HO		0x00008000	/* Hash Only Filtering Mode */
+#define MAC_CONTROL_HP		0x00002000	/* Hash/Perfect Filtering Mode */
+#define MAC_CONTROL_LCC		0x00001000	/* Late Collision Control */
+#define MAC_CONTROL_DBF		0x00000800	/* Disable Broadcast Frames */
+#define MAC_CONTROL_DRTY	0x00000400	/* Disable Retry */
+#define MAC_CONTROL_ASTP	0x00000100	/* Automatic Pad Stripping */
+#define MAC_CONTROL_BOLMT_10	0x00000000	/* Back Off Limit 10 */
+#define MAC_CONTROL_BOLMT_8	0x00000040	/* Back Off Limit 8 */
+#define MAC_CONTROL_BOLMT_4	0x00000080	/* Back Off Limit 4 */
+#define MAC_CONTROL_BOLMT_1	0x000000c0	/* Back Off Limit 1 */
+#define MAC_CONTROL_DC		0x00000020	/* Deferral Check */
+#define MAC_CONTROL_TE		0x00000008	/* Transmitter Enable */
+#define MAC_CONTROL_RE		0x00000004	/* Receiver Enable */
+
+#define MAC_CORE_INIT (MAC_CONTROL_HBD | MAC_CONTROL_ASTP)
+
+/* MAC FLOW CTRL defines */
+#define MAC_FLOW_CTRL_PT_MASK	0xffff0000	/* Pause Time Mask */
+#define MAC_FLOW_CTRL_PT_SHIFT	16
+#define MAC_FLOW_CTRL_PASS	0x00000004	/* Pass Control Frames */
+#define MAC_FLOW_CTRL_ENABLE	0x00000002	/* Flow Control Enable */
+#define MAC_FLOW_CTRL_PAUSE	0x00000001	/* Flow Control Busy ... */
+
+/* MII ADDR defines */
+#define MAC_MII_ADDR_WRITE	0x00000002	/* MII Write */
+#define MAC_MII_ADDR_BUSY	0x00000001	/* MII Busy */
+
+/*----------------------------------------------------------------------------
+ *				DMA BLOCK defines
+ *---------------------------------------------------------------------------*/
+
+/* DMA Bus Mode register defines */
+#define DMA_BUS_MODE_DBO	0x00100000	/* Descriptor Byte Ordering */
+#define DMA_BUS_MODE_BLE	0x00000080	/* Big Endian/Little Endian */
+#define DMA_BUS_MODE_PBL_MASK	0x00003f00	/* Programmable Burst Len */
+#define DMA_BUS_MODE_PBL_SHIFT	8
+#define DMA_BUS_MODE_DSL_MASK	0x0000007c	/* Descriptor Skip Length */
+#define DMA_BUS_MODE_DSL_SHIFT	2		/* (in DWORDS) */
+#define DMA_BUS_MODE_BAR_BUS	0x00000002	/* Bar-Bus Arbitration */
+#define DMA_BUS_MODE_SFT_RESET	0x00000001	/* Software Reset */
+#define DMA_BUS_MODE_DEFAULT	0x00000000
+
+/* DMA Control register defines */
+#define DMA_CONTROL_SF		0x00200000	/* Store And Forward */
+
+/* Transmit Threshold Control */
+enum ttc_control {
+	DMA_CONTROL_TTC_DEFAULT = 0x00000000,	/* Threshold is 32 DWORDS */
+	DMA_CONTROL_TTC_64 = 0x00004000,	/* Threshold is 64 DWORDS */
+	DMA_CONTROL_TTC_128 = 0x00008000,	/* Threshold is 128 DWORDS */
+	DMA_CONTROL_TTC_256 = 0x0000c000,	/* Threshold is 256 DWORDS */
+	DMA_CONTROL_TTC_18 = 0x00400000,	/* Threshold is 18 DWORDS */
+	DMA_CONTROL_TTC_24 = 0x00404000,	/* Threshold is 24 DWORDS */
+	DMA_CONTROL_TTC_32 = 0x00408000,	/* Threshold is 32 DWORDS */
+	DMA_CONTROL_TTC_40 = 0x0040c000,	/* Threshold is 40 DWORDS */
+	DMA_CONTROL_SE = 0x00000008,	/* Stop On Empty */
+	DMA_CONTROL_OSF = 0x00000004,	/* Operate On 2nd Frame */
+};
+
+/* STMAC110 DMA Missed Frame Counter register defines */
+#define DMA_MISSED_FRAME_OVE	0x10000000	/* FIFO Overflow */
+#define DMA_MISSED_FRAME_OVE_CNTR 0x0ffe0000	/* Overflow Frame Counter */
+#define DMA_MISSED_FRAME_OVE_M	0x00010000	/* Missed Frame Overflow */
+#define DMA_MISSED_FRAME_M_CNTR	0x0000ffff	/* Missed Frame Counter */
+98
drivers/net/stmmac/stmmac.h
···
+/*******************************************************************************
+  Copyright (C) 2007-2009  STMicroelectronics Ltd
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms and conditions of the GNU General Public License,
+  version 2, as published by the Free Software Foundation.
+
+  This program is distributed in the hope it will be useful, but WITHOUT
+  ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+  FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+  more details.
+
+  You should have received a copy of the GNU General Public License along with
+  this program; if not, write to the Free Software Foundation, Inc.,
+  51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+
+  The full GNU General Public License is included in this distribution in
+  the file called "COPYING".
+
+  Author: Giuseppe Cavallaro <peppe.cavallaro@st.com>
+*******************************************************************************/
+
+#define DRV_MODULE_VERSION	"Oct_09"
+
+#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)
+#define STMMAC_VLAN_TAG_USED
+#include <linux/if_vlan.h>
+#endif
+
+#include "common.h"
+#ifdef CONFIG_STMMAC_TIMER
+#include "stmmac_timer.h"
+#endif
+
+struct stmmac_priv {
+	/* Frequently used values are kept adjacent for cache effect */
+	struct dma_desc *dma_tx ____cacheline_aligned;
+	dma_addr_t dma_tx_phy;
+	struct sk_buff **tx_skbuff;
+	unsigned int cur_tx;
+	unsigned int dirty_tx;
+	unsigned int dma_tx_size;
+	int tx_coe;
+	int tx_coalesce;
+
+	struct dma_desc *dma_rx;
+	unsigned int cur_rx;
+	unsigned int dirty_rx;
+	struct sk_buff **rx_skbuff;
+	dma_addr_t *rx_skbuff_dma;
+	struct sk_buff_head rx_recycle;
+
+	struct net_device *dev;
+	int is_gmac;
+	dma_addr_t dma_rx_phy;
+	unsigned int dma_rx_size;
+	int rx_csum;
+	unsigned int dma_buf_sz;
+	struct device *device;
+	struct mac_device_info *mac_type;
+
+	struct stmmac_extra_stats xstats;
+	struct napi_struct napi;
+
+	phy_interface_t phy_interface;
+	int pbl;
+	int bus_id;
+	int phy_addr;
+	int phy_mask;
+	int (*phy_reset) (void *priv);
+	void (*fix_mac_speed) (void *priv, unsigned int speed);
+	void *bsp_priv;
+
+	int phy_irq;
+	struct phy_device *phydev;
+	int oldlink;
+	int speed;
+	int oldduplex;
+	unsigned int flow_ctrl;
+	unsigned int pause;
+	struct mii_bus *mii;
+
+	u32 msg_enable;
+	spinlock_t lock;
+	int wolopts;
+	int wolenabled;
+	int shutdown;
+#ifdef CONFIG_STMMAC_TIMER
+	struct stmmac_timer *tm;
+#endif
+#ifdef STMMAC_VLAN_TAG_USED
+	struct vlan_group *vlgrp;
+#endif
+};
+
+extern int stmmac_mdio_unregister(struct net_device *ndev);
+extern int stmmac_mdio_register(struct net_device *ndev);
+extern void stmmac_set_ethtool_ops(struct net_device *netdev);
+395
drivers/net/stmmac/stmmac_ethtool.c
···
+/*******************************************************************************
+  STMMAC Ethtool support
+
+  Copyright (C) 2007-2009  STMicroelectronics Ltd
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms and conditions of the GNU General Public License,
+  version 2, as published by the Free Software Foundation.
+
+  This program is distributed in the hope it will be useful, but WITHOUT
+  ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+  FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+  more details.
+
+  You should have received a copy of the GNU General Public License along with
+  this program; if not, write to the Free Software Foundation, Inc.,
+  51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+
+  The full GNU General Public License is included in this distribution in
+  the file called "COPYING".
+
+  Author: Giuseppe Cavallaro <peppe.cavallaro@st.com>
+*******************************************************************************/
+
+#include <linux/etherdevice.h>
+#include <linux/ethtool.h>
+#include <linux/mii.h>
+#include <linux/phy.h>
+
+#include "stmmac.h"
+
+#define REG_SPACE_SIZE	0x1054
+#define MAC100_ETHTOOL_NAME	"st_mac100"
+#define GMAC_ETHTOOL_NAME	"st_gmac"
+
+struct stmmac_stats {
+	char stat_string[ETH_GSTRING_LEN];
+	int sizeof_stat;
+	int stat_offset;
+};
+
+#define STMMAC_STAT(m)	\
+	{ #m, FIELD_SIZEOF(struct stmmac_extra_stats, m),	\
+	offsetof(struct stmmac_priv, xstats.m)}
+
+static const struct stmmac_stats stmmac_gstrings_stats[] = {
+	STMMAC_STAT(tx_underflow),
+	STMMAC_STAT(tx_carrier),
+	STMMAC_STAT(tx_losscarrier),
+	STMMAC_STAT(tx_heartbeat),
+	STMMAC_STAT(tx_deferred),
+	STMMAC_STAT(tx_vlan),
+	STMMAC_STAT(rx_vlan),
+	STMMAC_STAT(tx_jabber),
+	STMMAC_STAT(tx_frame_flushed),
+	STMMAC_STAT(tx_payload_error),
+	STMMAC_STAT(tx_ip_header_error),
+	STMMAC_STAT(rx_desc),
+	STMMAC_STAT(rx_partial),
+	STMMAC_STAT(rx_runt),
+	STMMAC_STAT(rx_toolong),
+	STMMAC_STAT(rx_collision),
+	STMMAC_STAT(rx_crc),
+	STMMAC_STAT(rx_lenght),
+	STMMAC_STAT(rx_mii),
+	STMMAC_STAT(rx_multicast),
+	STMMAC_STAT(rx_gmac_overflow),
+	STMMAC_STAT(rx_watchdog),
+	STMMAC_STAT(da_rx_filter_fail),
+	STMMAC_STAT(sa_rx_filter_fail),
+	STMMAC_STAT(rx_missed_cntr),
+	STMMAC_STAT(rx_overflow_cntr),
+	STMMAC_STAT(tx_undeflow_irq),
+	STMMAC_STAT(tx_process_stopped_irq),
+	STMMAC_STAT(tx_jabber_irq),
+	STMMAC_STAT(rx_overflow_irq),
+	STMMAC_STAT(rx_buf_unav_irq),
+	STMMAC_STAT(rx_process_stopped_irq),
+	STMMAC_STAT(rx_watchdog_irq),
+	STMMAC_STAT(tx_early_irq),
+	STMMAC_STAT(fatal_bus_error_irq),
+	STMMAC_STAT(threshold),
+	STMMAC_STAT(tx_pkt_n),
+	STMMAC_STAT(rx_pkt_n),
+	STMMAC_STAT(poll_n),
+	STMMAC_STAT(sched_timer_n),
+	STMMAC_STAT(normal_irq_n),
+};
+#define STMMAC_STATS_LEN ARRAY_SIZE(stmmac_gstrings_stats)
+
+void stmmac_ethtool_getdrvinfo(struct net_device *dev,
+			       struct ethtool_drvinfo *info)
+{
+	struct stmmac_priv *priv = netdev_priv(dev);
+
+	if (!priv->is_gmac)
+		strcpy(info->driver, MAC100_ETHTOOL_NAME);
+	else
+		strcpy(info->driver, GMAC_ETHTOOL_NAME);
+
+	strcpy(info->version, DRV_MODULE_VERSION);
+	info->fw_version[0] = '\0';
+	info->n_stats = STMMAC_STATS_LEN;
+	return;
+}
+
+int stmmac_ethtool_getsettings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+	struct stmmac_priv *priv = netdev_priv(dev);
+	struct phy_device *phy = priv->phydev;
+	int rc;
+	if (phy == NULL) {
+		pr_err("%s: %s: PHY is not registered\n",
+		       __func__, dev->name);
+		return -ENODEV;
+	}
+	if (!netif_running(dev)) {
+		pr_err("%s: interface is disabled: we cannot track "
+		       "link speed / duplex setting\n", dev->name);
+		return -EBUSY;
+	}
+	cmd->transceiver = XCVR_INTERNAL;
+	spin_lock_irq(&priv->lock);
+	rc = phy_ethtool_gset(phy, cmd);
+	spin_unlock_irq(&priv->lock);
+	return rc;
+}
+
+int stmmac_ethtool_setsettings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+	struct stmmac_priv *priv = netdev_priv(dev);
+	struct phy_device *phy = priv->phydev;
+	int rc;
+
+	spin_lock(&priv->lock);
+	rc = phy_ethtool_sset(phy, cmd);
+	spin_unlock(&priv->lock);
+
+	return rc;
+}
+
+u32 stmmac_ethtool_getmsglevel(struct net_device *dev)
+{
+	struct stmmac_priv *priv = netdev_priv(dev);
+	return priv->msg_enable;
+}
+
+void stmmac_ethtool_setmsglevel(struct net_device *dev, u32 level)
+{
+	struct stmmac_priv *priv = netdev_priv(dev);
+	priv->msg_enable = level;
+}
+
+int stmmac_check_if_running(struct net_device *dev)
+{
+	if (!netif_running(dev))
+		return -EBUSY;
+	return 0;
+}
+
+int stmmac_ethtool_get_regs_len(struct net_device *dev)
+{
+	return REG_SPACE_SIZE;
+}
+
+void stmmac_ethtool_gregs(struct net_device *dev,
+			  struct ethtool_regs *regs, void *space)
+{
+	int i;
+	u32 *reg_space = (u32 *) space;
+
+	struct stmmac_priv *priv = netdev_priv(dev);
+
+	memset(reg_space, 0x0, REG_SPACE_SIZE);
+
+	if (!priv->is_gmac) {
+		/* MAC registers */
+		for (i = 0; i < 12; i++)
+			reg_space[i] = readl(dev->base_addr + (i * 4));
+		/* DMA registers */
+		for (i = 0; i < 9; i++)
+			reg_space[i + 12] =
+			    readl(dev->base_addr + (DMA_BUS_MODE + (i * 4)));
+		reg_space[22] = readl(dev->base_addr + DMA_CUR_TX_BUF_ADDR);
+		reg_space[23] = readl(dev->base_addr + DMA_CUR_RX_BUF_ADDR);
+	} else {
+		/* MAC registers */
+		for (i = 0; i < 55; i++)
+			reg_space[i] = readl(dev->base_addr + (i * 4));
+		/* DMA registers */
+		for (i = 0; i < 22; i++)
+			reg_space[i + 55] =
+			    readl(dev->base_addr + (DMA_BUS_MODE + (i * 4)));
+	}
+
+	return;
+}
+
+int stmmac_ethtool_set_tx_csum(struct net_device *netdev, u32 data)
+{
+	if (data)
+		netdev->features |= NETIF_F_HW_CSUM;
+	else
+		netdev->features &= ~NETIF_F_HW_CSUM;
+
+	return 0;
+}
+
+u32 stmmac_ethtool_get_rx_csum(struct net_device *dev)
+{
+	struct stmmac_priv *priv = netdev_priv(dev);
+
+	return priv->rx_csum;
+}
+
+static void
+stmmac_get_pauseparam(struct net_device *netdev,
+		      struct ethtool_pauseparam *pause)
+{
+	struct stmmac_priv *priv = netdev_priv(netdev);
+
+	spin_lock(&priv->lock);
+
+	pause->rx_pause = 0;
+	pause->tx_pause = 0;
+	pause->autoneg = priv->phydev->autoneg;
+
+	if (priv->flow_ctrl & FLOW_RX)
+		pause->rx_pause = 1;
+	if (priv->flow_ctrl & FLOW_TX)
+		pause->tx_pause = 1;
+
+	spin_unlock(&priv->lock);
+	return;
+}
+
+static int
+stmmac_set_pauseparam(struct net_device *netdev,
+		      struct ethtool_pauseparam *pause)
+{
+	struct stmmac_priv *priv = netdev_priv(netdev);
+	struct phy_device *phy = priv->phydev;
+	int new_pause = FLOW_OFF;
+	int ret = 0;
+
+	spin_lock(&priv->lock);
+
+	if (pause->rx_pause)
+		new_pause |= FLOW_RX;
+	if (pause->tx_pause)
+		new_pause |= FLOW_TX;
+
+	priv->flow_ctrl = new_pause;
+
+	if (phy->autoneg) {
+		if (netif_running(netdev)) {
+			struct ethtool_cmd cmd;
+			/* auto-negotiation automatically restarted */
+			cmd.cmd = ETHTOOL_NWAY_RST;
+			cmd.supported = phy->supported;
+			cmd.advertising = phy->advertising;
+			cmd.autoneg = phy->autoneg;
+			cmd.speed = phy->speed;
+			cmd.duplex = phy->duplex;
+			cmd.phy_address = phy->addr;
+			ret = phy_ethtool_sset(phy, &cmd);
+		}
+	} else {
+		unsigned long ioaddr = netdev->base_addr;
+		priv->mac_type->ops->flow_ctrl(ioaddr, phy->duplex,
+					       priv->flow_ctrl, priv->pause);
+	}
+	spin_unlock(&priv->lock);
+	return ret;
+}
+
+static void stmmac_get_ethtool_stats(struct net_device *dev,
+				     struct ethtool_stats *dummy, u64 *data)
+{
+	struct stmmac_priv *priv = netdev_priv(dev);
+	unsigned long ioaddr = dev->base_addr;
+	int i;
+
+	/* Update HW stats if supported */
+	priv->mac_type->ops->dma_diagnostic_fr(&dev->stats, &priv->xstats,
+					       ioaddr);
+
+	for (i = 0; i < STMMAC_STATS_LEN; i++) {
+		char *p = (char *)priv + stmmac_gstrings_stats[i].stat_offset;
+		data[i] = (stmmac_gstrings_stats[i].sizeof_stat ==
+			   sizeof(u64)) ? (*(u64 *)p) : (*(u32 *)p);
+	}
+
+	return;
+}
+
+static int stmmac_get_sset_count(struct net_device *netdev, int sset)
+{
+	switch (sset) {
+	case ETH_SS_STATS:
+		return STMMAC_STATS_LEN;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static void stmmac_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+{
+	int i;
+	u8 *p = data;
+
+	switch (stringset) {
+	case ETH_SS_STATS:
+		for (i = 0; i < STMMAC_STATS_LEN; i++) {
+			memcpy(p, stmmac_gstrings_stats[i].stat_string,
+			       ETH_GSTRING_LEN);
+			p += ETH_GSTRING_LEN;
+		}
+		break;
+	default:
+		WARN_ON(1);
+		break;
+	}
+	return;
+}
+
+/* Currently only support WOL through Magic packet. */
+static void stmmac_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+	struct stmmac_priv *priv = netdev_priv(dev);
+
+	spin_lock_irq(&priv->lock);
+	if (priv->wolenabled == PMT_SUPPORTED) {
+		wol->supported = WAKE_MAGIC;
+		wol->wolopts = priv->wolopts;
+	}
+	spin_unlock_irq(&priv->lock);
+}
+
+static int stmmac_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+	struct stmmac_priv *priv = netdev_priv(dev);
+	u32 support = WAKE_MAGIC;
+
+	if (priv->wolenabled == PMT_NOT_SUPPORTED)
+		return -EINVAL;
+
+	if (wol->wolopts & ~support)
+		return -EINVAL;
+
+	if (wol->wolopts == 0)
+		device_set_wakeup_enable(priv->device, 0);
+	else
+		device_set_wakeup_enable(priv->device, 1);
+
+	spin_lock_irq(&priv->lock);
+	priv->wolopts = wol->wolopts;
+	spin_unlock_irq(&priv->lock);
+
+	return 0;
+}
+
+static struct ethtool_ops stmmac_ethtool_ops = {
+	.begin = stmmac_check_if_running,
+	.get_drvinfo = stmmac_ethtool_getdrvinfo,
+	.get_settings = stmmac_ethtool_getsettings,
+	.set_settings = stmmac_ethtool_setsettings,
+	.get_msglevel = stmmac_ethtool_getmsglevel,
+	.set_msglevel = stmmac_ethtool_setmsglevel,
+	.get_regs = stmmac_ethtool_gregs,
+	.get_regs_len = stmmac_ethtool_get_regs_len,
+	.get_link = ethtool_op_get_link,
+	.get_rx_csum = stmmac_ethtool_get_rx_csum,
+	.get_tx_csum = ethtool_op_get_tx_csum,
+	.set_tx_csum = stmmac_ethtool_set_tx_csum,
+	.get_sg = ethtool_op_get_sg,
+	.set_sg = ethtool_op_set_sg,
+	.get_pauseparam = stmmac_get_pauseparam,
+	.set_pauseparam = stmmac_set_pauseparam,
+	.get_ethtool_stats = stmmac_get_ethtool_stats,
+	.get_strings = stmmac_get_strings,
+	.get_wol = stmmac_get_wol,
+	.set_wol = stmmac_set_wol,
+	.get_sset_count = stmmac_get_sset_count,
+#ifdef NETIF_F_TSO
+	.get_tso = ethtool_op_get_tso,
+	.set_tso = ethtool_op_set_tso,
+#endif
+};
+
+void stmmac_set_ethtool_ops(struct net_device *netdev)
+{
+	SET_ETHTOOL_OPS(netdev, &stmmac_ethtool_ops);
+}
+2204
drivers/net/stmmac/stmmac_main.c
···
+/*******************************************************************************
+  This is the driver for the ST MAC 10/100/1000 on-chip Ethernet controllers.
+  ST Ethernet IPs are built around a Synopsys IP Core.
+
+  Copyright (C) 2007-2009  STMicroelectronics Ltd
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms and conditions of the GNU General Public License,
+  version 2, as published by the Free Software Foundation.
+
+  This program is distributed in the hope it will be useful, but WITHOUT
+  ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+  FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+  more details.
+
+  You should have received a copy of the GNU General Public License along with
+  this program; if not, write to the Free Software Foundation, Inc.,
+  51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+
+  The full GNU General Public License is included in this distribution in
+  the file called "COPYING".
+
+  Author: Giuseppe Cavallaro <peppe.cavallaro@st.com>
+
+  Documentation available at:
+	http://www.stlinux.com
+  Support available at:
+	https://bugzilla.stlinux.com/
+*******************************************************************************/
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/interrupt.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/platform_device.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/skbuff.h>
+#include <linux/ethtool.h>
+#include <linux/if_ether.h>
+#include <linux/crc32.h>
+#include <linux/mii.h>
+#include <linux/phy.h>
+#include <linux/if_vlan.h>
+#include <linux/dma-mapping.h>
+#include <linux/stm/soc.h>
+#include "stmmac.h"
+
+#define STMMAC_RESOURCE_NAME	"stmmaceth"
+#define PHY_RESOURCE_NAME	"stmmacphy"
+
+#undef STMMAC_DEBUG
+/*#define STMMAC_DEBUG*/
+#ifdef STMMAC_DEBUG
+#define DBG(nlevel, klevel, fmt, args...) \
+		((void)(netif_msg_##nlevel(priv) && \
+		printk(KERN_##klevel fmt, ## args)))
+#else
+#define DBG(nlevel, klevel, fmt, args...) do { } while (0)
+#endif
+
+#undef STMMAC_RX_DEBUG
+/*#define STMMAC_RX_DEBUG*/
+#ifdef STMMAC_RX_DEBUG
+#define RX_DBG(fmt, args...) printk(fmt, ## args)
+#else
+#define RX_DBG(fmt, args...) do { } while (0)
+#endif
+
+#undef STMMAC_XMIT_DEBUG
+/*#define STMMAC_XMIT_DEBUG*/
+#ifdef STMMAC_XMIT_DEBUG
+#define TX_DBG(fmt, args...) printk(fmt, ## args)
+#else
+#define TX_DBG(fmt, args...) do { } while (0)
+#endif
+
+#define STMMAC_ALIGN(x)	L1_CACHE_ALIGN(x)
+#define JUMBO_LEN	9000
+
+/* Module parameters */
+#define TX_TIMEO 5000	/* default 5 seconds */
+static int watchdog = TX_TIMEO;
+module_param(watchdog, int, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(watchdog, "Transmit timeout in milliseconds");
+
+static int debug = -1;		/* -1: default, 0: no output, 16: all */
+module_param(debug, int, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(debug, "Message Level (0: no output, 16: all)");
+
+static int phyaddr = -1;
+module_param(phyaddr, int, S_IRUGO);
+MODULE_PARM_DESC(phyaddr, "Physical device address");
+
+#define DMA_TX_SIZE 256
+static int dma_txsize = DMA_TX_SIZE;
+module_param(dma_txsize, int, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(dma_txsize, "Number of descriptors in the TX list");
+
+#define DMA_RX_SIZE 256
+static int dma_rxsize = DMA_RX_SIZE;
+module_param(dma_rxsize, int, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(dma_rxsize, "Number of descriptors in the RX list");
+
+static int flow_ctrl = FLOW_OFF;
+module_param(flow_ctrl, int, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(flow_ctrl, "Flow control ability [on/off]");
+
+static int pause = PAUSE_TIME;
+module_param(pause, int, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(pause, "Flow Control Pause Time");
+
+#define TC_DEFAULT 64
+static int tc = TC_DEFAULT;
+module_param(tc, int, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(tc, "DMA threshold control value");
+
+#define RX_NO_COALESCE	1	/* Always interrupt on completion */
+#define TX_NO_COALESCE	-1	/* No moderation by default */
+
+/* Pay attention to tune this parameter; take care of both
+ * hardware capability and network stability/performance impact.
+ * Many tests showed that ~4ms latency seems to be good enough. */
+#ifdef CONFIG_STMMAC_TIMER
+#define DEFAULT_PERIODIC_RATE	256
+static int tmrate = DEFAULT_PERIODIC_RATE;
+module_param(tmrate, int, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(tmrate, "External timer freq. (default: 256Hz)");
+#endif
+
+#define DMA_BUFFER_SIZE	BUF_SIZE_2KiB
+static int buf_sz = DMA_BUFFER_SIZE;
+module_param(buf_sz, int, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(buf_sz, "DMA buffer size");
+
+/* In case of Giga ETH, we can enable/disable the COE for the
+ * transmit HW checksum computation.
+ * Note that, if tx csum is off in HW, SG will still be supported.
*/141141+static int tx_coe = HW_CSUM;142142+module_param(tx_coe, int, S_IRUGO | S_IWUSR);143143+MODULE_PARM_DESC(tx_coe, "GMAC COE type 2 [on/off]");144144+145145+static const u32 default_msg_level = (NETIF_MSG_DRV | NETIF_MSG_PROBE |146146+ NETIF_MSG_LINK | NETIF_MSG_IFUP |147147+ NETIF_MSG_IFDOWN | NETIF_MSG_TIMER);148148+149149+static irqreturn_t stmmac_interrupt(int irq, void *dev_id);150150+static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev);151151+152152+/**153153+ * stmmac_verify_args - verify the driver parameters.154154+ * Description: it verifies if some wrong parameter is passed to the driver.155155+ * Note that wrong parameters are replaced with the default values.156156+ */157157+static void stmmac_verify_args(void)158158+{159159+ if (unlikely(watchdog < 0))160160+ watchdog = TX_TIMEO;161161+ if (unlikely(dma_rxsize < 0))162162+ dma_rxsize = DMA_RX_SIZE;163163+ if (unlikely(dma_txsize < 0))164164+ dma_txsize = DMA_TX_SIZE;165165+ if (unlikely((buf_sz < DMA_BUFFER_SIZE) || (buf_sz > BUF_SIZE_16KiB)))166166+ buf_sz = DMA_BUFFER_SIZE;167167+ if (unlikely(flow_ctrl > 1))168168+ flow_ctrl = FLOW_AUTO;169169+ else if (likely(flow_ctrl < 0))170170+ flow_ctrl = FLOW_OFF;171171+ if (unlikely((pause < 0) || (pause > 0xffff)))172172+ pause = PAUSE_TIME;173173+174174+ return;175175+}176176+177177+#if defined(STMMAC_XMIT_DEBUG) || defined(STMMAC_RX_DEBUG)178178+static void print_pkt(unsigned char *buf, int len)179179+{180180+ int j;181181+ pr_info("len = %d byte, buf addr: 0x%p", len, buf);182182+ for (j = 0; j < len; j++) {183183+ if ((j % 16) == 0)184184+ pr_info("\n %03x:", j);185185+ pr_info(" %02x", buf[j]);186186+ }187187+ pr_info("\n");188188+ return;189189+}190190+#endif191191+192192+/* minimum number of free TX descriptors required to wake up TX process */193193+#define STMMAC_TX_THRESH(x) (x->dma_tx_size/4)194194+195195+static inline u32 stmmac_tx_avail(struct stmmac_priv *priv)196196+{197197+ return priv->dirty_tx + 
priv->dma_tx_size - priv->cur_tx - 1;198198+}199199+200200+/**201201+ * stmmac_adjust_link202202+ * @dev: net device structure203203+ * Description: it adjusts the link parameters.204204+ */205205+static void stmmac_adjust_link(struct net_device *dev)206206+{207207+ struct stmmac_priv *priv = netdev_priv(dev);208208+ struct phy_device *phydev = priv->phydev;209209+ unsigned long ioaddr = dev->base_addr;210210+ unsigned long flags;211211+ int new_state = 0;212212+ unsigned int fc = priv->flow_ctrl, pause_time = priv->pause;213213+214214+ if (phydev == NULL)215215+ return;216216+217217+ DBG(probe, DEBUG, "stmmac_adjust_link: called. address %d link %d\n",218218+ phydev->addr, phydev->link);219219+220220+ spin_lock_irqsave(&priv->lock, flags);221221+ if (phydev->link) {222222+ u32 ctrl = readl(ioaddr + MAC_CTRL_REG);223223+224224+ /* Now we make sure that we can be in full duplex mode.225225+ * If not, we operate in half-duplex mode. */226226+ if (phydev->duplex != priv->oldduplex) {227227+ new_state = 1;228228+ if (!(phydev->duplex))229229+ ctrl &= ~priv->mac_type->hw.link.duplex;230230+ else231231+ ctrl |= priv->mac_type->hw.link.duplex;232232+ priv->oldduplex = phydev->duplex;233233+ }234234+ /* Flow Control operation */235235+ if (phydev->pause)236236+ priv->mac_type->ops->flow_ctrl(ioaddr, phydev->duplex,237237+ fc, pause_time);238238+239239+ if (phydev->speed != priv->speed) {240240+ new_state = 1;241241+ switch (phydev->speed) {242242+ case 1000:243243+ if (likely(priv->is_gmac))244244+ ctrl &= ~priv->mac_type->hw.link.port;245245+ break;246246+ case 100:247247+ case 10:248248+ if (priv->is_gmac) {249249+ ctrl |= priv->mac_type->hw.link.port;250250+ if (phydev->speed == SPEED_100) {251251+ ctrl |=252252+ priv->mac_type->hw.link.253253+ speed;254254+ } else {255255+ ctrl &=256256+ ~(priv->mac_type->hw.257257+ link.speed);258258+ }259259+ } else {260260+ ctrl &= ~priv->mac_type->hw.link.port;261261+ }262262+ priv->fix_mac_speed(priv->bsp_priv,263263+ 
phydev->speed);264264+ break;265265+ default:266266+ if (netif_msg_link(priv))267267+ pr_warning("%s: Speed (%d) is not 10"268268+ " or 100!\n", dev->name, phydev->speed);269269+ break;270270+ }271271+272272+ priv->speed = phydev->speed;273273+ }274274+275275+ writel(ctrl, ioaddr + MAC_CTRL_REG);276276+277277+ if (!priv->oldlink) {278278+ new_state = 1;279279+ priv->oldlink = 1;280280+ }281281+ } else if (priv->oldlink) {282282+ new_state = 1;283283+ priv->oldlink = 0;284284+ priv->speed = 0;285285+ priv->oldduplex = -1;286286+ }287287+288288+ if (new_state && netif_msg_link(priv))289289+ phy_print_status(phydev);290290+291291+ spin_unlock_irqrestore(&priv->lock, flags);292292+293293+ DBG(probe, DEBUG, "stmmac_adjust_link: exiting\n");294294+}295295+296296+/**297297+ * stmmac_init_phy - PHY initialization298298+ * @dev: net device structure299299+ * Description: it initializes the driver's PHY state, and attaches the PHY300300+ * to the mac driver.301301+ * Return value:302302+ * 0 on success303303+ */304304+static int stmmac_init_phy(struct net_device *dev)305305+{306306+ struct stmmac_priv *priv = netdev_priv(dev);307307+ struct phy_device *phydev;308308+ char phy_id[BUS_ID_SIZE]; /* PHY to connect */309309+ char bus_id[BUS_ID_SIZE];310310+311311+ priv->oldlink = 0;312312+ priv->speed = 0;313313+ priv->oldduplex = -1;314314+315315+ if (priv->phy_addr == -1) {316316+ /* We don't have a PHY, so do nothing */317317+ return 0;318318+ }319319+320320+ snprintf(bus_id, MII_BUS_ID_SIZE, "%x", priv->bus_id);321321+ snprintf(phy_id, BUS_ID_SIZE, PHY_ID_FMT, bus_id, priv->phy_addr);322322+ pr_debug("stmmac_init_phy: trying to attach to %s\n", phy_id);323323+324324+ phydev = phy_connect(dev, phy_id, &stmmac_adjust_link, 0,325325+ priv->phy_interface);326326+327327+ if (IS_ERR(phydev)) {328328+ pr_err("%s: Could not attach to PHY\n", dev->name);329329+ return PTR_ERR(phydev);330330+ }331331+332332+ /*333333+ * Broken HW is sometimes missing the pull-up resistor on the334334+ 
* MDIO line, which results in reads to non-existent devices returning335335+ * 0 rather than 0xffff. Catch this here and treat 0 as a non-existent336336+ * device as well.337337+ * Note: phydev->phy_id is the result of reading the UID PHY registers.338338+ */339339+ if (phydev->phy_id == 0) {340340+ phy_disconnect(phydev);341341+ return -ENODEV;342342+ }343343+ pr_debug("stmmac_init_phy: %s: attached to PHY (UID 0x%x)"344344+ " Link = %d\n", dev->name, phydev->phy_id, phydev->link);345345+346346+ priv->phydev = phydev;347347+348348+ return 0;349349+}350350+351351+static inline void stmmac_mac_enable_rx(unsigned long ioaddr)352352+{353353+ u32 value = readl(ioaddr + MAC_CTRL_REG);354354+ value |= MAC_RNABLE_RX;355355+ /* Set the RE (receive enable bit into the MAC CTRL register). */356356+ writel(value, ioaddr + MAC_CTRL_REG);357357+}358358+359359+static inline void stmmac_mac_enable_tx(unsigned long ioaddr)360360+{361361+ u32 value = readl(ioaddr + MAC_CTRL_REG);362362+ value |= MAC_ENABLE_TX;363363+ /* Set the TE (transmit enable bit into the MAC CTRL register). 
*/364364+ writel(value, ioaddr + MAC_CTRL_REG);365365+}366366+367367+static inline void stmmac_mac_disable_rx(unsigned long ioaddr)368368+{369369+ u32 value = readl(ioaddr + MAC_CTRL_REG);370370+ value &= ~MAC_RNABLE_RX;371371+ writel(value, ioaddr + MAC_CTRL_REG);372372+}373373+374374+static inline void stmmac_mac_disable_tx(unsigned long ioaddr)375375+{376376+ u32 value = readl(ioaddr + MAC_CTRL_REG);377377+ value &= ~MAC_ENABLE_TX;378378+ writel(value, ioaddr + MAC_CTRL_REG);379379+}380380+381381+/**382382+ * display_ring383383+ * @p: pointer to the ring.384384+ * @size: size of the ring.385385+ * Description: display all the descriptors within the ring.386386+ */387387+static void display_ring(struct dma_desc *p, int size)388388+{389389+ struct tmp_s {390390+ u64 a;391391+ unsigned int b;392392+ unsigned int c;393393+ };394394+ int i;395395+ for (i = 0; i < size; i++) {396396+ struct tmp_s *x = (struct tmp_s *)(p + i);397397+ pr_info("\t%d [0x%x]: DES0=0x%x DES1=0x%x BUF1=0x%x BUF2=0x%x",398398+ i, (unsigned int)virt_to_phys(&p[i]),399399+ (unsigned int)(x->a), (unsigned int)((x->a) >> 32),400400+ x->b, x->c);401401+ pr_info("\n");402402+ }403403+}404404+405405+/**406406+ * init_dma_desc_rings - init the RX/TX descriptor rings407407+ * @dev: net device structure408408+ * Description: this function initializes the DMA RX/TX descriptors409409+ * and allocates the socket buffers.410410+ */411411+static void init_dma_desc_rings(struct net_device *dev)412412+{413413+ int i;414414+ struct stmmac_priv *priv = netdev_priv(dev);415415+ struct sk_buff *skb;416416+ unsigned int txsize = priv->dma_tx_size;417417+ unsigned int rxsize = priv->dma_rx_size;418418+ unsigned int bfsize = priv->dma_buf_sz;419419+ int buff2_needed = 0;420420+ int dis_ic = 0;421421+422422+#ifdef CONFIG_STMMAC_TIMER423423+ /* Using Timers disable interrupts on completion for the reception */424424+ dis_ic = 1;425425+#endif426426+ /* Set the Buffer size according to the MTU;427427+ * indeed, in case 
of jumbo we need to bump-up the buffer sizes.428428+ */429429+ if (unlikely(dev->mtu >= BUF_SIZE_8KiB))430430+ bfsize = BUF_SIZE_16KiB;431431+ else if (unlikely(dev->mtu >= BUF_SIZE_4KiB))432432+ bfsize = BUF_SIZE_8KiB;433433+ else if (unlikely(dev->mtu >= BUF_SIZE_2KiB))434434+ bfsize = BUF_SIZE_4KiB;435435+ else if (unlikely(dev->mtu >= DMA_BUFFER_SIZE))436436+ bfsize = BUF_SIZE_2KiB;437437+ else438438+ bfsize = DMA_BUFFER_SIZE;439439+440440+ /* If the MTU exceeds 8k so use the second buffer in the chain */441441+ if (bfsize >= BUF_SIZE_8KiB)442442+ buff2_needed = 1;443443+444444+ DBG(probe, INFO, "stmmac: txsize %d, rxsize %d, bfsize %d\n",445445+ txsize, rxsize, bfsize);446446+447447+ priv->rx_skbuff_dma = kmalloc(rxsize * sizeof(dma_addr_t), GFP_KERNEL);448448+ priv->rx_skbuff =449449+ kmalloc(sizeof(struct sk_buff *) * rxsize, GFP_KERNEL);450450+ priv->dma_rx =451451+ (struct dma_desc *)dma_alloc_coherent(priv->device,452452+ rxsize *453453+ sizeof(struct dma_desc),454454+ &priv->dma_rx_phy,455455+ GFP_KERNEL);456456+ priv->tx_skbuff = kmalloc(sizeof(struct sk_buff *) * txsize,457457+ GFP_KERNEL);458458+ priv->dma_tx =459459+ (struct dma_desc *)dma_alloc_coherent(priv->device,460460+ txsize *461461+ sizeof(struct dma_desc),462462+ &priv->dma_tx_phy,463463+ GFP_KERNEL);464464+465465+ if ((priv->dma_rx == NULL) || (priv->dma_tx == NULL)) {466466+ pr_err("%s:ERROR allocating the DMA Tx/Rx desc\n", __func__);467467+ return;468468+ }469469+470470+ DBG(probe, INFO, "stmmac (%s) DMA desc rings: virt addr (Rx %p, "471471+ "Tx %p)\n\tDMA phy addr (Rx 0x%08x, Tx 0x%08x)\n",472472+ dev->name, priv->dma_rx, priv->dma_tx,473473+ (unsigned int)priv->dma_rx_phy, (unsigned int)priv->dma_tx_phy);474474+475475+ /* RX INITIALIZATION */476476+ DBG(probe, INFO, "stmmac: SKB addresses:\n"477477+ "skb\t\tskb data\tdma data\n");478478+479479+ for (i = 0; i < rxsize; i++) {480480+ struct dma_desc *p = priv->dma_rx + i;481481+482482+ skb = netdev_alloc_skb_ip_align(dev, 
bfsize);483483+ if (unlikely(skb == NULL)) {484484+ pr_err("%s: Rx init fails; skb is NULL\n", __func__);485485+ break;486486+ }487487+ priv->rx_skbuff[i] = skb;488488+ priv->rx_skbuff_dma[i] = dma_map_single(priv->device, skb->data,489489+ bfsize, DMA_FROM_DEVICE);490490+491491+ p->des2 = priv->rx_skbuff_dma[i];492492+ if (unlikely(buff2_needed))493493+ p->des3 = p->des2 + BUF_SIZE_8KiB;494494+ DBG(probe, INFO, "[%p]\t[%p]\t[%x]\n", priv->rx_skbuff[i],495495+ priv->rx_skbuff[i]->data, priv->rx_skbuff_dma[i]);496496+ }497497+ priv->cur_rx = 0;498498+ priv->dirty_rx = (unsigned int)(i - rxsize);499499+ priv->dma_buf_sz = bfsize;500500+ buf_sz = bfsize;501501+502502+ /* TX INITIALIZATION */503503+ for (i = 0; i < txsize; i++) {504504+ priv->tx_skbuff[i] = NULL;505505+ priv->dma_tx[i].des2 = 0;506506+ }507507+ priv->dirty_tx = 0;508508+ priv->cur_tx = 0;509509+510510+ /* Clear the Rx/Tx descriptors */511511+ priv->mac_type->ops->init_rx_desc(priv->dma_rx, rxsize, dis_ic);512512+ priv->mac_type->ops->init_tx_desc(priv->dma_tx, txsize);513513+514514+ if (netif_msg_hw(priv)) {515515+ pr_info("RX descriptor ring:\n");516516+ display_ring(priv->dma_rx, rxsize);517517+ pr_info("TX descriptor ring:\n");518518+ display_ring(priv->dma_tx, txsize);519519+ }520520+ return;521521+}522522+523523+static void dma_free_rx_skbufs(struct stmmac_priv *priv)524524+{525525+ int i;526526+527527+ for (i = 0; i < priv->dma_rx_size; i++) {528528+ if (priv->rx_skbuff[i]) {529529+ dma_unmap_single(priv->device, priv->rx_skbuff_dma[i],530530+ priv->dma_buf_sz, DMA_FROM_DEVICE);531531+ dev_kfree_skb_any(priv->rx_skbuff[i]);532532+ }533533+ priv->rx_skbuff[i] = NULL;534534+ }535535+ return;536536+}537537+538538+static void dma_free_tx_skbufs(struct stmmac_priv *priv)539539+{540540+ int i;541541+542542+ for (i = 0; i < priv->dma_tx_size; i++) {543543+ if (priv->tx_skbuff[i] != NULL) {544544+ struct dma_desc *p = priv->dma_tx + i;545545+ if (p->des2)546546+ dma_unmap_single(priv->device, 
p->des2,547547+ priv->mac_type->ops->get_tx_len(p),548548+ DMA_TO_DEVICE);549549+ dev_kfree_skb_any(priv->tx_skbuff[i]);550550+ priv->tx_skbuff[i] = NULL;551551+ }552552+ }553553+ return;554554+}555555+556556+static void free_dma_desc_resources(struct stmmac_priv *priv)557557+{558558+ /* Release the DMA TX/RX socket buffers */559559+ dma_free_rx_skbufs(priv);560560+ dma_free_tx_skbufs(priv);561561+562562+ /* Free the region of consistent memory previously allocated for563563+ * the DMA */564564+ dma_free_coherent(priv->device,565565+ priv->dma_tx_size * sizeof(struct dma_desc),566566+ priv->dma_tx, priv->dma_tx_phy);567567+ dma_free_coherent(priv->device,568568+ priv->dma_rx_size * sizeof(struct dma_desc),569569+ priv->dma_rx, priv->dma_rx_phy);570570+ kfree(priv->rx_skbuff_dma);571571+ kfree(priv->rx_skbuff);572572+ kfree(priv->tx_skbuff);573573+574574+ return;575575+}576576+577577+/**578578+ * stmmac_dma_start_tx579579+ * @ioaddr: device I/O address580580+ * Description: this function starts the DMA tx process.581581+ */582582+static void stmmac_dma_start_tx(unsigned long ioaddr)583583+{584584+ u32 value = readl(ioaddr + DMA_CONTROL);585585+ value |= DMA_CONTROL_ST;586586+ writel(value, ioaddr + DMA_CONTROL);587587+ return;588588+}589589+590590+static void stmmac_dma_stop_tx(unsigned long ioaddr)591591+{592592+ u32 value = readl(ioaddr + DMA_CONTROL);593593+ value &= ~DMA_CONTROL_ST;594594+ writel(value, ioaddr + DMA_CONTROL);595595+ return;596596+}597597+598598+/**599599+ * stmmac_dma_start_rx600600+ * @ioaddr: device I/O address601601+ * Description: this function starts the DMA rx process.602602+ */603603+static void stmmac_dma_start_rx(unsigned long ioaddr)604604+{605605+ u32 value = readl(ioaddr + DMA_CONTROL);606606+ value |= DMA_CONTROL_SR;607607+ writel(value, ioaddr + DMA_CONTROL);608608+609609+ return;610610+}611611+612612+static void stmmac_dma_stop_rx(unsigned long ioaddr)613613+{614614+ u32 value = readl(ioaddr + DMA_CONTROL);615615+ value &= 
~DMA_CONTROL_SR;616616+ writel(value, ioaddr + DMA_CONTROL);617617+618618+ return;619619+}620620+621621+/**622622+ * stmmac_dma_operation_mode - HW DMA operation mode623623+ * @priv : pointer to the private device structure.624624+ * Description: it sets the DMA operation mode: tx/rx DMA thresholds625625+ * or Store-And-Forward capability. It also verifies the COE for the626626+ * transmission in case of Giga ETH.627627+ */628628+static void stmmac_dma_operation_mode(struct stmmac_priv *priv)629629+{630630+ if (!priv->is_gmac) {631631+ /* MAC 10/100 */632632+ priv->mac_type->ops->dma_mode(priv->dev->base_addr, tc, 0);633633+ priv->tx_coe = NO_HW_CSUM;634634+ } else {635635+ if ((priv->dev->mtu <= ETH_DATA_LEN) && (tx_coe)) {636636+ priv->mac_type->ops->dma_mode(priv->dev->base_addr,637637+ SF_DMA_MODE, SF_DMA_MODE);638638+ tc = SF_DMA_MODE;639639+ priv->tx_coe = HW_CSUM;640640+ } else {641641+ /* Checksum computation is performed in software. */642642+ priv->mac_type->ops->dma_mode(priv->dev->base_addr, tc,643643+ SF_DMA_MODE);644644+ priv->tx_coe = NO_HW_CSUM;645645+ }646646+ }647647+ tx_coe = priv->tx_coe;648648+649649+ return;650650+}651651+652652+#ifdef STMMAC_DEBUG653653+/**654654+ * show_tx_process_state655655+ * @status: tx descriptor status field656656+ * Description: it shows the Transmit Process State for CSR5[22:20]657657+ */658658+static void show_tx_process_state(unsigned int status)659659+{660660+ unsigned int state;661661+ state = (status & DMA_STATUS_TS_MASK) >> DMA_STATUS_TS_SHIFT;662662+663663+ switch (state) {664664+ case 0:665665+ pr_info("- TX (Stopped): Reset or Stop command\n");666666+ break;667667+ case 1:668668+ pr_info("- TX (Running):Fetching the Tx desc\n");669669+ break;670670+ case 2:671671+ pr_info("- TX (Running): Waiting for end of tx\n");672672+ break;673673+ case 3:674674+ pr_info("- TX (Running): Reading the data "675675+ "and queuing the data into the Tx buf\n");676676+ break;677677+ case 6:678678+ pr_info("- TX (Suspended): Tx 
Buff Underflow "679679+ "or an unavailable Transmit descriptor\n");680680+ break;681681+ case 7:682682+ pr_info("- TX (Running): Closing Tx descriptor\n");683683+ break;684684+ default:685685+ break;686686+ }687687+ return;688688+}689689+690690+/**691691+ * show_rx_process_state692692+ * @status: rx descriptor status field693693+ * Description: it shows the Receive Process State for CSR5[19:17]694694+ */695695+static void show_rx_process_state(unsigned int status)696696+{697697+ unsigned int state;698698+ state = (status & DMA_STATUS_RS_MASK) >> DMA_STATUS_RS_SHIFT;699699+700700+ switch (state) {701701+ case 0:702702+ pr_info("- RX (Stopped): Reset or Stop command\n");703703+ break;704704+ case 1:705705+ pr_info("- RX (Running): Fetching the Rx desc\n");706706+ break;707707+ case 2:708708+ pr_info("- RX (Running):Checking for end of pkt\n");709709+ break;710710+ case 3:711711+ pr_info("- RX (Running): Waiting for Rx pkt\n");712712+ break;713713+ case 4:714714+ pr_info("- RX (Suspended): Unavailable Rx buf\n");715715+ break;716716+ case 5:717717+ pr_info("- RX (Running): Closing Rx descriptor\n");718718+ break;719719+ case 6:720720+ pr_info("- RX(Running): Flushing the current frame"721721+ " from the Rx buf\n");722722+ break;723723+ case 7:724724+ pr_info("- RX (Running): Queuing the Rx frame"725725+ " from the Rx buf into memory\n");726726+ break;727727+ default:728728+ break;729729+ }730730+ return;731731+}732732+#endif733733+734734+/**735735+ * stmmac_tx:736736+ * @priv: private driver structure737737+ * Description: it reclaims resources after transmission completes.738738+ */739739+static void stmmac_tx(struct stmmac_priv *priv)740740+{741741+ unsigned int txsize = priv->dma_tx_size;742742+ unsigned long ioaddr = priv->dev->base_addr;743743+744744+ while (priv->dirty_tx != priv->cur_tx) {745745+ int last;746746+ unsigned int entry = priv->dirty_tx % txsize;747747+ struct sk_buff *skb = priv->tx_skbuff[entry];748748+ struct dma_desc *p = priv->dma_tx + 
entry;749749+750750+ /* Check if the descriptor is owned by the DMA. */751751+ if (priv->mac_type->ops->get_tx_owner(p))752752+ break;753753+754754+ /* Verify tx error by looking at the last segment */755755+ last = priv->mac_type->ops->get_tx_ls(p);756756+ if (likely(last)) {757757+ int tx_error =758758+ priv->mac_type->ops->tx_status(&priv->dev->stats,759759+ &priv->xstats,760760+ p, ioaddr);761761+ if (likely(tx_error == 0)) {762762+ priv->dev->stats.tx_packets++;763763+ priv->xstats.tx_pkt_n++;764764+ } else765765+ priv->dev->stats.tx_errors++;766766+ }767767+ TX_DBG("%s: curr %d, dirty %d\n", __func__,768768+ priv->cur_tx, priv->dirty_tx);769769+770770+ if (likely(p->des2))771771+ dma_unmap_single(priv->device, p->des2,772772+ priv->mac_type->ops->get_tx_len(p),773773+ DMA_TO_DEVICE);774774+ if (unlikely(p->des3))775775+ p->des3 = 0;776776+777777+ if (likely(skb != NULL)) {778778+ /*779779+ * If there's room in the queue (limit it to size)780780+ * we add this skb back into the pool,781781+ * if it's the right size.782782+ */783783+ if ((skb_queue_len(&priv->rx_recycle) <784784+ priv->dma_rx_size) &&785785+ skb_recycle_check(skb, priv->dma_buf_sz))786786+ __skb_queue_head(&priv->rx_recycle, skb);787787+ else788788+ dev_kfree_skb(skb);789789+790790+ priv->tx_skbuff[entry] = NULL;791791+ }792792+793793+ priv->mac_type->ops->release_tx_desc(p);794794+795795+ entry = (++priv->dirty_tx) % txsize;796796+ }797797+ if (unlikely(netif_queue_stopped(priv->dev) &&798798+ stmmac_tx_avail(priv) > STMMAC_TX_THRESH(priv))) {799799+ netif_tx_lock(priv->dev);800800+ if (netif_queue_stopped(priv->dev) &&801801+ stmmac_tx_avail(priv) > STMMAC_TX_THRESH(priv)) {802802+ TX_DBG("%s: restart transmit\n", __func__);803803+ netif_wake_queue(priv->dev);804804+ }805805+ netif_tx_unlock(priv->dev);806806+ }807807+ return;808808+}809809+810810+static inline void stmmac_enable_irq(struct stmmac_priv *priv)811811+{812812+#ifndef CONFIG_STMMAC_TIMER813813+ writel(DMA_INTR_DEFAULT_MASK, 
priv->dev->base_addr + DMA_INTR_ENA);814814+#else815815+ priv->tm->timer_start(tmrate);816816+#endif817817+}818818+819819+static inline void stmmac_disable_irq(struct stmmac_priv *priv)820820+{821821+#ifndef CONFIG_STMMAC_TIMER822822+ writel(0, priv->dev->base_addr + DMA_INTR_ENA);823823+#else824824+ priv->tm->timer_stop();825825+#endif826826+}827827+828828+static int stmmac_has_work(struct stmmac_priv *priv)829829+{830830+ unsigned int has_work = 0;831831+ int rxret, tx_work = 0;832832+833833+ rxret = priv->mac_type->ops->get_rx_owner(priv->dma_rx +834834+ (priv->cur_rx % priv->dma_rx_size));835835+836836+ if (priv->dirty_tx != priv->cur_tx)837837+ tx_work = 1;838838+839839+ if (likely(!rxret || tx_work))840840+ has_work = 1;841841+842842+ return has_work;843843+}844844+845845+static inline void _stmmac_schedule(struct stmmac_priv *priv)846846+{847847+ if (likely(stmmac_has_work(priv))) {848848+ stmmac_disable_irq(priv);849849+ napi_schedule(&priv->napi);850850+ }851851+}852852+853853+#ifdef CONFIG_STMMAC_TIMER854854+void stmmac_schedule(struct net_device *dev)855855+{856856+ struct stmmac_priv *priv = netdev_priv(dev);857857+858858+ priv->xstats.sched_timer_n++;859859+860860+ _stmmac_schedule(priv);861861+862862+ return;863863+}864864+865865+static void stmmac_no_timer_started(unsigned int x)866866+{;867867+};868868+869869+static void stmmac_no_timer_stopped(void)870870+{;871871+};872872+#endif873873+874874+/**875875+ * stmmac_tx_err:876876+ * @priv: pointer to the private device structure877877+ * Description: it cleans the descriptors and restarts the transmission878878+ * in case of errors.879879+ */880880+static void stmmac_tx_err(struct stmmac_priv *priv)881881+{882882+ netif_stop_queue(priv->dev);883883+884884+ stmmac_dma_stop_tx(priv->dev->base_addr);885885+ dma_free_tx_skbufs(priv);886886+ priv->mac_type->ops->init_tx_desc(priv->dma_tx, priv->dma_tx_size);887887+ priv->dirty_tx = 0;888888+ priv->cur_tx = 0;889889+ 
stmmac_dma_start_tx(priv->dev->base_addr);890890+891891+ priv->dev->stats.tx_errors++;892892+ netif_wake_queue(priv->dev);893893+894894+ return;895895+}896896+897897+/**898898+ * stmmac_dma_interrupt - Interrupt handler for the driver899899+ * @dev: net device structure900900+ * Description: Interrupt handler for the driver (DMA).901901+ */902902+static void stmmac_dma_interrupt(struct net_device *dev)903903+{904904+ unsigned long ioaddr = dev->base_addr;905905+ struct stmmac_priv *priv = netdev_priv(dev);906906+ /* read the status register (CSR5) */907907+ u32 intr_status = readl(ioaddr + DMA_STATUS);908908+909909+ DBG(intr, INFO, "%s: [CSR5: 0x%08x]\n", __func__, intr_status);910910+911911+#ifdef STMMAC_DEBUG912912+ /* It displays the DMA transmit process state (CSR5 register) */913913+ if (netif_msg_tx_done(priv))914914+ show_tx_process_state(intr_status);915915+ if (netif_msg_rx_status(priv))916916+ show_rx_process_state(intr_status);917917+#endif918918+ /* ABNORMAL interrupts */919919+ if (unlikely(intr_status & DMA_STATUS_AIS)) {920920+ DBG(intr, INFO, "CSR5[15] DMA ABNORMAL IRQ: ");921921+ if (unlikely(intr_status & DMA_STATUS_UNF)) {922922+ DBG(intr, INFO, "transmit underflow\n");923923+ if (unlikely(tc != SF_DMA_MODE)924924+ && (tc <= 256)) {925925+ /* Try to bump up the threshold */926926+ tc += 64;927927+ priv->mac_type->ops->dma_mode(ioaddr, tc,928928+ SF_DMA_MODE);929929+ priv->xstats.threshold = tc;930930+ }931931+ stmmac_tx_err(priv);932932+ priv->xstats.tx_undeflow_irq++;933933+ }934934+ if (unlikely(intr_status & DMA_STATUS_TJT)) {935935+ DBG(intr, INFO, "transmit jabber\n");936936+ priv->xstats.tx_jabber_irq++;937937+ }938938+ if (unlikely(intr_status & DMA_STATUS_OVF)) {939939+ DBG(intr, INFO, "recv overflow\n");940940+ priv->xstats.rx_overflow_irq++;941941+ }942942+ if (unlikely(intr_status & DMA_STATUS_RU)) {943943+ DBG(intr, INFO, "receive buffer unavailable\n");944944+ priv->xstats.rx_buf_unav_irq++;945945+ }946946+ if (unlikely(intr_status & 
DMA_STATUS_RPS)) {947947+ DBG(intr, INFO, "receive process stopped\n");948948+ priv->xstats.rx_process_stopped_irq++;949949+ }950950+ if (unlikely(intr_status & DMA_STATUS_RWT)) {951951+ DBG(intr, INFO, "receive watchdog\n");952952+ priv->xstats.rx_watchdog_irq++;953953+ }954954+ if (unlikely(intr_status & DMA_STATUS_ETI)) {955955+ DBG(intr, INFO, "transmit early interrupt\n");956956+ priv->xstats.tx_early_irq++;957957+ }958958+ if (unlikely(intr_status & DMA_STATUS_TPS)) {959959+ DBG(intr, INFO, "transmit process stopped\n");960960+ priv->xstats.tx_process_stopped_irq++;961961+ stmmac_tx_err(priv);962962+ }963963+ if (unlikely(intr_status & DMA_STATUS_FBI)) {964964+ DBG(intr, INFO, "fatal bus error\n");965965+ priv->xstats.fatal_bus_error_irq++;966966+ stmmac_tx_err(priv);967967+ }968968+ }969969+970970+ /* TX/RX NORMAL interrupts */971971+ if (intr_status & DMA_STATUS_NIS) {972972+ priv->xstats.normal_irq_n++;973973+ if (likely((intr_status & DMA_STATUS_RI) ||974974+ (intr_status & (DMA_STATUS_TI))))975975+ _stmmac_schedule(priv);976976+ }977977+978978+ /* Optional hardware blocks, interrupts should be disabled */979979+ if (unlikely(intr_status &980980+ (DMA_STATUS_GPI | DMA_STATUS_GMI | DMA_STATUS_GLI)))981981+ pr_info("%s: unexpected status %08x\n", __func__, intr_status);982982+983983+ /* Clear the interrupt by writing a logic 1 to the CSR5[15-0] */984984+ writel((intr_status & 0x1ffff), ioaddr + DMA_STATUS);985985+986986+ DBG(intr, INFO, "\n\n");987987+988988+ return;989989+}990990+991991+/**992992+ * stmmac_open - open entry point of the driver993993+ * @dev : pointer to the device structure.994994+ * Description:995995+ * This function is the open entry point of the driver.996996+ * Return value:997997+ * 0 on success and an appropriate (-)ve integer as defined in errno.h998998+ * file on failure.999999+ */10001000+static int stmmac_open(struct net_device *dev)10011001+{10021002+ struct stmmac_priv *priv = netdev_priv(dev);10031003+ unsigned long ioaddr = 
dev->base_addr;10041004+ int ret;10051005+10061006+ /* Check that the MAC address is valid. If its not, refuse10071007+ * to bring the device up. The user must specify an10081008+ * address using the following linux command:10091009+ * ifconfig eth0 hw ether xx:xx:xx:xx:xx:xx */10101010+ if (!is_valid_ether_addr(dev->dev_addr)) {10111011+ random_ether_addr(dev->dev_addr);10121012+ pr_warning("%s: generated random MAC address %pM\n", dev->name,10131013+ dev->dev_addr);10141014+ }10151015+10161016+ stmmac_verify_args();10171017+10181018+ ret = stmmac_init_phy(dev);10191019+ if (unlikely(ret)) {10201020+ pr_err("%s: Cannot attach to PHY (error: %d)\n", __func__, ret);10211021+ return ret;10221022+ }10231023+10241024+ /* Request the IRQ lines */10251025+ ret = request_irq(dev->irq, &stmmac_interrupt,10261026+ IRQF_SHARED, dev->name, dev);10271027+ if (unlikely(ret < 0)) {10281028+ pr_err("%s: ERROR: allocating the IRQ %d (error: %d)\n",10291029+ __func__, dev->irq, ret);10301030+ return ret;10311031+ }10321032+10331033+#ifdef CONFIG_STMMAC_TIMER10341034+ priv->tm = kmalloc(sizeof(struct stmmac_timer *), GFP_KERNEL);10351035+ if (unlikely(priv->tm == NULL)) {10361036+ pr_err("%s: ERROR: timer memory alloc failed \n", __func__);10371037+ return -ENOMEM;10381038+ }10391039+ priv->tm->freq = tmrate;10401040+10411041+ /* Test if the HW timer can be actually used.10421042+ * In case of failure continue with no timer. */10431043+ if (unlikely((stmmac_open_ext_timer(dev, priv->tm)) < 0)) {10441044+ pr_warning("stmmaceth: cannot attach the HW timer\n");10451045+ tmrate = 0;10461046+ priv->tm->freq = 0;10471047+ priv->tm->timer_start = stmmac_no_timer_started;10481048+ priv->tm->timer_stop = stmmac_no_timer_stopped;10491049+ }10501050+#endif10511051+10521052+ /* Create and initialize the TX/RX descriptors chains. 
*/
+    priv->dma_tx_size = STMMAC_ALIGN(dma_txsize);
+    priv->dma_rx_size = STMMAC_ALIGN(dma_rxsize);
+    priv->dma_buf_sz = STMMAC_ALIGN(buf_sz);
+    init_dma_desc_rings(dev);
+
+    /* DMA initialization and SW reset */
+    if (unlikely(priv->mac_type->ops->dma_init(ioaddr,
+        priv->pbl, priv->dma_tx_phy, priv->dma_rx_phy) < 0)) {
+
+        pr_err("%s: DMA initialization failed\n", __func__);
+        return -1;
+    }
+
+    /* Copy the MAC addr into the HW */
+    priv->mac_type->ops->set_umac_addr(ioaddr, dev->dev_addr, 0);
+    /* Initialize the MAC Core */
+    priv->mac_type->ops->core_init(ioaddr);
+
+    priv->shutdown = 0;
+
+    /* Initialise the MMC (if present) to disable all interrupts. */
+    writel(0xffffffff, ioaddr + MMC_HIGH_INTR_MASK);
+    writel(0xffffffff, ioaddr + MMC_LOW_INTR_MASK);
+
+    /* Enable the MAC Rx/Tx */
+    stmmac_mac_enable_rx(ioaddr);
+    stmmac_mac_enable_tx(ioaddr);
+
+    /* Set the HW DMA mode and the COE */
+    stmmac_dma_operation_mode(priv);
+
+    /* Extra statistics */
+    memset(&priv->xstats, 0, sizeof(struct stmmac_extra_stats));
+    priv->xstats.threshold = tc;
+
+    /* Start the ball rolling...
*/
+    DBG(probe, DEBUG, "%s: DMA RX/TX processes started...\n", dev->name);
+    stmmac_dma_start_tx(ioaddr);
+    stmmac_dma_start_rx(ioaddr);
+
+#ifdef CONFIG_STMMAC_TIMER
+    priv->tm->timer_start(tmrate);
+#endif
+    /* Dump DMA/MAC registers */
+    if (netif_msg_hw(priv)) {
+        priv->mac_type->ops->dump_mac_regs(ioaddr);
+        priv->mac_type->ops->dump_dma_regs(ioaddr);
+    }
+
+    if (priv->phydev)
+        phy_start(priv->phydev);
+
+    napi_enable(&priv->napi);
+    skb_queue_head_init(&priv->rx_recycle);
+    netif_start_queue(dev);
+    return 0;
+}
+
+/**
+ *  stmmac_release - close entry point of the driver
+ *  @dev : device pointer.
+ *  Description:
+ *  This is the stop entry point of the driver.
+ */
+static int stmmac_release(struct net_device *dev)
+{
+    struct stmmac_priv *priv = netdev_priv(dev);
+
+    /* Stop and disconnect the PHY */
+    if (priv->phydev) {
+        phy_stop(priv->phydev);
+        phy_disconnect(priv->phydev);
+        priv->phydev = NULL;
+    }
+
+    netif_stop_queue(dev);
+
+#ifdef CONFIG_STMMAC_TIMER
+    /* Stop and release the timer */
+    stmmac_close_ext_timer();
+    if (priv->tm != NULL)
+        kfree(priv->tm);
+#endif
+    napi_disable(&priv->napi);
+    skb_queue_purge(&priv->rx_recycle);
+
+    /* Free the IRQ lines */
+    free_irq(dev->irq, dev);
+
+    /* Stop TX/RX DMA and clear the descriptors */
+    stmmac_dma_stop_tx(dev->base_addr);
+    stmmac_dma_stop_rx(dev->base_addr);
+
+    /* Release and free the Rx/Tx resources */
+    free_dma_desc_resources(priv);
+
+    /* Disable the MAC core */
+    stmmac_mac_disable_tx(dev->base_addr);
+    stmmac_mac_disable_rx(dev->base_addr);
+
+    netif_carrier_off(dev);
+
+    return 0;
+}
+
+/*
+ * To perform emulated hardware segmentation on skb.
+ */
+static int stmmac_sw_tso(struct stmmac_priv *priv, struct sk_buff *skb)
+{
+    struct sk_buff *segs, *curr_skb;
+    int gso_segs = skb_shinfo(skb)->gso_segs;
+
+    /* Estimate the number of fragments in the worst case */
+    if (unlikely(stmmac_tx_avail(priv) < gso_segs)) {
+        netif_stop_queue(priv->dev);
+        TX_DBG(KERN_ERR "%s: TSO BUG! Tx Ring full when queue awake\n",
+               __func__);
+        if (stmmac_tx_avail(priv) < gso_segs)
+            return NETDEV_TX_BUSY;
+
+        netif_wake_queue(priv->dev);
+    }
+    TX_DBG("\tstmmac_sw_tso: segmenting: skb %p (len %d)\n",
+           skb, skb->len);
+
+    segs = skb_gso_segment(skb, priv->dev->features & ~NETIF_F_TSO);
+    if (unlikely(IS_ERR(segs)))
+        goto sw_tso_end;
+
+    do {
+        curr_skb = segs;
+        segs = segs->next;
+        TX_DBG("\t\tcurrent skb->len: %d, *curr %p, "
+               "*next %p\n", curr_skb->len, curr_skb, segs);
+        curr_skb->next = NULL;
+        stmmac_xmit(curr_skb, priv->dev);
+    } while (segs);
+
+sw_tso_end:
+    dev_kfree_skb(skb);
+
+    return NETDEV_TX_OK;
+}
+
+static unsigned int stmmac_handle_jumbo_frames(struct sk_buff *skb,
+                                               struct net_device *dev,
+                                               int csum_insertion)
+{
+    struct stmmac_priv *priv = netdev_priv(dev);
+    unsigned int nopaged_len = skb_headlen(skb);
+    unsigned int txsize = priv->dma_tx_size;
+    unsigned int entry = priv->cur_tx % txsize;
+    struct dma_desc *desc = priv->dma_tx + entry;
+
+    if (nopaged_len > BUF_SIZE_8KiB) {
+
+        int buf2_size = nopaged_len - BUF_SIZE_8KiB;
+
+        desc->des2 =
dma_map_single(priv->device, skb->data,
+                           BUF_SIZE_8KiB, DMA_TO_DEVICE);
+        desc->des3 = desc->des2 + BUF_SIZE_4KiB;
+        priv->mac_type->ops->prepare_tx_desc(desc, 1, BUF_SIZE_8KiB,
+                                             csum_insertion);
+
+        entry = (++priv->cur_tx) % txsize;
+        desc = priv->dma_tx + entry;
+
+        desc->des2 = dma_map_single(priv->device,
+                                    skb->data + BUF_SIZE_8KiB,
+                                    buf2_size, DMA_TO_DEVICE);
+        desc->des3 = desc->des2 + BUF_SIZE_4KiB;
+        priv->mac_type->ops->prepare_tx_desc(desc, 0,
+                                             buf2_size, csum_insertion);
+        priv->mac_type->ops->set_tx_owner(desc);
+        priv->tx_skbuff[entry] = NULL;
+    } else {
+        desc->des2 = dma_map_single(priv->device, skb->data,
+                                    nopaged_len, DMA_TO_DEVICE);
+        desc->des3 = desc->des2 + BUF_SIZE_4KiB;
+        priv->mac_type->ops->prepare_tx_desc(desc, 1, nopaged_len,
+                                             csum_insertion);
+    }
+    return entry;
+}
+
+/**
+ *  stmmac_xmit:
+ *  @skb : the socket buffer
+ *  @dev : device pointer
+ *  Description : Tx entry point of the driver.
+ */
+static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+    struct stmmac_priv *priv = netdev_priv(dev);
+    unsigned int txsize = priv->dma_tx_size;
+    unsigned int entry;
+    int i, csum_insertion = 0;
+    int nfrags = skb_shinfo(skb)->nr_frags;
+    struct dma_desc *desc, *first;
+
+    if (unlikely(stmmac_tx_avail(priv) < nfrags + 1)) {
+        if (!netif_queue_stopped(dev)) {
+            netif_stop_queue(dev);
+            /* This is a hard error, log it. */
+            pr_err("%s: BUG! Tx Ring full when queue awake\n",
+                   __func__);
+        }
+        return NETDEV_TX_BUSY;
+    }
+
+    entry = priv->cur_tx % txsize;
+
+#ifdef STMMAC_XMIT_DEBUG
+    if ((skb->len > ETH_FRAME_LEN) || nfrags)
+        pr_info("stmmac xmit:\n"
+                "\tskb addr %p - len: %d - nopaged_len: %d\n"
+                "\tn_frags: %d - ip_summed: %d - %s gso\n",
+                skb, skb->len, skb_headlen(skb), nfrags, skb->ip_summed,
+                !skb_is_gso(skb) ? "isn't" : "is");
+#endif
+
+    if (unlikely(skb_is_gso(skb)))
+        return stmmac_sw_tso(priv, skb);
+
+    if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
+        if (likely(priv->tx_coe == NO_HW_CSUM))
+            skb_checksum_help(skb);
+        else
+            csum_insertion = 1;
+    }
+
+    desc = priv->dma_tx + entry;
+    first = desc;
+
+#ifdef STMMAC_XMIT_DEBUG
+    if ((nfrags > 0) || (skb->len > ETH_FRAME_LEN))
+        pr_debug("stmmac xmit: skb len: %d, nopaged_len: %d,\n"
+                 "\t\tn_frags: %d, ip_summed: %d\n",
+                 skb->len, skb_headlen(skb), nfrags, skb->ip_summed);
+#endif
+    priv->tx_skbuff[entry] = skb;
+    if (unlikely(skb->len >= BUF_SIZE_4KiB)) {
+        entry = stmmac_handle_jumbo_frames(skb, dev, csum_insertion);
+        desc = priv->dma_tx + entry;
+    } else {
+        unsigned int nopaged_len = skb_headlen(skb);
+        desc->des2 = dma_map_single(priv->device, skb->data,
+                                    nopaged_len, DMA_TO_DEVICE);
+        priv->mac_type->ops->prepare_tx_desc(desc, 1, nopaged_len,
+                                             csum_insertion);
+    }
+
+    for (i = 0; i < nfrags; i++) {
+        skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+        int len = frag->size;
+
+        entry = (++priv->cur_tx) % txsize;
+        desc = priv->dma_tx + entry;
+
+        TX_DBG("\t[entry %d] segment len: %d\n", entry, len);
+        desc->des2 = dma_map_page(priv->device,
frag->page,
+                                  frag->page_offset,
+                                  len, DMA_TO_DEVICE);
+        priv->tx_skbuff[entry] = NULL;
+        priv->mac_type->ops->prepare_tx_desc(desc, 0, len,
+                                             csum_insertion);
+        priv->mac_type->ops->set_tx_owner(desc);
+    }
+
+    /* Interrupt on completion only for the latest segment */
+    priv->mac_type->ops->close_tx_desc(desc);
+#ifdef CONFIG_STMMAC_TIMER
+    /* Clean IC while using timers */
+    priv->mac_type->ops->clear_tx_ic(desc);
+#endif
+    /* To avoid a race condition */
+    priv->mac_type->ops->set_tx_owner(first);
+
+    priv->cur_tx++;
+
+#ifdef STMMAC_XMIT_DEBUG
+    if (netif_msg_pktdata(priv)) {
+        pr_info("stmmac xmit: current=%d, dirty=%d, entry=%d, "
+                "first=%p, nfrags=%d\n",
+                (priv->cur_tx % txsize), (priv->dirty_tx % txsize),
+                entry, first, nfrags);
+        display_ring(priv->dma_tx, txsize);
+        pr_info(">>> frame to be transmitted: ");
+        print_pkt(skb->data, skb->len);
+    }
+#endif
+    if (unlikely(stmmac_tx_avail(priv) <= (MAX_SKB_FRAGS + 1))) {
+        TX_DBG("%s: stop transmitted packets\n", __func__);
+        netif_stop_queue(dev);
+    }
+
+    dev->stats.tx_bytes += skb->len;
+
+    /* CSR1 enables the transmit DMA to check for new descriptors */
+    writel(1, dev->base_addr + DMA_XMT_POLL_DEMAND);
+
+    return NETDEV_TX_OK;
+}
+
+static inline void stmmac_rx_refill(struct stmmac_priv *priv)
+{
+    unsigned int rxsize = priv->dma_rx_size;
+    int bfsize = priv->dma_buf_sz;
+    struct dma_desc *p = priv->dma_rx;
+
+    for (; priv->cur_rx - priv->dirty_rx > 0; priv->dirty_rx++) {
+        unsigned int entry = priv->dirty_rx % rxsize;
+        if (likely(priv->rx_skbuff[entry] == NULL)) {
+            struct sk_buff *skb;
+
+            skb =
__skb_dequeue(&priv->rx_recycle);
+            if (skb == NULL)
+                skb = netdev_alloc_skb_ip_align(priv->dev,
+                                                bfsize);
+
+            if (unlikely(skb == NULL))
+                break;
+
+            priv->rx_skbuff[entry] = skb;
+            priv->rx_skbuff_dma[entry] =
+                dma_map_single(priv->device, skb->data, bfsize,
+                               DMA_FROM_DEVICE);
+
+            (p + entry)->des2 = priv->rx_skbuff_dma[entry];
+            if (unlikely(priv->is_gmac)) {
+                if (bfsize >= BUF_SIZE_8KiB)
+                    (p + entry)->des3 =
+                        (p + entry)->des2 + BUF_SIZE_8KiB;
+            }
+            RX_DBG(KERN_INFO "\trefill entry #%d\n", entry);
+        }
+        priv->mac_type->ops->set_rx_owner(p + entry);
+    }
+    return;
+}
+
+static int stmmac_rx(struct stmmac_priv *priv, int limit)
+{
+    unsigned int rxsize = priv->dma_rx_size;
+    unsigned int entry = priv->cur_rx % rxsize;
+    unsigned int next_entry;
+    unsigned int count = 0;
+    struct dma_desc *p = priv->dma_rx + entry;
+    struct dma_desc *p_next;
+
+#ifdef STMMAC_RX_DEBUG
+    if (netif_msg_hw(priv)) {
+        pr_debug(">>> stmmac_rx: descriptor ring:\n");
+        display_ring(priv->dma_rx, rxsize);
+    }
+#endif
+    while (!priv->mac_type->ops->get_rx_owner(p)) {
+        int status;
+
+        if (count >= limit)
+            break;
+
+        count++;
+
+        next_entry = (++priv->cur_rx) % rxsize;
+        p_next = priv->dma_rx + next_entry;
+        prefetch(p_next);
+
+        /* read the status of the incoming frame */
+        status = (priv->mac_type->ops->rx_status(&priv->dev->stats,
+                                                 &priv->xstats, p));
+        if (unlikely(status == discard_frame))
+            priv->dev->stats.rx_errors++;
+        else {
+            struct sk_buff *skb;
+            /* Length should omit the CRC */
+            int frame_len =
priv->mac_type->ops->get_rx_frame_len(p) - 4;
+
+#ifdef STMMAC_RX_DEBUG
+            if (frame_len > ETH_FRAME_LEN)
+                pr_debug("\tRX frame size %d, COE status: %d\n",
+                         frame_len, status);
+
+            if (netif_msg_hw(priv))
+                pr_debug("\tdesc: %p [entry %d] buff=0x%x\n",
+                         p, entry, p->des2);
+#endif
+            skb = priv->rx_skbuff[entry];
+            if (unlikely(!skb)) {
+                pr_err("%s: Inconsistent Rx descriptor chain\n",
+                       priv->dev->name);
+                priv->dev->stats.rx_dropped++;
+                break;
+            }
+            prefetch(skb->data - NET_IP_ALIGN);
+            priv->rx_skbuff[entry] = NULL;
+
+            skb_put(skb, frame_len);
+            dma_unmap_single(priv->device,
+                             priv->rx_skbuff_dma[entry],
+                             priv->dma_buf_sz, DMA_FROM_DEVICE);
+#ifdef STMMAC_RX_DEBUG
+            if (netif_msg_pktdata(priv)) {
+                pr_info(" frame received (%d bytes)", frame_len);
+                print_pkt(skb->data, frame_len);
+            }
+#endif
+            skb->protocol = eth_type_trans(skb, priv->dev);
+
+            if (unlikely(status == csum_none)) {
+                /* always for the old mac 10/100 */
+                skb->ip_summed = CHECKSUM_NONE;
+                netif_receive_skb(skb);
+            } else {
+                skb->ip_summed = CHECKSUM_UNNECESSARY;
+                napi_gro_receive(&priv->napi, skb);
+            }
+
+            priv->dev->stats.rx_packets++;
+            priv->dev->stats.rx_bytes += frame_len;
+            priv->dev->last_rx = jiffies;
+        }
+        entry = next_entry;
+        p = p_next;    /* use prefetched values */
+    }
+
+    stmmac_rx_refill(priv);
+
+    priv->xstats.rx_pkt_n += count;
+
+    return count;
+}
+
+/**
+ *  stmmac_poll - stmmac poll method (NAPI)
+ *  @napi : pointer to the napi structure.
+ *  @budget : maximum number of packets that the current CPU can receive from
+ *            all interfaces.
+ *  Description :
+ *  This function implements the reception process.
+ *  Also it runs the TX completion thread.
+ */
+static int stmmac_poll(struct napi_struct *napi, int budget)
+{
+    struct stmmac_priv *priv = container_of(napi, struct stmmac_priv, napi);
+    int work_done = 0;
+
+    priv->xstats.poll_n++;
+    stmmac_tx(priv);
+    work_done = stmmac_rx(priv, budget);
+
+    if (work_done < budget) {
+        napi_complete(napi);
+        stmmac_enable_irq(priv);
+    }
+    return work_done;
+}
+
+/**
+ *  stmmac_tx_timeout
+ *  @dev : Pointer to net device structure
+ *  Description: this function is called when a packet transmission fails to
+ *  complete within a reasonable time. The driver will mark the error in the
+ *  netdev structure and arrange for the device to be reset to a sane state
+ *  in order to transmit a new packet.
+ */
+static void stmmac_tx_timeout(struct net_device *dev)
+{
+    struct stmmac_priv *priv = netdev_priv(dev);
+
+    /* Clear Tx resources and restart transmitting again */
+    stmmac_tx_err(priv);
+    return;
+}
+
+/* Configuration changes (passed on by ifconfig) */
+static int stmmac_config(struct net_device *dev, struct ifmap *map)
+{
+    if (dev->flags & IFF_UP)    /* can't act on a running interface */
+        return -EBUSY;
+
+    /* Don't allow changing the I/O address */
+    if (map->base_addr != dev->base_addr) {
+        pr_warning("%s: can't change I/O address\n", dev->name);
+        return -EOPNOTSUPP;
+    }
+
+    /* Don't allow changing the IRQ */
+    if (map->irq != dev->irq) {
+        pr_warning("%s: can't change IRQ number %d\n",
+                   dev->name, dev->irq);
+        return -EOPNOTSUPP;
+    }
+
+    /* ignore other fields
*/
+    return 0;
+}
+
+/**
+ *  stmmac_multicast_list - entry point for multicast addressing
+ *  @dev : pointer to the device structure
+ *  Description:
+ *  This function is a driver entry point which gets called by the kernel
+ *  whenever multicast addresses must be enabled/disabled.
+ *  Return value:
+ *  void.
+ */
+static void stmmac_multicast_list(struct net_device *dev)
+{
+    struct stmmac_priv *priv = netdev_priv(dev);
+
+    spin_lock(&priv->lock);
+    priv->mac_type->ops->set_filter(dev);
+    spin_unlock(&priv->lock);
+    return;
+}
+
+/**
+ *  stmmac_change_mtu - entry point to change MTU size for the device.
+ *  @dev : device pointer.
+ *  @new_mtu : the new MTU size for the device.
+ *  Description: the Maximum Transfer Unit (MTU) is used by the network layer
+ *  to drive packet transmission. Ethernet has an MTU of 1500 octets
+ *  (ETH_DATA_LEN). This value can be changed with ifconfig.
+ *  Return value:
+ *  0 on success and an appropriate (-)ve integer as defined in errno.h
+ *  file on failure.
+ */
+static int stmmac_change_mtu(struct net_device *dev, int new_mtu)
+{
+    struct stmmac_priv *priv = netdev_priv(dev);
+    int max_mtu;
+
+    if (netif_running(dev)) {
+        pr_err("%s: must be stopped to change its MTU\n", dev->name);
+        return -EBUSY;
+    }
+
+    if (priv->is_gmac)
+        max_mtu = JUMBO_LEN;
+    else
+        max_mtu = ETH_DATA_LEN;
+
+    if ((new_mtu < 46) || (new_mtu > max_mtu)) {
+        pr_err("%s: invalid MTU, max MTU is: %d\n", dev->name, max_mtu);
+        return -EINVAL;
+    }
+
+    dev->mtu = new_mtu;
+
+    return 0;
+}
+
+static irqreturn_t stmmac_interrupt(int irq, void *dev_id)
+{
+    struct net_device *dev = (struct net_device *)dev_id;
+    struct stmmac_priv *priv = netdev_priv(dev);
+
+    if (unlikely(!dev)) {
+        pr_err("%s: invalid dev pointer\n", __func__);
+        return IRQ_NONE;
+    }
+
+    if (priv->is_gmac) {
+        unsigned long ioaddr = dev->base_addr;
+        /* To handle GMAC own interrupts */
+        priv->mac_type->ops->host_irq_status(ioaddr);
+    }
+    stmmac_dma_interrupt(dev);
+
+    return IRQ_HANDLED;
+}
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+/* Polling receive - used by NETCONSOLE and other diagnostic tools
+ * to allow network I/O with interrupts disabled.
*/
+static void stmmac_poll_controller(struct net_device *dev)
+{
+    disable_irq(dev->irq);
+    stmmac_interrupt(dev->irq, dev);
+    enable_irq(dev->irq);
+}
+#endif
+
+/**
+ *  stmmac_ioctl - Entry point for the Ioctl
+ *  @dev: Device pointer.
+ *  @rq: An IOCTL specific structure, that can contain a pointer to
+ *  a proprietary structure used to pass information to the driver.
+ *  @cmd: IOCTL command
+ *  Description:
+ *  Currently no special functionality is supported in IOCTL, just
+ *  phy_mii_ioctl(...) can be invoked.
+ */
+static int stmmac_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+    struct stmmac_priv *priv = netdev_priv(dev);
+    int ret = -EOPNOTSUPP;
+
+    if (!netif_running(dev))
+        return -EINVAL;
+
+    switch (cmd) {
+    case SIOCGMIIPHY:
+    case SIOCGMIIREG:
+    case SIOCSMIIREG:
+        if (!priv->phydev)
+            return -EINVAL;
+
+        spin_lock(&priv->lock);
+        ret = phy_mii_ioctl(priv->phydev, if_mii(rq), cmd);
+        spin_unlock(&priv->lock);
+        break;
+    default:
+        break;
+    }
+    return ret;
+}
+
+#ifdef STMMAC_VLAN_TAG_USED
+static void stmmac_vlan_rx_register(struct net_device *dev,
+                                    struct vlan_group *grp)
+{
+    struct stmmac_priv *priv = netdev_priv(dev);
+
+    DBG(probe, INFO, "%s: Setting vlgrp to %p\n", dev->name, grp);
+
+    spin_lock(&priv->lock);
+    priv->vlgrp = grp;
+    spin_unlock(&priv->lock);
+
+    return;
+}
+#endif
+
+static const struct net_device_ops stmmac_netdev_ops = {
+    .ndo_open = stmmac_open,
+    .ndo_start_xmit = stmmac_xmit,
+    .ndo_stop = stmmac_release,
+    .ndo_change_mtu = stmmac_change_mtu,
.ndo_set_multicast_list = stmmac_multicast_list,
+    .ndo_tx_timeout = stmmac_tx_timeout,
+    .ndo_do_ioctl = stmmac_ioctl,
+    .ndo_set_config = stmmac_config,
+#ifdef STMMAC_VLAN_TAG_USED
+    .ndo_vlan_rx_register = stmmac_vlan_rx_register,
+#endif
+#ifdef CONFIG_NET_POLL_CONTROLLER
+    .ndo_poll_controller = stmmac_poll_controller,
+#endif
+    .ndo_set_mac_address = eth_mac_addr,
+};
+
+/**
+ *  stmmac_probe - Initialization of the adapter.
+ *  @dev : device pointer
+ *  Description: The function initializes the network device structure for
+ *  the STMMAC driver. It also calls the low level routines
+ *  in order to init the HW (i.e. the DMA engine).
+ */
+static int stmmac_probe(struct net_device *dev)
+{
+    int ret = 0;
+    struct stmmac_priv *priv = netdev_priv(dev);
+
+    ether_setup(dev);
+
+    dev->netdev_ops = &stmmac_netdev_ops;
+    stmmac_set_ethtool_ops(dev);
+
+    dev->features |= (NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_HIGHDMA);
+    dev->watchdog_timeo = msecs_to_jiffies(watchdog);
+#ifdef STMMAC_VLAN_TAG_USED
+    /* Both mac100 and gmac support receive VLAN tag detection */
+    dev->features |= NETIF_F_HW_VLAN_RX;
+#endif
+    priv->msg_enable = netif_msg_init(debug, default_msg_level);
+
+    if (priv->is_gmac)
+        priv->rx_csum = 1;
+
+    if (flow_ctrl)
+        priv->flow_ctrl = FLOW_AUTO;    /* RX/TX pause on */
+
+    priv->pause = pause;
+    netif_napi_add(dev, &priv->napi, stmmac_poll, 64);
+
+    /* Get the MAC address */
+    priv->mac_type->ops->get_umac_addr(dev->base_addr, dev->dev_addr, 0);
+
+    if (!is_valid_ether_addr(dev->dev_addr))
+        pr_warning("\tno valid MAC address; "
+                   "please, use ifconfig or nwhwconfig!\n");
+
+    ret = register_netdev(dev);
+    if (ret) {
+        pr_err("%s: ERROR %i registering the device\n",
+               __func__, ret);
+        return -ENODEV;
+    }
+
+    DBG(probe, DEBUG, "%s: Scatter/Gather: %s - HW checksums: %s\n",
+        dev->name, (dev->features & NETIF_F_SG) ? "on" : "off",
+        (dev->features & NETIF_F_HW_CSUM) ? "on" : "off");
+
+    spin_lock_init(&priv->lock);
+
+    return ret;
+}
+
+/**
+ *  stmmac_mac_device_setup
+ *  @dev : device pointer
+ *  Description: select and initialise the mac device (mac100 or Gmac).
+ */
+static int stmmac_mac_device_setup(struct net_device *dev)
+{
+    struct stmmac_priv *priv = netdev_priv(dev);
+    unsigned long ioaddr = dev->base_addr;
+
+    struct mac_device_info *device;
+
+    if (priv->is_gmac)
+        device = gmac_setup(ioaddr);
+    else
+        device = mac100_setup(ioaddr);
+
+    if (!device)
+        return -ENOMEM;
+
+    priv->mac_type = device;
+
+    priv->wolenabled = priv->mac_type->hw.pmt;    /* PMT supported */
+    if (priv->wolenabled == PMT_SUPPORTED)
+        priv->wolopts = WAKE_MAGIC;    /* Magic Frame */
+
+    return 0;
+}
+
+static int stmmacphy_dvr_probe(struct platform_device *pdev)
+{
+    struct plat_stmmacphy_data *plat_dat;
+    plat_dat = (struct plat_stmmacphy_data *)((pdev->dev).platform_data);
+
+    pr_debug("stmmacphy_dvr_probe: added phy for bus %d\n",
+             plat_dat->bus_id);
+
+    return 0;
+}
+
+static int stmmacphy_dvr_remove(struct platform_device *pdev)
+{
+    return 0;
+}
+
+static struct platform_driver stmmacphy_driver = {
+    .driver = {
+        .name = PHY_RESOURCE_NAME,
+    },
+    .probe = stmmacphy_dvr_probe,
+    .remove =
stmmacphy_dvr_remove,
+};
+
+/**
+ *  stmmac_associate_phy
+ *  @dev: pointer to device structure
+ *  @data: points to the private structure.
+ *  Description: Scans through all the PHYs we have registered and checks if
+ *  any are associated with our MAC.  If so, then just fill in
+ *  the blanks in our local context structure
+ */
+static int stmmac_associate_phy(struct device *dev, void *data)
+{
+    struct stmmac_priv *priv = (struct stmmac_priv *)data;
+    struct plat_stmmacphy_data *plat_dat;
+
+    plat_dat = (struct plat_stmmacphy_data *)(dev->platform_data);
+
+    DBG(probe, DEBUG, "%s: checking phy for bus %d\n", __func__,
+        plat_dat->bus_id);
+
+    /* Check that this phy is for the MAC being initialised */
+    if (priv->bus_id != plat_dat->bus_id)
+        return 0;
+
+    /* OK, this PHY is connected to the MAC.
+       Go ahead and get the parameters */
+    DBG(probe, DEBUG, "%s: OK. Found PHY config\n", __func__);
+    priv->phy_irq =
+        platform_get_irq_byname(to_platform_device(dev), "phyirq");
+    DBG(probe, DEBUG, "%s: PHY irq on bus %d is %d\n", __func__,
+        plat_dat->bus_id, priv->phy_irq);
+
+    /* Override with kernel parameters if supplied XXX CRS XXX
+     * this needs to have multiple instances */
+    if ((phyaddr >= 0) && (phyaddr <= 31))
+        plat_dat->phy_addr = phyaddr;
+
+    priv->phy_addr = plat_dat->phy_addr;
+    priv->phy_mask = plat_dat->phy_mask;
+    priv->phy_interface = plat_dat->interface;
+    priv->phy_reset = plat_dat->phy_reset;
+
+    DBG(probe, DEBUG, "%s: exiting\n", __func__);
+    return 1;    /* forces exit of driver_for_each_device() */
+}
+
+/**
+ *  stmmac_dvr_probe
+ *  @pdev: platform device pointer
+ *  Description: the driver is initialized through platform_device.
+ */
+static int stmmac_dvr_probe(struct platform_device *pdev)
+{
+    int ret = 0;
+    struct resource *res;
+    unsigned int *addr = NULL;
+    struct net_device *ndev = NULL;
+    struct stmmac_priv *priv;
+    struct plat_stmmacenet_data *plat_dat;
+
+    pr_info("STMMAC driver:\n\tplatform registration... ");
+    res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+    if (!res) {
+        ret = -ENODEV;
+        goto out;
+    }
+    pr_info("done!\n");
+
+    if (!request_mem_region(res->start, (res->end - res->start),
+                            pdev->name)) {
+        pr_err("%s: ERROR: memory allocation failed, "
+               "cannot get the I/O addr 0x%x\n",
+               __func__, (unsigned int)res->start);
+        ret = -EBUSY;
+        goto out;
+    }
+
+    addr = ioremap(res->start, (res->end - res->start));
+    if (!addr) {
+        pr_err("%s: ERROR: memory mapping failed\n", __func__);
+        ret = -ENOMEM;
+        goto out;
+    }
+
+    ndev = alloc_etherdev(sizeof(struct stmmac_priv));
+    if (!ndev) {
+        pr_err("%s: ERROR: allocating the device\n", __func__);
+        ret = -ENOMEM;
+        goto out;
+    }
+
+    SET_NETDEV_DEV(ndev, &pdev->dev);
+
+    /* Get the MAC information */
+    ndev->irq = platform_get_irq_byname(pdev, "macirq");
+    if (ndev->irq == -ENXIO) {
+        pr_err("%s: ERROR: MAC IRQ configuration "
+               "information not found\n", __func__);
+        ret = -ENODEV;
+        goto out;
+    }
+
+    priv = netdev_priv(ndev);
+    priv->device = &(pdev->dev);
+    priv->dev = ndev;
+    plat_dat = (struct plat_stmmacenet_data *)((pdev->dev).platform_data);
+    priv->bus_id = plat_dat->bus_id;
+    priv->pbl = plat_dat->pbl;    /* TLI */
+    priv->is_gmac = plat_dat->has_gmac;    /* GMAC is on board */
+
+    platform_set_drvdata(pdev, ndev);
+
+    /* Set the I/O base addr */
+    ndev->base_addr = (unsigned long)addr;
+
+    /* MAC HW device detection */
+    ret = stmmac_mac_device_setup(ndev);
+    if (ret < 0)
+        goto out;
+
+    /* Network Device Registration */
+    ret = stmmac_probe(ndev);
+    if (ret < 0)
+        goto out;
+
+    /* associate a PHY - it is provided by another platform bus */
+    if (!driver_for_each_device
+        (&(stmmacphy_driver.driver), NULL, (void *)priv,
+         stmmac_associate_phy)) {
+        pr_err("No PHY device is associated with this MAC!\n");
+        ret = -ENODEV;
+        goto out;
+    }
+
+    priv->fix_mac_speed = plat_dat->fix_mac_speed;
+    priv->bsp_priv = plat_dat->bsp_priv;
+
+    pr_info("\t%s - (dev. name: %s - id: %d, IRQ #%d\n"
+            "\tIO base addr: 0x%08x)\n", ndev->name, pdev->name,
+            pdev->id, ndev->irq, (unsigned int)addr);
+
+    /* MDIO bus Registration */
+    pr_debug("\tMDIO bus (id: %d)...", priv->bus_id);
+    ret = stmmac_mdio_register(ndev);
+    if (ret < 0)
+        goto out;
+    pr_debug("registered!\n");
+
+out:
+    if (ret < 0) {
+        platform_set_drvdata(pdev, NULL);
+        release_mem_region(res->start, (res->end - res->start));
+        if (addr != NULL)
+            iounmap(addr);
+    }
+
+    return ret;
+}
+
+/**
+ *  stmmac_dvr_remove
+ *  @pdev: platform device pointer
+ *  Description: this function resets the TX/RX processes, disables the MAC RX/TX,
+ *  changes the link status, releases the DMA descriptor rings,
+ *  unregisters the MDIO bus and unmaps the allocated memory.
+ */
+static int stmmac_dvr_remove(struct platform_device *pdev)
+{
+    struct net_device *ndev = platform_get_drvdata(pdev);
+    struct resource *res;
+
+    pr_info("%s:\n\tremoving driver\n", __func__);
+
+    stmmac_dma_stop_rx(ndev->base_addr);
+    stmmac_dma_stop_tx(ndev->base_addr);
+
+    stmmac_mac_disable_rx(ndev->base_addr);
+    stmmac_mac_disable_tx(ndev->base_addr);
+
+    netif_carrier_off(ndev);
+
+    stmmac_mdio_unregister(ndev);
+
platform_set_drvdata(pdev, NULL);20002000+ unregister_netdev(ndev);20012001+20022002+ iounmap((void *)ndev->base_addr);20032003+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);20042004+ release_mem_region(res->start, (res->end - res->start));20052005+20062006+ free_netdev(ndev);20072007+20082008+ return 0;20092009+}20102010+20112011+#ifdef CONFIG_PM20122012+static int stmmac_suspend(struct platform_device *pdev, pm_message_t state)20132013+{20142014+ struct net_device *dev = platform_get_drvdata(pdev);20152015+ struct stmmac_priv *priv = netdev_priv(dev);20162016+ int dis_ic = 0;20172017+20182018+ if (!dev || !netif_running(dev))20192019+ return 0;20202020+20212021+ spin_lock(&priv->lock);20222022+20232023+ if (state.event == PM_EVENT_SUSPEND) {20242024+ netif_device_detach(dev);20252025+ netif_stop_queue(dev);20262026+ if (priv->phydev)20272027+ phy_stop(priv->phydev);20282028+20292029+#ifdef CONFIG_STMMAC_TIMER20302030+ priv->tm->timer_stop();20312031+ dis_ic = 1;20322032+#endif20332033+ napi_disable(&priv->napi);20342034+20352035+ /* Stop TX/RX DMA */20362036+ stmmac_dma_stop_tx(dev->base_addr);20372037+ stmmac_dma_stop_rx(dev->base_addr);20382038+ /* Clear the Rx/Tx descriptors */20392039+ priv->mac_type->ops->init_rx_desc(priv->dma_rx,20402040+ priv->dma_rx_size, dis_ic);20412041+ priv->mac_type->ops->init_tx_desc(priv->dma_tx,20422042+ priv->dma_tx_size);20432043+20442044+ stmmac_mac_disable_tx(dev->base_addr);20452045+20462046+ if (device_may_wakeup(&(pdev->dev))) {20472047+ /* Enable Power down mode by programming the PMT regs */20482048+ if (priv->wolenabled == PMT_SUPPORTED)20492049+ priv->mac_type->ops->pmt(dev->base_addr,20502050+ priv->wolopts);20512051+ } else {20522052+ stmmac_mac_disable_rx(dev->base_addr);20532053+ }20542054+ } else {20552055+ priv->shutdown = 1;20562056+ /* Although this can appear slightly redundant it actually20572057+ * makes fast the standby operation and guarantees the driver20582058+ * working if hibernation is on 
media. */20592059+ stmmac_release(dev);20602060+ }20612061+20622062+ spin_unlock(&priv->lock);20632063+ return 0;20642064+}20652065+20662066+static int stmmac_resume(struct platform_device *pdev)20672067+{20682068+ struct net_device *dev = platform_get_drvdata(pdev);20692069+ struct stmmac_priv *priv = netdev_priv(dev);20702070+ unsigned long ioaddr = dev->base_addr;20712071+20722072+ if (!netif_running(dev))20732073+ return 0;20742074+20752075+ spin_lock(&priv->lock);20762076+20772077+ if (priv->shutdown) {20782078+ /* Re-open the interface and re-init the MAC/DMA20792079+ and the rings. */20802080+ stmmac_open(dev);20812081+ goto out_resume;20822082+ }20832083+20842084+ /* Power Down bit, in the PM register, is cleared20852085+ * automatically as soon as a magic packet or a Wake-up frame20862086+ * is received. Anyway, it's better to manually clear20872087+ * this bit because it can generate problems while resuming20882088+ * from other devices (e.g. serial console). */20892089+ if (device_may_wakeup(&(pdev->dev)))20902090+ if (priv->wolenabled == PMT_SUPPORTED)20912091+ priv->mac_type->ops->pmt(dev->base_addr, 0);20922092+20932093+ netif_device_attach(dev);20942094+20952095+ /* Enable the MAC and DMA */20962096+ stmmac_mac_enable_rx(ioaddr);20972097+ stmmac_mac_enable_tx(ioaddr);20982098+ stmmac_dma_start_tx(ioaddr);20992099+ stmmac_dma_start_rx(ioaddr);21002100+21012101+#ifdef CONFIG_STMMAC_TIMER21022102+ priv->tm->timer_start(tmrate);21032103+#endif21042104+ napi_enable(&priv->napi);21052105+21062106+ if (priv->phydev)21072107+ phy_start(priv->phydev);21082108+21092109+ netif_start_queue(dev);21102110+21112111+out_resume:21122112+ spin_unlock(&priv->lock);21132113+ return 0;21142114+}21152115+#endif21162116+21172117+static struct platform_driver stmmac_driver = {21182118+ .driver = {21192119+ .name = STMMAC_RESOURCE_NAME,21202120+ },21212121+ .probe = stmmac_dvr_probe,21222122+ .remove = stmmac_dvr_remove,21232123+#ifdef CONFIG_PM21242124+ .suspend = 
stmmac_suspend,21252125+ .resume = stmmac_resume,21262126+#endif21272127+21282128+};21292129+21302130+/**21312131+ * stmmac_init_module - Entry point for the driver21322132+ * Description: This function is the entry point for the driver.21332133+ */21342134+static int __init stmmac_init_module(void)21352135+{21362136+ int ret;21372137+21382138+ if (platform_driver_register(&stmmacphy_driver)) {21392139+ pr_err("No PHY devices registered!\n");21402140+ return -ENODEV;21412141+ }21422142+21432143+ ret = platform_driver_register(&stmmac_driver);21442144+ return ret;21452145+}21462146+21472147+/**21482148+ * stmmac_cleanup_module - Cleanup routine for the driver21492149+ * Description: This function is the cleanup routine for the driver.21502150+ */21512151+static void __exit stmmac_cleanup_module(void)21522152+{21532153+ platform_driver_unregister(&stmmacphy_driver);21542154+ platform_driver_unregister(&stmmac_driver);21552155+}21562156+21572157+#ifndef MODULE21582158+static int __init stmmac_cmdline_opt(char *str)21592159+{21602160+ char *opt;21612161+21622162+ if (!str || !*str)21632163+ return -EINVAL;21642164+ while ((opt = strsep(&str, ",")) != NULL) {21652165+ if (!strncmp(opt, "debug:", 6))21662166+ strict_strtoul(opt + 6, 0, (unsigned long *)&debug);21672167+ else if (!strncmp(opt, "phyaddr:", 8))21682168+ strict_strtoul(opt + 8, 0, (unsigned long *)&phyaddr);21692169+ else if (!strncmp(opt, "dma_txsize:", 11))21702170+ strict_strtoul(opt + 11, 0,21712171+ (unsigned long *)&dma_txsize);21722172+ else if (!strncmp(opt, "dma_rxsize:", 11))21732173+ strict_strtoul(opt + 11, 0,21742174+ (unsigned long *)&dma_rxsize);21752175+ else if (!strncmp(opt, "buf_sz:", 7))21762176+ strict_strtoul(opt + 7, 0, (unsigned long *)&buf_sz);21772177+ else if (!strncmp(opt, "tc:", 3))21782178+ strict_strtoul(opt + 3, 0, (unsigned long *)&tc);21792179+ else if (!strncmp(opt, "tx_coe:", 7))21802180+ strict_strtoul(opt + 7, 0, (unsigned long *)&tx_coe);21812181+ else if (!strncmp(opt, 
"watchdog:", 9))21822182+ strict_strtoul(opt + 9, 0, (unsigned long *)&watchdog);21832183+ else if (!strncmp(opt, "flow_ctrl:", 10))21842184+ strict_strtoul(opt + 10, 0,21852185+ (unsigned long *)&flow_ctrl);21862186+ else if (!strncmp(opt, "pause:", 6))21872187+ strict_strtoul(opt + 6, 0, (unsigned long *)&pause);21882188+#ifdef CONFIG_STMMAC_TIMER21892189+ else if (!strncmp(opt, "tmrate:", 7))21902190+ strict_strtoul(opt + 7, 0, (unsigned long *)&tmrate);21912191+#endif21922192+ }21932193+ return 0;21942194+}21952195+21962196+__setup("stmmaceth=", stmmac_cmdline_opt);21972197+#endif21982198+21992199+module_init(stmmac_init_module);22002200+module_exit(stmmac_cleanup_module);22012201+22022202+MODULE_DESCRIPTION("STMMAC 10/100/1000 Ethernet driver");22032203+MODULE_AUTHOR("Giuseppe Cavallaro <peppe.cavallaro@st.com>");22042204+MODULE_LICENSE("GPL");
+217
drivers/net/stmmac/stmmac_mdio.c
···11+/*******************************************************************************22+ STMMAC Ethernet Driver -- MDIO bus implementation33+ Provides Bus interface for MII registers44+55+ Copyright (C) 2007-2009 STMicroelectronics Ltd66+77+ This program is free software; you can redistribute it and/or modify it88+ under the terms and conditions of the GNU General Public License,99+ version 2, as published by the Free Software Foundation.1010+1111+ This program is distributed in the hope it will be useful, but WITHOUT1212+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or1313+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for1414+ more details.1515+1616+ You should have received a copy of the GNU General Public License along with1717+ this program; if not, write to the Free Software Foundation, Inc.,1818+ 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.1919+2020+ The full GNU General Public License is included in this distribution in2121+ the file called "COPYING".2222+2323+ Author: Carl Shaw <carl.shaw@st.com>2424+ Maintainer: Giuseppe Cavallaro <peppe.cavallaro@st.com>2525+*******************************************************************************/2626+2727+#include <linux/netdevice.h>2828+#include <linux/mii.h>2929+#include <linux/phy.h>3030+3131+#include "stmmac.h"3232+3333+#define MII_BUSY 0x000000013434+#define MII_WRITE 0x000000023535+3636+/**3737+ * stmmac_mdio_read3838+ * @bus: points to the mii_bus structure3939+ * @phyaddr: MII addr reg bits 15-114040+ * @phyreg: MII addr reg bits 10-64141+ * Description: it reads data from the MII register from within the phy device.4242+ * For the 7111 GMAC, we must set the bit 0 in the MII address register while4343+ * accessing the PHY registers.4444+ * Fortunately, it seems this has no drawback for the 7109 MAC.4545+ */4646+static int stmmac_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)4747+{4848+ struct net_device *ndev = bus->priv;4949+ struct 
stmmac_priv *priv = netdev_priv(ndev);5050+ unsigned long ioaddr = ndev->base_addr;5151+ unsigned int mii_address = priv->mac_type->hw.mii.addr;5252+ unsigned int mii_data = priv->mac_type->hw.mii.data;5353+5454+ int data;5555+ u16 regValue = (((phyaddr << 11) & (0x0000F800)) |5656+ ((phyreg << 6) & (0x000007C0)));5757+ regValue |= MII_BUSY; /* in case of GMAC */5858+5959+ do {} while (((readl(ioaddr + mii_address)) & MII_BUSY) == 1);6060+ writel(regValue, ioaddr + mii_address);6161+ do {} while (((readl(ioaddr + mii_address)) & MII_BUSY) == 1);6262+6363+ /* Read the data from the MII data register */6464+ data = (int)readl(ioaddr + mii_data);6565+6666+ return data;6767+}6868+6969+/**7070+ * stmmac_mdio_write7171+ * @bus: points to the mii_bus structure7272+ * @phyaddr: MII addr reg bits 15-117373+ * @phyreg: MII addr reg bits 10-67474+ * @phydata: phy data7575+ * Description: it writes the data into the MII register from within the device.7676+ */7777+static int stmmac_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,7878+ u16 phydata)7979+{8080+ struct net_device *ndev = bus->priv;8181+ struct stmmac_priv *priv = netdev_priv(ndev);8282+ unsigned long ioaddr = ndev->base_addr;8383+ unsigned int mii_address = priv->mac_type->hw.mii.addr;8484+ unsigned int mii_data = priv->mac_type->hw.mii.data;8585+8686+ u16 value =8787+ (((phyaddr << 11) & (0x0000F800)) | ((phyreg << 6) & (0x000007C0)))8888+ | MII_WRITE;8989+9090+ value |= MII_BUSY;9191+9292+ /* Wait until any existing MII operation is complete */9393+ do {} while (((readl(ioaddr + mii_address)) & MII_BUSY) == 1);9494+9595+ /* Set the MII address register to write */9696+ writel(phydata, ioaddr + mii_data);9797+ writel(value, ioaddr + mii_address);9898+9999+ /* Wait until any existing MII operation is complete */100100+ do {} while (((readl(ioaddr + mii_address)) & MII_BUSY) == 1);101101+102102+ return 0;103103+}104104+105105+/**106106+ * stmmac_mdio_reset107107+ * @bus: points to the mii_bus 
structure108108+ * Description: reset the MII bus109109+ */110110+static int stmmac_mdio_reset(struct mii_bus *bus)111111+{112112+ struct net_device *ndev = bus->priv;113113+ struct stmmac_priv *priv = netdev_priv(ndev);114114+ unsigned long ioaddr = ndev->base_addr;115115+ unsigned int mii_address = priv->mac_type->hw.mii.addr;116116+117117+ if (priv->phy_reset) {118118+ pr_debug("stmmac_mdio_reset: calling phy_reset\n");119119+ priv->phy_reset(priv->bsp_priv);120120+ }121121+122122+ /* This is a workaround for problems with the STE101P PHY.123123+ * It doesn't complete its reset until at least one clock cycle124124+ * on MDC, so perform a dummy mdio read.125125+ */126126+ writel(0, ioaddr + mii_address);127127+128128+ return 0;129129+}130130+131131+/**132132+ * stmmac_mdio_register133133+ * @ndev: net device structure134134+ * Description: it registers the MII bus135135+ */136136+int stmmac_mdio_register(struct net_device *ndev)137137+{138138+ int err = 0;139139+ struct mii_bus *new_bus;140140+ int *irqlist;141141+ struct stmmac_priv *priv = netdev_priv(ndev);142142+ int addr, found;143143+144144+ new_bus = mdiobus_alloc();145145+ if (new_bus == NULL)146146+ return -ENOMEM;147147+148148+ irqlist = kzalloc(sizeof(int) * PHY_MAX_ADDR, GFP_KERNEL);149149+ if (irqlist == NULL) {150150+ err = -ENOMEM;151151+ goto irqlist_alloc_fail;152152+ }153153+154154+ /* Assign IRQ to phy at address phy_addr */155155+ if (priv->phy_addr != -1)156156+ irqlist[priv->phy_addr] = priv->phy_irq;157157+158158+ new_bus->name = "STMMAC MII Bus";159159+ new_bus->read = &stmmac_mdio_read;160160+ new_bus->write = &stmmac_mdio_write;161161+ new_bus->reset = &stmmac_mdio_reset;162162+ snprintf(new_bus->id, MII_BUS_ID_SIZE, "%x", priv->bus_id);163163+ new_bus->priv = ndev;164164+ new_bus->irq = irqlist;165165+ new_bus->phy_mask = priv->phy_mask;166166+ new_bus->parent = priv->device;167167+ err = mdiobus_register(new_bus);168168+ if (err != 0) {169169+ pr_err("%s: Cannot register as MDIO 
bus\n", new_bus->name);170170+ goto bus_register_fail;171171+ }172172+173173+ priv->mii = new_bus;174174+175175+ found = 0;176176+ for (addr = 0; addr < 32; addr++) {177177+ struct phy_device *phydev = new_bus->phy_map[addr];178178+ if (phydev) {179179+ if (priv->phy_addr == -1) {180180+ priv->phy_addr = addr;181181+ phydev->irq = priv->phy_irq;182182+ irqlist[addr] = priv->phy_irq;183183+ }184184+ pr_info("%s: PHY ID %08x at %d IRQ %d (%s)%s\n",185185+ ndev->name, phydev->phy_id, addr,186186+ phydev->irq, dev_name(&phydev->dev),187187+ (addr == priv->phy_addr) ? " active" : "");188188+ found = 1;189189+ }190190+ }191191+192192+ if (!found)193193+ pr_warning("%s: No PHY found\n", ndev->name);194194+195195+ return 0;196196+bus_register_fail:197197+ kfree(irqlist);198198+irqlist_alloc_fail:199199+ kfree(new_bus);200200+ return err;201201+}202202+203203+/**204204+ * stmmac_mdio_unregister205205+ * @ndev: net device structure206206+ * Description: it unregisters the MII bus207207+ */208208+int stmmac_mdio_unregister(struct net_device *ndev)209209+{210210+ struct stmmac_priv *priv = netdev_priv(ndev);211211+212212+ mdiobus_unregister(priv->mii);213213+ priv->mii->priv = NULL;214214+ kfree(priv->mii);215215+216216+ return 0;217217+}
+140
drivers/net/stmmac/stmmac_timer.c
···11+/*******************************************************************************22+ STMMAC external timer support.33+44+ Copyright (C) 2007-2009 STMicroelectronics Ltd55+66+ This program is free software; you can redistribute it and/or modify it77+ under the terms and conditions of the GNU General Public License,88+ version 2, as published by the Free Software Foundation.99+1010+ This program is distributed in the hope it will be useful, but WITHOUT1111+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or1212+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for1313+ more details.1414+1515+ You should have received a copy of the GNU General Public License along with1616+ this program; if not, write to the Free Software Foundation, Inc.,1717+ 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.1818+1919+ The full GNU General Public License is included in this distribution in2020+ the file called "COPYING".2121+2222+ Author: Giuseppe Cavallaro <peppe.cavallaro@st.com>2323+*******************************************************************************/2424+2525+#include <linux/kernel.h>2626+#include <linux/etherdevice.h>2727+#include "stmmac_timer.h"2828+2929+static void stmmac_timer_handler(void *data)3030+{3131+ struct net_device *dev = (struct net_device *)data;3232+3333+ stmmac_schedule(dev);3434+3535+ return;3636+}3737+3838+#define STMMAC_TIMER_MSG(timer, freq) \3939+printk(KERN_INFO "stmmac_timer: %s Timer ON (freq %dHz)\n", timer, freq);4040+4141+#if defined(CONFIG_STMMAC_RTC_TIMER)4242+#include <linux/rtc.h>4343+static struct rtc_device *stmmac_rtc;4444+static rtc_task_t stmmac_task;4545+4646+static void stmmac_rtc_start(unsigned int new_freq)4747+{4848+ rtc_irq_set_freq(stmmac_rtc, &stmmac_task, new_freq);4949+ rtc_irq_set_state(stmmac_rtc, &stmmac_task, 1);5050+ return;5151+}5252+5353+static void stmmac_rtc_stop(void)5454+{5555+ rtc_irq_set_state(stmmac_rtc, &stmmac_task, 0);5656+ return;5757+}5858+5959+int 
stmmac_open_ext_timer(struct net_device *dev, struct stmmac_timer *tm)6060+{6161+ stmmac_task.private_data = dev;6262+ stmmac_task.func = stmmac_timer_handler;6363+6464+ stmmac_rtc = rtc_class_open(CONFIG_RTC_HCTOSYS_DEVICE);6565+ if (stmmac_rtc == NULL) {6666+ pr_err("open rtc device failed\n");6767+ return -ENODEV;6868+ }6969+7070+ rtc_irq_register(stmmac_rtc, &stmmac_task);7171+7272+ /* Periodic mode is not supported */7373+ if ((rtc_irq_set_freq(stmmac_rtc, &stmmac_task, tm->freq) < 0)) {7474+ pr_err("set periodic failed\n");7575+ rtc_irq_unregister(stmmac_rtc, &stmmac_task);7676+ rtc_class_close(stmmac_rtc);7777+ return -1;7878+ }7979+8080+ STMMAC_TIMER_MSG(CONFIG_RTC_HCTOSYS_DEVICE, tm->freq);8181+8282+ tm->timer_start = stmmac_rtc_start;8383+ tm->timer_stop = stmmac_rtc_stop;8484+8585+ return 0;8686+}8787+8888+int stmmac_close_ext_timer(void)8989+{9090+ rtc_irq_set_state(stmmac_rtc, &stmmac_task, 0);9191+ rtc_irq_unregister(stmmac_rtc, &stmmac_task);9292+ rtc_class_close(stmmac_rtc);9393+ return 0;9494+}9595+9696+#elif defined(CONFIG_STMMAC_TMU_TIMER)9797+#include <linux/clk.h>9898+#define TMU_CHANNEL "tmu2_clk"9999+static struct clk *timer_clock;100100+101101+static void stmmac_tmu_start(unsigned int new_freq)102102+{103103+ clk_set_rate(timer_clock, new_freq);104104+ clk_enable(timer_clock);105105+ return;106106+}107107+108108+static void stmmac_tmu_stop(void)109109+{110110+ clk_disable(timer_clock);111111+ return;112112+}113113+114114+int stmmac_open_ext_timer(struct net_device *dev, struct stmmac_timer *tm)115115+{116116+ timer_clock = clk_get(NULL, TMU_CHANNEL);117117+118118+ if (timer_clock == NULL)119119+ return -1;120120+121121+ if (tmu2_register_user(stmmac_timer_handler, (void *)dev) < 0) {122122+ timer_clock = NULL;123123+ return -1;124124+ }125125+126126+ STMMAC_TIMER_MSG("TMU2", tm->freq);127127+ tm->timer_start = stmmac_tmu_start;128128+ tm->timer_stop = stmmac_tmu_stop;129129+130130+ return 0;131131+}132132+133133+int 
stmmac_close_ext_timer(void)134134+{135135+ clk_disable(timer_clock);136136+ tmu2_unregister_user();137137+ clk_put(timer_clock);138138+ return 0;139139+}140140+#endif
+41
drivers/net/stmmac/stmmac_timer.h
···11+/*******************************************************************************22+ STMMAC external timer Header File.33+44+ Copyright (C) 2007-2009 STMicroelectronics Ltd55+66+ This program is free software; you can redistribute it and/or modify it77+ under the terms and conditions of the GNU General Public License,88+ version 2, as published by the Free Software Foundation.99+1010+ This program is distributed in the hope it will be useful, but WITHOUT1111+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or1212+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for1313+ more details.1414+1515+ You should have received a copy of the GNU General Public License along with1616+ this program; if not, write to the Free Software Foundation, Inc.,1717+ 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.1818+1919+ The full GNU General Public License is included in this distribution in2020+ the file called "COPYING".2121+2222+ Author: Giuseppe Cavallaro <peppe.cavallaro@st.com>2323+*******************************************************************************/2424+2525+struct stmmac_timer {2626+ void (*timer_start) (unsigned int new_freq);2727+ void (*timer_stop) (void);2828+ unsigned int freq;2929+};3030+3131+/* Open the HW timer device and return 0 in case of success */3232+int stmmac_open_ext_timer(struct net_device *dev, struct stmmac_timer *tm);3333+/* Stop the timer and release it */3434+int stmmac_close_ext_timer(void);3535+/* Function used for scheduling task within the stmmac */3636+void stmmac_schedule(struct net_device *dev);3737+3838+#if defined(CONFIG_STMMAC_TMU_TIMER)3939+extern int tmu2_register_user(void *fnt, void *data);4040+extern void tmu2_unregister_user(void);4141+#endif
+13
drivers/net/usb/pegasus.c
···6262static struct usb_eth_dev usb_dev_id[] = {6363#define PEGASUS_DEV(pn, vid, pid, flags) \6464 {.name = pn, .vendor = vid, .device = pid, .private = flags},6565+#define PEGASUS_DEV_CLASS(pn, vid, pid, dclass, flags) \6666+ PEGASUS_DEV(pn, vid, pid, flags)6567#include "pegasus.h"6668#undef PEGASUS_DEV6969+#undef PEGASUS_DEV_CLASS6770 {NULL, 0, 0, 0},6871 {NULL, 0, 0, 0}6972};···7471static struct usb_device_id pegasus_ids[] = {7572#define PEGASUS_DEV(pn, vid, pid, flags) \7673 {.match_flags = USB_DEVICE_ID_MATCH_DEVICE, .idVendor = vid, .idProduct = pid},7474+/*7575+ * The Belkin F8T012xx1 bluetooth adaptor has the same vendor and product7676+ * IDs as the Belkin F5D5050, so we need to teach the pegasus driver to7777+ * ignore adaptors belonging to the "Wireless" class 0xE0. For this one7878+ * case anyway, seeing as the pegasus is for "Wired" adaptors.7979+ */8080+#define PEGASUS_DEV_CLASS(pn, vid, pid, dclass, flags) \8181+ {.match_flags = (USB_DEVICE_ID_MATCH_DEVICE | USB_DEVICE_ID_MATCH_DEV_CLASS), \8282+ .idVendor = vid, .idProduct = pid, .bDeviceClass = dclass},7783#include "pegasus.h"7884#undef PEGASUS_DEV8585+#undef PEGASUS_DEV_CLASS7986 {},8087 {}8188};
+5-1
drivers/net/usb/pegasus.h
···202202 DEFAULT_GPIO_RESET | PEGASUS_II )203203PEGASUS_DEV( "Allied Telesyn Int. AT-USB100", VENDOR_ALLIEDTEL, 0xb100,204204 DEFAULT_GPIO_RESET | PEGASUS_II )205205-PEGASUS_DEV( "Belkin F5D5050 USB Ethernet", VENDOR_BELKIN, 0x0121,205205+/*206206+ * Distinguish between this Belkin adaptor and the Belkin bluetooth adaptors207207+ * with the same product IDs by checking the device class too.208208+ */209209+PEGASUS_DEV_CLASS( "Belkin F5D5050 USB Ethernet", VENDOR_BELKIN, 0x0121, 0x00,206210 DEFAULT_GPIO_RESET | PEGASUS_II )207211PEGASUS_DEV( "Billionton USB-100", VENDOR_BILLIONTON, 0x0986,208212 DEFAULT_GPIO_RESET )
+35
drivers/net/vmxnet3/Makefile
···11+################################################################################22+#33+# Linux driver for VMware's vmxnet3 ethernet NIC.44+#55+# Copyright (C) 2007-2009, VMware, Inc. All Rights Reserved.66+#77+# This program is free software; you can redistribute it and/or modify it88+# under the terms of the GNU General Public License as published by the99+# Free Software Foundation; version 2 of the License and no later version.1010+#1111+# This program is distributed in the hope that it will be useful, but1212+# WITHOUT ANY WARRANTY; without even the implied warranty of1313+# MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or1414+# NON INFRINGEMENT. See the GNU General Public License for more1515+# details.1616+#1717+# You should have received a copy of the GNU General Public License1818+# along with this program; if not, write to the Free Software1919+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.2020+#2121+# The full GNU General Public License is included in this distribution in2222+# the file called "COPYING".2323+#2424+# Maintained by: Shreyas Bhatewara <pv-drivers@vmware.com>2525+#2626+#2727+################################################################################2828+2929+#3030+# Makefile for the VMware vmxnet3 ethernet NIC driver3131+#3232+3333+obj-$(CONFIG_VMXNET3) += vmxnet3.o3434+3535+vmxnet3-objs := vmxnet3_drv.o vmxnet3_ethtool.o
+96
drivers/net/vmxnet3/upt1_defs.h
···11+/*22+ * Linux driver for VMware's vmxnet3 ethernet NIC.33+ *44+ * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved.55+ *66+ * This program is free software; you can redistribute it and/or modify it77+ * under the terms of the GNU General Public License as published by the88+ * Free Software Foundation; version 2 of the License and no later version.99+ *1010+ * This program is distributed in the hope that it will be useful, but1111+ * WITHOUT ANY WARRANTY; without even the implied warranty of1212+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or1313+ * NON INFRINGEMENT. See the GNU General Public License for more1414+ * details.1515+ *1616+ * You should have received a copy of the GNU General Public License1717+ * along with this program; if not, write to the Free Software1818+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.1919+ *2020+ * The full GNU General Public License is included in this distribution in2121+ * the file called "COPYING".2222+ *2323+ * Maintained by: Shreyas Bhatewara <pv-drivers@vmware.com>2424+ *2525+ */2626+2727+#ifndef _UPT1_DEFS_H2828+#define _UPT1_DEFS_H2929+3030+struct UPT1_TxStats {3131+ u64 TSOPktsTxOK; /* TSO pkts post-segmentation */3232+ u64 TSOBytesTxOK;3333+ u64 ucastPktsTxOK;3434+ u64 ucastBytesTxOK;3535+ u64 mcastPktsTxOK;3636+ u64 mcastBytesTxOK;3737+ u64 bcastPktsTxOK;3838+ u64 bcastBytesTxOK;3939+ u64 pktsTxError;4040+ u64 pktsTxDiscard;4141+};4242+4343+struct UPT1_RxStats {4444+ u64 LROPktsRxOK; /* LRO pkts */4545+ u64 LROBytesRxOK; /* bytes from LRO pkts */4646+ /* the following counters are for pkts from the wire, i.e., pre-LRO */4747+ u64 ucastPktsRxOK;4848+ u64 ucastBytesRxOK;4949+ u64 mcastPktsRxOK;5050+ u64 mcastBytesRxOK;5151+ u64 bcastPktsRxOK;5252+ u64 bcastBytesRxOK;5353+ u64 pktsRxOutOfBuf;5454+ u64 pktsRxError;5555+};5656+5757+/* interrupt moderation level */5858+enum {5959+ UPT1_IML_NONE = 0, /* no interrupt moderation */6060+ UPT1_IML_HIGHEST = 7, /* 
least intr generated */6161+ UPT1_IML_ADAPTIVE = 8, /* adaptive intr moderation */6262+};6363+/* values for UPT1_RSSConf.hashType */6464+enum {6565+ UPT1_RSS_HASH_TYPE_NONE = 0x0,6666+ UPT1_RSS_HASH_TYPE_IPV4 = 0x01,6767+ UPT1_RSS_HASH_TYPE_TCP_IPV4 = 0x02,6868+ UPT1_RSS_HASH_TYPE_IPV6 = 0x04,6969+ UPT1_RSS_HASH_TYPE_TCP_IPV6 = 0x08,7070+};7171+7272+/* values for UPT1_RSSConf.hashFunc */7373+enum {7373+ UPT1_RSS_HASH_FUNC_NONE = 0x0,7474+ UPT1_RSS_HASH_FUNC_TOEPLITZ = 0x01,7575+};7676+7777+#define UPT1_RSS_MAX_KEY_SIZE 407878+#define UPT1_RSS_MAX_IND_TABLE_SIZE 1287979+8080+struct UPT1_RSSConf {8181+ u16 hashType;8282+ u16 hashFunc;8383+ u16 hashKeySize;8484+ u16 indTableSize;8585+ u8 hashKey[UPT1_RSS_MAX_KEY_SIZE];8686+ u8 indTable[UPT1_RSS_MAX_IND_TABLE_SIZE];8787+};8888+8989+/* features */9090+enum {9191+ UPT1_F_RXCSUM = 0x0001, /* rx csum verification */9292+ UPT1_F_RSS = 0x0002,9393+ UPT1_F_RXVLAN = 0x0004, /* VLAN tag stripping */9494+ UPT1_F_LRO = 0x0008,9595+};9696+#endif
+535
drivers/net/vmxnet3/vmxnet3_defs.h
···11+/*22+ * Linux driver for VMware's vmxnet3 ethernet NIC.33+ *44+ * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved.55+ *66+ * This program is free software; you can redistribute it and/or modify it77+ * under the terms of the GNU General Public License as published by the88+ * Free Software Foundation; version 2 of the License and no later version.99+ *1010+ * This program is distributed in the hope that it will be useful, but1111+ * WITHOUT ANY WARRANTY; without even the implied warranty of1212+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or1313+ * NON INFRINGEMENT. See the GNU General Public License for more1414+ * details.1515+ *1616+ * You should have received a copy of the GNU General Public License1717+ * along with this program; if not, write to the Free Software1818+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.1919+ *2020+ * The full GNU General Public License is included in this distribution in2121+ * the file called "COPYING".2222+ *2323+ * Maintained by: Shreyas Bhatewara <pv-drivers@vmware.com>2424+ *2525+ */2626+2727+#ifndef _VMXNET3_DEFS_H_2828+#define _VMXNET3_DEFS_H_2929+3030+#include "upt1_defs.h"3131+3232+/* all registers are 32 bit wide */3333+/* BAR 1 */3434+enum {3535+ VMXNET3_REG_VRRS = 0x0, /* Vmxnet3 Revision Report Selection */3636+ VMXNET3_REG_UVRS = 0x8, /* UPT Version Report Selection */3737+ VMXNET3_REG_DSAL = 0x10, /* Driver Shared Address Low */3838+ VMXNET3_REG_DSAH = 0x18, /* Driver Shared Address High */3939+ VMXNET3_REG_CMD = 0x20, /* Command */4040+ VMXNET3_REG_MACL = 0x28, /* MAC Address Low */4141+ VMXNET3_REG_MACH = 0x30, /* MAC Address High */4242+ VMXNET3_REG_ICR = 0x38, /* Interrupt Cause Register */4343+ VMXNET3_REG_ECR = 0x40 /* Event Cause Register */4444+};4545+4646+/* BAR 0 */4747+enum {4848+ VMXNET3_REG_IMR = 0x0, /* Interrupt Mask Register */4949+ VMXNET3_REG_TXPROD = 0x600, /* Tx Producer Index */5050+ VMXNET3_REG_RXPROD = 0x800, /* Rx Producer 
Index for ring 1 */5151+ VMXNET3_REG_RXPROD2 = 0xA00 /* Rx Producer Index for ring 2 */5252+};5353+5454+#define VMXNET3_PT_REG_SIZE 4096 /* BAR 0 */5555+#define VMXNET3_VD_REG_SIZE 4096 /* BAR 1 */5656+5757+#define VMXNET3_REG_ALIGN 8 /* All registers are 8-byte aligned. */5858+#define VMXNET3_REG_ALIGN_MASK 0x75959+6060+/* I/O Mapped access to registers */6161+#define VMXNET3_IO_TYPE_PT 06262+#define VMXNET3_IO_TYPE_VD 16363+#define VMXNET3_IO_ADDR(type, reg) (((type) << 24) | ((reg) & 0xFFFFFF))6464+#define VMXNET3_IO_TYPE(addr) ((addr) >> 24)6565+#define VMXNET3_IO_REG(addr) ((addr) & 0xFFFFFF)6666+6767+enum {6868+ VMXNET3_CMD_FIRST_SET = 0xCAFE0000,6969+ VMXNET3_CMD_ACTIVATE_DEV = VMXNET3_CMD_FIRST_SET,7070+ VMXNET3_CMD_QUIESCE_DEV,7171+ VMXNET3_CMD_RESET_DEV,7272+ VMXNET3_CMD_UPDATE_RX_MODE,7373+ VMXNET3_CMD_UPDATE_MAC_FILTERS,7474+ VMXNET3_CMD_UPDATE_VLAN_FILTERS,7575+ VMXNET3_CMD_UPDATE_RSSIDT,7676+ VMXNET3_CMD_UPDATE_IML,7777+ VMXNET3_CMD_UPDATE_PMCFG,7878+ VMXNET3_CMD_UPDATE_FEATURE,7979+ VMXNET3_CMD_LOAD_PLUGIN,8080+8181+ VMXNET3_CMD_FIRST_GET = 0xF00D0000,8282+ VMXNET3_CMD_GET_QUEUE_STATUS = VMXNET3_CMD_FIRST_GET,8383+ VMXNET3_CMD_GET_STATS,8484+ VMXNET3_CMD_GET_LINK,8585+ VMXNET3_CMD_GET_PERM_MAC_LO,8686+ VMXNET3_CMD_GET_PERM_MAC_HI,8787+ VMXNET3_CMD_GET_DID_LO,8888+ VMXNET3_CMD_GET_DID_HI,8989+ VMXNET3_CMD_GET_DEV_EXTRA_INFO,9090+ VMXNET3_CMD_GET_CONF_INTR9191+};9292+9393+struct Vmxnet3_TxDesc {9494+ u64 addr;9595+9696+ u32 len:14;9797+ u32 gen:1; /* generation bit */9898+ u32 rsvd:1;9999+ u32 dtype:1; /* descriptor type */100100+ u32 ext1:1;101101+ u32 msscof:14; /* MSS, checksum offset, flags */102102+103103+ u32 hlen:10; /* header len */104104+ u32 om:2; /* offload mode */105105+ u32 eop:1; /* End Of Packet */106106+ u32 cq:1; /* completion request */107107+ u32 ext2:1;108108+ u32 ti:1; /* VLAN Tag Insertion */109109+ u32 tci:16; /* Tag to Insert */110110+};111111+112112+/* TxDesc.OM values */113113+#define VMXNET3_OM_NONE 0114114+#define 
VMXNET3_OM_CSUM 2115115+#define VMXNET3_OM_TSO 3116116+117117+/* fields in TxDesc we access w/o using bit fields */118118+#define VMXNET3_TXD_EOP_SHIFT 12119119+#define VMXNET3_TXD_CQ_SHIFT 13120120+#define VMXNET3_TXD_GEN_SHIFT 14121121+122122+#define VMXNET3_TXD_CQ (1 << VMXNET3_TXD_CQ_SHIFT)123123+#define VMXNET3_TXD_EOP (1 << VMXNET3_TXD_EOP_SHIFT)124124+#define VMXNET3_TXD_GEN (1 << VMXNET3_TXD_GEN_SHIFT)125125+126126+#define VMXNET3_HDR_COPY_SIZE 128127127+128128+129129+struct Vmxnet3_TxDataDesc {130130+ u8 data[VMXNET3_HDR_COPY_SIZE];131131+};132132+133133+134134+struct Vmxnet3_TxCompDesc {135135+ u32 txdIdx:12; /* Index of the EOP TxDesc */136136+ u32 ext1:20;137137+138138+ u32 ext2;139139+ u32 ext3;140140+141141+ u32 rsvd:24;142142+ u32 type:7; /* completion type */143143+ u32 gen:1; /* generation bit */144144+};145145+146146+147147+struct Vmxnet3_RxDesc {148148+ u64 addr;149149+150150+ u32 len:14;151151+ u32 btype:1; /* Buffer Type */152152+ u32 dtype:1; /* Descriptor type */153153+ u32 rsvd:15;154154+ u32 gen:1; /* Generation bit */155155+156156+ u32 ext1;157157+};158158+159159+/* values of RXD.BTYPE */160160+#define VMXNET3_RXD_BTYPE_HEAD 0 /* head only */161161+#define VMXNET3_RXD_BTYPE_BODY 1 /* body only */162162+163163+/* fields in RxDesc we access w/o using bit fields */164164+#define VMXNET3_RXD_BTYPE_SHIFT 14165165+#define VMXNET3_RXD_GEN_SHIFT 31166166+167167+168168+struct Vmxnet3_RxCompDesc {169169+ u32 rxdIdx:12; /* Index of the RxDesc */170170+ u32 ext1:2;171171+ u32 eop:1; /* End of Packet */172172+ u32 sop:1; /* Start of Packet */173173+ u32 rqID:10; /* rx queue/ring ID */174174+ u32 rssType:4; /* RSS hash type used */175175+ u32 cnc:1; /* Checksum Not Calculated */176176+ u32 ext2:1;177177+178178+ u32 rssHash; /* RSS hash value */179179+180180+ u32 len:14; /* data length */181181+ u32 err:1; /* Error */182182+ u32 ts:1; /* Tag is stripped */183183+ u32 tci:16; /* Tag stripped */184184+185185+ u32 csum:16;186186+ u32 tuc:1; /* TCP/UDP 
Checksum Correct */187187+ u32 udp:1; /* UDP packet */188188+ u32 tcp:1; /* TCP packet */189189+ u32 ipc:1; /* IP Checksum Correct */190190+ u32 v6:1; /* IPv6 */191191+ u32 v4:1; /* IPv4 */192192+ u32 frg:1; /* IP Fragment */193193+ u32 fcs:1; /* Frame CRC correct */194194+ u32 type:7; /* completion type */195195+ u32 gen:1; /* generation bit */196196+};197197+198198+/* fields in RxCompDesc we access via Vmxnet3_GenericDesc.dword[3] */199199+#define VMXNET3_RCD_TUC_SHIFT 16200200+#define VMXNET3_RCD_IPC_SHIFT 19201201+202202+/* fields in RxCompDesc we access via Vmxnet3_GenericDesc.qword[1] */203203+#define VMXNET3_RCD_TYPE_SHIFT 56204204+#define VMXNET3_RCD_GEN_SHIFT 63205205+206206+/* csum OK for TCP/UDP pkts over IP */207207+#define VMXNET3_RCD_CSUM_OK (1 << VMXNET3_RCD_TUC_SHIFT | \208208+ 1 << VMXNET3_RCD_IPC_SHIFT)209209+210210+/* value of RxCompDesc.rssType */211211+enum {212212+ VMXNET3_RCD_RSS_TYPE_NONE = 0,213213+ VMXNET3_RCD_RSS_TYPE_IPV4 = 1,214214+ VMXNET3_RCD_RSS_TYPE_TCPIPV4 = 2,215215+ VMXNET3_RCD_RSS_TYPE_IPV6 = 3,216216+ VMXNET3_RCD_RSS_TYPE_TCPIPV6 = 4,217217+};218218+219219+220220+/* a union for accessing all cmd/completion descriptors */221221+union Vmxnet3_GenericDesc {222222+ u64 qword[2];223223+ u32 dword[4];224224+ u16 word[8];225225+ struct Vmxnet3_TxDesc txd;226226+ struct Vmxnet3_RxDesc rxd;227227+ struct Vmxnet3_TxCompDesc tcd;228228+ struct Vmxnet3_RxCompDesc rcd;229229+};230230+231231+#define VMXNET3_INIT_GEN 1232232+233233+/* Max size of a single tx buffer */234234+#define VMXNET3_MAX_TX_BUF_SIZE (1 << 14)235235+236236+/* # of tx desc needed for a tx buffer size */237237+#define VMXNET3_TXD_NEEDED(size) (((size) + VMXNET3_MAX_TX_BUF_SIZE - 1) / \238238+ VMXNET3_MAX_TX_BUF_SIZE)239239+240240+/* max # of tx descs for a non-tso pkt */241241+#define VMXNET3_MAX_TXD_PER_PKT 16242242+243243+/* Max size of a single rx buffer */244244+#define VMXNET3_MAX_RX_BUF_SIZE ((1 << 14) - 1)245245+/* Minimum size of a type 0 buffer */246246+#define 
VMXNET3_MIN_T0_BUF_SIZE 128247247+#define VMXNET3_MAX_CSUM_OFFSET 1024248248+249249+/* Ring base address alignment */250250+#define VMXNET3_RING_BA_ALIGN 512251251+#define VMXNET3_RING_BA_MASK (VMXNET3_RING_BA_ALIGN - 1)252252+253253+/* Ring size must be a multiple of 32 */254254+#define VMXNET3_RING_SIZE_ALIGN 32255255+#define VMXNET3_RING_SIZE_MASK (VMXNET3_RING_SIZE_ALIGN - 1)256256+257257+/* Max ring size */258258+#define VMXNET3_TX_RING_MAX_SIZE 4096259259+#define VMXNET3_TC_RING_MAX_SIZE 4096260260+#define VMXNET3_RX_RING_MAX_SIZE 4096261261+#define VMXNET3_RC_RING_MAX_SIZE 8192262262+263263+/* a list of reasons for queue stop */264264+265265+enum {266266+ VMXNET3_ERR_NOEOP = 0x80000000, /* cannot find the EOP desc of a pkt */267267+ VMXNET3_ERR_TXD_REUSE = 0x80000001, /* reuse TxDesc before tx completion */268268+ VMXNET3_ERR_BIG_PKT = 0x80000002, /* too many TxDesc for a pkt */269269+ VMXNET3_ERR_DESC_NOT_SPT = 0x80000003, /* descriptor type not supported */270270+ VMXNET3_ERR_SMALL_BUF = 0x80000004, /* type 0 buffer too small */271271+ VMXNET3_ERR_STRESS = 0x80000005, /* stress option firing in vmkernel */272272+ VMXNET3_ERR_SWITCH = 0x80000006, /* mode switch failure */273273+ VMXNET3_ERR_TXD_INVALID = 0x80000007, /* invalid TxDesc */274274+};275275+276276+/* completion descriptor types */277277+#define VMXNET3_CDTYPE_TXCOMP 0 /* Tx Completion Descriptor */278278+#define VMXNET3_CDTYPE_RXCOMP 3 /* Rx Completion Descriptor */279279+280280+enum {281281+ VMXNET3_GOS_BITS_UNK = 0, /* unknown */282282+ VMXNET3_GOS_BITS_32 = 1,283283+ VMXNET3_GOS_BITS_64 = 2,284284+};285285+286286+#define VMXNET3_GOS_TYPE_LINUX 1287287+288288+289289+struct Vmxnet3_GOSInfo {290290+ u32 gosBits:2; /* 32-bit or 64-bit? 
*/291291+ u32 gosType:4; /* which guest */292292+ u32 gosVer:16; /* gos version */293293+ u32 gosMisc:10; /* other info about gos */294294+};295295+296296+297297+struct Vmxnet3_DriverInfo {298298+ u32 version;299299+ struct Vmxnet3_GOSInfo gos;300300+ u32 vmxnet3RevSpt;301301+ u32 uptVerSpt;302302+};303303+304304+305305+#define VMXNET3_REV1_MAGIC 0xbabefee1306306+307307+/*308308+ * QueueDescPA must be 128 bytes aligned. It points to an array of309309+ * Vmxnet3_TxQueueDesc followed by an array of Vmxnet3_RxQueueDesc.310310+ * The number of Vmxnet3_TxQueueDesc/Vmxnet3_RxQueueDesc are specified by311311+ * Vmxnet3_MiscConf.numTxQueues/numRxQueues, respectively.312312+ */313313+#define VMXNET3_QUEUE_DESC_ALIGN 128314314+315315+316316+struct Vmxnet3_MiscConf {317317+ struct Vmxnet3_DriverInfo driverInfo;318318+ u64 uptFeatures;319319+ u64 ddPA; /* driver data PA */320320+ u64 queueDescPA; /* queue descriptor table PA */321321+ u32 ddLen; /* driver data len */322322+ u32 queueDescLen; /* queue desc. 
table len in bytes */323323+ u32 mtu;324324+ u16 maxNumRxSG;325325+ u8 numTxQueues;326326+ u8 numRxQueues;327327+ u32 reserved[4];328328+};329329+330330+331331+struct Vmxnet3_TxQueueConf {332332+ u64 txRingBasePA;333333+ u64 dataRingBasePA;334334+ u64 compRingBasePA;335335+ u64 ddPA; /* driver data */336336+ u64 reserved;337337+ u32 txRingSize; /* # of tx desc */338338+ u32 dataRingSize; /* # of data desc */339339+ u32 compRingSize; /* # of comp desc */340340+ u32 ddLen; /* size of driver data */341341+ u8 intrIdx;342342+ u8 _pad[7];343343+};344344+345345+346346+struct Vmxnet3_RxQueueConf {347347+ u64 rxRingBasePA[2];348348+ u64 compRingBasePA;349349+ u64 ddPA; /* driver data */350350+ u64 reserved;351351+ u32 rxRingSize[2]; /* # of rx desc */352352+ u32 compRingSize; /* # of rx comp desc */353353+ u32 ddLen; /* size of driver data */354354+ u8 intrIdx;355355+ u8 _pad[7];356356+};357357+358358+359359+enum vmxnet3_intr_mask_mode {360360+ VMXNET3_IMM_AUTO = 0,361361+ VMXNET3_IMM_ACTIVE = 1,362362+ VMXNET3_IMM_LAZY = 2363363+};364364+365365+enum vmxnet3_intr_type {366366+ VMXNET3_IT_AUTO = 0,367367+ VMXNET3_IT_INTX = 1,368368+ VMXNET3_IT_MSI = 2,369369+ VMXNET3_IT_MSIX = 3370370+};371371+372372+#define VMXNET3_MAX_TX_QUEUES 8373373+#define VMXNET3_MAX_RX_QUEUES 16374374+/* additional 1 for events */375375+#define VMXNET3_MAX_INTRS 25376376+377377+378378+struct Vmxnet3_IntrConf {379379+ bool autoMask;380380+ u8 numIntrs; /* # of interrupts */381381+ u8 eventIntrIdx;382382+ u8 modLevels[VMXNET3_MAX_INTRS]; /* moderation level for383383+ * each intr */384384+ u32 reserved[3];385385+};386386+387387+/* one bit per VLAN ID, the size is in the units of u32 */388388+#define VMXNET3_VFT_SIZE (4096 / (sizeof(u32) * 8))389389+390390+391391+struct Vmxnet3_QueueStatus {392392+ bool stopped;393393+ u8 _pad[3];394394+ u32 error;395395+};396396+397397+398398+struct Vmxnet3_TxQueueCtrl {399399+ u32 txNumDeferred;400400+ u32 txThreshold;401401+ u64 
reserved;402402+};403403+404404+405405+struct Vmxnet3_RxQueueCtrl {406406+ bool updateRxProd;407407+ u8 _pad[7];408408+ u64 reserved;409409+};410410+411411+enum {412412+ VMXNET3_RXM_UCAST = 0x01, /* unicast only */413413+ VMXNET3_RXM_MCAST = 0x02, /* multicast passing the filters */414414+ VMXNET3_RXM_BCAST = 0x04, /* broadcast only */415415+ VMXNET3_RXM_ALL_MULTI = 0x08, /* all multicast */416416+ VMXNET3_RXM_PROMISC = 0x10 /* promiscuous */417417+};418418+419419+struct Vmxnet3_RxFilterConf {420420+ u32 rxMode; /* VMXNET3_RXM_xxx */421421+ u16 mfTableLen; /* size of the multicast filter table */422422+ u16 _pad1;423423+ u64 mfTablePA; /* PA of the multicast filters table */424424+ u32 vfTable[VMXNET3_VFT_SIZE]; /* vlan filter */425425+};426426+427427+428428+#define VMXNET3_PM_MAX_FILTERS 6429429+#define VMXNET3_PM_MAX_PATTERN_SIZE 128430430+#define VMXNET3_PM_MAX_MASK_SIZE (VMXNET3_PM_MAX_PATTERN_SIZE / 8)431431+432432+#define VMXNET3_PM_WAKEUP_MAGIC 0x01 /* wake up on magic pkts */433433+#define VMXNET3_PM_WAKEUP_FILTER 0x02 /* wake up on pkts matching434434+ * filters */435435+436436+437437+struct Vmxnet3_PM_PktFilter {438438+ u8 maskSize;439439+ u8 patternSize;440440+ u8 mask[VMXNET3_PM_MAX_MASK_SIZE];441441+ u8 pattern[VMXNET3_PM_MAX_PATTERN_SIZE];442442+ u8 pad[6];443443+};444444+445445+446446+struct Vmxnet3_PMConf {447447+ u16 wakeUpEvents; /* VMXNET3_PM_WAKEUP_xxx */448448+ u8 numFilters;449449+ u8 pad[5];450450+ struct Vmxnet3_PM_PktFilter filters[VMXNET3_PM_MAX_FILTERS];451451+};452452+453453+454454+struct Vmxnet3_VariableLenConfDesc {455455+ u32 confVer;456456+ u32 confLen;457457+ u64 confPA;458458+};459459+460460+461461+struct Vmxnet3_TxQueueDesc {462462+ struct Vmxnet3_TxQueueCtrl ctrl;463463+ struct Vmxnet3_TxQueueConf conf;464464+465465+ /* Driver read after a GET command */466466+ struct Vmxnet3_QueueStatus status;467467+ struct UPT1_TxStats stats;468468+ u8 _pad[88]; /* 128 aligned */469469+};470470+471471+472472+struct Vmxnet3_RxQueueDesc {473473+ 
struct Vmxnet3_RxQueueCtrl ctrl;474474+ struct Vmxnet3_RxQueueConf conf;475475+ /* Driver read after a GET command */476476+ struct Vmxnet3_QueueStatus status;477477+ struct UPT1_RxStats stats;478478+ u8 __pad[88]; /* 128 aligned */479479+};480480+481481+482482+struct Vmxnet3_DSDevRead {483483+ /* read-only region for device, read by dev in response to a SET cmd */484484+ struct Vmxnet3_MiscConf misc;485485+ struct Vmxnet3_IntrConf intrConf;486486+ struct Vmxnet3_RxFilterConf rxFilterConf;487487+ struct Vmxnet3_VariableLenConfDesc rssConfDesc;488488+ struct Vmxnet3_VariableLenConfDesc pmConfDesc;489489+ struct Vmxnet3_VariableLenConfDesc pluginConfDesc;490490+};491491+492492+/* All structures in DriverShared are padded to multiples of 8 bytes */493493+struct Vmxnet3_DriverShared {494494+ u32 magic;495495+ /* make devRead start at 64bit boundaries */496496+ u32 pad;497497+ struct Vmxnet3_DSDevRead devRead;498498+ u32 ecr;499499+ u32 reserved[5];500500+};501501+502502+503503+#define VMXNET3_ECR_RQERR (1 << 0)504504+#define VMXNET3_ECR_TQERR (1 << 1)505505+#define VMXNET3_ECR_LINK (1 << 2)506506+#define VMXNET3_ECR_DIC (1 << 3)507507+#define VMXNET3_ECR_DEBUG (1 << 4)508508+509509+/* flip the gen bit of a ring */510510+#define VMXNET3_FLIP_RING_GEN(gen) ((gen) = (gen) ^ 0x1)511511+512512+/* only use this if moving the idx won't affect the gen bit */513513+#define VMXNET3_INC_RING_IDX_ONLY(idx, ring_size) \514514+ do {\515515+ (idx)++;\516516+ if (unlikely((idx) == (ring_size))) {\517517+ (idx) = 0;\518518+ } \519519+ } while (0)520520+521521+#define VMXNET3_SET_VFTABLE_ENTRY(vfTable, vid) \522522+ (vfTable[vid >> 5] |= (1 << (vid & 31)))523523+#define VMXNET3_CLEAR_VFTABLE_ENTRY(vfTable, vid) \524524+ (vfTable[vid >> 5] &= ~(1 << (vid & 31)))525525+526526+#define VMXNET3_VFTABLE_ENTRY_IS_SET(vfTable, vid) \527527+ ((vfTable[vid >> 5] & (1 << (vid & 31))) != 0)528528+529529+#define VMXNET3_MAX_MTU 9000530530+#define VMXNET3_MIN_MTU 60531531+532532+#define 
VMXNET3_LINK_UP (10000 << 16 | 1) /* 10 Gbps, up */533533+#define VMXNET3_LINK_DOWN 0534534+535535+#endif /* _VMXNET3_DEFS_H_ */
+2565
drivers/net/vmxnet3/vmxnet3_drv.c
···11+/*22+ * Linux driver for VMware's vmxnet3 ethernet NIC.33+ *44+ * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved.55+ *66+ * This program is free software; you can redistribute it and/or modify it77+ * under the terms of the GNU General Public License as published by the88+ * Free Software Foundation; version 2 of the License and no later version.99+ *1010+ * This program is distributed in the hope that it will be useful, but1111+ * WITHOUT ANY WARRANTY; without even the implied warranty of1212+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or1313+ * NON INFRINGEMENT. See the GNU General Public License for more1414+ * details.1515+ *1616+ * You should have received a copy of the GNU General Public License1717+ * along with this program; if not, write to the Free Software1818+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.1919+ *2020+ * The full GNU General Public License is included in this distribution in2121+ * the file called "COPYING".2222+ *2323+ * Maintained by: Shreyas Bhatewara <pv-drivers@vmware.com>2424+ *2525+ */2626+2727+#include "vmxnet3_int.h"2828+2929+char vmxnet3_driver_name[] = "vmxnet3";3030+#define VMXNET3_DRIVER_DESC "VMware vmxnet3 virtual NIC driver"3131+3232+3333+/*3434+ * PCI Device ID Table3535+ * Last entry must be all 0s3636+ */3737+static const struct pci_device_id vmxnet3_pciid_table[] = {3838+ {PCI_VDEVICE(VMWARE, PCI_DEVICE_ID_VMWARE_VMXNET3)},3939+ {0}4040+};4141+4242+MODULE_DEVICE_TABLE(pci, vmxnet3_pciid_table);4343+4444+static atomic_t devices_found;4545+4646+4747+/*4848+ * Enable/Disable the given intr4949+ */5050+static void5151+vmxnet3_enable_intr(struct vmxnet3_adapter *adapter, unsigned intr_idx)5252+{5353+ VMXNET3_WRITE_BAR0_REG(adapter, VMXNET3_REG_IMR + intr_idx * 8, 0);5454+}5555+5656+5757+static void5858+vmxnet3_disable_intr(struct vmxnet3_adapter *adapter, unsigned intr_idx)5959+{6060+ VMXNET3_WRITE_BAR0_REG(adapter, VMXNET3_REG_IMR + intr_idx * 8, 
1);6161+}6262+6363+6464+/*6565+ * Enable/Disable all intrs used by the device6666+ */6767+static void6868+vmxnet3_enable_all_intrs(struct vmxnet3_adapter *adapter)6969+{7070+ int i;7171+7272+ for (i = 0; i < adapter->intr.num_intrs; i++)7373+ vmxnet3_enable_intr(adapter, i);7474+}7575+7676+7777+static void7878+vmxnet3_disable_all_intrs(struct vmxnet3_adapter *adapter)7979+{8080+ int i;8181+8282+ for (i = 0; i < adapter->intr.num_intrs; i++)8383+ vmxnet3_disable_intr(adapter, i);8484+}8585+8686+8787+static void8888+vmxnet3_ack_events(struct vmxnet3_adapter *adapter, u32 events)8989+{9090+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_ECR, events);9191+}9292+9393+9494+static bool9595+vmxnet3_tq_stopped(struct vmxnet3_tx_queue *tq, struct vmxnet3_adapter *adapter)9696+{9797+ return netif_queue_stopped(adapter->netdev);9898+}9999+100100+101101+static void102102+vmxnet3_tq_start(struct vmxnet3_tx_queue *tq, struct vmxnet3_adapter *adapter)103103+{104104+ tq->stopped = false;105105+ netif_start_queue(adapter->netdev);106106+}107107+108108+109109+static void110110+vmxnet3_tq_wake(struct vmxnet3_tx_queue *tq, struct vmxnet3_adapter *adapter)111111+{112112+ tq->stopped = false;113113+ netif_wake_queue(adapter->netdev);114114+}115115+116116+117117+static void118118+vmxnet3_tq_stop(struct vmxnet3_tx_queue *tq, struct vmxnet3_adapter *adapter)119119+{120120+ tq->stopped = true;121121+ tq->num_stop++;122122+ netif_stop_queue(adapter->netdev);123123+}124124+125125+126126+/*127127+ * Check the link state. This may start or stop the tx queue.128128+ */129129+static void130130+vmxnet3_check_link(struct vmxnet3_adapter *adapter)131131+{132132+ u32 ret;133133+134134+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_LINK);135135+ ret = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD);136136+ adapter->link_speed = ret >> 16;137137+ if (ret & 1) { /* Link is up. 
*/138138+ printk(KERN_INFO "%s: NIC Link is Up %d Mbps\n",139139+ adapter->netdev->name, adapter->link_speed);140140+ if (!netif_carrier_ok(adapter->netdev))141141+ netif_carrier_on(adapter->netdev);142142+143143+ vmxnet3_tq_start(&adapter->tx_queue, adapter);144144+ } else {145145+ printk(KERN_INFO "%s: NIC Link is Down\n",146146+ adapter->netdev->name);147147+ if (netif_carrier_ok(adapter->netdev))148148+ netif_carrier_off(adapter->netdev);149149+150150+ vmxnet3_tq_stop(&adapter->tx_queue, adapter);151151+ }152152+}153153+154154+155155+static void156156+vmxnet3_process_events(struct vmxnet3_adapter *adapter)157157+{158158+ u32 events = adapter->shared->ecr;159159+ if (!events)160160+ return;161161+162162+ vmxnet3_ack_events(adapter, events);163163+164164+ /* Check if link state has changed */165165+ if (events & VMXNET3_ECR_LINK)166166+ vmxnet3_check_link(adapter);167167+168168+ /* Check if there is an error on xmit/recv queues */169169+ if (events & (VMXNET3_ECR_TQERR | VMXNET3_ECR_RQERR)) {170170+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,171171+ VMXNET3_CMD_GET_QUEUE_STATUS);172172+173173+ if (adapter->tqd_start->status.stopped) {174174+ printk(KERN_ERR "%s: tq error 0x%x\n",175175+ adapter->netdev->name,176176+ adapter->tqd_start->status.error);177177+ }178178+ if (adapter->rqd_start->status.stopped) {179179+ printk(KERN_ERR "%s: rq error 0x%x\n",180180+ adapter->netdev->name,181181+ adapter->rqd_start->status.error);182182+ }183183+184184+ schedule_work(&adapter->work);185185+ }186186+}187187+188188+189189+static void190190+vmxnet3_unmap_tx_buf(struct vmxnet3_tx_buf_info *tbi,191191+ struct pci_dev *pdev)192192+{193193+ if (tbi->map_type == VMXNET3_MAP_SINGLE)194194+ pci_unmap_single(pdev, tbi->dma_addr, tbi->len,195195+ PCI_DMA_TODEVICE);196196+ else if (tbi->map_type == VMXNET3_MAP_PAGE)197197+ pci_unmap_page(pdev, tbi->dma_addr, tbi->len,198198+ PCI_DMA_TODEVICE);199199+ else200200+ BUG_ON(tbi->map_type != VMXNET3_MAP_NONE);201201+202202+ 
tbi->map_type = VMXNET3_MAP_NONE; /* to help debugging */203203+}204204+205205+206206+static int207207+vmxnet3_unmap_pkt(u32 eop_idx, struct vmxnet3_tx_queue *tq,208208+ struct pci_dev *pdev, struct vmxnet3_adapter *adapter)209209+{210210+ struct sk_buff *skb;211211+ int entries = 0;212212+213213+ /* no out of order completion */214214+ BUG_ON(tq->buf_info[eop_idx].sop_idx != tq->tx_ring.next2comp);215215+ BUG_ON(tq->tx_ring.base[eop_idx].txd.eop != 1);216216+217217+ skb = tq->buf_info[eop_idx].skb;218218+ BUG_ON(skb == NULL);219219+ tq->buf_info[eop_idx].skb = NULL;220220+221221+ VMXNET3_INC_RING_IDX_ONLY(eop_idx, tq->tx_ring.size);222222+223223+ while (tq->tx_ring.next2comp != eop_idx) {224224+ vmxnet3_unmap_tx_buf(tq->buf_info + tq->tx_ring.next2comp,225225+ pdev);226226+227227+ /* update next2comp w/o tx_lock. Since we are marking more,228228+ * instead of less, tx ring entries avail, the worst case is229229+ * that the tx routine incorrectly re-queues a pkt due to230230+ * insufficient tx ring entries.231231+ */232232+ vmxnet3_cmd_ring_adv_next2comp(&tq->tx_ring);233233+ entries++;234234+ }235235+236236+ dev_kfree_skb_any(skb);237237+ return entries;238238+}239239+240240+241241+static int242242+vmxnet3_tq_tx_complete(struct vmxnet3_tx_queue *tq,243243+ struct vmxnet3_adapter *adapter)244244+{245245+ int completed = 0;246246+ union Vmxnet3_GenericDesc *gdesc;247247+248248+ gdesc = tq->comp_ring.base + tq->comp_ring.next2proc;249249+ while (gdesc->tcd.gen == tq->comp_ring.gen) {250250+ completed += vmxnet3_unmap_pkt(gdesc->tcd.txdIdx, tq,251251+ adapter->pdev, adapter);252252+253253+ vmxnet3_comp_ring_adv_next2proc(&tq->comp_ring);254254+ gdesc = tq->comp_ring.base + tq->comp_ring.next2proc;255255+ }256256+257257+ if (completed) {258258+ spin_lock(&tq->tx_lock);259259+ if (unlikely(vmxnet3_tq_stopped(tq, adapter) &&260260+ vmxnet3_cmd_ring_desc_avail(&tq->tx_ring) >261261+ VMXNET3_WAKE_QUEUE_THRESHOLD(tq) &&262262+ netif_carrier_ok(adapter->netdev))) {263263+ 
vmxnet3_tq_wake(tq, adapter);264264+ }265265+ spin_unlock(&tq->tx_lock);266266+ }267267+ return completed;268268+}269269+270270+271271+static void272272+vmxnet3_tq_cleanup(struct vmxnet3_tx_queue *tq,273273+ struct vmxnet3_adapter *adapter)274274+{275275+ int i;276276+277277+ while (tq->tx_ring.next2comp != tq->tx_ring.next2fill) {278278+ struct vmxnet3_tx_buf_info *tbi;279279+ union Vmxnet3_GenericDesc *gdesc;280280+281281+ tbi = tq->buf_info + tq->tx_ring.next2comp;282282+ gdesc = tq->tx_ring.base + tq->tx_ring.next2comp;283283+284284+ vmxnet3_unmap_tx_buf(tbi, adapter->pdev);285285+ if (tbi->skb) {286286+ dev_kfree_skb_any(tbi->skb);287287+ tbi->skb = NULL;288288+ }289289+ vmxnet3_cmd_ring_adv_next2comp(&tq->tx_ring);290290+ }291291+292292+ /* sanity check, verify all buffers are indeed unmapped and freed */293293+ for (i = 0; i < tq->tx_ring.size; i++) {294294+ BUG_ON(tq->buf_info[i].skb != NULL ||295295+ tq->buf_info[i].map_type != VMXNET3_MAP_NONE);296296+ }297297+298298+ tq->tx_ring.gen = VMXNET3_INIT_GEN;299299+ tq->tx_ring.next2fill = tq->tx_ring.next2comp = 0;300300+301301+ tq->comp_ring.gen = VMXNET3_INIT_GEN;302302+ tq->comp_ring.next2proc = 0;303303+}304304+305305+306306+void307307+vmxnet3_tq_destroy(struct vmxnet3_tx_queue *tq,308308+ struct vmxnet3_adapter *adapter)309309+{310310+ if (tq->tx_ring.base) {311311+ pci_free_consistent(adapter->pdev, tq->tx_ring.size *312312+ sizeof(struct Vmxnet3_TxDesc),313313+ tq->tx_ring.base, tq->tx_ring.basePA);314314+ tq->tx_ring.base = NULL;315315+ }316316+ if (tq->data_ring.base) {317317+ pci_free_consistent(adapter->pdev, tq->data_ring.size *318318+ sizeof(struct Vmxnet3_TxDataDesc),319319+ tq->data_ring.base, tq->data_ring.basePA);320320+ tq->data_ring.base = NULL;321321+ }322322+ if (tq->comp_ring.base) {323323+ pci_free_consistent(adapter->pdev, tq->comp_ring.size *324324+ sizeof(struct Vmxnet3_TxCompDesc),325325+ tq->comp_ring.base, tq->comp_ring.basePA);326326+ tq->comp_ring.base = NULL;327327+ }328328+ 
kfree(tq->buf_info);329329+ tq->buf_info = NULL;330330+}331331+332332+333333+static void334334+vmxnet3_tq_init(struct vmxnet3_tx_queue *tq,335335+ struct vmxnet3_adapter *adapter)336336+{337337+ int i;338338+339339+ /* reset the tx ring contents to 0 and reset the tx ring states */340340+ memset(tq->tx_ring.base, 0, tq->tx_ring.size *341341+ sizeof(struct Vmxnet3_TxDesc));342342+ tq->tx_ring.next2fill = tq->tx_ring.next2comp = 0;343343+ tq->tx_ring.gen = VMXNET3_INIT_GEN;344344+345345+ memset(tq->data_ring.base, 0, tq->data_ring.size *346346+ sizeof(struct Vmxnet3_TxDataDesc));347347+348348+ /* reset the tx comp ring contents to 0 and reset comp ring states */349349+ memset(tq->comp_ring.base, 0, tq->comp_ring.size *350350+ sizeof(struct Vmxnet3_TxCompDesc));351351+ tq->comp_ring.next2proc = 0;352352+ tq->comp_ring.gen = VMXNET3_INIT_GEN;353353+354354+ /* reset the bookkeeping data */355355+ memset(tq->buf_info, 0, sizeof(tq->buf_info[0]) * tq->tx_ring.size);356356+ for (i = 0; i < tq->tx_ring.size; i++)357357+ tq->buf_info[i].map_type = VMXNET3_MAP_NONE;358358+359359+ /* stats are not reset */360360+}361361+362362+363363+static int364364+vmxnet3_tq_create(struct vmxnet3_tx_queue *tq,365365+ struct vmxnet3_adapter *adapter)366366+{367367+ BUG_ON(tq->tx_ring.base || tq->data_ring.base ||368368+ tq->comp_ring.base || tq->buf_info);369369+370370+ tq->tx_ring.base = pci_alloc_consistent(adapter->pdev, tq->tx_ring.size371371+ * sizeof(struct Vmxnet3_TxDesc),372372+ &tq->tx_ring.basePA);373373+ if (!tq->tx_ring.base) {374374+ printk(KERN_ERR "%s: failed to allocate tx ring\n",375375+ adapter->netdev->name);376376+ goto err;377377+ }378378+379379+ tq->data_ring.base = pci_alloc_consistent(adapter->pdev,380380+ tq->data_ring.size *381381+ sizeof(struct Vmxnet3_TxDataDesc),382382+ &tq->data_ring.basePA);383383+ if (!tq->data_ring.base) {384384+ printk(KERN_ERR "%s: failed to allocate data ring\n",385385+ adapter->netdev->name);386386+ goto err;387387+ }388388+389389+ 
tq->comp_ring.base = pci_alloc_consistent(adapter->pdev,390390+ tq->comp_ring.size *391391+ sizeof(struct Vmxnet3_TxCompDesc),392392+ &tq->comp_ring.basePA);393393+ if (!tq->comp_ring.base) {394394+ printk(KERN_ERR "%s: failed to allocate tx comp ring\n",395395+ adapter->netdev->name);396396+ goto err;397397+ }398398+399399+ tq->buf_info = kcalloc(tq->tx_ring.size, sizeof(tq->buf_info[0]),400400+ GFP_KERNEL);401401+ if (!tq->buf_info) {402402+ printk(KERN_ERR "%s: failed to allocate tx bufinfo\n",403403+ adapter->netdev->name);404404+ goto err;405405+ }406406+407407+ return 0;408408+409409+err:410410+ vmxnet3_tq_destroy(tq, adapter);411411+ return -ENOMEM;412412+}413413+414414+415415+/*416416+ * starting from ring->next2fill, allocate rx buffers for the given ring417417+ * of the rx queue and update the rx desc. stop after @num_to_alloc buffers418418+ * are allocated or allocation fails419419+ */420420+421421+static int422422+vmxnet3_rq_alloc_rx_buf(struct vmxnet3_rx_queue *rq, u32 ring_idx,423423+ int num_to_alloc, struct vmxnet3_adapter *adapter)424424+{425425+ int num_allocated = 0;426426+ struct vmxnet3_rx_buf_info *rbi_base = rq->buf_info[ring_idx];427427+ struct vmxnet3_cmd_ring *ring = &rq->rx_ring[ring_idx];428428+ u32 val;429429+430430+ while (num_allocated < num_to_alloc) {431431+ struct vmxnet3_rx_buf_info *rbi;432432+ union Vmxnet3_GenericDesc *gd;433433+434434+ rbi = rbi_base + ring->next2fill;435435+ gd = ring->base + ring->next2fill;436436+437437+ if (rbi->buf_type == VMXNET3_RX_BUF_SKB) {438438+ if (rbi->skb == NULL) {439439+ rbi->skb = dev_alloc_skb(rbi->len +440440+ NET_IP_ALIGN);441441+ if (unlikely(rbi->skb == NULL)) {442442+ rq->stats.rx_buf_alloc_failure++;443443+ break;444444+ }445445+ rbi->skb->dev = adapter->netdev;446446+447447+ skb_reserve(rbi->skb, NET_IP_ALIGN);448448+ rbi->dma_addr = pci_map_single(adapter->pdev,449449+ rbi->skb->data, rbi->len,450450+ PCI_DMA_FROMDEVICE);451451+ } else {452452+ /* rx buffer skipped by the device 
*/453453+ val = VMXNET3_RXD_BTYPE_HEAD << VMXNET3_RXD_BTYPE_SHIFT;454454+ } else {455455+ BUG_ON(rbi->buf_type != VMXNET3_RX_BUF_PAGE ||456456+ rbi->len != PAGE_SIZE);457457+458458+ if (rbi->page == NULL) {459459+ rbi->page = alloc_page(GFP_ATOMIC);460460+ if (unlikely(rbi->page == NULL)) {461461+ rq->stats.rx_buf_alloc_failure++;462462+ break;463463+ }464464+ rbi->dma_addr = pci_map_page(adapter->pdev,465465+ rbi->page, 0, PAGE_SIZE,466466+ PCI_DMA_FROMDEVICE);467467+ } else {468468+ /* rx buffers skipped by the device */469469+ }470470+ val = VMXNET3_RXD_BTYPE_BODY << VMXNET3_RXD_BTYPE_SHIFT;471471+ }472472+473473+ BUG_ON(rbi->dma_addr == 0);474474+ gd->rxd.addr = rbi->dma_addr;475475+ gd->dword[2] = (ring->gen << VMXNET3_RXD_GEN_SHIFT) | val |476476+ rbi->len;477477+478478+ num_allocated++;479479+ vmxnet3_cmd_ring_adv_next2fill(ring);480480+ }481481+ rq->uncommitted[ring_idx] += num_allocated;482482+483483+ dprintk(KERN_ERR "alloc_rx_buf: %d allocated, next2fill %u, next2comp "484484+ "%u, uncommitted %u\n", num_allocated, ring->next2fill,485485+ ring->next2comp, rq->uncommitted[ring_idx]);486486+487487+ /* so that the device can distinguish a full ring and an empty ring */488488+ BUG_ON(num_allocated != 0 && ring->next2fill == ring->next2comp);489489+490490+ return num_allocated;491491+}492492+493493+494494+static void495495+vmxnet3_append_frag(struct sk_buff *skb, struct Vmxnet3_RxCompDesc *rcd,496496+ struct vmxnet3_rx_buf_info *rbi)497497+{498498+ struct skb_frag_struct *frag = skb_shinfo(skb)->frags +499499+ skb_shinfo(skb)->nr_frags;500500+501501+ BUG_ON(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS);502502+503503+ frag->page = rbi->page;504504+ frag->page_offset = 0;505505+ frag->size = rcd->len;506506+ skb->data_len += frag->size;507507+ skb_shinfo(skb)->nr_frags++;508508+}509509+510510+511511+static void512512+vmxnet3_map_pkt(struct sk_buff *skb, struct vmxnet3_tx_ctx *ctx,513513+ struct vmxnet3_tx_queue *tq, struct pci_dev *pdev,514514+ struct 
vmxnet3_adapter *adapter)515515+{516516+ u32 dw2, len;517517+ unsigned long buf_offset;518518+ int i;519519+ union Vmxnet3_GenericDesc *gdesc;520520+ struct vmxnet3_tx_buf_info *tbi = NULL;521521+522522+ BUG_ON(ctx->copy_size > skb_headlen(skb));523523+524524+ /* use the previous gen bit for the SOP desc */525525+ dw2 = (tq->tx_ring.gen ^ 0x1) << VMXNET3_TXD_GEN_SHIFT;526526+527527+ ctx->sop_txd = tq->tx_ring.base + tq->tx_ring.next2fill;528528+ gdesc = ctx->sop_txd; /* both loops below can be skipped */529529+530530+ /* no need to map the buffer if headers are copied */531531+ if (ctx->copy_size) {532532+ ctx->sop_txd->txd.addr = tq->data_ring.basePA +533533+ tq->tx_ring.next2fill *534534+ sizeof(struct Vmxnet3_TxDataDesc);535535+ ctx->sop_txd->dword[2] = dw2 | ctx->copy_size;536536+ ctx->sop_txd->dword[3] = 0;537537+538538+ tbi = tq->buf_info + tq->tx_ring.next2fill;539539+ tbi->map_type = VMXNET3_MAP_NONE;540540+541541+ dprintk(KERN_ERR "txd[%u]: 0x%Lx 0x%x 0x%x\n",542542+ tq->tx_ring.next2fill, ctx->sop_txd->txd.addr,543543+ ctx->sop_txd->dword[2], ctx->sop_txd->dword[3]);544544+ vmxnet3_cmd_ring_adv_next2fill(&tq->tx_ring);545545+546546+ /* use the right gen for non-SOP desc */547547+ dw2 = tq->tx_ring.gen << VMXNET3_TXD_GEN_SHIFT;548548+ }549549+550550+ /* linear part can use multiple tx desc if it's big */551551+ len = skb_headlen(skb) - ctx->copy_size;552552+ buf_offset = ctx->copy_size;553553+ while (len) {554554+ u32 buf_size;555555+556556+ buf_size = len > VMXNET3_MAX_TX_BUF_SIZE ?557557+ VMXNET3_MAX_TX_BUF_SIZE : len;558558+559559+ tbi = tq->buf_info + tq->tx_ring.next2fill;560560+ tbi->map_type = VMXNET3_MAP_SINGLE;561561+ tbi->dma_addr = pci_map_single(adapter->pdev,562562+ skb->data + buf_offset, buf_size,563563+ PCI_DMA_TODEVICE);564564+565565+ tbi->len = buf_size; /* this automatically converts 2^14 to 0 */566566+567567+ gdesc = tq->tx_ring.base + tq->tx_ring.next2fill;568568+ BUG_ON(gdesc->txd.gen == tq->tx_ring.gen);569569+570570+571571+ gdesc->txd.addr 
= tbi->dma_addr;572572+ gdesc->dword[2] = dw2 | buf_size;573573+ gdesc->dword[3] = 0;574574+575575+ dprintk(KERN_ERR "txd[%u]: 0x%Lx 0x%x 0x%x\n",576576+ tq->tx_ring.next2fill, gdesc->txd.addr,577577+ gdesc->dword[2], gdesc->dword[3]);578578+ vmxnet3_cmd_ring_adv_next2fill(&tq->tx_ring);579579+ dw2 = tq->tx_ring.gen << VMXNET3_TXD_GEN_SHIFT;580580+581581+ len -= buf_size;582582+ buf_offset += buf_size;583583+ }584584+585585+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {586586+ struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[i];587587+588588+ tbi = tq->buf_info + tq->tx_ring.next2fill;589589+ tbi->map_type = VMXNET3_MAP_PAGE;590590+ tbi->dma_addr = pci_map_page(adapter->pdev, frag->page,591591+ frag->page_offset, frag->size,592592+ PCI_DMA_TODEVICE);593593+594594+ tbi->len = frag->size;595595+596596+ gdesc = tq->tx_ring.base + tq->tx_ring.next2fill;597597+ BUG_ON(gdesc->txd.gen == tq->tx_ring.gen);598598+599599+ gdesc->txd.addr = tbi->dma_addr;600600+ gdesc->dword[2] = dw2 | frag->size;601601+ gdesc->dword[3] = 0;602602+603603+ dprintk(KERN_ERR "txd[%u]: 0x%llu %u %u\n",604604+ tq->tx_ring.next2fill, gdesc->txd.addr,605605+ gdesc->dword[2], gdesc->dword[3]);606606+ vmxnet3_cmd_ring_adv_next2fill(&tq->tx_ring);607607+ dw2 = tq->tx_ring.gen << VMXNET3_TXD_GEN_SHIFT;608608+ }609609+610610+ ctx->eop_txd = gdesc;611611+612612+ /* set the last buf_info for the pkt */613613+ tbi->skb = skb;614614+ tbi->sop_idx = ctx->sop_txd - tq->tx_ring.base;615615+}616616+617617+618618+/*619619+ * parse and copy relevant protocol headers:620620+ * For a tso pkt, relevant headers are L2/3/4 including options621621+ * For a pkt requesting csum offloading, they are L2/3 and may include L4622622+ * if it's a TCP/UDP pkt623623+ *624624+ * Returns:625625+ * -1: error happens during parsing626626+ * 0: protocol headers parsed, but too big to be copied627627+ * 1: protocol headers parsed and copied628628+ *629629+ * Other effects:630630+ * 1. 
related *ctx fields are updated.631631+ * 2. ctx->copy_size is # of bytes copied632632+ * 3. the portion copied is guaranteed to be in the linear part633633+ *634634+ */635635+static int636636+vmxnet3_parse_and_copy_hdr(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,637637+ struct vmxnet3_tx_ctx *ctx,638638+ struct vmxnet3_adapter *adapter)639639+{640640+ struct Vmxnet3_TxDataDesc *tdd;641641+642642+ if (ctx->mss) {643643+ ctx->eth_ip_hdr_size = skb_transport_offset(skb);644644+ ctx->l4_hdr_size = ((struct tcphdr *)645645+ skb_transport_header(skb))->doff * 4;646646+ ctx->copy_size = ctx->eth_ip_hdr_size + ctx->l4_hdr_size;647647+ } else {648648+ unsigned int pull_size;649649+650650+ if (skb->ip_summed == CHECKSUM_PARTIAL) {651651+ ctx->eth_ip_hdr_size = skb_transport_offset(skb);652652+653653+ if (ctx->ipv4) {654654+ struct iphdr *iph = (struct iphdr *)655655+ skb_network_header(skb);656656+ if (iph->protocol == IPPROTO_TCP) {657657+ pull_size = ctx->eth_ip_hdr_size +658658+ sizeof(struct tcphdr);659659+660660+ if (unlikely(!pskb_may_pull(skb,661661+ pull_size))) {662662+ goto err;663663+ }664664+ ctx->l4_hdr_size = ((struct tcphdr *)665665+ skb_transport_header(skb))->doff * 4;666666+ } else if (iph->protocol == IPPROTO_UDP) {667667+ ctx->l4_hdr_size =668668+ sizeof(struct udphdr);669669+ } else {670670+ ctx->l4_hdr_size = 0;671671+ }672672+ } else {673673+ /* for simplicity, don't copy L4 headers */674674+ ctx->l4_hdr_size = 0;675675+ }676676+ ctx->copy_size = ctx->eth_ip_hdr_size +677677+ ctx->l4_hdr_size;678678+ } else {679679+ ctx->eth_ip_hdr_size = 0;680680+ ctx->l4_hdr_size = 0;681681+ /* copy as much as allowed */682682+ ctx->copy_size = min((unsigned int)VMXNET3_HDR_COPY_SIZE683683+ , skb_headlen(skb));684684+ }685685+686686+ /* make sure headers are accessible directly */687687+ if (unlikely(!pskb_may_pull(skb, ctx->copy_size)))688688+ goto err;689689+ }690690+691691+ if (unlikely(ctx->copy_size > VMXNET3_HDR_COPY_SIZE)) {692692+ 
tq->stats.oversized_hdr++;693693+		ctx->copy_size = 0;694694+		return 0;695695+	}696696+697697+	tdd = tq->data_ring.base + tq->tx_ring.next2fill;698698+699699+	memcpy(tdd->data, skb->data, ctx->copy_size);700700+	dprintk(KERN_ERR "copy %u bytes to dataRing[%u]\n",701701+		ctx->copy_size, tq->tx_ring.next2fill);702702+	return 1;703703+704704+err:705705+	return -1;706706+}707707+708708+709709+static void710710+vmxnet3_prepare_tso(struct sk_buff *skb,711711+		    struct vmxnet3_tx_ctx *ctx)712712+{713713+	struct tcphdr *tcph = (struct tcphdr *)skb_transport_header(skb);714714+	if (ctx->ipv4) {715715+		struct iphdr *iph = (struct iphdr *)skb_network_header(skb);716716+		iph->check = 0;717717+		tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr, 0,718718+						 IPPROTO_TCP, 0);719719+	} else {720720+		struct ipv6hdr *iph = (struct ipv6hdr *)skb_network_header(skb);721721+		tcph->check = ~csum_ipv6_magic(&iph->saddr, &iph->daddr, 0,722722+					       IPPROTO_TCP, 0);723723+	}724724+}725725+726726+727727+/*728728+ * Transmits a pkt through a given tq729729+ * Returns:730730+ *    NETDEV_TX_OK:      descriptors are set up successfully731731+ *    NETDEV_TX_OK:      error occurred, the pkt is dropped732732+ *    NETDEV_TX_BUSY:    tx ring is full, queue is stopped733733+ *734734+ * Side-effects:735735+ *    1. tx ring may be changed736736+ *    2. tq stats may be updated accordingly737737+ *    3. 
shared->txNumDeferred may be updated738738+ */739739+740740+static int741741+vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,742742+ struct vmxnet3_adapter *adapter, struct net_device *netdev)743743+{744744+ int ret;745745+ u32 count;746746+ unsigned long flags;747747+ struct vmxnet3_tx_ctx ctx;748748+ union Vmxnet3_GenericDesc *gdesc;749749+750750+ /* conservatively estimate # of descriptors to use */751751+ count = VMXNET3_TXD_NEEDED(skb_headlen(skb)) +752752+ skb_shinfo(skb)->nr_frags + 1;753753+754754+ ctx.ipv4 = (skb->protocol == __constant_ntohs(ETH_P_IP));755755+756756+ ctx.mss = skb_shinfo(skb)->gso_size;757757+ if (ctx.mss) {758758+ if (skb_header_cloned(skb)) {759759+ if (unlikely(pskb_expand_head(skb, 0, 0,760760+ GFP_ATOMIC) != 0)) {761761+ tq->stats.drop_tso++;762762+ goto drop_pkt;763763+ }764764+ tq->stats.copy_skb_header++;765765+ }766766+ vmxnet3_prepare_tso(skb, &ctx);767767+ } else {768768+ if (unlikely(count > VMXNET3_MAX_TXD_PER_PKT)) {769769+770770+ /* non-tso pkts must not use more than771771+ * VMXNET3_MAX_TXD_PER_PKT entries772772+ */773773+ if (skb_linearize(skb) != 0) {774774+ tq->stats.drop_too_many_frags++;775775+ goto drop_pkt;776776+ }777777+ tq->stats.linearized++;778778+779779+ /* recalculate the # of descriptors to use */780780+ count = VMXNET3_TXD_NEEDED(skb_headlen(skb)) + 1;781781+ }782782+ }783783+784784+ ret = vmxnet3_parse_and_copy_hdr(skb, tq, &ctx, adapter);785785+ if (ret >= 0) {786786+ BUG_ON(ret <= 0 && ctx.copy_size != 0);787787+ /* hdrs parsed, check against other limits */788788+ if (ctx.mss) {789789+ if (unlikely(ctx.eth_ip_hdr_size + ctx.l4_hdr_size >790790+ VMXNET3_MAX_TX_BUF_SIZE)) {791791+ goto hdr_too_big;792792+ }793793+ } else {794794+ if (skb->ip_summed == CHECKSUM_PARTIAL) {795795+ if (unlikely(ctx.eth_ip_hdr_size +796796+ skb->csum_offset >797797+ VMXNET3_MAX_CSUM_OFFSET)) {798798+ goto hdr_too_big;799799+ }800800+ }801801+ }802802+ } else {803803+ tq->stats.drop_hdr_inspect_err++;804804+ 
goto drop_pkt;805805+ }806806+807807+ spin_lock_irqsave(&tq->tx_lock, flags);808808+809809+ if (count > vmxnet3_cmd_ring_desc_avail(&tq->tx_ring)) {810810+ tq->stats.tx_ring_full++;811811+ dprintk(KERN_ERR "tx queue stopped on %s, next2comp %u"812812+ " next2fill %u\n", adapter->netdev->name,813813+ tq->tx_ring.next2comp, tq->tx_ring.next2fill);814814+815815+ vmxnet3_tq_stop(tq, adapter);816816+ spin_unlock_irqrestore(&tq->tx_lock, flags);817817+ return NETDEV_TX_BUSY;818818+ }819819+820820+ /* fill tx descs related to addr & len */821821+ vmxnet3_map_pkt(skb, &ctx, tq, adapter->pdev, adapter);822822+823823+ /* setup the EOP desc */824824+ ctx.eop_txd->dword[3] = VMXNET3_TXD_CQ | VMXNET3_TXD_EOP;825825+826826+ /* setup the SOP desc */827827+ gdesc = ctx.sop_txd;828828+ if (ctx.mss) {829829+ gdesc->txd.hlen = ctx.eth_ip_hdr_size + ctx.l4_hdr_size;830830+ gdesc->txd.om = VMXNET3_OM_TSO;831831+ gdesc->txd.msscof = ctx.mss;832832+ tq->shared->txNumDeferred += (skb->len - gdesc->txd.hlen +833833+ ctx.mss - 1) / ctx.mss;834834+ } else {835835+ if (skb->ip_summed == CHECKSUM_PARTIAL) {836836+ gdesc->txd.hlen = ctx.eth_ip_hdr_size;837837+ gdesc->txd.om = VMXNET3_OM_CSUM;838838+ gdesc->txd.msscof = ctx.eth_ip_hdr_size +839839+ skb->csum_offset;840840+ } else {841841+ gdesc->txd.om = 0;842842+ gdesc->txd.msscof = 0;843843+ }844844+ tq->shared->txNumDeferred++;845845+ }846846+847847+ if (vlan_tx_tag_present(skb)) {848848+ gdesc->txd.ti = 1;849849+ gdesc->txd.tci = vlan_tx_tag_get(skb);850850+ }851851+852852+ wmb();853853+854854+ /* finally flips the GEN bit of the SOP desc */855855+ gdesc->dword[2] ^= VMXNET3_TXD_GEN;856856+ dprintk(KERN_ERR "txd[%u]: SOP 0x%Lx 0x%x 0x%x\n",857857+ (u32)((union Vmxnet3_GenericDesc *)ctx.sop_txd -858858+ tq->tx_ring.base), gdesc->txd.addr, gdesc->dword[2],859859+ gdesc->dword[3]);860860+861861+ spin_unlock_irqrestore(&tq->tx_lock, flags);862862+863863+ if (tq->shared->txNumDeferred >= tq->shared->txThreshold) {864864+ tq->shared->txNumDeferred 
= 0;865865+ VMXNET3_WRITE_BAR0_REG(adapter, VMXNET3_REG_TXPROD,866866+ tq->tx_ring.next2fill);867867+ }868868+ netdev->trans_start = jiffies;869869+870870+ return NETDEV_TX_OK;871871+872872+hdr_too_big:873873+ tq->stats.drop_oversized_hdr++;874874+drop_pkt:875875+ tq->stats.drop_total++;876876+ dev_kfree_skb(skb);877877+ return NETDEV_TX_OK;878878+}879879+880880+881881+static netdev_tx_t882882+vmxnet3_xmit_frame(struct sk_buff *skb, struct net_device *netdev)883883+{884884+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);885885+ struct vmxnet3_tx_queue *tq = &adapter->tx_queue;886886+887887+ return vmxnet3_tq_xmit(skb, tq, adapter, netdev);888888+}889889+890890+891891+static void892892+vmxnet3_rx_csum(struct vmxnet3_adapter *adapter,893893+ struct sk_buff *skb,894894+ union Vmxnet3_GenericDesc *gdesc)895895+{896896+ if (!gdesc->rcd.cnc && adapter->rxcsum) {897897+ /* typical case: TCP/UDP over IP and both csums are correct */898898+ if ((gdesc->dword[3] & VMXNET3_RCD_CSUM_OK) ==899899+ VMXNET3_RCD_CSUM_OK) {900900+ skb->ip_summed = CHECKSUM_UNNECESSARY;901901+ BUG_ON(!(gdesc->rcd.tcp || gdesc->rcd.udp));902902+ BUG_ON(!(gdesc->rcd.v4 || gdesc->rcd.v6));903903+ BUG_ON(gdesc->rcd.frg);904904+ } else {905905+ if (gdesc->rcd.csum) {906906+ skb->csum = htons(gdesc->rcd.csum);907907+ skb->ip_summed = CHECKSUM_PARTIAL;908908+ } else {909909+ skb->ip_summed = CHECKSUM_NONE;910910+ }911911+ }912912+ } else {913913+ skb->ip_summed = CHECKSUM_NONE;914914+ }915915+}916916+917917+918918+static void919919+vmxnet3_rx_error(struct vmxnet3_rx_queue *rq, struct Vmxnet3_RxCompDesc *rcd,920920+ struct vmxnet3_rx_ctx *ctx, struct vmxnet3_adapter *adapter)921921+{922922+ rq->stats.drop_err++;923923+ if (!rcd->fcs)924924+ rq->stats.drop_fcs++;925925+926926+ rq->stats.drop_total++;927927+928928+ /*929929+ * We do not unmap and chain the rx buffer to the skb.930930+ * We basically pretend this buffer is not used and will be recycled931931+ * by vmxnet3_rq_alloc_rx_buf()932932+ 
*/933933+934934+ /*935935+ * ctx->skb may be NULL if this is the first and the only one936936+ * desc for the pkt937937+ */938938+ if (ctx->skb)939939+ dev_kfree_skb_irq(ctx->skb);940940+941941+ ctx->skb = NULL;942942+}943943+944944+945945+static int946946+vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,947947+ struct vmxnet3_adapter *adapter, int quota)948948+{949949+ static u32 rxprod_reg[2] = {VMXNET3_REG_RXPROD, VMXNET3_REG_RXPROD2};950950+ u32 num_rxd = 0;951951+ struct Vmxnet3_RxCompDesc *rcd;952952+ struct vmxnet3_rx_ctx *ctx = &rq->rx_ctx;953953+954954+ rcd = &rq->comp_ring.base[rq->comp_ring.next2proc].rcd;955955+ while (rcd->gen == rq->comp_ring.gen) {956956+ struct vmxnet3_rx_buf_info *rbi;957957+ struct sk_buff *skb;958958+ int num_to_alloc;959959+ struct Vmxnet3_RxDesc *rxd;960960+ u32 idx, ring_idx;961961+962962+ if (num_rxd >= quota) {963963+ /* we may stop even before we see the EOP desc of964964+ * the current pkt965965+ */966966+ break;967967+ }968968+ num_rxd++;969969+970970+ idx = rcd->rxdIdx;971971+ ring_idx = rcd->rqID == rq->qid ? 0 : 1;972972+973973+ rxd = &rq->rx_ring[ring_idx].base[idx].rxd;974974+ rbi = rq->buf_info[ring_idx] + idx;975975+976976+ BUG_ON(rxd->addr != rbi->dma_addr || rxd->len != rbi->len);977977+978978+ if (unlikely(rcd->eop && rcd->err)) {979979+ vmxnet3_rx_error(rq, rcd, ctx, adapter);980980+ goto rcd_done;981981+ }982982+983983+ if (rcd->sop) { /* first buf of the pkt */984984+ BUG_ON(rxd->btype != VMXNET3_RXD_BTYPE_HEAD ||985985+ rcd->rqID != rq->qid);986986+987987+ BUG_ON(rbi->buf_type != VMXNET3_RX_BUF_SKB);988988+ BUG_ON(ctx->skb != NULL || rbi->skb == NULL);989989+990990+ if (unlikely(rcd->len == 0)) {991991+ /* Pretend the rx buffer is skipped. 
*/992992+ BUG_ON(!(rcd->sop && rcd->eop));993993+ dprintk(KERN_ERR "rxRing[%u][%u] 0 length\n",994994+ ring_idx, idx);995995+ goto rcd_done;996996+ }997997+998998+ ctx->skb = rbi->skb;999999+ rbi->skb = NULL;10001000+10011001+ pci_unmap_single(adapter->pdev, rbi->dma_addr, rbi->len,10021002+ PCI_DMA_FROMDEVICE);10031003+10041004+ skb_put(ctx->skb, rcd->len);10051005+ } else {10061006+ BUG_ON(ctx->skb == NULL);10071007+ /* non SOP buffer must be type 1 in most cases */10081008+ if (rbi->buf_type == VMXNET3_RX_BUF_PAGE) {10091009+ BUG_ON(rxd->btype != VMXNET3_RXD_BTYPE_BODY);10101010+10111011+ if (rcd->len) {10121012+ pci_unmap_page(adapter->pdev,10131013+ rbi->dma_addr, rbi->len,10141014+ PCI_DMA_FROMDEVICE);10151015+10161016+ vmxnet3_append_frag(ctx->skb, rcd, rbi);10171017+ rbi->page = NULL;10181018+ }10191019+ } else {10201020+ /*10211021+ * The only time a non-SOP buffer is type 0 is10221022+ * when it's EOP and error flag is raised, which10231023+ * has already been handled.10241024+ */10251025+ BUG_ON(true);10261026+ }10271027+ }10281028+10291029+ skb = ctx->skb;10301030+ if (rcd->eop) {10311031+ skb->len += skb->data_len;10321032+ skb->truesize += skb->data_len;10331033+10341034+ vmxnet3_rx_csum(adapter, skb,10351035+ (union Vmxnet3_GenericDesc *)rcd);10361036+ skb->protocol = eth_type_trans(skb, adapter->netdev);10371037+10381038+ if (unlikely(adapter->vlan_grp && rcd->ts)) {10391039+ vlan_hwaccel_receive_skb(skb,10401040+ adapter->vlan_grp, rcd->tci);10411041+ } else {10421042+ netif_receive_skb(skb);10431043+ }10441044+10451045+ adapter->netdev->last_rx = jiffies;10461046+ ctx->skb = NULL;10471047+ }10481048+10491049+rcd_done:10501050+ /* device may skip some rx descs */10511051+ rq->rx_ring[ring_idx].next2comp = idx;10521052+ VMXNET3_INC_RING_IDX_ONLY(rq->rx_ring[ring_idx].next2comp,10531053+ rq->rx_ring[ring_idx].size);10541054+10551055+ /* refill rx buffers frequently to avoid starving the h/w */10561056+ num_to_alloc = 
vmxnet3_cmd_ring_desc_avail(rq->rx_ring +10571057+ ring_idx);10581058+ if (unlikely(num_to_alloc > VMXNET3_RX_ALLOC_THRESHOLD(rq,10591059+ ring_idx, adapter))) {10601060+ vmxnet3_rq_alloc_rx_buf(rq, ring_idx, num_to_alloc,10611061+ adapter);10621062+10631063+ /* if needed, update the register */10641064+ if (unlikely(rq->shared->updateRxProd)) {10651065+ VMXNET3_WRITE_BAR0_REG(adapter,10661066+ rxprod_reg[ring_idx] + rq->qid * 8,10671067+ rq->rx_ring[ring_idx].next2fill);10681068+ rq->uncommitted[ring_idx] = 0;10691069+ }10701070+ }10711071+10721072+ vmxnet3_comp_ring_adv_next2proc(&rq->comp_ring);10731073+ rcd = &rq->comp_ring.base[rq->comp_ring.next2proc].rcd;10741074+ }10751075+10761076+ return num_rxd;10771077+}10781078+10791079+10801080+static void10811081+vmxnet3_rq_cleanup(struct vmxnet3_rx_queue *rq,10821082+ struct vmxnet3_adapter *adapter)10831083+{10841084+ u32 i, ring_idx;10851085+ struct Vmxnet3_RxDesc *rxd;10861086+10871087+ for (ring_idx = 0; ring_idx < 2; ring_idx++) {10881088+ for (i = 0; i < rq->rx_ring[ring_idx].size; i++) {10891089+ rxd = &rq->rx_ring[ring_idx].base[i].rxd;10901090+10911091+ if (rxd->btype == VMXNET3_RXD_BTYPE_HEAD &&10921092+ rq->buf_info[ring_idx][i].skb) {10931093+ pci_unmap_single(adapter->pdev, rxd->addr,10941094+ rxd->len, PCI_DMA_FROMDEVICE);10951095+ dev_kfree_skb(rq->buf_info[ring_idx][i].skb);10961096+ rq->buf_info[ring_idx][i].skb = NULL;10971097+ } else if (rxd->btype == VMXNET3_RXD_BTYPE_BODY &&10981098+ rq->buf_info[ring_idx][i].page) {10991099+ pci_unmap_page(adapter->pdev, rxd->addr,11001100+ rxd->len, PCI_DMA_FROMDEVICE);11011101+ put_page(rq->buf_info[ring_idx][i].page);11021102+ rq->buf_info[ring_idx][i].page = NULL;11031103+ }11041104+ }11051105+11061106+ rq->rx_ring[ring_idx].gen = VMXNET3_INIT_GEN;11071107+ rq->rx_ring[ring_idx].next2fill =11081108+ rq->rx_ring[ring_idx].next2comp = 0;11091109+ rq->uncommitted[ring_idx] = 0;11101110+ }11111111+11121112+ rq->comp_ring.gen = VMXNET3_INIT_GEN;11131113+ 
rq->comp_ring.next2proc = 0;11141114+}11151115+11161116+11171117+void vmxnet3_rq_destroy(struct vmxnet3_rx_queue *rq,11181118+ struct vmxnet3_adapter *adapter)11191119+{11201120+ int i;11211121+ int j;11221122+11231123+ /* all rx buffers must have already been freed */11241124+ for (i = 0; i < 2; i++) {11251125+ if (rq->buf_info[i]) {11261126+ for (j = 0; j < rq->rx_ring[i].size; j++)11271127+ BUG_ON(rq->buf_info[i][j].page != NULL);11281128+ }11291129+ }11301130+11311131+11321132+ kfree(rq->buf_info[0]);11331133+11341134+ for (i = 0; i < 2; i++) {11351135+ if (rq->rx_ring[i].base) {11361136+ pci_free_consistent(adapter->pdev, rq->rx_ring[i].size11371137+ * sizeof(struct Vmxnet3_RxDesc),11381138+ rq->rx_ring[i].base,11391139+ rq->rx_ring[i].basePA);11401140+ rq->rx_ring[i].base = NULL;11411141+ }11421142+ rq->buf_info[i] = NULL;11431143+ }11441144+11451145+ if (rq->comp_ring.base) {11461146+ pci_free_consistent(adapter->pdev, rq->comp_ring.size *11471147+ sizeof(struct Vmxnet3_RxCompDesc),11481148+ rq->comp_ring.base, rq->comp_ring.basePA);11491149+ rq->comp_ring.base = NULL;11501150+ }11511151+}11521152+11531153+11541154+static int11551155+vmxnet3_rq_init(struct vmxnet3_rx_queue *rq,11561156+ struct vmxnet3_adapter *adapter)11571157+{11581158+ int i;11591159+11601160+ /* initialize buf_info */11611161+ for (i = 0; i < rq->rx_ring[0].size; i++) {11621162+11631163+ /* 1st buf for a pkt is skbuff */11641164+ if (i % adapter->rx_buf_per_pkt == 0) {11651165+ rq->buf_info[0][i].buf_type = VMXNET3_RX_BUF_SKB;11661166+ rq->buf_info[0][i].len = adapter->skb_buf_size;11671167+ } else { /* subsequent bufs for a pkt is frag */11681168+ rq->buf_info[0][i].buf_type = VMXNET3_RX_BUF_PAGE;11691169+ rq->buf_info[0][i].len = PAGE_SIZE;11701170+ }11711171+ }11721172+ for (i = 0; i < rq->rx_ring[1].size; i++) {11731173+ rq->buf_info[1][i].buf_type = VMXNET3_RX_BUF_PAGE;11741174+ rq->buf_info[1][i].len = PAGE_SIZE;11751175+ }11761176+11771177+ /* reset internal state and allocate 
buffers for both rings */11781178+ for (i = 0; i < 2; i++) {11791179+ rq->rx_ring[i].next2fill = rq->rx_ring[i].next2comp = 0;11801180+ rq->uncommitted[i] = 0;11811181+11821182+ memset(rq->rx_ring[i].base, 0, rq->rx_ring[i].size *11831183+ sizeof(struct Vmxnet3_RxDesc));11841184+ rq->rx_ring[i].gen = VMXNET3_INIT_GEN;11851185+ }11861186+ if (vmxnet3_rq_alloc_rx_buf(rq, 0, rq->rx_ring[0].size - 1,11871187+ adapter) == 0) {11881188+ /* at least has 1 rx buffer for the 1st ring */11891189+ return -ENOMEM;11901190+ }11911191+ vmxnet3_rq_alloc_rx_buf(rq, 1, rq->rx_ring[1].size - 1, adapter);11921192+11931193+ /* reset the comp ring */11941194+ rq->comp_ring.next2proc = 0;11951195+ memset(rq->comp_ring.base, 0, rq->comp_ring.size *11961196+ sizeof(struct Vmxnet3_RxCompDesc));11971197+ rq->comp_ring.gen = VMXNET3_INIT_GEN;11981198+11991199+ /* reset rxctx */12001200+ rq->rx_ctx.skb = NULL;12011201+12021202+ /* stats are not reset */12031203+ return 0;12041204+}12051205+12061206+12071207+static int12081208+vmxnet3_rq_create(struct vmxnet3_rx_queue *rq, struct vmxnet3_adapter *adapter)12091209+{12101210+ int i;12111211+ size_t sz;12121212+ struct vmxnet3_rx_buf_info *bi;12131213+12141214+ for (i = 0; i < 2; i++) {12151215+12161216+ sz = rq->rx_ring[i].size * sizeof(struct Vmxnet3_RxDesc);12171217+ rq->rx_ring[i].base = pci_alloc_consistent(adapter->pdev, sz,12181218+ &rq->rx_ring[i].basePA);12191219+ if (!rq->rx_ring[i].base) {12201220+ printk(KERN_ERR "%s: failed to allocate rx ring %d\n",12211221+ adapter->netdev->name, i);12221222+ goto err;12231223+ }12241224+ }12251225+12261226+ sz = rq->comp_ring.size * sizeof(struct Vmxnet3_RxCompDesc);12271227+ rq->comp_ring.base = pci_alloc_consistent(adapter->pdev, sz,12281228+ &rq->comp_ring.basePA);12291229+ if (!rq->comp_ring.base) {12301230+ printk(KERN_ERR "%s: failed to allocate rx comp ring\n",12311231+ adapter->netdev->name);12321232+ goto err;12331233+ }12341234+12351235+ sz = sizeof(struct vmxnet3_rx_buf_info) * 
(rq->rx_ring[0].size +12361236+ rq->rx_ring[1].size);12371237+ bi = kmalloc(sz, GFP_KERNEL);12381238+ if (!bi) {12391239+ printk(KERN_ERR "%s: failed to allocate rx bufinfo\n",12401240+ adapter->netdev->name);12411241+ goto err;12421242+ }12431243+ memset(bi, 0, sz);12441244+ rq->buf_info[0] = bi;12451245+ rq->buf_info[1] = bi + rq->rx_ring[0].size;12461246+12471247+ return 0;12481248+12491249+err:12501250+ vmxnet3_rq_destroy(rq, adapter);12511251+ return -ENOMEM;12521252+}12531253+12541254+12551255+static int12561256+vmxnet3_do_poll(struct vmxnet3_adapter *adapter, int budget)12571257+{12581258+ if (unlikely(adapter->shared->ecr))12591259+ vmxnet3_process_events(adapter);12601260+12611261+ vmxnet3_tq_tx_complete(&adapter->tx_queue, adapter);12621262+ return vmxnet3_rq_rx_complete(&adapter->rx_queue, adapter, budget);12631263+}12641264+12651265+12661266+static int12671267+vmxnet3_poll(struct napi_struct *napi, int budget)12681268+{12691269+ struct vmxnet3_adapter *adapter = container_of(napi,12701270+ struct vmxnet3_adapter, napi);12711271+ int rxd_done;12721272+12731273+ rxd_done = vmxnet3_do_poll(adapter, budget);12741274+12751275+ if (rxd_done < budget) {12761276+ napi_complete(napi);12771277+ vmxnet3_enable_intr(adapter, 0);12781278+ }12791279+ return rxd_done;12801280+}12811281+12821282+12831283+/* Interrupt handler for vmxnet3 */12841284+static irqreturn_t12851285+vmxnet3_intr(int irq, void *dev_id)12861286+{12871287+ struct net_device *dev = dev_id;12881288+ struct vmxnet3_adapter *adapter = netdev_priv(dev);12891289+12901290+ if (unlikely(adapter->intr.type == VMXNET3_IT_INTX)) {12911291+ u32 icr = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_ICR);12921292+ if (unlikely(icr == 0))12931293+ /* not ours */12941294+ return IRQ_NONE;12951295+ }12961296+12971297+12981298+ /* disable intr if needed */12991299+ if (adapter->intr.mask_mode == VMXNET3_IMM_ACTIVE)13001300+ vmxnet3_disable_intr(adapter, 0);13011301+13021302+ 
napi_schedule(&adapter->napi);13031303+13041304+ return IRQ_HANDLED;13051305+}13061306+13071307+#ifdef CONFIG_NET_POLL_CONTROLLER13081308+13091309+13101310+/* netpoll callback. */13111311+static void13121312+vmxnet3_netpoll(struct net_device *netdev)13131313+{13141314+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);13151315+ int irq;13161316+13171317+#ifdef CONFIG_PCI_MSI13181318+ if (adapter->intr.type == VMXNET3_IT_MSIX)13191319+ irq = adapter->intr.msix_entries[0].vector;13201320+ else13211321+#endif13221322+ irq = adapter->pdev->irq;13231323+13241324+ disable_irq(irq);13251325+ vmxnet3_intr(irq, netdev);13261326+ enable_irq(irq);13271327+}13281328+#endif13291329+13301330+static int13311331+vmxnet3_request_irqs(struct vmxnet3_adapter *adapter)13321332+{13331333+ int err;13341334+13351335+#ifdef CONFIG_PCI_MSI13361336+ if (adapter->intr.type == VMXNET3_IT_MSIX) {13371337+ /* we only use 1 MSI-X vector */13381338+ err = request_irq(adapter->intr.msix_entries[0].vector,13391339+ vmxnet3_intr, 0, adapter->netdev->name,13401340+ adapter->netdev);13411341+ } else13421342+#endif13431343+ if (adapter->intr.type == VMXNET3_IT_MSI) {13441344+ err = request_irq(adapter->pdev->irq, vmxnet3_intr, 0,13451345+ adapter->netdev->name, adapter->netdev);13461346+ } else {13471347+ err = request_irq(adapter->pdev->irq, vmxnet3_intr,13481348+ IRQF_SHARED, adapter->netdev->name,13491349+ adapter->netdev);13501350+ }13511351+13521352+ if (err)13531353+ printk(KERN_ERR "Failed to request irq %s (intr type:%d), error"13541354+ ":%d\n", adapter->netdev->name, adapter->intr.type, err);13551355+13561356+13571357+ if (!err) {13581358+ int i;13591359+ /* init our intr settings */13601360+ for (i = 0; i < adapter->intr.num_intrs; i++)13611361+ adapter->intr.mod_levels[i] = UPT1_IML_ADAPTIVE;13621362+13631363+ /* next setup intr index for all intr sources */13641364+ adapter->tx_queue.comp_ring.intr_idx = 0;13651365+ adapter->rx_queue.comp_ring.intr_idx = 0;13661366+ 
adapter->intr.event_intr_idx = 0;13671367+13681368+ printk(KERN_INFO "%s: intr type %u, mode %u, %u vectors "13691369+ "allocated\n", adapter->netdev->name, adapter->intr.type,13701370+ adapter->intr.mask_mode, adapter->intr.num_intrs);13711371+ }13721372+13731373+ return err;13741374+}13751375+13761376+13771377+static void13781378+vmxnet3_free_irqs(struct vmxnet3_adapter *adapter)13791379+{13801380+ BUG_ON(adapter->intr.type == VMXNET3_IT_AUTO ||13811381+ adapter->intr.num_intrs <= 0);13821382+13831383+ switch (adapter->intr.type) {13841384+#ifdef CONFIG_PCI_MSI13851385+ case VMXNET3_IT_MSIX:13861386+ {13871387+ int i;13881388+13891389+ for (i = 0; i < adapter->intr.num_intrs; i++)13901390+ free_irq(adapter->intr.msix_entries[i].vector,13911391+ adapter->netdev);13921392+ break;13931393+ }13941394+#endif13951395+ case VMXNET3_IT_MSI:13961396+ free_irq(adapter->pdev->irq, adapter->netdev);13971397+ break;13981398+ case VMXNET3_IT_INTX:13991399+ free_irq(adapter->pdev->irq, adapter->netdev);14001400+ break;14011401+ default:14021402+ BUG_ON(true);14031403+ }14041404+}14051405+14061406+14071407+static void14081408+vmxnet3_vlan_rx_register(struct net_device *netdev, struct vlan_group *grp)14091409+{14101410+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);14111411+ struct Vmxnet3_DriverShared *shared = adapter->shared;14121412+ u32 *vfTable = adapter->shared->devRead.rxFilterConf.vfTable;14131413+14141414+ if (grp) {14151415+ /* add vlan rx stripping. 
*/14161416+ if (adapter->netdev->features & NETIF_F_HW_VLAN_RX) {14171417+ int i;14181418+ struct Vmxnet3_DSDevRead *devRead = &shared->devRead;14191419+ adapter->vlan_grp = grp;14201420+14211421+ /* update FEATURES to device */14221422+ devRead->misc.uptFeatures |= UPT1_F_RXVLAN;14231423+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,14241424+ VMXNET3_CMD_UPDATE_FEATURE);14251425+ /*14261426+ * Clear entire vfTable; then enable untagged pkts.14271427+ * Note: setting one entry in vfTable to non-zero turns14281428+ * on VLAN rx filtering.14291429+ */14301430+ for (i = 0; i < VMXNET3_VFT_SIZE; i++)14311431+ vfTable[i] = 0;14321432+14331433+ VMXNET3_SET_VFTABLE_ENTRY(vfTable, 0);14341434+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,14351435+ VMXNET3_CMD_UPDATE_VLAN_FILTERS);14361436+ } else {14371437+ printk(KERN_ERR "%s: vlan_rx_register when device has "14381438+ "no NETIF_F_HW_VLAN_RX\n", netdev->name);14391439+ }14401440+ } else {14411441+ /* remove vlan rx stripping. */14421442+ struct Vmxnet3_DSDevRead *devRead = &shared->devRead;14431443+ adapter->vlan_grp = NULL;14441444+14451445+ if (devRead->misc.uptFeatures & UPT1_F_RXVLAN) {14461446+ int i;14471447+14481448+ for (i = 0; i < VMXNET3_VFT_SIZE; i++) {14491449+ /* clear entire vfTable; this also disables14501450+ * VLAN rx filtering14511451+ */14521452+ vfTable[i] = 0;14531453+ }14541454+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,14551455+ VMXNET3_CMD_UPDATE_VLAN_FILTERS);14561456+14571457+ /* update FEATURES to device */14581458+ devRead->misc.uptFeatures &= ~UPT1_F_RXVLAN;14591459+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,14601460+ VMXNET3_CMD_UPDATE_FEATURE);14611461+ }14621462+ }14631463+}14641464+14651465+14661466+static void14671467+vmxnet3_restore_vlan(struct vmxnet3_adapter *adapter)14681468+{14691469+ if (adapter->vlan_grp) {14701470+ u16 vid;14711471+ u32 *vfTable = adapter->shared->devRead.rxFilterConf.vfTable;14721472+ bool activeVlan = false;14731473+14741474+ for (vid = 0; vid 
< VLAN_GROUP_ARRAY_LEN; vid++) {14751475+ if (vlan_group_get_device(adapter->vlan_grp, vid)) {14761476+ VMXNET3_SET_VFTABLE_ENTRY(vfTable, vid);14771477+ activeVlan = true;14781478+ }14791479+ }14801480+ if (activeVlan) {14811481+ /* continue to allow untagged pkts */14821482+ VMXNET3_SET_VFTABLE_ENTRY(vfTable, 0);14831483+ }14841484+ }14851485+}14861486+14871487+14881488+static void14891489+vmxnet3_vlan_rx_add_vid(struct net_device *netdev, u16 vid)14901490+{14911491+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);14921492+ u32 *vfTable = adapter->shared->devRead.rxFilterConf.vfTable;14931493+14941494+ VMXNET3_SET_VFTABLE_ENTRY(vfTable, vid);14951495+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,14961496+ VMXNET3_CMD_UPDATE_VLAN_FILTERS);14971497+}14981498+14991499+15001500+static void15011501+vmxnet3_vlan_rx_kill_vid(struct net_device *netdev, u16 vid)15021502+{15031503+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);15041504+ u32 *vfTable = adapter->shared->devRead.rxFilterConf.vfTable;15051505+15061506+ VMXNET3_CLEAR_VFTABLE_ENTRY(vfTable, vid);15071507+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,15081508+ VMXNET3_CMD_UPDATE_VLAN_FILTERS);15091509+}15101510+15111511+15121512+static u8 *15131513+vmxnet3_copy_mc(struct net_device *netdev)15141514+{15151515+ u8 *buf = NULL;15161516+ u32 sz = netdev->mc_count * ETH_ALEN;15171517+15181518+ /* struct Vmxnet3_RxFilterConf.mfTableLen is u16. 
*/15191519+ if (sz <= 0xffff) {15201520+ /* We may be called with BH disabled */15211521+ buf = kmalloc(sz, GFP_ATOMIC);15221522+ if (buf) {15231523+ int i;15241524+ struct dev_mc_list *mc = netdev->mc_list;15251525+15261526+ for (i = 0; i < netdev->mc_count; i++) {15271527+ BUG_ON(!mc);15281528+ memcpy(buf + i * ETH_ALEN, mc->dmi_addr,15291529+ ETH_ALEN);15301530+ mc = mc->next;15311531+ }15321532+ }15331533+ }15341534+ return buf;15351535+}15361536+15371537+15381538+static void15391539+vmxnet3_set_mc(struct net_device *netdev)15401540+{15411541+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);15421542+ struct Vmxnet3_RxFilterConf *rxConf =15431543+ &adapter->shared->devRead.rxFilterConf;15441544+ u8 *new_table = NULL;15451545+ u32 new_mode = VMXNET3_RXM_UCAST;15461546+15471547+ if (netdev->flags & IFF_PROMISC)15481548+ new_mode |= VMXNET3_RXM_PROMISC;15491549+15501550+ if (netdev->flags & IFF_BROADCAST)15511551+ new_mode |= VMXNET3_RXM_BCAST;15521552+15531553+ if (netdev->flags & IFF_ALLMULTI)15541554+ new_mode |= VMXNET3_RXM_ALL_MULTI;15551555+ else15561556+ if (netdev->mc_count > 0) {15571557+ new_table = vmxnet3_copy_mc(netdev);15581558+ if (new_table) {15591559+ new_mode |= VMXNET3_RXM_MCAST;15601560+ rxConf->mfTableLen = netdev->mc_count *15611561+ ETH_ALEN;15621562+ rxConf->mfTablePA = virt_to_phys(new_table);15631563+ } else {15641564+ printk(KERN_INFO "%s: failed to copy mcast list"15651565+ ", setting ALL_MULTI\n", netdev->name);15661566+ new_mode |= VMXNET3_RXM_ALL_MULTI;15671567+ }15681568+ }15691569+15701570+15711571+ if (!(new_mode & VMXNET3_RXM_MCAST)) {15721572+ rxConf->mfTableLen = 0;15731573+ rxConf->mfTablePA = 0;15741574+ }15751575+15761576+ if (new_mode != rxConf->rxMode) {15771577+ rxConf->rxMode = new_mode;15781578+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,15791579+ VMXNET3_CMD_UPDATE_RX_MODE);15801580+ }15811581+15821582+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,15831583+ 
VMXNET3_CMD_UPDATE_MAC_FILTERS);15841584+15851585+ kfree(new_table);15861586+}15871587+15881588+15891589+/*15901590+ * Set up driver_shared based on settings in adapter.15911591+ */15921592+15931593+static void15941594+vmxnet3_setup_driver_shared(struct vmxnet3_adapter *adapter)15951595+{15961596+ struct Vmxnet3_DriverShared *shared = adapter->shared;15971597+ struct Vmxnet3_DSDevRead *devRead = &shared->devRead;15981598+ struct Vmxnet3_TxQueueConf *tqc;15991599+ struct Vmxnet3_RxQueueConf *rqc;16001600+ int i;16011601+16021602+ memset(shared, 0, sizeof(*shared));16031603+16041604+ /* driver settings */16051605+ shared->magic = VMXNET3_REV1_MAGIC;16061606+ devRead->misc.driverInfo.version = VMXNET3_DRIVER_VERSION_NUM;16071607+ devRead->misc.driverInfo.gos.gosBits = (sizeof(void *) == 4 ?16081608+ VMXNET3_GOS_BITS_32 : VMXNET3_GOS_BITS_64);16091609+ devRead->misc.driverInfo.gos.gosType = VMXNET3_GOS_TYPE_LINUX;16101610+ devRead->misc.driverInfo.vmxnet3RevSpt = 1;16111611+ devRead->misc.driverInfo.uptVerSpt = 1;16121612+16131613+ devRead->misc.ddPA = virt_to_phys(adapter);16141614+ devRead->misc.ddLen = sizeof(struct vmxnet3_adapter);16151615+16161616+ /* set up feature flags */16171617+ if (adapter->rxcsum)16181618+ devRead->misc.uptFeatures |= UPT1_F_RXCSUM;16191619+16201620+ if (adapter->lro) {16211621+ devRead->misc.uptFeatures |= UPT1_F_LRO;16221622+ devRead->misc.maxNumRxSG = 1 + MAX_SKB_FRAGS;16231623+ }16241624+ if ((adapter->netdev->features & NETIF_F_HW_VLAN_RX)16251625+ && adapter->vlan_grp) {16261626+ devRead->misc.uptFeatures |= UPT1_F_RXVLAN;16271627+ }16281628+16291629+ devRead->misc.mtu = adapter->netdev->mtu;16301630+ devRead->misc.queueDescPA = adapter->queue_desc_pa;16311631+ devRead->misc.queueDescLen = sizeof(struct Vmxnet3_TxQueueDesc) +16321632+ sizeof(struct Vmxnet3_RxQueueDesc);16331633+16341634+ /* tx queue settings */16351635+ BUG_ON(adapter->tx_queue.tx_ring.base == NULL);16361636+16371637+ devRead->misc.numTxQueues = 1;16381638+ tqc = 
&adapter->tqd_start->conf;16391639+ tqc->txRingBasePA = adapter->tx_queue.tx_ring.basePA;16401640+ tqc->dataRingBasePA = adapter->tx_queue.data_ring.basePA;16411641+ tqc->compRingBasePA = adapter->tx_queue.comp_ring.basePA;16421642+ tqc->ddPA = virt_to_phys(adapter->tx_queue.buf_info);16431643+ tqc->txRingSize = adapter->tx_queue.tx_ring.size;16441644+ tqc->dataRingSize = adapter->tx_queue.data_ring.size;16451645+ tqc->compRingSize = adapter->tx_queue.comp_ring.size;16461646+ tqc->ddLen = sizeof(struct vmxnet3_tx_buf_info) *16471647+ tqc->txRingSize;16481648+ tqc->intrIdx = adapter->tx_queue.comp_ring.intr_idx;16491649+16501650+ /* rx queue settings */16511651+ devRead->misc.numRxQueues = 1;16521652+ rqc = &adapter->rqd_start->conf;16531653+ rqc->rxRingBasePA[0] = adapter->rx_queue.rx_ring[0].basePA;16541654+ rqc->rxRingBasePA[1] = adapter->rx_queue.rx_ring[1].basePA;16551655+ rqc->compRingBasePA = adapter->rx_queue.comp_ring.basePA;16561656+ rqc->ddPA = virt_to_phys(adapter->rx_queue.buf_info);16571657+ rqc->rxRingSize[0] = adapter->rx_queue.rx_ring[0].size;16581658+ rqc->rxRingSize[1] = adapter->rx_queue.rx_ring[1].size;16591659+ rqc->compRingSize = adapter->rx_queue.comp_ring.size;16601660+ rqc->ddLen = sizeof(struct vmxnet3_rx_buf_info) *16611661+ (rqc->rxRingSize[0] + rqc->rxRingSize[1]);16621662+ rqc->intrIdx = adapter->rx_queue.comp_ring.intr_idx;16631663+16641664+ /* intr settings */16651665+ devRead->intrConf.autoMask = adapter->intr.mask_mode ==16661666+ VMXNET3_IMM_AUTO;16671667+ devRead->intrConf.numIntrs = adapter->intr.num_intrs;16681668+ for (i = 0; i < adapter->intr.num_intrs; i++)16691669+ devRead->intrConf.modLevels[i] = adapter->intr.mod_levels[i];16701670+16711671+ devRead->intrConf.eventIntrIdx = adapter->intr.event_intr_idx;16721672+16731673+ /* rx filter settings */16741674+ devRead->rxFilterConf.rxMode = 0;16751675+ vmxnet3_restore_vlan(adapter);16761676+ /* the rest are already zeroed 
*/16771677+}16781678+16791679+16801680+int16811681+vmxnet3_activate_dev(struct vmxnet3_adapter *adapter)16821682+{16831683+	int err;16841684+	u32 ret;16851685+16861686+	dprintk(KERN_ERR "%s: skb_buf_size %d, rx_buf_per_pkt %d, ring sizes"16871687+		" %u %u %u\n", adapter->netdev->name, adapter->skb_buf_size,16881688+		adapter->rx_buf_per_pkt, adapter->tx_queue.tx_ring.size,16891689+		adapter->rx_queue.rx_ring[0].size,16901690+		adapter->rx_queue.rx_ring[1].size);16911691+16921692+	vmxnet3_tq_init(&adapter->tx_queue, adapter);16931693+	err = vmxnet3_rq_init(&adapter->rx_queue, adapter);16941694+	if (err) {16951695+		printk(KERN_ERR "Failed to init rx queue for %s: error %d\n",16961696+		       adapter->netdev->name, err);16971697+		goto rq_err;16981698+	}16991699+17001700+	err = vmxnet3_request_irqs(adapter);17011701+	if (err) {17021702+		printk(KERN_ERR "Failed to setup irq for %s: error %d\n",17031703+		       adapter->netdev->name, err);17041704+		goto irq_err;17051705+	}17061706+17071707+	vmxnet3_setup_driver_shared(adapter);17081708+17091709+	VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DSAL,17101710+			       VMXNET3_GET_ADDR_LO(adapter->shared_pa));17111711+	VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DSAH,17121712+			       VMXNET3_GET_ADDR_HI(adapter->shared_pa));17131713+17141714+	VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,17151715+			       VMXNET3_CMD_ACTIVATE_DEV);17161716+	ret = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD);17171717+17181718+	if (ret != 0) {17191719+		printk(KERN_ERR "Failed to activate dev %s: error %u\n",17201720+		       adapter->netdev->name, ret);17211721+		err = -EINVAL;17221722+		goto activate_err;17231723+	}17241724+	VMXNET3_WRITE_BAR0_REG(adapter, VMXNET3_REG_RXPROD,17251725+			       adapter->rx_queue.rx_ring[0].next2fill);17261726+	VMXNET3_WRITE_BAR0_REG(adapter, VMXNET3_REG_RXPROD2,17271727+			       adapter->rx_queue.rx_ring[1].next2fill);17281728+17291729+	/* Apply the rx filter settings last. 
*/17301730+ vmxnet3_set_mc(adapter->netdev);17311731+17321732+ /*17331733+ * Check link state when first activating device. It will start the17341734+ * tx queue if the link is up.17351735+ */17361736+ vmxnet3_check_link(adapter);17371737+17381738+ napi_enable(&adapter->napi);17391739+ vmxnet3_enable_all_intrs(adapter);17401740+ clear_bit(VMXNET3_STATE_BIT_QUIESCED, &adapter->state);17411741+ return 0;17421742+17431743+activate_err:17441744+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DSAL, 0);17451745+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DSAH, 0);17461746+ vmxnet3_free_irqs(adapter);17471747+irq_err:17481748+rq_err:17491749+ /* free up buffers we allocated */17501750+ vmxnet3_rq_cleanup(&adapter->rx_queue, adapter);17511751+ return err;17521752+}17531753+17541754+17551755+void17561756+vmxnet3_reset_dev(struct vmxnet3_adapter *adapter)17571757+{17581758+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_RESET_DEV);17591759+}17601760+17611761+17621762+int17631763+vmxnet3_quiesce_dev(struct vmxnet3_adapter *adapter)17641764+{17651765+ if (test_and_set_bit(VMXNET3_STATE_BIT_QUIESCED, &adapter->state))17661766+ return 0;17671767+17681768+17691769+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,17701770+ VMXNET3_CMD_QUIESCE_DEV);17711771+ vmxnet3_disable_all_intrs(adapter);17721772+17731773+ napi_disable(&adapter->napi);17741774+ netif_tx_disable(adapter->netdev);17751775+ adapter->link_speed = 0;17761776+ netif_carrier_off(adapter->netdev);17771777+17781778+ vmxnet3_tq_cleanup(&adapter->tx_queue, adapter);17791779+ vmxnet3_rq_cleanup(&adapter->rx_queue, adapter);17801780+ vmxnet3_free_irqs(adapter);17811781+ return 0;17821782+}17831783+17841784+17851785+static void17861786+vmxnet3_write_mac_addr(struct vmxnet3_adapter *adapter, u8 *mac)17871787+{17881788+ u32 tmp;17891789+17901790+ tmp = *(u32 *)mac;17911791+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_MACL, tmp);17921792+17931793+ tmp = (mac[5] << 8) | mac[4];17941794+ 
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_MACH, tmp);17951795+}17961796+17971797+17981798+static int17991799+vmxnet3_set_mac_addr(struct net_device *netdev, void *p)18001800+{18011801+ struct sockaddr *addr = p;18021802+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);18031803+18041804+ memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);18051805+ vmxnet3_write_mac_addr(adapter, addr->sa_data);18061806+18071807+ return 0;18081808+}18091809+18101810+18111811+/* ==================== initialization and cleanup routines ============ */18121812+18131813+static int18141814+vmxnet3_alloc_pci_resources(struct vmxnet3_adapter *adapter, bool *dma64)18151815+{18161816+ int err;18171817+ unsigned long mmio_start, mmio_len;18181818+ struct pci_dev *pdev = adapter->pdev;18191819+18201820+ err = pci_enable_device(pdev);18211821+ if (err) {18221822+ printk(KERN_ERR "Failed to enable adapter %s: error %d\n",18231823+ pci_name(pdev), err);18241824+ return err;18251825+ }18261826+18271827+ if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) == 0) {18281828+ if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) {18291829+ printk(KERN_ERR "pci_set_consistent_dma_mask failed "18301830+ "for adapter %s\n", pci_name(pdev));18311831+ err = -EIO;18321832+ goto err_set_mask;18331833+ }18341834+ *dma64 = true;18351835+ } else {18361836+ if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) {18371837+ printk(KERN_ERR "pci_set_dma_mask failed for adapter "18381838+ "%s\n", pci_name(pdev));18391839+ err = -EIO;18401840+ goto err_set_mask;18411841+ }18421842+ *dma64 = false;18431843+ }18441844+18451845+ err = pci_request_selected_regions(pdev, (1 << 2) - 1,18461846+ vmxnet3_driver_name);18471847+ if (err) {18481848+ printk(KERN_ERR "Failed to request region for adapter %s: "18491849+ "error %d\n", pci_name(pdev), err);18501850+ goto err_set_mask;18511851+ }18521852+18531853+ pci_set_master(pdev);18541854+18551855+ mmio_start = pci_resource_start(pdev, 0);18561856+ mmio_len = 
pci_resource_len(pdev, 0);18571857+ adapter->hw_addr0 = ioremap(mmio_start, mmio_len);18581858+ if (!adapter->hw_addr0) {18591859+ printk(KERN_ERR "Failed to map bar0 for adapter %s\n",18601860+ pci_name(pdev));18611861+ err = -EIO;18621862+ goto err_ioremap;18631863+ }18641864+18651865+ mmio_start = pci_resource_start(pdev, 1);18661866+ mmio_len = pci_resource_len(pdev, 1);18671867+ adapter->hw_addr1 = ioremap(mmio_start, mmio_len);18681868+ if (!adapter->hw_addr1) {18691869+ printk(KERN_ERR "Failed to map bar1 for adapter %s\n",18701870+ pci_name(pdev));18711871+ err = -EIO;18721872+ goto err_bar1;18731873+ }18741874+ return 0;18751875+18761876+err_bar1:18771877+ iounmap(adapter->hw_addr0);18781878+err_ioremap:18791879+ pci_release_selected_regions(pdev, (1 << 2) - 1);18801880+err_set_mask:18811881+ pci_disable_device(pdev);18821882+ return err;18831883+}18841884+18851885+18861886+static void18871887+vmxnet3_free_pci_resources(struct vmxnet3_adapter *adapter)18881888+{18891889+ BUG_ON(!adapter->pdev);18901890+18911891+ iounmap(adapter->hw_addr0);18921892+ iounmap(adapter->hw_addr1);18931893+ pci_release_selected_regions(adapter->pdev, (1 << 2) - 1);18941894+ pci_disable_device(adapter->pdev);18951895+}18961896+18971897+18981898+static void18991899+vmxnet3_adjust_rx_ring_size(struct vmxnet3_adapter *adapter)19001900+{19011901+ size_t sz;19021902+19031903+ if (adapter->netdev->mtu <= VMXNET3_MAX_SKB_BUF_SIZE -19041904+ VMXNET3_MAX_ETH_HDR_SIZE) {19051905+ adapter->skb_buf_size = adapter->netdev->mtu +19061906+ VMXNET3_MAX_ETH_HDR_SIZE;19071907+ if (adapter->skb_buf_size < VMXNET3_MIN_T0_BUF_SIZE)19081908+ adapter->skb_buf_size = VMXNET3_MIN_T0_BUF_SIZE;19091909+19101910+ adapter->rx_buf_per_pkt = 1;19111911+ } else {19121912+ adapter->skb_buf_size = VMXNET3_MAX_SKB_BUF_SIZE;19131913+ sz = adapter->netdev->mtu - VMXNET3_MAX_SKB_BUF_SIZE +19141914+ VMXNET3_MAX_ETH_HDR_SIZE;19151915+ adapter->rx_buf_per_pkt = 1 + (sz + PAGE_SIZE - 1) / PAGE_SIZE;19161916+ 
}19171917+19181918+ /*19191919+ * for simplicity, force the ring0 size to be a multiple of19201920+ * rx_buf_per_pkt * VMXNET3_RING_SIZE_ALIGN19211921+ */19221922+ sz = adapter->rx_buf_per_pkt * VMXNET3_RING_SIZE_ALIGN;19231923+ adapter->rx_queue.rx_ring[0].size = (adapter->rx_queue.rx_ring[0].size +19241924+ sz - 1) / sz * sz;19251925+ adapter->rx_queue.rx_ring[0].size = min_t(u32,19261926+ adapter->rx_queue.rx_ring[0].size,19271927+ VMXNET3_RX_RING_MAX_SIZE / sz * sz);19281928+}19291929+19301930+19311931+int19321932+vmxnet3_create_queues(struct vmxnet3_adapter *adapter, u32 tx_ring_size,19331933+ u32 rx_ring_size, u32 rx_ring2_size)19341934+{19351935+ int err;19361936+19371937+ adapter->tx_queue.tx_ring.size = tx_ring_size;19381938+ adapter->tx_queue.data_ring.size = tx_ring_size;19391939+ adapter->tx_queue.comp_ring.size = tx_ring_size;19401940+ adapter->tx_queue.shared = &adapter->tqd_start->ctrl;19411941+ adapter->tx_queue.stopped = true;19421942+ err = vmxnet3_tq_create(&adapter->tx_queue, adapter);19431943+ if (err)19441944+ return err;19451945+19461946+ adapter->rx_queue.rx_ring[0].size = rx_ring_size;19471947+ adapter->rx_queue.rx_ring[1].size = rx_ring2_size;19481948+ vmxnet3_adjust_rx_ring_size(adapter);19491949+ adapter->rx_queue.comp_ring.size = adapter->rx_queue.rx_ring[0].size +19501950+ adapter->rx_queue.rx_ring[1].size;19511951+ adapter->rx_queue.qid = 0;19521952+ adapter->rx_queue.qid2 = 1;19531953+ adapter->rx_queue.shared = &adapter->rqd_start->ctrl;19541954+ err = vmxnet3_rq_create(&adapter->rx_queue, adapter);19551955+ if (err)19561956+ vmxnet3_tq_destroy(&adapter->tx_queue, adapter);19571957+19581958+ return err;19591959+}19601960+19611961+static int19621962+vmxnet3_open(struct net_device *netdev)19631963+{19641964+ struct vmxnet3_adapter *adapter;19651965+ int err;19661966+19671967+ adapter = netdev_priv(netdev);19681968+19691969+ spin_lock_init(&adapter->tx_queue.tx_lock);19701970+19711971+ err = vmxnet3_create_queues(adapter, 
VMXNET3_DEF_TX_RING_SIZE,19721972+ VMXNET3_DEF_RX_RING_SIZE,19731973+ VMXNET3_DEF_RX_RING_SIZE);19741974+ if (err)19751975+ goto queue_err;19761976+19771977+ err = vmxnet3_activate_dev(adapter);19781978+ if (err)19791979+ goto activate_err;19801980+19811981+ return 0;19821982+19831983+activate_err:19841984+ vmxnet3_rq_destroy(&adapter->rx_queue, adapter);19851985+ vmxnet3_tq_destroy(&adapter->tx_queue, adapter);19861986+queue_err:19871987+ return err;19881988+}19891989+19901990+19911991+static int19921992+vmxnet3_close(struct net_device *netdev)19931993+{19941994+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);19951995+19961996+ /*19971997+ * Reset_work may be in the middle of resetting the device, wait for its19981998+ * completion.19991999+ */20002000+ while (test_and_set_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state))20012001+ msleep(1);20022002+20032003+ vmxnet3_quiesce_dev(adapter);20042004+20052005+ vmxnet3_rq_destroy(&adapter->rx_queue, adapter);20062006+ vmxnet3_tq_destroy(&adapter->tx_queue, adapter);20072007+20082008+ clear_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state);20092009+20102010+20112011+ return 0;20122012+}20132013+20142014+20152015+void20162016+vmxnet3_force_close(struct vmxnet3_adapter *adapter)20172017+{20182018+ /*20192019+ * we must clear VMXNET3_STATE_BIT_RESETTING, otherwise20202020+ * vmxnet3_close() will deadlock.20212021+ */20222022+ BUG_ON(test_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state));20232023+20242024+ /* we need to enable NAPI, otherwise dev_close will deadlock */20252025+ napi_enable(&adapter->napi);20262026+ dev_close(adapter->netdev);20272027+}20282028+20292029+20302030+static int20312031+vmxnet3_change_mtu(struct net_device *netdev, int new_mtu)20322032+{20332033+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);20342034+ int err = 0;20352035+20362036+ if (new_mtu < VMXNET3_MIN_MTU || new_mtu > VMXNET3_MAX_MTU)20372037+ return -EINVAL;20382038+20392039+ if (new_mtu > 1500 && 
!adapter->jumbo_frame)20402040+ return -EINVAL;20412041+20422042+ netdev->mtu = new_mtu;20432043+20442044+ /*20452045+ * Reset_work may be in the middle of resetting the device, wait for its20462046+ * completion.20472047+ */20482048+ while (test_and_set_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state))20492049+ msleep(1);20502050+20512051+ if (netif_running(netdev)) {20522052+ vmxnet3_quiesce_dev(adapter);20532053+ vmxnet3_reset_dev(adapter);20542054+20552055+ /* we need to re-create the rx queue based on the new mtu */20562056+ vmxnet3_rq_destroy(&adapter->rx_queue, adapter);20572057+ vmxnet3_adjust_rx_ring_size(adapter);20582058+ adapter->rx_queue.comp_ring.size =20592059+ adapter->rx_queue.rx_ring[0].size +20602060+ adapter->rx_queue.rx_ring[1].size;20612061+ err = vmxnet3_rq_create(&adapter->rx_queue, adapter);20622062+ if (err) {20632063+ printk(KERN_ERR "%s: failed to re-create rx queue,"20642064+ " error %d. Closing it.\n", netdev->name, err);20652065+ goto out;20662066+ }20672067+20682068+ err = vmxnet3_activate_dev(adapter);20692069+ if (err) {20702070+ printk(KERN_ERR "%s: failed to re-activate, error %d. 
"20712071+ "Closing it\n", netdev->name, err);20722072+ goto out;20732073+ }20742074+ }20752075+20762076+out:20772077+ clear_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state);20782078+ if (err)20792079+ vmxnet3_force_close(adapter);20802080+20812081+ return err;20822082+}20832083+20842084+20852085+static void20862086+vmxnet3_declare_features(struct vmxnet3_adapter *adapter, bool dma64)20872087+{20882088+ struct net_device *netdev = adapter->netdev;20892089+20902090+ netdev->features = NETIF_F_SG |20912091+ NETIF_F_HW_CSUM |20922092+ NETIF_F_HW_VLAN_TX |20932093+ NETIF_F_HW_VLAN_RX |20942094+ NETIF_F_HW_VLAN_FILTER |20952095+ NETIF_F_TSO |20962096+ NETIF_F_TSO6 |20972097+ NETIF_F_LRO;20982098+20992099+ printk(KERN_INFO "features: sg csum vlan jf tso tsoIPv6 lro");21002100+21012101+ adapter->rxcsum = true;21022102+ adapter->jumbo_frame = true;21032103+ adapter->lro = true;21042104+21052105+ if (dma64) {21062106+ netdev->features |= NETIF_F_HIGHDMA;21072107+ printk(" highDMA");21082108+ }21092109+21102110+ netdev->vlan_features = netdev->features;21112111+ printk("\n");21122112+}21132113+21142114+21152115+static void21162116+vmxnet3_read_mac_addr(struct vmxnet3_adapter *adapter, u8 *mac)21172117+{21182118+ u32 tmp;21192119+21202120+ tmp = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_MACL);21212121+ *(u32 *)mac = tmp;21222122+21232123+ tmp = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_MACH);21242124+ mac[4] = tmp & 0xff;21252125+ mac[5] = (tmp >> 8) & 0xff;21262126+}21272127+21282128+21292129+static void21302130+vmxnet3_alloc_intr_resources(struct vmxnet3_adapter *adapter)21312131+{21322132+ u32 cfg;21332133+21342134+ /* intr settings */21352135+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,21362136+ VMXNET3_CMD_GET_CONF_INTR);21372137+ cfg = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD);21382138+ adapter->intr.type = cfg & 0x3;21392139+ adapter->intr.mask_mode = (cfg >> 2) & 0x3;21402140+21412141+ if (adapter->intr.type == VMXNET3_IT_AUTO) {21422142+ int 
err;21432143+21442144+#ifdef CONFIG_PCI_MSI21452145+ adapter->intr.msix_entries[0].entry = 0;21462146+ err = pci_enable_msix(adapter->pdev, adapter->intr.msix_entries,21472147+ VMXNET3_LINUX_MAX_MSIX_VECT);21482148+ if (!err) {21492149+ adapter->intr.num_intrs = 1;21502150+ adapter->intr.type = VMXNET3_IT_MSIX;21512151+ return;21522152+ }21532153+#endif21542154+21552155+ err = pci_enable_msi(adapter->pdev);21562156+ if (!err) {21572157+ adapter->intr.num_intrs = 1;21582158+ adapter->intr.type = VMXNET3_IT_MSI;21592159+ return;21602160+ }21612161+ }21622162+21632163+ adapter->intr.type = VMXNET3_IT_INTX;21642164+21652165+ /* INT-X related setting */21662166+ adapter->intr.num_intrs = 1;21672167+}21682168+21692169+21702170+static void21712171+vmxnet3_free_intr_resources(struct vmxnet3_adapter *adapter)21722172+{21732173+ if (adapter->intr.type == VMXNET3_IT_MSIX)21742174+ pci_disable_msix(adapter->pdev);21752175+ else if (adapter->intr.type == VMXNET3_IT_MSI)21762176+ pci_disable_msi(adapter->pdev);21772177+ else21782178+ BUG_ON(adapter->intr.type != VMXNET3_IT_INTX);21792179+}21802180+21812181+21822182+static void21832183+vmxnet3_tx_timeout(struct net_device *netdev)21842184+{21852185+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);21862186+ adapter->tx_timeout_count++;21872187+21882188+ printk(KERN_ERR "%s: tx hang\n", adapter->netdev->name);21892189+ schedule_work(&adapter->work);21902190+}21912191+21922192+21932193+static void21942194+vmxnet3_reset_work(struct work_struct *data)21952195+{21962196+ struct vmxnet3_adapter *adapter;21972197+21982198+ adapter = container_of(data, struct vmxnet3_adapter, work);21992199+22002200+ /* if another thread is resetting the device, no need to proceed */22012201+ if (test_and_set_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state))22022202+ return;22032203+22042204+ /* if the device is closed, we must leave it alone */22052205+ if (netif_running(adapter->netdev)) {22062206+ printk(KERN_INFO "%s: resetting\n", 
adapter->netdev->name);22072207+ vmxnet3_quiesce_dev(adapter);22082208+ vmxnet3_reset_dev(adapter);22092209+ vmxnet3_activate_dev(adapter);22102210+ } else {22112211+ printk(KERN_INFO "%s: already closed\n", adapter->netdev->name);22122212+ }22132213+22142214+ clear_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state);22152215+}22162216+22172217+22182218+static int __devinit22192219+vmxnet3_probe_device(struct pci_dev *pdev,22202220+ const struct pci_device_id *id)22212221+{22222222+ static const struct net_device_ops vmxnet3_netdev_ops = {22232223+ .ndo_open = vmxnet3_open,22242224+ .ndo_stop = vmxnet3_close,22252225+ .ndo_start_xmit = vmxnet3_xmit_frame,22262226+ .ndo_set_mac_address = vmxnet3_set_mac_addr,22272227+ .ndo_change_mtu = vmxnet3_change_mtu,22282228+ .ndo_get_stats = vmxnet3_get_stats,22292229+ .ndo_tx_timeout = vmxnet3_tx_timeout,22302230+ .ndo_set_multicast_list = vmxnet3_set_mc,22312231+ .ndo_vlan_rx_register = vmxnet3_vlan_rx_register,22322232+ .ndo_vlan_rx_add_vid = vmxnet3_vlan_rx_add_vid,22332233+ .ndo_vlan_rx_kill_vid = vmxnet3_vlan_rx_kill_vid,22342234+#ifdef CONFIG_NET_POLL_CONTROLLER22352235+ .ndo_poll_controller = vmxnet3_netpoll,22362236+#endif22372237+ };22382238+ int err;22392239+ bool dma64 = false; /* stupid gcc */22402240+ u32 ver;22412241+ struct net_device *netdev;22422242+ struct vmxnet3_adapter *adapter;22432243+ u8 mac[ETH_ALEN];22442244+22452245+ netdev = alloc_etherdev(sizeof(struct vmxnet3_adapter));22462246+ if (!netdev) {22472247+ printk(KERN_ERR "Failed to alloc ethernet device for adapter "22482248+ "%s\n", pci_name(pdev));22492249+ return -ENOMEM;22502250+ }22512251+22522252+ pci_set_drvdata(pdev, netdev);22532253+ adapter = netdev_priv(netdev);22542254+ adapter->netdev = netdev;22552255+ adapter->pdev = pdev;22562256+22572257+ adapter->shared = pci_alloc_consistent(adapter->pdev,22582258+ sizeof(struct Vmxnet3_DriverShared),22592259+ &adapter->shared_pa);22602260+ if (!adapter->shared) {22612261+ printk(KERN_ERR "Failed to 
allocate memory for %s\n",22622262+ pci_name(pdev));22632263+ err = -ENOMEM;22642264+ goto err_alloc_shared;22652265+ }22662266+22672267+ adapter->tqd_start = pci_alloc_consistent(adapter->pdev,22682268+ sizeof(struct Vmxnet3_TxQueueDesc) +22692269+ sizeof(struct Vmxnet3_RxQueueDesc),22702270+ &adapter->queue_desc_pa);22712271+22722272+ if (!adapter->tqd_start) {22732273+ printk(KERN_ERR "Failed to allocate memory for %s\n",22742274+ pci_name(pdev));22752275+ err = -ENOMEM;22762276+ goto err_alloc_queue_desc;22772277+ }22782278+ adapter->rqd_start = (struct Vmxnet3_RxQueueDesc *)(adapter->tqd_start22792279+ + 1);22802280+22812281+ adapter->pm_conf = kmalloc(sizeof(struct Vmxnet3_PMConf), GFP_KERNEL);22822282+ if (adapter->pm_conf == NULL) {22832283+ printk(KERN_ERR "Failed to allocate memory for %s\n",22842284+ pci_name(pdev));22852285+ err = -ENOMEM;22862286+ goto err_alloc_pm;22872287+ }22882288+22892289+ err = vmxnet3_alloc_pci_resources(adapter, &dma64);22902290+ if (err < 0)22912291+ goto err_alloc_pci;22922292+22932293+ ver = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_VRRS);22942294+ if (ver & 1) {22952295+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_VRRS, 1);22962296+ } else {22972297+ printk(KERN_ERR "Incompatible h/w version (0x%x) for adapter"22982298+ " %s\n", ver, pci_name(pdev));22992299+ err = -EBUSY;23002300+ goto err_ver;23012301+ }23022302+23032303+ ver = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_UVRS);23042304+ if (ver & 1) {23052305+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_UVRS, 1);23062306+ } else {23072307+ printk(KERN_ERR "Incompatible upt version (0x%x) for "23082308+ "adapter %s\n", ver, pci_name(pdev));23092309+ err = -EBUSY;23102310+ goto err_ver;23112311+ }23122312+23132313+ vmxnet3_declare_features(adapter, dma64);23142314+23152315+ adapter->dev_number = atomic_read(&devices_found);23162316+ vmxnet3_alloc_intr_resources(adapter);23172317+23182318+ vmxnet3_read_mac_addr(adapter, mac);23192319+ memcpy(netdev->dev_addr, mac, 
netdev->addr_len);23202320+23212321+ netdev->netdev_ops = &vmxnet3_netdev_ops;23222322+ netdev->watchdog_timeo = 5 * HZ;23232323+ vmxnet3_set_ethtool_ops(netdev);23242324+23252325+ INIT_WORK(&adapter->work, vmxnet3_reset_work);23262326+23272327+ netif_napi_add(netdev, &adapter->napi, vmxnet3_poll, 64);23282328+ SET_NETDEV_DEV(netdev, &pdev->dev);23292329+ err = register_netdev(netdev);23302330+23312331+ if (err) {23322332+ printk(KERN_ERR "Failed to register adapter %s\n",23332333+ pci_name(pdev));23342334+ goto err_register;23352335+ }23362336+23372337+ set_bit(VMXNET3_STATE_BIT_QUIESCED, &adapter->state);23382338+ atomic_inc(&devices_found);23392339+ return 0;23402340+23412341+err_register:23422342+ vmxnet3_free_intr_resources(adapter);23432343+err_ver:23442344+ vmxnet3_free_pci_resources(adapter);23452345+err_alloc_pci:23462346+ kfree(adapter->pm_conf);23472347+err_alloc_pm:23482348+ pci_free_consistent(adapter->pdev, sizeof(struct Vmxnet3_TxQueueDesc) +23492349+ sizeof(struct Vmxnet3_RxQueueDesc),23502350+ adapter->tqd_start, adapter->queue_desc_pa);23512351+err_alloc_queue_desc:23522352+ pci_free_consistent(adapter->pdev, sizeof(struct Vmxnet3_DriverShared),23532353+ adapter->shared, adapter->shared_pa);23542354+err_alloc_shared:23552355+ pci_set_drvdata(pdev, NULL);23562356+ free_netdev(netdev);23572357+ return err;23582358+}23592359+23602360+23612361+static void __devexit23622362+vmxnet3_remove_device(struct pci_dev *pdev)23632363+{23642364+ struct net_device *netdev = pci_get_drvdata(pdev);23652365+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);23662366+23672367+ flush_scheduled_work();23682368+23692369+ unregister_netdev(netdev);23702370+23712371+ vmxnet3_free_intr_resources(adapter);23722372+ vmxnet3_free_pci_resources(adapter);23732373+ kfree(adapter->pm_conf);23742374+ pci_free_consistent(adapter->pdev, sizeof(struct Vmxnet3_TxQueueDesc) +23752375+ sizeof(struct Vmxnet3_RxQueueDesc),23762376+ adapter->tqd_start, adapter->queue_desc_pa);23772377+ 
pci_free_consistent(adapter->pdev, sizeof(struct Vmxnet3_DriverShared),23782378+ adapter->shared, adapter->shared_pa);23792379+ free_netdev(netdev);23802380+}23812381+23822382+23832383+#ifdef CONFIG_PM23842384+23852385+static int23862386+vmxnet3_suspend(struct device *device)23872387+{23882388+ struct pci_dev *pdev = to_pci_dev(device);23892389+ struct net_device *netdev = pci_get_drvdata(pdev);23902390+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);23912391+ struct Vmxnet3_PMConf *pmConf;23922392+ struct ethhdr *ehdr;23932393+ struct arphdr *ahdr;23942394+ u8 *arpreq;23952395+ struct in_device *in_dev;23962396+ struct in_ifaddr *ifa;23972397+ int i = 0;23982398+23992399+ if (!netif_running(netdev))24002400+ return 0;24012401+24022402+ vmxnet3_disable_all_intrs(adapter);24032403+ vmxnet3_free_irqs(adapter);24042404+ vmxnet3_free_intr_resources(adapter);24052405+24062406+ netif_device_detach(netdev);24072407+ netif_stop_queue(netdev);24082408+24092409+ /* Create wake-up filters. */24102410+ pmConf = adapter->pm_conf;24112411+ memset(pmConf, 0, sizeof(*pmConf));24122412+24132413+ if (adapter->wol & WAKE_UCAST) {24142414+ pmConf->filters[i].patternSize = ETH_ALEN;24152415+ pmConf->filters[i].maskSize = 1;24162416+ memcpy(pmConf->filters[i].pattern, netdev->dev_addr, ETH_ALEN);24172417+ pmConf->filters[i].mask[0] = 0x3F; /* LSB ETH_ALEN bits */24182418+24192419+ pmConf->wakeUpEvents |= VMXNET3_PM_WAKEUP_FILTER;24202420+ i++;24212421+ }24222422+24232423+ if (adapter->wol & WAKE_ARP) {24242424+ in_dev = in_dev_get(netdev);24252425+ if (!in_dev)24262426+ goto skip_arp;24272427+24282428+ ifa = (struct in_ifaddr *)in_dev->ifa_list;24292429+ if (!ifa)24302430+ goto skip_arp;24312431+24322432+ pmConf->filters[i].patternSize = ETH_HLEN + /* Ethernet header*/24332433+ sizeof(struct arphdr) + /* ARP header */24342434+ 2 * ETH_ALEN + /* 2 Ethernet addresses*/24352435+ 2 * sizeof(u32); /*2 IPv4 addresses */24362436+ pmConf->filters[i].maskSize =24372437+ 
(pmConf->filters[i].patternSize - 1) / 8 + 1;24382438+24392439+ /* ETH_P_ARP in Ethernet header. */24402440+ ehdr = (struct ethhdr *)pmConf->filters[i].pattern;24412441+ ehdr->h_proto = htons(ETH_P_ARP);24422442+24432443+ /* ARPOP_REQUEST in ARP header. */24442444+ ahdr = (struct arphdr *)&pmConf->filters[i].pattern[ETH_HLEN];24452445+ ahdr->ar_op = htons(ARPOP_REQUEST);24462446+ arpreq = (u8 *)(ahdr + 1);24472447+24482448+ /* The Unicast IPv4 address in 'tip' field. */24492449+ arpreq += 2 * ETH_ALEN + sizeof(u32);24502450+ *(u32 *)arpreq = ifa->ifa_address;24512451+24522452+ /* The mask for the relevant bits. */24532453+ pmConf->filters[i].mask[0] = 0x00;24542454+ pmConf->filters[i].mask[1] = 0x30; /* ETH_P_ARP */24552455+ pmConf->filters[i].mask[2] = 0x30; /* ARPOP_REQUEST */24562456+ pmConf->filters[i].mask[3] = 0x00;24572457+ pmConf->filters[i].mask[4] = 0xC0; /* IPv4 TIP */24582458+ pmConf->filters[i].mask[5] = 0x03; /* IPv4 TIP */24592459+ in_dev_put(in_dev);24602460+24612461+ pmConf->wakeUpEvents |= VMXNET3_PM_WAKEUP_FILTER;24622462+ i++;24632463+ }24642464+24652465+skip_arp:24662466+ if (adapter->wol & WAKE_MAGIC)24672467+ pmConf->wakeUpEvents |= VMXNET3_PM_WAKEUP_MAGIC;24682468+24692469+ pmConf->numFilters = i;24702470+24712471+ adapter->shared->devRead.pmConfDesc.confVer = 1;24722472+ adapter->shared->devRead.pmConfDesc.confLen = sizeof(*pmConf);24732473+ adapter->shared->devRead.pmConfDesc.confPA = virt_to_phys(pmConf);24742474+24752475+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,24762476+ VMXNET3_CMD_UPDATE_PMCFG);24772477+24782478+ pci_save_state(pdev);24792479+ pci_enable_wake(pdev, pci_choose_state(pdev, PMSG_SUSPEND),24802480+ adapter->wol);24812481+ pci_disable_device(pdev);24822482+ pci_set_power_state(pdev, pci_choose_state(pdev, PMSG_SUSPEND));24832483+24842484+ return 0;24852485+}24862486+24872487+24882488+static int24892489+vmxnet3_resume(struct device *device)24902490+{24912491+ int err;24922492+ struct pci_dev *pdev = 
to_pci_dev(device);24932493+ struct net_device *netdev = pci_get_drvdata(pdev);24942494+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);24952495+ struct Vmxnet3_PMConf *pmConf;24962496+24972497+ if (!netif_running(netdev))24982498+ return 0;24992499+25002500+ /* Destroy wake-up filters. */25012501+ pmConf = adapter->pm_conf;25022502+ memset(pmConf, 0, sizeof(*pmConf));25032503+25042504+ adapter->shared->devRead.pmConfDesc.confVer = 1;25052505+ adapter->shared->devRead.pmConfDesc.confLen = sizeof(*pmConf);25062506+ adapter->shared->devRead.pmConfDesc.confPA = virt_to_phys(pmConf);25072507+25082508+ netif_device_attach(netdev);25092509+ pci_set_power_state(pdev, PCI_D0);25102510+ pci_restore_state(pdev);25112511+ err = pci_enable_device_mem(pdev);25122512+ if (err != 0)25132513+ return err;25142514+25152515+ pci_enable_wake(pdev, PCI_D0, 0);25162516+25172517+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,25182518+ VMXNET3_CMD_UPDATE_PMCFG);25192519+ vmxnet3_alloc_intr_resources(adapter);25202520+ vmxnet3_request_irqs(adapter);25212521+ vmxnet3_enable_all_intrs(adapter);25222522+25232523+ return 0;25242524+}25252525+25262526+static struct dev_pm_ops vmxnet3_pm_ops = {25272527+ .suspend = vmxnet3_suspend,25282528+ .resume = vmxnet3_resume,25292529+};25302530+#endif25312531+25322532+static struct pci_driver vmxnet3_driver = {25332533+ .name = vmxnet3_driver_name,25342534+ .id_table = vmxnet3_pciid_table,25352535+ .probe = vmxnet3_probe_device,25362536+ .remove = __devexit_p(vmxnet3_remove_device),25372537+#ifdef CONFIG_PM25382538+ .driver.pm = &vmxnet3_pm_ops,25392539+#endif25402540+};25412541+25422542+25432543+static int __init25442544+vmxnet3_init_module(void)25452545+{25462546+ printk(KERN_INFO "%s - version %s\n", VMXNET3_DRIVER_DESC,25472547+ VMXNET3_DRIVER_VERSION_REPORT);25482548+ return pci_register_driver(&vmxnet3_driver);25492549+}25502550+25512551+module_init(vmxnet3_init_module);25522552+25532553+25542554+static 
void25552555+vmxnet3_exit_module(void)25562556+{25572557+ pci_unregister_driver(&vmxnet3_driver);25582558+}25592559+25602560+module_exit(vmxnet3_exit_module);25612561+25622562+MODULE_AUTHOR("VMware, Inc.");25632563+MODULE_DESCRIPTION(VMXNET3_DRIVER_DESC);25642564+MODULE_LICENSE("GPL v2");25652565+MODULE_VERSION(VMXNET3_DRIVER_VERSION_STRING);
+566
drivers/net/vmxnet3/vmxnet3_ethtool.c
···
+/*
+ * Linux driver for VMware's vmxnet3 ethernet NIC.
+ *
+ * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; version 2 of the License and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ * NON INFRINGEMENT. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Maintained by: Shreyas Bhatewara <pv-drivers@vmware.com>
+ *
+ */
+
+
+#include "vmxnet3_int.h"
+
+struct vmxnet3_stat_desc {
+	char desc[ETH_GSTRING_LEN];
+	int offset;
+};
+
+
+static u32
+vmxnet3_get_rx_csum(struct net_device *netdev)
+{
+	struct vmxnet3_adapter *adapter = netdev_priv(netdev);
+	return adapter->rxcsum;
+}
+
+
+static int
+vmxnet3_set_rx_csum(struct net_device *netdev, u32 val)
+{
+	struct vmxnet3_adapter *adapter = netdev_priv(netdev);
+
+	if (adapter->rxcsum != val) {
+		adapter->rxcsum = val;
+		if (netif_running(netdev)) {
+			if (val)
+				adapter->shared->devRead.misc.uptFeatures |=
+								UPT1_F_RXCSUM;
+			else
+				adapter->shared->devRead.misc.uptFeatures &=
+								~UPT1_F_RXCSUM;
+
+			VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
+					       VMXNET3_CMD_UPDATE_FEATURE);
+		}
+	}
+	return 0;
+}
+
+
+/* per tq stats maintained by the device */
+static const struct vmxnet3_stat_desc
+vmxnet3_tq_dev_stats[] = {
+	/* description,         offset */
+	{ "TSO pkts tx",        offsetof(struct UPT1_TxStats, TSOPktsTxOK) },
+	{ "TSO bytes tx",       offsetof(struct UPT1_TxStats, TSOBytesTxOK) },
+	{ "ucast pkts tx",      offsetof(struct UPT1_TxStats, ucastPktsTxOK) },
+	{ "ucast bytes tx",     offsetof(struct UPT1_TxStats, ucastBytesTxOK) },
+	{ "mcast pkts tx",      offsetof(struct UPT1_TxStats, mcastPktsTxOK) },
+	{ "mcast bytes tx",     offsetof(struct UPT1_TxStats, mcastBytesTxOK) },
+	{ "bcast pkts tx",      offsetof(struct UPT1_TxStats, bcastPktsTxOK) },
+	{ "bcast bytes tx",     offsetof(struct UPT1_TxStats, bcastBytesTxOK) },
+	{ "pkts tx err",        offsetof(struct UPT1_TxStats, pktsTxError) },
+	{ "pkts tx discard",    offsetof(struct UPT1_TxStats, pktsTxDiscard) },
+};
+
+/* per tq stats maintained by the driver */
+static const struct vmxnet3_stat_desc
+vmxnet3_tq_driver_stats[] = {
+	/* description,         offset */
+	{ "drv dropped tx total", offsetof(struct vmxnet3_tq_driver_stats,
+					   drop_total) },
+	{ "   too many frags",  offsetof(struct vmxnet3_tq_driver_stats,
+					 drop_too_many_frags) },
+	{ "   giant hdr",       offsetof(struct vmxnet3_tq_driver_stats,
+					 drop_oversized_hdr) },
+	{ "   hdr err",         offsetof(struct vmxnet3_tq_driver_stats,
+					 drop_hdr_inspect_err) },
+	{ "   tso",             offsetof(struct vmxnet3_tq_driver_stats,
+					 drop_tso) },
+	{ "ring full",          offsetof(struct vmxnet3_tq_driver_stats,
+					 tx_ring_full) },
+	{ "pkts linearized",    offsetof(struct vmxnet3_tq_driver_stats,
+					 linearized) },
+	{ "hdr cloned",         offsetof(struct vmxnet3_tq_driver_stats,
+					 copy_skb_header) },
+	{ "giant hdr",          offsetof(struct vmxnet3_tq_driver_stats,
+					 oversized_hdr) },
+};
+
+/* per rq stats maintained by the device */
+static const struct vmxnet3_stat_desc
+vmxnet3_rq_dev_stats[] = {
+	{ "LRO pkts rx",        offsetof(struct UPT1_RxStats, LROPktsRxOK) },
+	{ "LRO byte rx",        offsetof(struct UPT1_RxStats, LROBytesRxOK) },
+	{ "ucast pkts rx",      offsetof(struct UPT1_RxStats, ucastPktsRxOK) },
+	{ "ucast bytes rx",     offsetof(struct UPT1_RxStats, ucastBytesRxOK) },
+	{ "mcast pkts rx",      offsetof(struct UPT1_RxStats, mcastPktsRxOK) },
+	{ "mcast bytes rx",     offsetof(struct UPT1_RxStats, mcastBytesRxOK) },
+	{ "bcast pkts rx",      offsetof(struct UPT1_RxStats, bcastPktsRxOK) },
+	{ "bcast bytes rx",     offsetof(struct UPT1_RxStats, bcastBytesRxOK) },
+	{ "pkts rx out of buf", offsetof(struct UPT1_RxStats, pktsRxOutOfBuf) },
+	{ "pkts rx err",        offsetof(struct UPT1_RxStats, pktsRxError) },
+};
+
+/* per rq stats maintained by the driver */
+static const struct vmxnet3_stat_desc
+vmxnet3_rq_driver_stats[] = {
+	/* description,         offset */
+	{ "drv dropped rx total", offsetof(struct vmxnet3_rq_driver_stats,
+					   drop_total) },
+	{ "   err",             offsetof(struct vmxnet3_rq_driver_stats,
+					 drop_err) },
+	{ "   fcs",             offsetof(struct vmxnet3_rq_driver_stats,
+					 drop_fcs) },
+	{ "rx buf alloc fail",  offsetof(struct vmxnet3_rq_driver_stats,
+					 rx_buf_alloc_failure) },
+};
+
+/* global stats maintained by the driver */
+static const struct vmxnet3_stat_desc
+vmxnet3_global_stats[] = {
+	/* description,         offset */
+	{ "tx timeout count",   offsetof(struct vmxnet3_adapter,
+					 tx_timeout_count) }
+};
+
+
+struct net_device_stats *
+vmxnet3_get_stats(struct net_device *netdev)
+{
+	struct vmxnet3_adapter *adapter;
+	struct vmxnet3_tq_driver_stats *drvTxStats;
+	struct vmxnet3_rq_driver_stats *drvRxStats;
+	struct UPT1_TxStats *devTxStats;
+	struct UPT1_RxStats *devRxStats;
+	struct net_device_stats *net_stats = &netdev->stats;
+
+	adapter = netdev_priv(netdev);
+
+	/* Collect the dev stats into the shared area */
+	VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_STATS);
+
+	/* Assuming that we have a single queue device */
+	devTxStats = &adapter->tqd_start->stats;
+	devRxStats = &adapter->rqd_start->stats;
+
+	/* Get access to the driver stats per queue */
+	drvTxStats = &adapter->tx_queue.stats;
+	drvRxStats = &adapter->rx_queue.stats;
+
+	memset(net_stats, 0, sizeof(*net_stats));
+
+	net_stats->rx_packets = devRxStats->ucastPktsRxOK +
+				devRxStats->mcastPktsRxOK +
+				devRxStats->bcastPktsRxOK;
+
+	net_stats->tx_packets = devTxStats->ucastPktsTxOK +
+				devTxStats->mcastPktsTxOK +
+				devTxStats->bcastPktsTxOK;
+
+	net_stats->rx_bytes = devRxStats->ucastBytesRxOK +
+			      devRxStats->mcastBytesRxOK +
+			      devRxStats->bcastBytesRxOK;
+
+	net_stats->tx_bytes = devTxStats->ucastBytesTxOK +
+			      devTxStats->mcastBytesTxOK +
+			      devTxStats->bcastBytesTxOK;
+
+	net_stats->rx_errors = devRxStats->pktsRxError;
+	net_stats->tx_errors = devTxStats->pktsTxError;
+	net_stats->rx_dropped = drvRxStats->drop_total;
+	net_stats->tx_dropped = drvTxStats->drop_total;
+	net_stats->multicast = devRxStats->mcastPktsRxOK;
+
+	return net_stats;
+}
+
+static int
+vmxnet3_get_sset_count(struct net_device *netdev, int sset)
+{
+	switch (sset) {
+	case ETH_SS_STATS:
+		return ARRAY_SIZE(vmxnet3_tq_dev_stats) +
+		       ARRAY_SIZE(vmxnet3_tq_driver_stats) +
+		       ARRAY_SIZE(vmxnet3_rq_dev_stats) +
+		       ARRAY_SIZE(vmxnet3_rq_driver_stats) +
+		       ARRAY_SIZE(vmxnet3_global_stats);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+
+static int
+vmxnet3_get_regs_len(struct net_device *netdev)
+{
+	return 20 * sizeof(u32);
+}
+
+
+static void
+vmxnet3_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *drvinfo)
+{
+	struct vmxnet3_adapter *adapter = netdev_priv(netdev);
+
+	strlcpy(drvinfo->driver, vmxnet3_driver_name, sizeof(drvinfo->driver));
+	drvinfo->driver[sizeof(drvinfo->driver) - 1] = '\0';
+
+	strlcpy(drvinfo->version, VMXNET3_DRIVER_VERSION_REPORT,
+		sizeof(drvinfo->version));
+	drvinfo->version[sizeof(drvinfo->version) - 1] = '\0';
+
+	strlcpy(drvinfo->fw_version, "N/A", sizeof(drvinfo->fw_version));
+	drvinfo->fw_version[sizeof(drvinfo->fw_version) - 1] = '\0';
+
+	strlcpy(drvinfo->bus_info, pci_name(adapter->pdev),
+		ETHTOOL_BUSINFO_LEN);
+	drvinfo->n_stats = vmxnet3_get_sset_count(netdev, ETH_SS_STATS);
+	drvinfo->testinfo_len = 0;
+	drvinfo->eedump_len   = 0;
+	drvinfo->regdump_len  = vmxnet3_get_regs_len(netdev);
+}
+
+
+static void
+vmxnet3_get_strings(struct net_device *netdev, u32 stringset, u8 *buf)
+{
+	if (stringset == ETH_SS_STATS) {
+		int i;
+
+		for (i = 0; i < ARRAY_SIZE(vmxnet3_tq_dev_stats); i++) {
+			memcpy(buf, vmxnet3_tq_dev_stats[i].desc,
+			       ETH_GSTRING_LEN);
+			buf += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < ARRAY_SIZE(vmxnet3_tq_driver_stats); i++) {
+			memcpy(buf, vmxnet3_tq_driver_stats[i].desc,
+			       ETH_GSTRING_LEN);
+			buf += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < ARRAY_SIZE(vmxnet3_rq_dev_stats); i++) {
+			memcpy(buf, vmxnet3_rq_dev_stats[i].desc,
+			       ETH_GSTRING_LEN);
+			buf += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < ARRAY_SIZE(vmxnet3_rq_driver_stats); i++) {
+			memcpy(buf, vmxnet3_rq_driver_stats[i].desc,
+			       ETH_GSTRING_LEN);
+			buf += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < ARRAY_SIZE(vmxnet3_global_stats); i++) {
+			memcpy(buf, vmxnet3_global_stats[i].desc,
+			       ETH_GSTRING_LEN);
+			buf += ETH_GSTRING_LEN;
+		}
+	}
+}
+
+static u32
+vmxnet3_get_flags(struct 
net_device *netdev) {278278+ return netdev->features;279279+}280280+281281+static int282282+vmxnet3_set_flags(struct net_device *netdev, u32 data) {283283+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);284284+ u8 lro_requested = (data & ETH_FLAG_LRO) == 0 ? 0 : 1;285285+ u8 lro_present = (netdev->features & NETIF_F_LRO) == 0 ? 0 : 1;286286+287287+ if (lro_requested ^ lro_present) {288288+ /* toggle the LRO feature*/289289+ netdev->features ^= NETIF_F_LRO;290290+291291+ /* update harware LRO capability accordingly */292292+ if (lro_requested)293293+ adapter->shared->devRead.misc.uptFeatures &= UPT1_F_LRO;294294+ else295295+ adapter->shared->devRead.misc.uptFeatures &=296296+ ~UPT1_F_LRO;297297+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,298298+ VMXNET3_CMD_UPDATE_FEATURE);299299+ }300300+ return 0;301301+}302302+303303+static void304304+vmxnet3_get_ethtool_stats(struct net_device *netdev,305305+ struct ethtool_stats *stats, u64 *buf)306306+{307307+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);308308+ u8 *base;309309+ int i;310310+311311+ VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_STATS);312312+313313+ /* this does assume each counter is 64-bit wide */314314+315315+ base = (u8 *)&adapter->tqd_start->stats;316316+ for (i = 0; i < ARRAY_SIZE(vmxnet3_tq_dev_stats); i++)317317+ *buf++ = *(u64 *)(base + vmxnet3_tq_dev_stats[i].offset);318318+319319+ base = (u8 *)&adapter->tx_queue.stats;320320+ for (i = 0; i < ARRAY_SIZE(vmxnet3_tq_driver_stats); i++)321321+ *buf++ = *(u64 *)(base + vmxnet3_tq_driver_stats[i].offset);322322+323323+ base = (u8 *)&adapter->rqd_start->stats;324324+ for (i = 0; i < ARRAY_SIZE(vmxnet3_rq_dev_stats); i++)325325+ *buf++ = *(u64 *)(base + vmxnet3_rq_dev_stats[i].offset);326326+327327+ base = (u8 *)&adapter->rx_queue.stats;328328+ for (i = 0; i < ARRAY_SIZE(vmxnet3_rq_driver_stats); i++)329329+ *buf++ = *(u64 *)(base + vmxnet3_rq_driver_stats[i].offset);330330+331331+ base = (u8 *)adapter;332332+ for (i 
= 0; i < ARRAY_SIZE(vmxnet3_global_stats); i++)333333+ *buf++ = *(u64 *)(base + vmxnet3_global_stats[i].offset);334334+}335335+336336+337337+static void338338+vmxnet3_get_regs(struct net_device *netdev, struct ethtool_regs *regs, void *p)339339+{340340+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);341341+ u32 *buf = p;342342+343343+ memset(p, 0, vmxnet3_get_regs_len(netdev));344344+345345+ regs->version = 1;346346+347347+ /* Update vmxnet3_get_regs_len if we want to dump more registers */348348+349349+ /* make each ring use multiple of 16 bytes */350350+ buf[0] = adapter->tx_queue.tx_ring.next2fill;351351+ buf[1] = adapter->tx_queue.tx_ring.next2comp;352352+ buf[2] = adapter->tx_queue.tx_ring.gen;353353+ buf[3] = 0;354354+355355+ buf[4] = adapter->tx_queue.comp_ring.next2proc;356356+ buf[5] = adapter->tx_queue.comp_ring.gen;357357+ buf[6] = adapter->tx_queue.stopped;358358+ buf[7] = 0;359359+360360+ buf[8] = adapter->rx_queue.rx_ring[0].next2fill;361361+ buf[9] = adapter->rx_queue.rx_ring[0].next2comp;362362+ buf[10] = adapter->rx_queue.rx_ring[0].gen;363363+ buf[11] = 0;364364+365365+ buf[12] = adapter->rx_queue.rx_ring[1].next2fill;366366+ buf[13] = adapter->rx_queue.rx_ring[1].next2comp;367367+ buf[14] = adapter->rx_queue.rx_ring[1].gen;368368+ buf[15] = 0;369369+370370+ buf[16] = adapter->rx_queue.comp_ring.next2proc;371371+ buf[17] = adapter->rx_queue.comp_ring.gen;372372+ buf[18] = 0;373373+ buf[19] = 0;374374+}375375+376376+377377+static void378378+vmxnet3_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)379379+{380380+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);381381+382382+ wol->supported = WAKE_UCAST | WAKE_ARP | WAKE_MAGIC;383383+ wol->wolopts = adapter->wol;384384+}385385+386386+387387+static int388388+vmxnet3_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)389389+{390390+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);391391+392392+ if (wol->wolopts & (WAKE_PHY | WAKE_MCAST | WAKE_BCAST |393393+ 
WAKE_MAGICSECURE)) {394394+ return -EOPNOTSUPP;395395+ }396396+397397+ adapter->wol = wol->wolopts;398398+399399+ device_set_wakeup_enable(&adapter->pdev->dev, adapter->wol);400400+401401+ return 0;402402+}403403+404404+405405+static int406406+vmxnet3_get_settings(struct net_device *netdev, struct ethtool_cmd *ecmd)407407+{408408+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);409409+410410+ ecmd->supported = SUPPORTED_10000baseT_Full | SUPPORTED_1000baseT_Full |411411+ SUPPORTED_TP;412412+ ecmd->advertising = ADVERTISED_TP;413413+ ecmd->port = PORT_TP;414414+ ecmd->transceiver = XCVR_INTERNAL;415415+416416+ if (adapter->link_speed) {417417+ ecmd->speed = adapter->link_speed;418418+ ecmd->duplex = DUPLEX_FULL;419419+ } else {420420+ ecmd->speed = -1;421421+ ecmd->duplex = -1;422422+ }423423+ return 0;424424+}425425+426426+427427+static void428428+vmxnet3_get_ringparam(struct net_device *netdev,429429+ struct ethtool_ringparam *param)430430+{431431+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);432432+433433+ param->rx_max_pending = VMXNET3_RX_RING_MAX_SIZE;434434+ param->tx_max_pending = VMXNET3_TX_RING_MAX_SIZE;435435+ param->rx_mini_max_pending = 0;436436+ param->rx_jumbo_max_pending = 0;437437+438438+ param->rx_pending = adapter->rx_queue.rx_ring[0].size;439439+ param->tx_pending = adapter->tx_queue.tx_ring.size;440440+ param->rx_mini_pending = 0;441441+ param->rx_jumbo_pending = 0;442442+}443443+444444+445445+static int446446+vmxnet3_set_ringparam(struct net_device *netdev,447447+ struct ethtool_ringparam *param)448448+{449449+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);450450+ u32 new_tx_ring_size, new_rx_ring_size;451451+ u32 sz;452452+ int err = 0;453453+454454+ if (param->tx_pending == 0 || param->tx_pending >455455+ VMXNET3_TX_RING_MAX_SIZE)456456+ return -EINVAL;457457+458458+ if (param->rx_pending == 0 || param->rx_pending >459459+ VMXNET3_RX_RING_MAX_SIZE)460460+ return -EINVAL;461461+462462+463463+ /* round it up to a multiple 
of VMXNET3_RING_SIZE_ALIGN */464464+ new_tx_ring_size = (param->tx_pending + VMXNET3_RING_SIZE_MASK) &465465+ ~VMXNET3_RING_SIZE_MASK;466466+ new_tx_ring_size = min_t(u32, new_tx_ring_size,467467+ VMXNET3_TX_RING_MAX_SIZE);468468+ if (new_tx_ring_size > VMXNET3_TX_RING_MAX_SIZE || (new_tx_ring_size %469469+ VMXNET3_RING_SIZE_ALIGN) != 0)470470+ return -EINVAL;471471+472472+ /* ring0 has to be a multiple of473473+ * rx_buf_per_pkt * VMXNET3_RING_SIZE_ALIGN474474+ */475475+ sz = adapter->rx_buf_per_pkt * VMXNET3_RING_SIZE_ALIGN;476476+ new_rx_ring_size = (param->rx_pending + sz - 1) / sz * sz;477477+ new_rx_ring_size = min_t(u32, new_rx_ring_size,478478+ VMXNET3_RX_RING_MAX_SIZE / sz * sz);479479+ if (new_rx_ring_size > VMXNET3_RX_RING_MAX_SIZE || (new_rx_ring_size %480480+ sz) != 0)481481+ return -EINVAL;482482+483483+ if (new_tx_ring_size == adapter->tx_queue.tx_ring.size &&484484+ new_rx_ring_size == adapter->rx_queue.rx_ring[0].size) {485485+ return 0;486486+ }487487+488488+ /*489489+ * Reset_work may be in the middle of resetting the device, wait for its490490+ * completion.491491+ */492492+ while (test_and_set_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state))493493+ msleep(1);494494+495495+ if (netif_running(netdev)) {496496+ vmxnet3_quiesce_dev(adapter);497497+ vmxnet3_reset_dev(adapter);498498+499499+ /* recreate the rx queue and the tx queue based on the500500+ * new sizes */501501+ vmxnet3_tq_destroy(&adapter->tx_queue, adapter);502502+ vmxnet3_rq_destroy(&adapter->rx_queue, adapter);503503+504504+ err = vmxnet3_create_queues(adapter, new_tx_ring_size,505505+ new_rx_ring_size, VMXNET3_DEF_RX_RING_SIZE);506506+ if (err) {507507+ /* failed, most likely because of OOM, try default508508+ * size */509509+ printk(KERN_ERR "%s: failed to apply new sizes, try the"510510+ " default ones\n", netdev->name);511511+ err = vmxnet3_create_queues(adapter,512512+ VMXNET3_DEF_TX_RING_SIZE,513513+ VMXNET3_DEF_RX_RING_SIZE,514514+ VMXNET3_DEF_RX_RING_SIZE);515515+ if (err) 
{516516+ printk(KERN_ERR "%s: failed to create queues "517517+ "with default sizes. Closing it\n",518518+ netdev->name);519519+ goto out;520520+ }521521+ }522522+523523+ err = vmxnet3_activate_dev(adapter);524524+ if (err)525525+ printk(KERN_ERR "%s: failed to re-activate, error %d."526526+ " Closing it\n", netdev->name, err);527527+ }528528+529529+out:530530+ clear_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state);531531+ if (err)532532+ vmxnet3_force_close(adapter);533533+534534+ return err;535535+}536536+537537+538538+static struct ethtool_ops vmxnet3_ethtool_ops = {539539+ .get_settings = vmxnet3_get_settings,540540+ .get_drvinfo = vmxnet3_get_drvinfo,541541+ .get_regs_len = vmxnet3_get_regs_len,542542+ .get_regs = vmxnet3_get_regs,543543+ .get_wol = vmxnet3_get_wol,544544+ .set_wol = vmxnet3_set_wol,545545+ .get_link = ethtool_op_get_link,546546+ .get_rx_csum = vmxnet3_get_rx_csum,547547+ .set_rx_csum = vmxnet3_set_rx_csum,548548+ .get_tx_csum = ethtool_op_get_tx_csum,549549+ .set_tx_csum = ethtool_op_set_tx_hw_csum,550550+ .get_sg = ethtool_op_get_sg,551551+ .set_sg = ethtool_op_set_sg,552552+ .get_tso = ethtool_op_get_tso,553553+ .set_tso = ethtool_op_set_tso,554554+ .get_strings = vmxnet3_get_strings,555555+ .get_flags = vmxnet3_get_flags,556556+ .set_flags = vmxnet3_set_flags,557557+ .get_sset_count = vmxnet3_get_sset_count,558558+ .get_ethtool_stats = vmxnet3_get_ethtool_stats,559559+ .get_ringparam = vmxnet3_get_ringparam,560560+ .set_ringparam = vmxnet3_set_ringparam,561561+};562562+563563+void vmxnet3_set_ethtool_ops(struct net_device *netdev)564564+{565565+ SET_ETHTOOL_OPS(netdev, &vmxnet3_ethtool_ops);566566+}
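The ethtool code above never names each counter in the copy loops: it walks tables of (description, `offsetof(...)`) pairs and reads a `u64` at `base + offset` for every entry, so adding a counter only means adding a table row. A minimal sketch of that descriptor-table pattern, in Python with `ctypes` (the struct and field names here are illustrative, not the driver's):

```python
import ctypes

# Mirror of the driver's pattern: a table of (name, field offset) pairs,
# walked with raw offsets to copy counters into a flat stats list.
class TxStats(ctypes.Structure):
    _fields_ = [("tso_pkts", ctypes.c_uint64),
                ("ucast_pkts", ctypes.c_uint64),
                ("pkts_err", ctypes.c_uint64)]

# description + offset, like vmxnet3_tq_dev_stats[]
STAT_DESC = [
    ("TSO pkts tx", TxStats.tso_pkts.offset),
    ("ucast pkts tx", TxStats.ucast_pkts.offset),
    ("pkts tx err", TxStats.pkts_err.offset),
]

def gather_stats(stats):
    base = ctypes.addressof(stats)
    out = []
    for name, off in STAT_DESC:
        # read the u64 counter at base + offset, as get_ethtool_stats() does
        val = ctypes.c_uint64.from_address(base + off).value
        out.append((name, val))
    return out
```

As in the driver, this only works because every counter is the same width (the C code notes "this does assume each counter is 64-bit wide").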
+389
drivers/net/vmxnet3/vmxnet3_int.h
···
+/*
+ * Linux driver for VMware's vmxnet3 ethernet NIC.
+ *
+ * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; version 2 of the License and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ * NON INFRINGEMENT. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Maintained by: Shreyas Bhatewara <pv-drivers@vmware.com>
+ *
+ */
+
+#ifndef _VMXNET3_INT_H
+#define _VMXNET3_INT_H
+
+#include <linux/types.h>
+#include <linux/ethtool.h>
+#include <linux/delay.h>
+#include <linux/netdevice.h>
+#include <linux/pci.h>
+#include <linux/ethtool.h>
+#include <linux/compiler.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/ioport.h>
+#include <linux/highmem.h>
+#include <linux/init.h>
+#include <linux/timer.h>
+#include <linux/skbuff.h>
+#include <linux/interrupt.h>
+#include <linux/workqueue.h>
+#include <linux/uaccess.h>
+#include <asm/dma.h>
+#include <asm/page.h>
+
+#include <linux/tcp.h>
+#include <linux/udp.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <linux/in.h>
+#include <linux/etherdevice.h>
+#include <asm/checksum.h>
+#include <linux/if_vlan.h>
+#include <linux/if_arp.h>
+#include <linux/inetdevice.h>
+#include <linux/dst.h>
+
+#include "vmxnet3_defs.h"
+
+#ifdef DEBUG
+# define VMXNET3_DRIVER_VERSION_REPORT VMXNET3_DRIVER_VERSION_STRING"-NAPI(debug)"
+#else
+# define VMXNET3_DRIVER_VERSION_REPORT VMXNET3_DRIVER_VERSION_STRING"-NAPI"
+#endif
+
+
+/*
+ * Version numbers
+ */
+#define VMXNET3_DRIVER_VERSION_STRING   "1.0.5.0-k"
+
+/* a 32-bit int, each byte encodes a version number in VMXNET3_DRIVER_VERSION */
+#define VMXNET3_DRIVER_VERSION_NUM      0x01000500
+
+
+/*
+ * Capabilities
+ */
+
+enum {
+	VMNET_CAP_SG	        = 0x0001, /* Can do scatter-gather transmits. */
+	VMNET_CAP_IP4_CSUM      = 0x0002, /* Can checksum only TCP/UDP over
+					   * IPv4 */
+	VMNET_CAP_HW_CSUM       = 0x0004, /* Can checksum all packets. */
+	VMNET_CAP_HIGH_DMA      = 0x0008, /* Can DMA to high memory. */
+	VMNET_CAP_TOE	        = 0x0010, /* Supports TCP/IP offload. */
+	VMNET_CAP_TSO	        = 0x0020, /* Supports TCP Segmentation
+					   * offload */
+	VMNET_CAP_SW_TSO        = 0x0040, /* Supports SW TCP Segmentation */
+	VMNET_CAP_VMXNET_APROM  = 0x0080, /* Vmxnet APROM support */
+	VMNET_CAP_HW_TX_VLAN    = 0x0100, /* Can we do VLAN tagging in HW */
+	VMNET_CAP_HW_RX_VLAN    = 0x0200, /* Can we do VLAN untagging in HW */
+	VMNET_CAP_SW_VLAN       = 0x0400, /* VLAN tagging/untagging in SW */
+	VMNET_CAP_WAKE_PCKT_RCV = 0x0800, /* Can wake on network packet recv? */
+	VMNET_CAP_ENABLE_INT_INLINE = 0x1000,	/* Enable Interrupt Inline */
+	VMNET_CAP_ENABLE_HEADER_COPY = 0x2000,	/* copy header for vmkernel */
+	VMNET_CAP_TX_CHAIN      = 0x4000, /* Guest can use multiple tx entries
+					   * for a pkt */
+	VMNET_CAP_RX_CHAIN      = 0x8000, /* pkt can span multiple rx entries */
+	VMNET_CAP_LPD           = 0x10000, /* large pkt delivery */
+	VMNET_CAP_BPF           = 0x20000, /* BPF Support in VMXNET Virtual HW*/
+	VMNET_CAP_SG_SPAN_PAGES = 0x40000, /* Scatter-gather can span multiple
+					    * pages transmits */
+	VMNET_CAP_IP6_CSUM      = 0x80000, /* Can do IPv6 csum offload. */
+	VMNET_CAP_TSO6          = 0x100000, /* TSO seg. offload for IPv6 pkts. */
+	VMNET_CAP_TSO256k       = 0x200000, /* Can do TSO seg offload for
+					     * pkts up to 256kB. */
+	VMNET_CAP_UPT           = 0x400000 /* Support UPT */
+};
+
+/*
+ * PCI vendor and device IDs.
+ */
+#define PCI_VENDOR_ID_VMWARE            0x15AD
+#define PCI_DEVICE_ID_VMWARE_VMXNET3    0x07B0
+#define MAX_ETHERNET_CARDS		10
+#define MAX_PCI_PASSTHRU_DEVICE		6
+
+struct vmxnet3_cmd_ring {
+	union Vmxnet3_GenericDesc *base;
+	u32		size;
+	u32		next2fill;
+	u32		next2comp;
+	u8		gen;
+	dma_addr_t	basePA;
+};
+
+static inline void
+vmxnet3_cmd_ring_adv_next2fill(struct vmxnet3_cmd_ring *ring)
+{
+	ring->next2fill++;
+	if (unlikely(ring->next2fill == ring->size)) {
+		ring->next2fill = 0;
+		VMXNET3_FLIP_RING_GEN(ring->gen);
+	}
+}
+
+static inline void
+vmxnet3_cmd_ring_adv_next2comp(struct vmxnet3_cmd_ring *ring)
+{
+	VMXNET3_INC_RING_IDX_ONLY(ring->next2comp, ring->size);
+}
+
+static inline int
+vmxnet3_cmd_ring_desc_avail(struct vmxnet3_cmd_ring *ring)
+{
+	return (ring->next2comp > ring->next2fill ? 0 : ring->size) +
+		ring->next2comp - ring->next2fill - 1;
+}
+
+struct vmxnet3_comp_ring {
+	union Vmxnet3_GenericDesc *base;
+	u32               size;
+	u32               next2proc;
+	u8                gen;
+	u8                intr_idx;
+	dma_addr_t        basePA;
+};
+
+static inline void
+vmxnet3_comp_ring_adv_next2proc(struct vmxnet3_comp_ring *ring)
+{
+	ring->next2proc++;
+	if (unlikely(ring->next2proc == ring->size)) {
+		ring->next2proc = 0;
+		VMXNET3_FLIP_RING_GEN(ring->gen);
+	}
+}
+
+struct vmxnet3_tx_data_ring {
+	struct Vmxnet3_TxDataDesc *base;
+	u32              size;
+	dma_addr_t       basePA;
+};
+
+enum vmxnet3_buf_map_type {
+	VMXNET3_MAP_INVALID = 0,
+	VMXNET3_MAP_NONE,
+	VMXNET3_MAP_SINGLE,
+	VMXNET3_MAP_PAGE,
+};
+
+struct vmxnet3_tx_buf_info {
+	u32      map_type;
+	u16      len;
+	u16      sop_idx;
+	dma_addr_t  dma_addr;
+	struct sk_buff *skb;
+};
+
+struct vmxnet3_tq_driver_stats {
+	u64 drop_total;     /* # of pkts dropped by the driver, the
+			     * counters below track droppings due to
+			     * different reasons
+			     */
+	u64 drop_too_many_frags;
+	u64 drop_oversized_hdr;
+	u64 drop_hdr_inspect_err;
+	u64 drop_tso;
+
+	u64 tx_ring_full;
+	u64 linearized;         /* # of pkts linearized */
+	u64 copy_skb_header;    /* # of times we have to copy skb header */
+	u64 oversized_hdr;
+};
+
+struct vmxnet3_tx_ctx {
+	bool   ipv4;
+	u16 mss;
+	u32 eth_ip_hdr_size; /* only valid for pkts requesting tso or csum
+			      * offloading
+			      */
+	u32 l4_hdr_size;     /* only valid if mss != 0 */
+	u32 copy_size;       /* # of bytes copied into the data ring */
+	union Vmxnet3_GenericDesc *sop_txd;
+	union Vmxnet3_GenericDesc *eop_txd;
+};
+
+struct vmxnet3_tx_queue {
+	spinlock_t                      tx_lock;
+	struct vmxnet3_cmd_ring         tx_ring;
+	struct vmxnet3_tx_buf_info     *buf_info;
+	struct vmxnet3_tx_data_ring     data_ring;
+	struct vmxnet3_comp_ring        comp_ring;
+	struct Vmxnet3_TxQueueCtrl     *shared;
+	struct vmxnet3_tq_driver_stats  stats;
+	bool                            stopped;
+	int                             num_stop;  /* # of times the queue is
+						    * stopped */
+} __attribute__((__aligned__(SMP_CACHE_BYTES)));
+
+enum vmxnet3_rx_buf_type {
+	VMXNET3_RX_BUF_NONE = 0,
+	VMXNET3_RX_BUF_SKB = 1,
+	VMXNET3_RX_BUF_PAGE = 2
+};
+
+struct vmxnet3_rx_buf_info {
+	enum vmxnet3_rx_buf_type buf_type;
+	u16     len;
+	union {
+		struct sk_buff *skb;
+		struct page    *page;
+	};
+	dma_addr_t dma_addr;
+};
+
+struct vmxnet3_rx_ctx {
+	struct sk_buff *skb;
+	u32 sop_idx;
+};
+
+struct vmxnet3_rq_driver_stats {
+	u64 drop_total;
+	u64 drop_err;
+	u64 drop_fcs;
+	u64 rx_buf_alloc_failure;
+};
+
+struct vmxnet3_rx_queue {
+	struct vmxnet3_cmd_ring   rx_ring[2];
+	struct vmxnet3_comp_ring  comp_ring;
+	struct vmxnet3_rx_ctx     rx_ctx;
+	u32 qid;            /* rqID in RCD for buffer from 1st ring */
+	u32 qid2;           /* rqID in RCD for buffer from 2nd ring */
+	u32 uncommitted[2]; /* # of buffers allocated since last RXPROD
+			     * update */
+	struct vmxnet3_rx_buf_info     *buf_info[2];
+	struct Vmxnet3_RxQueueCtrl            *shared;
+	struct vmxnet3_rq_driver_stats  stats;
+} __attribute__((__aligned__(SMP_CACHE_BYTES)));
+
+#define VMXNET3_LINUX_MAX_MSIX_VECT     1
+
+struct vmxnet3_intr {
+	enum vmxnet3_intr_mask_mode  mask_mode;
+	enum vmxnet3_intr_type       type;	/* MSI-X, MSI, or INTx? */
+	u8  num_intrs;			/* # of intr vectors */
+	u8  event_intr_idx;		/* idx of the intr vector for event */
+	u8  mod_levels[VMXNET3_LINUX_MAX_MSIX_VECT]; /* moderation level */
+#ifdef CONFIG_PCI_MSI
+	struct msix_entry msix_entries[VMXNET3_LINUX_MAX_MSIX_VECT];
+#endif
+};
+
+#define VMXNET3_STATE_BIT_RESETTING   0
+#define VMXNET3_STATE_BIT_QUIESCED    1
+struct vmxnet3_adapter {
+	struct vmxnet3_tx_queue         tx_queue;
+	struct vmxnet3_rx_queue         rx_queue;
+	struct napi_struct              napi;
+	struct vlan_group              *vlan_grp;
+
+	struct vmxnet3_intr             intr;
+
+	struct Vmxnet3_DriverShared    *shared;
+	struct Vmxnet3_PMConf          *pm_conf;
+	struct Vmxnet3_TxQueueDesc     *tqd_start;     /* first tx queue desc */
+	struct Vmxnet3_RxQueueDesc     *rqd_start;     /* first rx queue desc */
+	struct net_device              *netdev;
+	struct pci_dev                 *pdev;
+
+	u8				*hw_addr0; /* for BAR 0 */
+	u8				*hw_addr1; /* for BAR 1 */
+
+	/* feature control */
+	bool				rxcsum;
+	bool				lro;
+	bool				jumbo_frame;
+
+	/* rx buffer related */
+	unsigned			skb_buf_size;
+	int		rx_buf_per_pkt;  /* only apply to the 1st ring */
+	dma_addr_t			shared_pa;
+	dma_addr_t queue_desc_pa;
+
+	/* Wake-on-LAN */
+	u32     wol;
+
+	/* Link speed */
+	u32     link_speed; /* in mbps */
+
+	u64     tx_timeout_count;
+	struct work_struct work;
+
+	unsigned long  state;    /* VMXNET3_STATE_BIT_xxx */
+
+	int dev_number;
+};
+
+#define VMXNET3_WRITE_BAR0_REG(adapter, reg, val)  \
+	writel((val), (adapter)->hw_addr0 + (reg))
+#define VMXNET3_READ_BAR0_REG(adapter, reg)        \
+	readl((adapter)->hw_addr0 + (reg))
+
+#define VMXNET3_WRITE_BAR1_REG(adapter, reg, val)  \
+	writel((val), (adapter)->hw_addr1 + (reg))
+#define VMXNET3_READ_BAR1_REG(adapter, reg)        \
+	readl((adapter)->hw_addr1 + (reg))
+
+#define VMXNET3_WAKE_QUEUE_THRESHOLD(tq)  (5)
+#define VMXNET3_RX_ALLOC_THRESHOLD(rq, ring_idx, adapter) \
+	((rq)->rx_ring[ring_idx].size >> 3)
+
+#define VMXNET3_GET_ADDR_LO(dma)   ((u32)(dma))
+#define VMXNET3_GET_ADDR_HI(dma)   ((u32)(((u64)(dma)) >> 32))
+
+/* must be a multiple of VMXNET3_RING_SIZE_ALIGN */
+#define VMXNET3_DEF_TX_RING_SIZE    512
+#define VMXNET3_DEF_RX_RING_SIZE    256
+
+#define VMXNET3_MAX_ETH_HDR_SIZE    22
+#define VMXNET3_MAX_SKB_BUF_SIZE    (3*1024)
+
+int
+vmxnet3_quiesce_dev(struct vmxnet3_adapter *adapter);
+
+int
+vmxnet3_activate_dev(struct vmxnet3_adapter *adapter);
+
+void
+vmxnet3_force_close(struct vmxnet3_adapter *adapter);
+
+void
+vmxnet3_reset_dev(struct vmxnet3_adapter *adapter);
+
+void
+vmxnet3_tq_destroy(struct vmxnet3_tx_queue *tq,
+		   struct vmxnet3_adapter *adapter);
+
+void
+vmxnet3_rq_destroy(struct vmxnet3_rx_queue *rq,
+		   struct vmxnet3_adapter *adapter);
+
+int
+vmxnet3_create_queues(struct vmxnet3_adapter *adapter,
+		      u32 tx_ring_size, u32 rx_ring_size, u32 rx_ring2_size);
+
+extern void vmxnet3_set_ethtool_ops(struct net_device *netdev);
+extern struct net_device_stats *vmxnet3_get_stats(struct net_device *netdev);
+
+extern char vmxnet3_driver_name[];
+#endif
drivers/net/wireless/b43/b43.h
···
 	struct ieee80211_tx_queue_params p;
 };
 
-struct b43_wldev;
-
-/* Data structure for the WLAN parts (802.11 cores) of the b43 chip. */
-struct b43_wl {
-	/* Pointer to the active wireless device on this chip */
-	struct b43_wldev *current_dev;
-	/* Pointer to the ieee80211 hardware data structure */
-	struct ieee80211_hw *hw;
-
-	/* Global driver mutex. Every operation must run with this mutex locked. */
-	struct mutex mutex;
-	/* Hard-IRQ spinlock. This lock protects things used in the hard-IRQ
-	 * handler, only. This basically is just the IRQ mask register. */
-	spinlock_t hardirq_lock;
-
-	/* The number of queues that were registered with the mac80211 subsystem
-	 * initially. This is a backup copy of hw->queues in case hw->queues has
-	 * to be dynamically lowered at runtime (Firmware does not support QoS).
-	 * hw->queues has to be restored to the original value before unregistering
-	 * from the mac80211 subsystem. */
-	u16 mac80211_initially_registered_queues;
-
-	/* We can only have one operating interface (802.11 core)
-	 * at a time. General information about this interface follows.
-	 */
-
-	struct ieee80211_vif *vif;
-	/* The MAC address of the operating interface. */
-	u8 mac_addr[ETH_ALEN];
-	/* Current BSSID */
-	u8 bssid[ETH_ALEN];
-	/* Interface type. (NL80211_IFTYPE_XXX) */
-	int if_type;
-	/* Is the card operating in AP, STA or IBSS mode? */
-	bool operating;
-	/* filter flags */
-	unsigned int filter_flags;
-	/* Stats about the wireless interface */
-	struct ieee80211_low_level_stats ieee_stats;
-
-#ifdef CONFIG_B43_HWRNG
-	struct hwrng rng;
-	bool rng_initialized;
-	char rng_name[30 + 1];
-#endif /* CONFIG_B43_HWRNG */
-
-	/* List of all wireless devices on this chip */
-	struct list_head devlist;
-	u8 nr_devs;
-
-	bool radiotap_enabled;
-	bool radio_enabled;
-
-	/* The beacon we are currently using (AP or IBSS mode). */
-	struct sk_buff *current_beacon;
-	bool beacon0_uploaded;
-	bool beacon1_uploaded;
-	bool beacon_templates_virgin; /* Never wrote the templates? */
-	struct work_struct beacon_update_trigger;
-
-	/* The current QOS parameters for the 4 queues. */
-	struct b43_qos_params qos_params[4];
-
-	/* Work for adjustment of the transmission power.
-	 * This is scheduled when we determine that the actual TX output
-	 * power doesn't match what we want. */
-	struct work_struct txpower_adjust_work;
-
-	/* Packet transmit work */
-	struct work_struct tx_work;
-	/* Queue of packets to be transmitted. */
-	struct sk_buff_head tx_queue;
-
-	/* The device LEDs. */
-	struct b43_leds leds;
-};
+struct b43_wl;
 
 /* The type of the firmware file. */
 enum b43_firmware_file_type {
···
 	unsigned int tx_count;
 	unsigned int rx_count;
 #endif
+};
+
+/*
+ * Include goes here to avoid a dependency problem.
+ * A better fix would be to integrate xmit.h into b43.h.
+ */
+#include "xmit.h"
+
+/* Data structure for the WLAN parts (802.11 cores) of the b43 chip. */
+struct b43_wl {
+	/* Pointer to the active wireless device on this chip */
+	struct b43_wldev *current_dev;
+	/* Pointer to the ieee80211 hardware data structure */
+	struct ieee80211_hw *hw;
+
+	/* Global driver mutex. Every operation must run with this mutex locked. */
+	struct mutex mutex;
+	/* Hard-IRQ spinlock. This lock protects things used in the hard-IRQ
+	 * handler, only. This basically is just the IRQ mask register. */
+	spinlock_t hardirq_lock;
+
+	/* The number of queues that were registered with the mac80211 subsystem
+	 * initially. This is a backup copy of hw->queues in case hw->queues has
+	 * to be dynamically lowered at runtime (Firmware does not support QoS).
+	 * hw->queues has to be restored to the original value before unregistering
+	 * from the mac80211 subsystem. */
+	u16 mac80211_initially_registered_queues;
+
+	/* We can only have one operating interface (802.11 core)
+	 * at a time. General information about this interface follows.
+	 */
+
+	struct ieee80211_vif *vif;
+	/* The MAC address of the operating interface. */
+	u8 mac_addr[ETH_ALEN];
+	/* Current BSSID */
+	u8 bssid[ETH_ALEN];
+	/* Interface type. (NL80211_IFTYPE_XXX) */
+	int if_type;
+	/* Is the card operating in AP, STA or IBSS mode? */
+	bool operating;
+	/* filter flags */
+	unsigned int filter_flags;
+	/* Stats about the wireless interface */
+	struct ieee80211_low_level_stats ieee_stats;
+
+#ifdef CONFIG_B43_HWRNG
+	struct hwrng rng;
+	bool rng_initialized;
+	char rng_name[30 + 1];
+#endif /* CONFIG_B43_HWRNG */
+
+	/* List of all wireless devices on this chip */
+	struct list_head devlist;
+	u8 nr_devs;
+
+	bool radiotap_enabled;
+	bool radio_enabled;
+
+	/* The beacon we are currently using (AP or IBSS mode). */
+	struct sk_buff *current_beacon;
+	bool beacon0_uploaded;
+	bool beacon1_uploaded;
+	bool beacon_templates_virgin; /* Never wrote the templates? */
+	struct work_struct beacon_update_trigger;
+
+	/* The current QOS parameters for the 4 queues. */
+	struct b43_qos_params qos_params[4];
+
+	/* Work for adjustment of the transmission power.
+	 * This is scheduled when we determine that the actual TX output
+	 * power doesn't match what we want. */
+	struct work_struct txpower_adjust_work;
+
+	/* Packet transmit work */
+	struct work_struct tx_work;
+	/* Queue of packets to be transmitted. */
+	struct sk_buff_head tx_queue;
+
+	/* The device LEDs. */
+	struct b43_leds leds;
+
+#ifdef CONFIG_B43_PIO
+	/*
+	 * RX/TX header/tail buffers used by the frame transmit functions.
+	 */
+	struct b43_rxhdr_fw4 rxhdr;
+	struct b43_txhdr txhdr;
+	u8 rx_tail[4];
+	u8 tx_tail[4];
+#endif /* CONFIG_B43_PIO */
 };
 
 static inline struct b43_wl *hw_to_b43_wl(struct ieee80211_hw *hw)
···
 	u8 sta_id = iwl_find_station(priv, hdr->addr1);

 	if (sta_id == IWL_INVALID_STATION) {
-		IWL_DEBUG_RATE(priv, "LQ: ADD station %pm\n",
+		IWL_DEBUG_RATE(priv, "LQ: ADD station %pM\n",
 			       hdr->addr1);
 		sta_id = iwl_add_station(priv, hdr->addr1, false,
 					 CMD_ASYNC, NULL);
+1-1
drivers/net/wireless/iwlwifi/iwl-3945.c
···
 	if (rx_status.band == IEEE80211_BAND_5GHZ)
 		rx_status.rate_idx -= IWL_FIRST_OFDM_RATE;

-	rx_status.antenna = le16_to_cpu(rx_hdr->phy_flags &
+	rx_status.antenna = (le16_to_cpu(rx_hdr->phy_flags) &
 		RX_RES_PHY_FLAGS_ANTENNA_MSK) >> 4;

 	/* set the preamble flag if appropriate */
+1-1
drivers/net/wireless/iwlwifi/iwl-5000.c
···
 			(s32)average_noise[i])) / 1500;
 		/* bound gain by 2 bits value max, 3rd bit is sign */
 		data->delta_gain_code[i] =
-			min(abs(delta_g), CHAIN_NOISE_MAX_DELTA_GAIN_CODE);
+			min(abs(delta_g), (long) CHAIN_NOISE_MAX_DELTA_GAIN_CODE);

 		if (delta_g < 0)
 			/* set negative sign */
···
 			  u16 *validblockaddr)
 {
 	u16 next_link_addr = 0, link_value = 0, valid_addr;
-	int ret = 0;
 	int usedblocks = 0;

 	/* set addressing mode to absolute to traverse the link list */
···
 		 * check for more block on the link list
 		 */
 		valid_addr = next_link_addr;
-		next_link_addr = link_value;
+		next_link_addr = link_value * sizeof(u16);
 		IWL_DEBUG_INFO(priv, "OTP blocks %d addr 0x%x\n",
 			       usedblocks, next_link_addr);
 		if (iwl_read_otp_word(priv, next_link_addr, &link_value))
 			return -EINVAL;
 		if (!link_value) {
 			/*
-			 * reach the end of link list,
+			 * reach the end of link list, return success and
 			 * set address point to the starting address
 			 * of the image
 			 */
-			goto done;
+			*validblockaddr = valid_addr;
+			/* skip first 2 bytes (link list pointer) */
+			*validblockaddr += 2;
+			return 0;
 		}
 		/* more in the link list, continue */
 		usedblocks++;
-	} while (usedblocks < priv->cfg->max_ll_items);
-	/* OTP full, use last block */
-	IWL_DEBUG_INFO(priv, "OTP is full, use last block\n");
-done:
-	*validblockaddr = valid_addr;
-	/* skip first 2 bytes (link list pointer) */
-	*validblockaddr += 2;
-	return ret;
+	} while (usedblocks <= priv->cfg->max_ll_items);
+
+	/* OTP has no valid blocks */
+	IWL_DEBUG_INFO(priv, "OTP has no valid blocks\n");
+	return -EINVAL;
 }

 /**
···
 	 * as a bitmask.
 	 */
 	rx_status.antenna =
-		le16_to_cpu(phy_res->phy_flags & RX_RES_PHY_FLAGS_ANTENNA_MSK)
+		(le16_to_cpu(phy_res->phy_flags) & RX_RES_PHY_FLAGS_ANTENNA_MSK)
 		>> RX_RES_PHY_FLAGS_ANTENNA_POS;

 	/* set the preamble flag if appropriate */
···
  * responses as well as events generated by firmware.
  */
 #include <linux/delay.h>
+#include <linux/sched.h>
 #include <linux/if_arp.h>
 #include <linux/netdevice.h>
 #include <asm/unaligned.h>
···
 /* atomic_t because wait_event checks it outside of buffer_mutex */
 static atomic_t buffer_ready = ATOMIC_INIT(0);

-/* Add an entry to the event buffer. When we
- * get near to the end we wake up the process
- * sleeping on the read() of the file.
+/*
+ * Add an entry to the event buffer. When we get near to the end we
+ * wake up the process sleeping on the read() of the file. To protect
+ * the event_buffer this function may only be called when buffer_mutex
+ * is set.
  */
 void add_event_entry(unsigned long value)
 {
+	/*
+	 * This shouldn't happen since all workqueues or handlers are
+	 * canceled or flushed before the event buffer is freed.
+	 */
+	if (!event_buffer) {
+		WARN_ON_ONCE(1);
+		return;
+	}
+
 	if (buffer_pos == buffer_size) {
 		atomic_inc(&oprofile_stats.event_lost_overflow);
 		return;
···
 int alloc_event_buffer(void)
 {
-	int err = -ENOMEM;
 	unsigned long flags;

 	spin_lock_irqsave(&oprofilefs_lock, flags);
···
 	if (buffer_watershed >= buffer_size)
 		return -EINVAL;

+	buffer_pos = 0;
 	event_buffer = vmalloc(sizeof(unsigned long) * buffer_size);
 	if (!event_buffer)
-		goto out;
+		return -ENOMEM;

-	err = 0;
-out:
-	return err;
+	return 0;
 }

 void free_event_buffer(void)
 {
+	mutex_lock(&buffer_mutex);
 	vfree(event_buffer);
-
+	buffer_pos = 0;
 	event_buffer = NULL;
+	mutex_unlock(&buffer_mutex);
 }

···
 		return -EAGAIN;

 	mutex_lock(&buffer_mutex);
+
+	/* May happen if the buffer is freed during pending reads. */
+	if (!event_buffer) {
+		retval = -EINTR;
+		goto out;
+	}

 	atomic_set(&buffer_ready, 0);
+12-1
drivers/pci/dmar.c
···
 	struct acpi_dmar_hardware_unit *drhd;
 	struct acpi_dmar_reserved_memory *rmrr;
 	struct acpi_dmar_atsr *atsr;
+	struct acpi_dmar_rhsa *rhsa;

 	switch (header->type) {
 	case ACPI_DMAR_TYPE_HARDWARE_UNIT:
···
 	case ACPI_DMAR_TYPE_ATSR:
 		atsr = container_of(header, struct acpi_dmar_atsr, header);
 		printk(KERN_INFO PREFIX "ATSR flags: %#x\n", atsr->flags);
+		break;
+	case ACPI_DMAR_HARDWARE_AFFINITY:
+		rhsa = container_of(header, struct acpi_dmar_rhsa, header);
+		printk(KERN_INFO PREFIX "RHSA base: %#016Lx proximity domain: %#x\n",
+		       (unsigned long long)rhsa->base_address,
+		       rhsa->proximity_domain);
 		break;
 	}
 }
···
 		ret = dmar_parse_one_atsr(entry_header);
 #endif
 		break;
+	case ACPI_DMAR_HARDWARE_AFFINITY:
+		/* We don't do anything with RHSA (yet?) */
+		break;
 	default:
 		printk(KERN_WARNING PREFIX
-			"Unknown DMAR structure type\n");
+			"Unknown DMAR structure type %d\n",
+			entry_header->type);
 		ret = 0; /* for forward compatibility */
 		break;
 	}
+1
drivers/pci/hotplug/cpqphp.h
···
 #include <asm/io.h>		/* for read? and write? functions */
 #include <linux/delay.h>	/* for delays */
 #include <linux/mutex.h>
+#include <linux/sched.h>	/* for signal_pending() */

 #define MY_NAME	"cpqphp"
+77-5
drivers/pci/intel-iommu.c
···
 #define IS_GFX_DEVICE(pdev) ((pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY)
 #define IS_ISA_DEVICE(pdev) ((pdev->class >> 8) == PCI_CLASS_BRIDGE_ISA)
+#define IS_AZALIA(pdev) ((pdev)->vendor == 0x8086 && (pdev)->device == 0x3a3e)

 #define IOAPIC_RANGE_START	(0xfee00000)
 #define IOAPIC_RANGE_END	(0xfeefffff)
···
 /* global iommu list, set NULL for ignored DMAR units */
 static struct intel_iommu **g_iommus;

+static void __init check_tylersburg_isoch(void);
 static int rwbf_quirk;

 /*
···
 }

 static int iommu_identity_mapping;
+#define IDENTMAP_ALL		1
+#define IDENTMAP_GFX		2
+#define IDENTMAP_AZALIA		4

 static int iommu_domain_identity_map(struct dmar_domain *domain,
 				     unsigned long long start,
···

 static int iommu_should_identity_map(struct pci_dev *pdev, int startup)
 {
-	if (iommu_identity_mapping == 2)
-		return IS_GFX_DEVICE(pdev);
+	if ((iommu_identity_mapping & IDENTMAP_AZALIA) && IS_AZALIA(pdev))
+		return 1;
+
+	if ((iommu_identity_mapping & IDENTMAP_GFX) && IS_GFX_DEVICE(pdev))
+		return 1;
+
+	if (!(iommu_identity_mapping & IDENTMAP_ALL))
+		return 0;

 	/*
 	 * We want to start off with all devices in the 1:1 domain, and
···
 	}

 	if (iommu_pass_through)
-		iommu_identity_mapping = 1;
+		iommu_identity_mapping |= IDENTMAP_ALL;
+
 #ifdef CONFIG_DMAR_BROKEN_GFX_WA
-	else
-		iommu_identity_mapping = 2;
+	iommu_identity_mapping |= IDENTMAP_GFX;
 #endif
+
+	check_tylersburg_isoch();
+
 	/*
 	 * If pass through is not set or not enabled, setup context entries for
 	 * identity mappings for rmrr, gfx, and isa and may fall back to static
···
 }

 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2a40, quirk_iommu_rwbf);
+
+/* On Tylersburg chipsets, some BIOSes have been known to enable the
+   ISOCH DMAR unit for the Azalia sound device, but not give it any
+   TLB entries, which causes it to deadlock. Check for that.  We do
+   this in a function called from init_dmars(), instead of in a PCI
+   quirk, because we don't want to print the obnoxious "BIOS broken"
+   message if VT-d is actually disabled.
+*/
+static void __init check_tylersburg_isoch(void)
+{
+	struct pci_dev *pdev;
+	uint32_t vtisochctrl;
+
+	/* If there's no Azalia in the system anyway, forget it. */
+	pdev = pci_get_device(PCI_VENDOR_ID_INTEL, 0x3a3e, NULL);
+	if (!pdev)
+		return;
+	pci_dev_put(pdev);
+
+	/* System Management Registers. Might be hidden, in which case
+	   we can't do the sanity check. But that's OK, because the
+	   known-broken BIOSes _don't_ actually hide it, so far. */
+	pdev = pci_get_device(PCI_VENDOR_ID_INTEL, 0x342e, NULL);
+	if (!pdev)
+		return;
+
+	if (pci_read_config_dword(pdev, 0x188, &vtisochctrl)) {
+		pci_dev_put(pdev);
+		return;
+	}
+
+	pci_dev_put(pdev);
+
+	/* If Azalia DMA is routed to the non-isoch DMAR unit, fine. */
+	if (vtisochctrl & 1)
+		return;
+
+	/* Drop all bits other than the number of TLB entries */
+	vtisochctrl &= 0x1c;
+
+	/* If we have the recommended number of TLB entries (16), fine. */
+	if (vtisochctrl == 0x10)
+		return;
+
+	/* Zero TLB entries? You get to ride the short bus to school. */
+	if (!vtisochctrl) {
+		WARN(1, "Your BIOS is broken; DMA routed to ISOCH DMAR unit but no TLB space.\n"
+		     "BIOS vendor: %s; Ver: %s; Product Version: %s\n",
+		     dmi_get_system_info(DMI_BIOS_VENDOR),
+		     dmi_get_system_info(DMI_BIOS_VERSION),
+		     dmi_get_system_info(DMI_PRODUCT_VERSION));
+		iommu_identity_mapping |= IDENTMAP_AZALIA;
+		return;
+	}
+
+	printk(KERN_WARNING "DMAR: Recommended TLB entries for ISOCH unit is 16; your BIOS set %d\n",
+	       vtisochctrl);
+}
-13
drivers/pci/pci.c
···
 	return 1;
 }

-static int __devinit pci_init(void)
-{
-	struct pci_dev *dev = NULL;
-
-	while ((dev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, dev)) != NULL) {
-		pci_fixup_device(pci_fixup_final, dev);
-	}
-
-	return 0;
-}
-
 static int __init pci_setup(char *str)
 {
 	while (str) {
···
 	return 0;
 }
 early_param("pci", pci_setup);
-
-device_initcall(pci_init);

 EXPORT_SYMBOL(pci_reenable_device);
 EXPORT_SYMBOL(pci_enable_device_io);
···
 /* The actual device the driver binds to */
 static struct eeepc_hotk *ehotk;

+static void eeepc_rfkill_hotplug(bool real);
+
 /* Platform device/driver */
 static int eeepc_hotk_thaw(struct device *device);
 static int eeepc_hotk_restore(struct device *device);
···
 static int eeepc_rfkill_set(void *data, bool blocked)
 {
 	unsigned long asl = (unsigned long)data;
-	return set_acpi(asl, !blocked);
+	int ret;
+
+	if (asl != CM_ASL_WLAN)
+		return set_acpi(asl, !blocked);
+
+	/* hack to avoid panic with rt2860sta */
+	if (blocked)
+		eeepc_rfkill_hotplug(false);
+	ret = set_acpi(asl, !blocked);
+	return ret;
 }

 static const struct rfkill_ops eeepc_rfkill_ops = {
 	.set_block = eeepc_rfkill_set,
 };

-static void __init eeepc_enable_camera(void)
+static void __devinit eeepc_enable_camera(void)
 {
 	/*
 	 * If the following call to set_acpi() fails, it's because there's no
···
 	return 0;
 }

-static void eeepc_rfkill_hotplug(void)
+static void eeepc_rfkill_hotplug(bool real)
 {
 	struct pci_dev *dev;
 	struct pci_bus *bus;
-	bool blocked = eeepc_wlan_rfkill_blocked();
+	bool blocked = real ? eeepc_wlan_rfkill_blocked() : true;

-	if (ehotk->wlan_rfkill)
+	if (real && ehotk->wlan_rfkill)
 		rfkill_set_sw_state(ehotk->wlan_rfkill, blocked);

 	mutex_lock(&ehotk->hotplug_lock);
···
 	if (event != ACPI_NOTIFY_BUS_CHECK)
 		return;

-	eeepc_rfkill_hotplug();
+	eeepc_rfkill_hotplug(true);
 }

 static void eeepc_hotk_notify(struct acpi_device *device, u32 event)
···
 {
 	/* Refresh both wlan rfkill state and pci hotplug */
 	if (ehotk->wlan_rfkill)
-		eeepc_rfkill_hotplug();
+		eeepc_rfkill_hotplug(true);

 	if (ehotk->bluetooth_rfkill)
 		rfkill_set_sw_state(ehotk->bluetooth_rfkill,
···
 	 * Refresh pci hotplug in case the rfkill state was changed after
 	 * eeepc_unregister_rfkill_notifier()
 	 */
-	eeepc_rfkill_hotplug();
+	eeepc_rfkill_hotplug(true);
 	if (ehotk->hotplug_slot)
 		pci_hp_deregister(ehotk->hotplug_slot);
···
 	 * Refresh pci hotplug in case the rfkill state was changed during
 	 * setup.
 	 */
-	eeepc_rfkill_hotplug();
+	eeepc_rfkill_hotplug(true);

 exit:
 	if (result && result != -ENODEV)
···
 	return 0;
 }

-static int eeepc_hotk_add(struct acpi_device *device)
+static int __devinit eeepc_hotk_add(struct acpi_device *device)
 {
 	struct device *dev;
 	int result;
···
 	unsigned long flags;
 	struct ccw_dev_id dev_id;

-	cdev = sch_get_cdev(sch);
-	if (cdev) {
+	if (cio_is_console(sch->schid)) {
 		rc = sysfs_create_group(&sch->dev.kobj,
 					&io_subchannel_attr_group);
 		if (rc)
···
 				      "0.%x.%04x (rc=%d)\n",
 				      sch->schid.ssid, sch->schid.sch_no, rc);
 		/*
-		 * This subchannel already has an associated ccw_device.
+		 * The console subchannel already has an associated ccw_device.
 		 * Throw the delayed uevent for the subchannel, register
-		 * the ccw_device and exit. This happens for all early
-		 * devices, e.g. the console.
+		 * the ccw_device and exit.
 		 */
 		dev_set_uevent_suppress(&sch->dev, 0);
 		kobject_uevent(&sch->dev.kobj, KOBJ_ADD);
+		cdev = sch_get_cdev(sch);
 		cdev->dev.groups = ccwdev_attr_groups;
 		device_initialize(&cdev->dev);
 		ccw_device_register(cdev);
+1
drivers/staging/b3dfg/b3dfg.c
···
 #include <linux/wait.h>
 #include <linux/mm.h>
 #include <linux/uaccess.h>
+#include <linux/sched.h>

 static unsigned int b3dfg_nbuf = 2;
···

 static inline u32 bump_fbr(u32 *fbr, u32 limit)
 {
-	u32 v = *fbr;
-	add_10bit(&v, 1);
-	if (v > limit)
-		v = (*fbr & ~ET_DMA10_MASK) ^ ET_DMA10_WRAP;
-	*fbr = v;
-	return v;
+	u32 v = *fbr;
+	v++;
+	/* This works for all cases where limit < 1024. The 1023 case
+	   works because 1023++ is 1024 which means the if condition is not
+	   taken but the carry of the bit into the wrap bit toggles the wrap
+	   value correctly */
+	if ((v & ET_DMA10_MASK) > limit) {
+		v &= ~ET_DMA10_MASK;
+		v ^= ET_DMA10_WRAP;
+	}
+	/* For the 1023 case */
+	v &= (ET_DMA10_MASK|ET_DMA10_WRAP);
+	*fbr = v;
+	return v;
 }

 /**
···
  * simpler, Microsoft pushes their own approach: RNDIS.  The published
  * RNDIS specs are ambiguous and appear to be incomplete, and are also
  * needlessly complex.  They borrow more from CDC ACM than CDC ECM.
- *
- * While CDC ECM, CDC Subset, and RNDIS are designed to extend the ethernet
- * interface to the target, CDC EEM was designed to use ethernet over the USB
- * link between the host and target. CDC EEM is implemented as an alternative
- * to those other protocols when that communication model is more appropriate
  */

 #define DRIVER_DESC		"Ethernet Gadget"
···
 #define RNDIS_PRODUCT_NUM	0xa4a2	/* Ethernet/RNDIS Gadget */

 /* For EEM gadgets */
-#define EEM_VENDOR_NUM	0x0525	/* INVALID - NEEDS TO BE ALLOCATED */
-#define EEM_PRODUCT_NUM	0xa4a1	/* INVALID - NEEDS TO BE ALLOCATED */
+#define EEM_VENDOR_NUM	0x1d6b	/* Linux Foundation */
+#define EEM_PRODUCT_NUM	0x0102	/* EEM Gadget */

 /*-------------------------------------------------------------------------*/
+6-6
drivers/usb/host/ehci-sched.c
···
 		goto fail;
 	}

+	period = urb->interval;
+	if (!stream->highspeed)
+		period <<= 3;
+
 	now = ehci_readl(ehci, &ehci->regs->frame_index) % mod;

 	/* when's the last uframe this urb could start? */
···

 	/* Fell behind (by up to twice the slop amount)? */
 	if (start >= max - 2 * 8 * SCHEDULE_SLOP)
-		start += stream->interval * DIV_ROUND_UP(
-				max - start, stream->interval) - mod;
+		start += period * DIV_ROUND_UP(
+				max - start, period) - mod;

 	/* Tried to schedule too far into the future? */
 	if (unlikely((start + sched->span) >= max)) {
···
 	stream->next_uframe = start;

 	/* NOTE:  assumes URB_ISO_ASAP, to limit complexity/bugs */
-
-	period = urb->interval;
-	if (!stream->highspeed)
-		period <<= 3;

 	/* find a uframe slot with enough bandwidth */
 	for (; start < (stream->next_uframe + period); start++) {
+16-7
drivers/usb/host/whci/asl.c
···
 		if (status & QTD_STS_HALTED) {
 			/* Ug, an error. */
 			process_halted_qtd(whc, qset, td);
+			/* A halted qTD always triggers an update
+			   because the qset was either removed or
+			   reactivated. */
+			update |= WHC_UPDATE_UPDATED;
 			goto done;
 		}
···
 	struct whc_urb *wurb = urb->hcpriv;
 	struct whc_qset *qset = wurb->qset;
 	struct whc_std *std, *t;
+	bool has_qtd = false;
 	int ret;
 	unsigned long flags;
···
 		goto out;

 	list_for_each_entry_safe(std, t, &qset->stds, list_node) {
-		if (std->urb == urb)
+		if (std->urb == urb) {
+			if (std->qtd)
+				has_qtd = true;
 			qset_free_std(whc, std);
-		else
+		} else
 			std->qtd = NULL; /* so this std is re-added when the qset is */
 	}

-	asl_qset_remove(whc, qset);
-	wurb->status = status;
-	wurb->is_async = true;
-	queue_work(whc->workqueue, &wurb->dequeue_work);
-
+	if (has_qtd) {
+		asl_qset_remove(whc, qset);
+		wurb->status = status;
+		wurb->is_async = true;
+		queue_work(whc->workqueue, &wurb->dequeue_work);
+	} else
+		qset_remove_urb(whc, qset, urb, status);
 out:
 	spin_unlock_irqrestore(&whc->lock, flags);
+17-7
drivers/usb/host/whci/pzl.c
···
 		if (status & QTD_STS_HALTED) {
 			/* Ug, an error. */
 			process_halted_qtd(whc, qset, td);
+			/* A halted qTD always triggers an update
+			   because the qset was either removed or
+			   reactivated. */
+			update |= WHC_UPDATE_UPDATED;
 			goto done;
 		}
···
 	struct whc_urb *wurb = urb->hcpriv;
 	struct whc_qset *qset = wurb->qset;
 	struct whc_std *std, *t;
+	bool has_qtd = false;
 	int ret;
 	unsigned long flags;
···
 		goto out;

 	list_for_each_entry_safe(std, t, &qset->stds, list_node) {
-		if (std->urb == urb)
+		if (std->urb == urb) {
+			if (std->qtd)
+				has_qtd = true;
 			qset_free_std(whc, std);
-		else
+		} else
 			std->qtd = NULL; /* so this std is re-added when the qset is */
 	}

-	pzl_qset_remove(whc, qset);
-	wurb->status = status;
-	wurb->is_async = false;
-	queue_work(whc->workqueue, &wurb->dequeue_work);
-
+	if (has_qtd) {
+		pzl_qset_remove(whc, qset);
+		update_pzl_hw_view(whc);
+		wurb->status = status;
+		wurb->is_async = false;
+		queue_work(whc->workqueue, &wurb->dequeue_work);
+	} else
+		qset_remove_urb(whc, qset, urb, status);
 out:
 	spin_unlock_irqrestore(&whc->lock, flags);
···
 	/* device supports and needs bigger sense buffer */
 	if (us->fflags & US_FL_SANE_SENSE)
 		sense_size = ~0;
-
+Retry_Sense:
 	US_DEBUGP("Issuing auto-REQUEST_SENSE\n");

 	scsi_eh_prep_cmnd(srb, &ses, NULL, 0, sense_size);
···
 		srb->result = DID_ABORT << 16;
 		goto Handle_Errors;
 	}
+
+	/* Some devices claim to support larger sense but fail when
+	 * trying to request it. When a transport failure happens
+	 * using US_FS_SANE_SENSE, we always retry with a standard
+	 * (small) sense request. This fixes some USB GSM modems
+	 */
+	if (temp_result == USB_STOR_TRANSPORT_FAILED &&
+	    (us->fflags & US_FL_SANE_SENSE) &&
+	    sense_size != US_SENSE_SIZE) {
+		US_DEBUGP("-- auto-sense failure, retry small sense\n");
+		sense_size = US_SENSE_SIZE;
+		goto Retry_Sense;
+	}
+
+	/* Other failures */
 	if (temp_result != USB_STOR_TRANSPORT_GOOD) {
 		US_DEBUGP("-- auto-sense failure\n");
···
 	 * transid of the trans_handle that last modified this inode
 	 */
 	u64 last_trans;
+
+	/*
+	 * log transid when this inode was last modified
+	 */
+	u64 last_sub_trans;
+
 	/*
 	 * transid that last logged this inode
 	 */
···
 	return ret;
 }

-#ifdef BIO_RW_DISCARD
 static void btrfs_issue_discard(struct block_device *bdev,
 				u64 start, u64 len)
 {
 	blkdev_issue_discard(bdev, start >> 9, len >> 9, GFP_KERNEL,
 			     DISCARD_FL_BARRIER);
 }
-#endif

 static int btrfs_discard_extent(struct btrfs_root *root, u64 bytenr,
 				u64 num_bytes)
 {
-#ifdef BIO_RW_DISCARD
 	int ret;
 	u64 map_length = num_bytes;
 	struct btrfs_multi_bio *multi = NULL;
+
+	if (!btrfs_test_opt(root, DISCARD))
+		return 0;

 	/* Tell the block device(s) that the sectors can be discarded */
 	ret = btrfs_map_block(&root->fs_info->mapping_tree, READ,
···
 	}

 	return ret;
-#else
-	return 0;
-#endif
 }

 int btrfs_inc_extent_ref(struct btrfs_trans_handle *trans,
···
 	struct extent_buffer *buf;

 	if (is_data)
+		goto pinit;
+
+	/*
+	 * discard is sloooow, and so triggering discards on
+	 * individual btree blocks isn't a good plan.  Just
+	 * pin everything in discard mode.
+	 */
+	if (btrfs_test_opt(root, DISCARD))
 		goto pinit;

 	buf = btrfs_find_tree_block(root, bytenr, num_bytes);
+26-15
fs/btrfs/file.c
···
 			btrfs_end_transaction(trans, root);
 		else
 			btrfs_commit_transaction(trans, root);
-	} else {
+	} else if (ret != BTRFS_NO_LOG_SYNC) {
 		btrfs_commit_transaction(trans, root);
+	} else {
+		btrfs_end_transaction(trans, root);
 	}
 }
 if (file->f_flags & O_DIRECT) {
···
 	int ret = 0;
 	struct btrfs_trans_handle *trans;

+
+	/* we wait first, since the writeback may change the inode */
+	root->log_batch++;
+	/* the VFS called filemap_fdatawrite for us */
+	btrfs_wait_ordered_range(inode, 0, (u64)-1);
+	root->log_batch++;
+
 	/*
 	 * check the transaction that last modified this inode
 	 * and see if its already been committed
···
 	if (!BTRFS_I(inode)->last_trans)
 		goto out;

+	/*
+	 * if the last transaction that changed this file was before
+	 * the current transaction, we can bail out now without any
+	 * syncing
+	 */
 	mutex_lock(&root->fs_info->trans_mutex);
 	if (BTRFS_I(inode)->last_trans <=
 	    root->fs_info->last_trans_committed) {
···
 	}
 	mutex_unlock(&root->fs_info->trans_mutex);

-	root->log_batch++;
-	filemap_fdatawrite(inode->i_mapping);
-	btrfs_wait_ordered_range(inode, 0, (u64)-1);
-	root->log_batch++;
-
-	if (datasync && !(inode->i_state & I_DIRTY_PAGES))
-		goto out;
 	/*
 	 * ok we haven't committed the transaction yet, lets do a commit
 	 */
···
 	 */
 	mutex_unlock(&dentry->d_inode->i_mutex);

-	if (ret > 0) {
-		ret = btrfs_commit_transaction(trans, root);
-	} else {
-		ret = btrfs_sync_log(trans, root);
-		if (ret == 0)
-			ret = btrfs_end_transaction(trans, root);
-		else
-			ret = btrfs_commit_transaction(trans, root);
+	if (ret != BTRFS_NO_LOG_SYNC) {
+		if (ret > 0) {
+			ret = btrfs_commit_transaction(trans, root);
+		} else {
+			ret = btrfs_sync_log(trans, root);
+			if (ret == 0)
+				ret = btrfs_end_transaction(trans, root);
+			else
+				ret = btrfs_commit_transaction(trans, root);
+		}
+	} else {
+		ret = btrfs_end_transaction(trans, root);
 	}
 	mutex_lock(&dentry->d_inode->i_mutex);
 out:
···
 /*
  * when btree blocks are allocated, they have some corresponding bits set for
  * them in one of two extent_io trees.  This is used to make sure all of
- * those extents are on disk for transaction or log commit
+ * those extents are sent to disk but does not wait on them
  */
-int btrfs_write_and_wait_marked_extents(struct btrfs_root *root,
-					struct extent_io_tree *dirty_pages)
+int btrfs_write_marked_extents(struct btrfs_root *root,
+			       struct extent_io_tree *dirty_pages)
 {
 	int ret;
 	int err = 0;
···
 			page_cache_release(page);
 		}
 	}
+	if (err)
+		werr = err;
+	return werr;
+}
+
+/*
+ * when btree blocks are allocated, they have some corresponding bits set for
+ * them in one of two extent_io trees.  This is used to make sure all of
+ * those extents are on disk for transaction or log commit.  We wait
+ * on all the pages and clear them from the dirty pages state tree
+ */
+int btrfs_wait_marked_extents(struct btrfs_root *root,
+			      struct extent_io_tree *dirty_pages)
+{
+	int ret;
+	int err = 0;
+	int werr = 0;
+	struct page *page;
+	struct inode *btree_inode = root->fs_info->btree_inode;
+	u64 start = 0;
+	u64 end;
+	unsigned long index;
+
 	while (1) {
 		ret = find_first_extent_bit(dirty_pages, 0, &start, &end,
 					    EXTENT_DIRTY);
···
 	if (err)
 		werr = err;
 	return werr;
+}
+
+/*
+ * when btree blocks are allocated, they have some corresponding bits set for
+ * them in one of two extent_io trees.  This is used to make sure all of
+ * those extents are on disk for transaction or log commit
+ */
+int btrfs_write_and_wait_marked_extents(struct btrfs_root *root,
+					struct extent_io_tree *dirty_pages)
+{
+	int ret;
+	int ret2;
+
+	ret = btrfs_write_marked_extents(root, dirty_pages);
+	ret2 = btrfs_wait_marked_extents(root, dirty_pages);
+	return ret || ret2;
 }

 int btrfs_write_and_wait_transaction(struct btrfs_trans_handle *trans,
···
 	int ret;
 	struct btrfs_root *log = root->log_root;
 	struct btrfs_root *log_root_tree = root->fs_info->log_root_tree;
+	u64 log_transid = 0;

 	mutex_lock(&root->log_mutex);
 	index1 = root->log_transid % 2;
···
 	if (atomic_read(&root->log_commit[(index1 + 1) % 2]))
 		wait_log_commit(trans, root, root->log_transid - 1);

-	while (root->log_multiple_pids) {
+	while (1) {
 		unsigned long batch = root->log_batch;
-		mutex_unlock(&root->log_mutex);
-		schedule_timeout_uninterruptible(1);
-		mutex_lock(&root->log_mutex);
-
+		if (root->log_multiple_pids) {
+			mutex_unlock(&root->log_mutex);
+			schedule_timeout_uninterruptible(1);
+			mutex_lock(&root->log_mutex);
+		}
 		wait_for_writer(trans, root);
 		if (batch == root->log_batch)
 			break;
···
 		goto out;
 	}

-	ret = btrfs_write_and_wait_marked_extents(log, &log->dirty_log_pages);
+	/* we start IO on all the marked extents here, but we don't actually
+	 * wait for them until later.
+	 */
+	ret = btrfs_write_marked_extents(log, &log->dirty_log_pages);
 	BUG_ON(ret);

 	btrfs_set_root_node(&log->root_item, log->node);

 	root->log_batch = 0;
+	log_transid = root->log_transid;
 	root->log_transid++;
 	log->log_transid = root->log_transid;
 	root->log_start_pid = 0;
···

 	index2 = log_root_tree->log_transid % 2;
 	if (atomic_read(&log_root_tree->log_commit[index2])) {
+		btrfs_wait_marked_extents(log, &log->dirty_log_pages);
 		wait_log_commit(trans, log_root_tree,
 				log_root_tree->log_transid);
 		mutex_unlock(&log_root_tree->log_mutex);
···
 	 * check the full commit flag again
 	 */
 	if (root->fs_info->last_trans_log_full_commit == trans->transid) {
+		btrfs_wait_marked_extents(log, &log->dirty_log_pages);
 		mutex_unlock(&log_root_tree->log_mutex);
 		ret = -EAGAIN;
 		goto out_wake_log_root;
···
 	ret = btrfs_write_and_wait_marked_extents(log_root_tree,
 				&log_root_tree->dirty_log_pages);
 	BUG_ON(ret);
+	btrfs_wait_marked_extents(log, &log->dirty_log_pages);

 	btrfs_set_super_log_root(&root->fs_info->super_for_commit,
 				 log_root_tree->node->start);
···
 	 * the running transaction open, so a full commit can't hop
 	 * in and cause problems either.
 	 */
-	write_ctree_super(trans, root->fs_info->tree_root, 2);
+	write_ctree_super(trans, root->fs_info->tree_root, 1);
 	ret = 0;
+
+	mutex_lock(&root->log_mutex);
+	if (root->last_log_commit < log_transid)
+		root->last_log_commit = log_transid;
+	mutex_unlock(&root->log_mutex);

 out_wake_log_root:
 	atomic_set(&log_root_tree->log_commit[index2], 0);
···
 	return ret;
 }

+static int inode_in_log(struct btrfs_trans_handle *trans,
+		 struct inode *inode)
+{
+	struct btrfs_root *root = BTRFS_I(inode)->root;
+	int ret = 0;
+
+	mutex_lock(&root->log_mutex);
+	if (BTRFS_I(inode)->logged_trans == trans->transid &&
+	    BTRFS_I(inode)->last_sub_trans <= root->last_log_commit)
+		ret = 1;
+	mutex_unlock(&root->log_mutex);
+	return ret;
+}
+
+
 /*
  * helper function around btrfs_log_inode to make sure newly created
  * parent directories also end up in the log. A minimal inode and backref
···
 				sb, last_committed);
 	if (ret)
 		goto end_no_trans;
+
+	if (inode_in_log(trans, inode)) {
+		ret = BTRFS_NO_LOG_SYNC;
+		goto end_no_trans;
+	}

 	start_log_trans(trans, root);
+3
fs/btrfs/tree-log.h
···
 #ifndef __TREE_LOG_
 #define __TREE_LOG_

+/* return value for btrfs_log_dentry_safe that means we don't need to log it at all */
+#define BTRFS_NO_LOG_SYNC 256
+
 int btrfs_sync_log(struct btrfs_trans_handle *trans,
 		   struct btrfs_root *root);
 int btrfs_free_log(struct btrfs_trans_handle *trans, struct btrfs_root *root);
···316316{317317 struct connection *con;318318319319+ /* with sctp there's no connecting without sending */320320+ if (dlm_config.ci_protocol != 0)321321+ return 0;322322+319323 if (nodeid == dlm_our_nodeid())320324 return 0;321325···459455 int prim_len, ret;460456 int addr_len;461457 struct connection *new_con;462462- struct file *file;463458 sctp_peeloff_arg_t parg;464459 int parglen = sizeof(parg);460460+ int err;465461466462 /*467463 * We get this before any data for an association.···516512 ret = kernel_getsockopt(con->sock, IPPROTO_SCTP,517513 SCTP_SOCKOPT_PEELOFF,518514 (void *)&parg, &parglen);519519- if (ret) {515515+ if (ret < 0) {520516 log_print("Can't peel off a socket for "521521- "connection %d to node %d: err=%d\n",517517+ "connection %d to node %d: err=%d",522518 parg.associd, nodeid, ret);519519+ return;523520 }524524- file = fget(parg.sd);525525- new_con->sock = SOCKET_I(file->f_dentry->d_inode);521521+ new_con->sock = sockfd_lookup(parg.sd, &err);522522+ if (!new_con->sock) {523523+ log_print("sockfd_lookup error %d", err);524524+ return;525525+ }526526 add_sock(new_con->sock, new_con);527527- fput(file);528528- put_unused_fd(parg.sd);527527+ sockfd_put(new_con->sock);529528530530- log_print("got new/restarted association %d nodeid %d",531531- (int)sn->sn_assoc_change.sac_assoc_id, nodeid);529529+ log_print("connecting to %d sctp association %d",530530+ nodeid, (int)sn->sn_assoc_change.sac_assoc_id);532531533532 /* Send any pending writes */534533 clear_bit(CF_CONNECT_PENDING, &new_con->flags);···844837 if (con->retries++ > MAX_CONNECT_RETRIES)845838 return;846839847847- log_print("Initiating association with node %d", con->nodeid);848848-849840 if (nodeid_to_addr(con->nodeid, (struct sockaddr *)&rem_addr)) {850841 log_print("no address for nodeid %d", con->nodeid);851842 return;···860855 outmessage.msg_flags = MSG_EOR;861856862857 spin_lock(&con->writequeue_lock);863863- e = list_entry(con->writequeue.next, struct writequeue_entry,864864- list);865858866866- BUG_ON((struct list_head *) e == &con->writequeue);859859+ if (list_empty(&con->writequeue)) {860860+ spin_unlock(&con->writequeue_lock);861861+ log_print("writequeue empty for nodeid %d", con->nodeid);862862+ return;863863+ }867864865865+ e = list_first_entry(&con->writequeue, struct writequeue_entry, list);868866 len = e->len;869867 offset = e->offset;870868 spin_unlock(&con->writequeue_lock);
+12-1
fs/ext3/super.c
···2321232123222322 if (!sbh)23232323 return error;23242324- es->s_wtime = cpu_to_le32(get_seconds());23242324+ /*23252325+ * If the file system is mounted read-only, don't update the23262326+ * superblock write time. This avoids updating the superblock23272327+ * write time when we are mounting the root file system23282328+ * read/only but we need to replay the journal; at that point,23292329+ * for people who are east of GMT and who make their clock23302330+ * tick in localtime for Windows bug-for-bug compatibility,23312331+ * the clock is set in the future, and this will cause e2fsck23322332+ * to complain and force a full file system check.23332333+ */23342334+ if (!(sb->s_flags & MS_RDONLY))23352335+ es->s_wtime = cpu_to_le32(get_seconds());23252336 es->s_free_blocks_count = cpu_to_le32(ext3_count_free_blocks(sb));23262337 es->s_free_inodes_count = cpu_to_le32(ext3_count_free_inodes(sb));23272338 BUFFER_TRACE(sbh, "marking dirty");
···659659660660#endif /* __KERNEL__ */661661662662+#ifndef __EXPORTED_HEADERS__663663+#ifndef __KERNEL__664664+#warning Attempt to use kernel headers from user space, see http://kernelnewbies.org/KernelHeaders665665+#endif /* __KERNEL__ */666666+#endif /* __EXPORTED_HEADERS__ */667667+662668#define SI_LOAD_SHIFT 16663669struct sysinfo {664670 long uptime; /* Seconds since boot */
+1-1
include/linux/netdevice.h
···557557 * Callback uses when the transmitter has not made any progress558558 * for dev->watchdog ticks.559559 *560560- * struct net_device_stats* (*get_stats)(struct net_device *dev);560560+ * struct net_device_stats* (*ndo_get_stats)(struct net_device *dev);561561 * Called when a user wants to get the network device usage562562 * statistics. If not defined, the counters in dev->stats will563563 * be used.
···16691669 * to this function and ieee80211_rx_irqsafe() may not be mixed for a16701670 * single hardware.16711671 *16721672+ * Note that right now, this function must be called with softirqs disabled.16731673+ *16721674 * @hw: the hardware this frame came in on16731675 * @skb: the buffer to receive, owned by mac80211 after this call16741676 */
···142142#ifdef CONFIG_LOCK_STAT143143static DEFINE_PER_CPU(struct lock_class_stats[MAX_LOCKDEP_KEYS], lock_stats);144144145145+static inline u64 lockstat_clock(void)146146+{147147+ return cpu_clock(smp_processor_id());148148+}149149+145150static int lock_point(unsigned long points[], unsigned long ip)146151{147152 int i;···163158 return i;164159}165160166166-static void lock_time_inc(struct lock_time *lt, s64 time)161161+static void lock_time_inc(struct lock_time *lt, u64 time)167162{168163 if (time > lt->max)169164 lt->max = time;···239234static void lock_release_holdtime(struct held_lock *hlock)240235{241236 struct lock_class_stats *stats;242242- s64 holdtime;237237+ u64 holdtime;243238244239 if (!lock_stat)245240 return;246241247247- holdtime = sched_clock() - hlock->holdtime_stamp;242242+ holdtime = lockstat_clock() - hlock->holdtime_stamp;248243249244 stats = get_lock_stats(hlock_class(hlock));250245 if (hlock->read)···27972792 hlock->references = references;27982793#ifdef CONFIG_LOCK_STAT27992794 hlock->waittime_stamp = 0;28002800- hlock->holdtime_stamp = sched_clock();27952795+ hlock->holdtime_stamp = lockstat_clock();28012796#endif2802279728032798 if (check == 2 && !mark_irqflags(curr, hlock))···33273322 if (hlock->instance != lock)33283323 return;3329332433303330- hlock->waittime_stamp = sched_clock();33253325+ hlock->waittime_stamp = lockstat_clock();3331332633323327 contention_point = lock_point(hlock_class(hlock)->contention_point, ip);33333328 contending_point = lock_point(hlock_class(hlock)->contending_point,···33503345 struct held_lock *hlock, *prev_hlock;33513346 struct lock_class_stats *stats;33523347 unsigned int depth;33533353- u64 now;33543354- s64 waittime = 0;33483348+ u64 now, waittime = 0;33553349 int i, cpu;3356335033573351 depth = curr->lockdep_depth;···3378337433793375 cpu = smp_processor_id();33803376 if (hlock->waittime_stamp) {33813381- now = sched_clock();33773377+ now = lockstat_clock();33823378 waittime = now - hlock->waittime_stamp;33833379 hlock->holdtime_stamp = now;33843380 }
+8-5
kernel/sched.c
···676676677677/**678678 * runqueue_is_locked679679+ * @cpu: the processor in question.679680 *680681 * Returns true if the current cpu runqueue is locked.681682 * This interface allows printk to be called with the runqueue lock···23122311{23132312 int cpu, orig_cpu, this_cpu, success = 0;23142313 unsigned long flags;23152315- struct rq *rq;23142314+ struct rq *rq, *orig_rq;2316231523172316 if (!sched_feat(SYNC_WAKEUPS))23182317 wake_flags &= ~WF_SYNC;···23202319 this_cpu = get_cpu();2321232023222321 smp_wmb();23232323- rq = task_rq_lock(p, &flags);23222322+ rq = orig_rq = task_rq_lock(p, &flags);23242323 update_rq_clock(rq);23252324 if (!(p->state & state))23262325 goto out;···23512350 set_task_cpu(p, cpu);2352235123532352 rq = task_rq_lock(p, &flags);23532353+23542354+ if (rq != orig_rq)23552355+ update_rq_clock(rq);23562356+23542357 WARN_ON(p->state != TASK_WAKING);23552358 cpu = task_cpu(p);23562359···3661365636623657/**36633658 * update_sg_lb_stats - Update sched_group's statistics for load balancing.36593659+ * @sd: The sched_domain whose statistics are to be updated.36643660 * @group: sched_group whose statistics are to be updated.36653661 * @this_cpu: Cpu for which load balance is currently performed.36663662 * @idle: Idle status of this_cpu···67246718/*67256719 * This task is about to go to sleep on IO. Increment rq->nr_iowait so67266720 * that process accounting knows that this is a task in IO wait state.67276727- *67286728- * But don't do that if it is a deliberate, throttling IO wait (this task67296729- * has set its backing_dev_info: the queue against which it should throttle)67306721 */67316722void __sched io_schedule(void)67326723{
···640640EXPORT_SYMBOL(schedule_delayed_work);641641642642/**643643+ * flush_delayed_work - block until a dwork_struct's callback has terminated644644+ * @dwork: the delayed work which is to be flushed645645+ *646646+ * Any timeout is cancelled, and any pending work is run immediately.647647+ */648648+void flush_delayed_work(struct delayed_work *dwork)649649+{650650+ if (del_timer_sync(&dwork->timer)) {651651+ struct cpu_workqueue_struct *cwq;652652+ cwq = wq_per_cpu(keventd_wq, get_cpu());653653+ __queue_work(cwq, &dwork->work);654654+ put_cpu();655655+ }656656+ flush_work(&dwork->work);657657+}658658+EXPORT_SYMBOL(flush_delayed_work);659659+660660+/**643661 * schedule_delayed_work_on - queue work in global workqueue on CPU after delay644662 * @cpu: cpu to use645663 * @dwork: job to be done
···566566 if (pages_written >= write_chunk)567567 break; /* We've done our duty */568568569569- schedule_timeout_interruptible(pause);569569+ __set_current_state(TASK_INTERRUPTIBLE);570570+ io_schedule_timeout(pause);570571571572 /*572573 * Increase the delay for each loop, up to our previous
+3-2
mm/percpu.c
···18701870 max_distance = 0;18711871 for (group = 0; group < ai->nr_groups; group++) {18721872 ai->groups[group].base_offset = areas[group] - base;18731873- max_distance = max(max_distance, ai->groups[group].base_offset);18731873+ max_distance = max_t(size_t, max_distance,18741874+ ai->groups[group].base_offset);18741875 }18751876 max_distance += ai->unit_size;1876187718771878 /* warn if maximum distance is further than 75% of vmalloc space */18781879 if (max_distance > (VMALLOC_END - VMALLOC_START) * 3 / 4) {18791879- pr_warning("PERCPU: max_distance=0x%lx too large for vmalloc "18801880+ pr_warning("PERCPU: max_distance=0x%zx too large for vmalloc "18801881 "space 0x%lx\n",18811882 max_distance, VMALLOC_END - VMALLOC_START);18821883#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
+1
net/ipv4/tcp_minisocks.c
···644644 /* If TCP_DEFER_ACCEPT is set, drop bare ACK. */645645 if (inet_csk(sk)->icsk_accept_queue.rskq_defer_accept &&646646 TCP_SKB_CB(skb)->end_seq == tcp_rsk(req)->rcv_isn + 1) {647647+ inet_csk(sk)->icsk_accept_queue.rskq_defer_accept--;647648 inet_rsk(req)->acked = 1;648649 return NULL;649650 }
+43-30
net/ipv4/udp.c
···841841 return ret;842842}843843844844+845845+/**846846+ * first_packet_length - return length of first packet in receive queue847847+ * @sk: socket848848+ *849849+ * Drops all bad checksum frames, until a valid one is found.850850+ * Returns the length of found skb, or 0 if none is found.851851+ */852852+static unsigned int first_packet_length(struct sock *sk)853853+{854854+ struct sk_buff_head list_kill, *rcvq = &sk->sk_receive_queue;855855+ struct sk_buff *skb;856856+ unsigned int res;857857+858858+ __skb_queue_head_init(&list_kill);859859+860860+ spin_lock_bh(&rcvq->lock);861861+ while ((skb = skb_peek(rcvq)) != NULL &&862862+ udp_lib_checksum_complete(skb)) {863863+ UDP_INC_STATS_BH(sock_net(sk), UDP_MIB_INERRORS,864864+ IS_UDPLITE(sk));865865+ __skb_unlink(skb, rcvq);866866+ __skb_queue_tail(&list_kill, skb);867867+ }868868+ res = skb ? skb->len : 0;869869+ spin_unlock_bh(&rcvq->lock);870870+871871+ if (!skb_queue_empty(&list_kill)) {872872+ lock_sock(sk);873873+ __skb_queue_purge(&list_kill);874874+ sk_mem_reclaim_partial(sk);875875+ release_sock(sk);876876+ }877877+ return res;878878+}879879+844880/*845881 * IOCTL requests applicable to the UDP protocol846882 */···893857894858 case SIOCINQ:895859 {896896- struct sk_buff *skb;897897- unsigned long amount;860860+ unsigned int amount = first_packet_length(sk);898861899899- amount = 0;900900- spin_lock_bh(&sk->sk_receive_queue.lock);901901- skb = skb_peek(&sk->sk_receive_queue);902902- if (skb != NULL) {862862+ if (amount)903863 /*904864 * We will only return the amount905865 * of this packet since that is all906866 * that will be read.907867 */908908- amount = skb->len - sizeof(struct udphdr);909909- }910910- spin_unlock_bh(&sk->sk_receive_queue.lock);868868+ amount -= sizeof(struct udphdr);869869+911870 return put_user(amount, (int __user *)arg);912871 }913872···15711540{15721541 unsigned int mask = datagram_poll(file, sock, wait);15731542 struct sock *sk = sock->sk;15741574- int is_lite = IS_UDPLITE(sk);1575154315761544 /* Check for false positives due to checksum errors */15771577- if ((mask & POLLRDNORM) &&15781578- !(file->f_flags & O_NONBLOCK) &&15791579- !(sk->sk_shutdown & RCV_SHUTDOWN)) {15801580- struct sk_buff_head *rcvq = &sk->sk_receive_queue;15811581- struct sk_buff *skb;15821582-15831583- spin_lock_bh(&rcvq->lock);15841584- while ((skb = skb_peek(rcvq)) != NULL &&15851585- udp_lib_checksum_complete(skb)) {15861586- UDP_INC_STATS_BH(sock_net(sk),15871587- UDP_MIB_INERRORS, is_lite);15881588- __skb_unlink(skb, rcvq);15891589- kfree_skb(skb);15901590- }15911591- spin_unlock_bh(&rcvq->lock);15921592-15931593- /* nothing to see, move along */15941594- if (skb == NULL)15951595- mask &= ~(POLLIN | POLLRDNORM);15961596-15451545+ if ((mask & POLLRDNORM) && !(file->f_flags & O_NONBLOCK) &&15461546+ !(sk->sk_shutdown & RCV_SHUTDOWN) && !first_packet_length(sk))15471547+ mask &= ~(POLLIN | POLLRDNORM);1597154815981549 return mask;15991550
+2-2
net/mac80211/ibss.c
···544544 "%pM\n", bss->cbss.bssid, ifibss->bssid);545545#endif /* CONFIG_MAC80211_IBSS_DEBUG */546546547547- if (bss && memcmp(ifibss->bssid, bss->cbss.bssid, ETH_ALEN)) {547547+ if (bss && !memcmp(ifibss->bssid, bss->cbss.bssid, ETH_ALEN)) {548548 printk(KERN_DEBUG "%s: Selected IBSS BSSID %pM"549549 " based on configured SSID\n",550550 sdata->dev->name, bss->cbss.bssid);···829829 if (!sdata->u.ibss.ssid_len)830830 continue;831831 sdata->u.ibss.last_scan_completed = jiffies;832832- ieee80211_sta_find_ibss(sdata);832832+ mod_timer(&sdata->u.ibss.timer, 0);833833 }834834 mutex_unlock(&local->iflist_mtx);835835}
···17041704 if (!is_multicast_ether_addr(hdr.addr1)) {17051705 rcu_read_lock();17061706 sta = sta_info_get(local, hdr.addr1);17071707- if (sta)17071707+ /* XXX: in the future, use sdata to look up the sta */17081708+ if (sta && sta->sdata == sdata)17081709 sta_flags = get_sta_flags(sta);17091710 rcu_read_unlock();17101711 }
···208208209209# Bzip2 and LZMA do not include size in file... so we have to fake that;210210# append the size as a 32-bit littleendian number as gzip does.211211-size_append = echo -ne $(shell \211211+size_append = /bin/echo -ne $(shell \212212dec_size=0; \213213for F in $1; do \214214 fsize=$$(stat -c "%s" $$F); \
+2-2
scripts/checkkconfigsymbols.sh
···99# Doing this once at the beginning saves a lot of time, on a cache-hot tree.1010Kconfigs="`find . -name 'Kconfig' -o -name 'Kconfig*[^~]'`"11111212-echo -e "File list \tundefined symbol used"1212+/bin/echo -e "File list \tundefined symbol used"1313find $paths -name '*.[chS]' -o -name 'Makefile' -o -name 'Makefile*[^~]'| while read i1414do1515 # Output the bare Kconfig variable and the filename; the _MODULE part at···5454 # beyond the purpose of this script.5555 symb_bare=`echo $symb | sed -e 's/_MODULE//'`5656 if ! grep -q "\<$symb_bare\>" $Kconfigs; then5757- echo -e "$files: \t$symb"5757+ /bin/echo -e "$files: \t$symb"5858 fi5959done|sort
···1818# e) generate the rpm files, based on kernel.spec1919# - Use /. to avoid tar packing just the symlink20202121+# Note that the rpm-pkg target cannot be used with KBUILD_OUTPUT,2222+# but the binrpm-pkg target can; for some reason O= gets ignored.2323+2124# Do we have rpmbuild, otherwise fall back to the older rpm2225RPM := $(shell if [ -x "/usr/bin/rpmbuild" ]; then echo rpmbuild; \2326 else echo rpm; fi)···3633 $(CONFIG_SHELL) $(MKSPEC) > $@37343835rpm-pkg rpm: $(objtree)/kernel.spec FORCE3636+ @if test -n "$(KBUILD_OUTPUT)"; then \3737+ echo "Building source + binary RPM is not possible outside the"; \3838+ echo "kernel source tree. Don't set KBUILD_OUTPUT, or use the"; \3939+ echo "binrpm-pkg target instead."; \4040+ false; \4141+ fi3942 $(MAKE) clean4043 $(PREV) ln -sf $(srctree) $(KERNELPATH)4144 $(CONFIG_SHELL) $(srctree)/scripts/setlocalversion > $(objtree)/.scmversion···7061 set -e; \7162 mv -f $(objtree)/.tmp_version $(objtree)/.version72637373- $(RPM) $(RPMOPTS) --define "_builddir $(srctree)" --target \6464+ $(RPM) $(RPMOPTS) --define "_builddir $(objtree)" --target \7465 $(UTS_MACHINE) -bb $<75667667clean-files += $(objtree)/binkernel.spec
···52525353 /* only use basic functionality for now */54545555- ice->num_total_dacs = 2; /* only PSDOUT0 is connected */5555+ /* VT1616 6ch codec connected to PSDOUT0 using packed mode */5656+ ice->num_total_dacs = 6;5657 ice->num_total_adcs = 2;57585858- /* Chaintech AV-710 has another codecs, which need initialization */5959- /* initialize WM8728 codec */5959+ /* Chaintech AV-710 has another WM8728 codec connected to PSDOUT46060+ (shared with the SPDIF output). Mixer control for this codec6161+ is not yet supported. */6062 if (ice->eeprom.subvendor == VT1724_SUBDEVICE_AV710) {6163 for (i = 0; i < ARRAY_SIZE(wm_inits); i += 2)6264 wm_put(ice, wm_inits[i], wm_inits[i+1]);