
Merge branch 'mvneta-64bit'

Gregory CLEMENT says:

====================
Support Armada 37xx SoC (ARMv8 64-bits) in mvneta driver

The Armada 37xx is a new ARMv8 SoC from Marvell that uses the same
network controller as the older Armada 370/38x/XP SoCs. This series
adapts the driver so that it can be used on this new SoC. The main
changes are:

- 64-bit support: the first patches allow using the driver on a 64-bit
architecture.

- MBUS support: the mbus configuration of the Armada 37xx differs
from that of the older SoCs.

- per-CPU interrupts: the Armada 37xx does not support per-CPU
interrupts for the NETA IP, so the non-per-CPU behavior was added back.
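Regarding the 64-bit item above: the descriptor's buf_cookie field is a 32-bit word shared with the hardware, so on a 64-bit kernel it can no longer carry the buffer's virtual address. The series instead keeps the pointers in a software-only per-queue array indexed by descriptor position. The sketch below illustrates that pattern; the structure and function names (rx_desc_fill, rx_desc_to_buf) are simplified stand-ins for the driver's, not its real API:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the DMA descriptor: only a 32-bit physical
 * address field is visible to the hardware, so a 64-bit virtual address
 * cannot be stashed in it. */
struct rx_desc {
	unsigned int buf_phys_addr;	/* 32-bit field shared with the HW */
};

struct rx_queue {
	struct rx_desc *descs;		/* descriptor ring */
	void **buf_virt_addr;		/* side array replacing buf_cookie */
	int size;
};

/* Mirrors the reworked mvneta_rx_desc_fill(): the descriptor index is
 * recovered by pointer arithmetic and the virtual address is stored in
 * the software-only array under that index. */
static void rx_desc_fill(struct rx_queue *rxq, struct rx_desc *rx_desc,
			 unsigned int phys_addr, void *virt_addr)
{
	ptrdiff_t i = rx_desc - rxq->descs;

	rx_desc->buf_phys_addr = phys_addr;
	rxq->buf_virt_addr[i] = virt_addr;
}

/* On the RX path the same arithmetic retrieves the buffer, replacing the
 * old "data = (void *)rx_desc->buf_cookie" cast that only worked when a
 * pointer fit in 32 bits. */
static void *rx_desc_to_buf(struct rx_queue *rxq, struct rx_desc *rx_desc)
{
	return rxq->buf_virt_addr[rx_desc - rxq->descs];
}
```

In the driver itself the side array is allocated once per queue with devm_kmalloc (rxq->size * sizeof(void *)); the v2 -> v3 changelog below notes that this allocation was moved to the probe path to avoid a memory leak.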

The first patch optimizes the rx path for small frames in SWBM mode.
The second patch removes an unnecessary allocation for HWBM.
The first item is solved by patches 4 and 5.
The last two items are solved by patch 6.
Patch 7 adds the device tree support.
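Another part of the 64-bit rework is the RX headroom handling (introduced as rx_offset_correction, see the v2 -> v3 changelog below): the per-queue offset field in MVNETA_RXQ_CONFIG_REG can only encode up to 64 bytes, while NET_SKB_PAD can exceed that on 64-bit kernels. The excess is therefore folded into the buffer's DMA address and only the remainder is programmed into the register. A minimal sketch of that split, with skb_pad standing in for NET_SKB_PAD (the pad values used are illustrative, not tied to any particular kernel config):

```c
#include <assert.h>

/* Largest packet offset the MVNETA_RXQ_CONFIG_REG(q) register can encode,
 * matching the series' MVNETA_RX_PKT_OFFSET_CORRECTION constant. */
#define RX_PKT_OFFSET_CORRECTION 64u

/* Part of the headroom folded into the DMA address, mirroring
 * "max(0, NET_SKB_PAD - MVNETA_RX_PKT_OFFSET_CORRECTION)". */
static unsigned int rx_offset_correction(unsigned int skb_pad)
{
	return skb_pad > RX_PKT_OFFSET_CORRECTION ?
		skb_pad - RX_PKT_OFFSET_CORRECTION : 0;
}

/* What remains for the hardware offset register
 * ("NET_SKB_PAD - pp->rx_offset_correction"); never exceeds 64 bytes. */
static unsigned int rxq_hw_offset(unsigned int skb_pad)
{
	return skb_pad - rx_offset_correction(skb_pad);
}
```

The two parts always recombine into the full headroom, so the received data still lands NET_SKB_PAD bytes into the buffer; on 32-bit platforms the correction is simply 0.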

Besides Armada 37xx, this series has again been tested on Armada XP
and Armada 38x (with Hardware Buffer Management and with Software
Buffer Management).

This is the 6th version of the series:
- 1st version:
http://lists.infradead.org/pipermail/linux-arm-kernel/2016-November/469588.html

- 2nd version:
http://lists.infradead.org/pipermail/linux-arm-kernel/2016-November/470476.html

- 3rd version:
http://lists.infradead.org/pipermail/linux-arm-kernel/2016-November/470901.html

- 4th version:
http://lists.infradead.org/pipermail/linux-arm-kernel/2016-November/471039.html

- 5th version:
http://lists.infradead.org/pipermail/linux-arm-kernel/2016-November/471478.html

Changelog:
v5 -> v6:
- Added Tested-by from Marcin Wojtas on the series
- Added Reviewed-by from Jisheng Zhang on patch 3
- Fix eth1 phy mode for Armada 3720 DB board on patch 7

v4 -> v5:
- remove unnecessary cast in patch 3

v3 -> v4:
- Adding new patch: "net: mvneta: do not allocate buffer in rxq init
with HWBM"

- Simplify the HWBM case in patch 3 as suggested by Marcin

v2 -> v3:
- Adding patch 1 "Optimize rx path for small frame"

- Fix the kbuild error by moving the "phys_addr += pp->rx_offset_correction;"
line from patch 2 to patch 3 where rx_offset_correction is introduced.

- Move the memory allocation of the buf_virt_addr of the rxq to be
called by the probe function in order to avoid a memory leak.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>

+303 -100
+5 -2
Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt
···
- * Marvell Armada 370 / Armada XP Ethernet Controller (NETA)
+ * Marvell Armada 370 / Armada XP / Armada 3700 Ethernet Controller (NETA)
 
 Required properties:
-- compatible: "marvell,armada-370-neta" or "marvell,armada-xp-neta".
+- compatible: could be one of the followings
+	"marvell,armada-370-neta"
+	"marvell,armada-xp-neta"
+	"marvell,armada-3700-neta"
 - reg: address and length of the register set for the device.
 - interrupts: interrupt for the device
 - phy: See ethernet.txt file in the same directory.
+23
arch/arm64/boot/dts/marvell/armada-3720-db.dts
···
 &pcie0 {
 	status = "okay";
 };
+
+&mdio {
+	status = "okay";
+	phy0: ethernet-phy@0 {
+		reg = <0>;
+	};
+
+	phy1: ethernet-phy@1 {
+		reg = <1>;
+	};
+};
+
+&eth0 {
+	phy-mode = "rgmii-id";
+	phy = <&phy0>;
+	status = "okay";
+};
+
+&eth1 {
+	phy-mode = "sgmii";
+	phy = <&phy1>;
+	status = "okay";
+};
+23
arch/arm64/boot/dts/marvell/armada-37xx.dtsi
···
 		};
 	};
 
+	eth0: ethernet@30000 {
+		compatible = "marvell,armada-3700-neta";
+		reg = <0x30000 0x4000>;
+		interrupts = <GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>;
+		clocks = <&sb_periph_clk 8>;
+		status = "disabled";
+	};
+
+	mdio: mdio@32004 {
+		#address-cells = <1>;
+		#size-cells = <0>;
+		compatible = "marvell,orion-mdio";
+		reg = <0x32004 0x4>;
+	};
+
+	eth1: ethernet@40000 {
+		compatible = "marvell,armada-3700-neta";
+		reg = <0x40000 0x4000>;
+		interrupts = <GIC_SPI 45 IRQ_TYPE_LEVEL_HIGH>;
+		clocks = <&sb_periph_clk 7>;
+		status = "disabled";
+	};
+
 	usb3: usb@58000 {
 		compatible = "marvell,armada3700-xhci",
 			     "generic-xhci";
+6 -4
drivers/net/ethernet/marvell/Kconfig
···
 config MVNETA_BM_ENABLE
 	tristate "Marvell Armada 38x/XP network interface BM support"
 	depends on MVNETA
+	depends on !64BIT
 	---help---
 	  This driver supports auxiliary block of the network
 	  interface units in the Marvell ARMADA XP and ARMADA 38x SoC
···
 	  buffer management.
 
 config MVNETA
-	tristate "Marvell Armada 370/38x/XP network interface support"
-	depends on PLAT_ORION || COMPILE_TEST
+	tristate "Marvell Armada 370/38x/XP/37xx network interface support"
+	depends on ARCH_MVEBU || COMPILE_TEST
 	depends on HAS_DMA
-	depends on !64BIT
 	select MVMDIO
 	select FIXED_PHY
 	---help---
 	  This driver supports the network interface units in the
-	  Marvell ARMADA XP, ARMADA 370 and ARMADA 38x SoC family.
+	  Marvell ARMADA XP, ARMADA 370, ARMADA 38x and
+	  ARMADA 37xx SoC family.
 
 	  Note that this driver is distinct from the mv643xx_eth
 	  driver, which should be used for the older Marvell SoCs
···
 
 config MVNETA_BM
 	tristate
+	depends on !64BIT
 	default y if MVNETA=y && MVNETA_BM_ENABLE!=n
 	default MVNETA_BM_ENABLE
 	select HWBM
+246 -94
drivers/net/ethernet/marvell/mvneta.c
···
 /* descriptor aligned size */
 #define MVNETA_DESC_ALIGNED_SIZE	32
 
+/* Number of bytes to be taken into account by HW when putting incoming data
+ * to the buffers. It is needed in case NET_SKB_PAD exceeds maximum packet
+ * offset supported in MVNETA_RXQ_CONFIG_REG(q) registers.
+ */
+#define MVNETA_RX_PKT_OFFSET_CORRECTION	64
+
 #define MVNETA_RX_PKT_SIZE(mtu) \
 	ALIGN((mtu) + MVNETA_MH_SIZE + MVNETA_VLAN_TAG_LEN + \
 	      ETH_HLEN + ETH_FCS_LEN, \
···
 	spinlock_t lock;
 	bool is_stopped;
 
+	u32 cause_rx_tx;
+	struct napi_struct napi;
+
 	/* Core clock */
 	struct clk *clk;
 	/* AXI clock */
···
 	u64 ethtool_stats[ARRAY_SIZE(mvneta_statistics)];
 
 	u32 indir[MVNETA_RSS_LU_TABLE_SIZE];
+
+	/* Flags for special SoC configurations */
+	bool neta_armada3700;
+	u16 rx_offset_correction;
 };
 
 /* The mvneta_tx_desc and mvneta_rx_desc structures describe the
···
 
 	u32 pkts_coal;
 	u32 time_coal;
+
+	/* Virtual address of the RX buffer */
+	void **buf_virt_addr;
 
 	/* Virtual address of the RX DMA descriptors array */
 	struct mvneta_rx_desc *descs;
···
 	return 0;
 }
 
-/* Assign and initialize pools for port. In case of fail
- * buffer manager will remain disabled for current port.
- */
-static int mvneta_bm_port_init(struct platform_device *pdev,
-			       struct mvneta_port *pp)
+static int mvneta_bm_port_mbus_init(struct mvneta_port *pp)
 {
-	struct device_node *dn = pdev->dev.of_node;
-	u32 long_pool_id, short_pool_id, wsize;
+	u32 wsize;
 	u8 target, attr;
 	int err;
 
···
 	if (err < 0) {
 		netdev_info(pp->dev, "fail to configure mbus window to BM\n");
 		return err;
+	}
+	return 0;
+}
+
+/* Assign and initialize pools for port. In case of fail
+ * buffer manager will remain disabled for current port.
+ */
+static int mvneta_bm_port_init(struct platform_device *pdev,
+			       struct mvneta_port *pp)
+{
+	struct device_node *dn = pdev->dev.of_node;
+	u32 long_pool_id, short_pool_id;
+
+	if (!pp->neta_armada3700) {
+		int ret;
+
+		ret = mvneta_bm_port_mbus_init(pp);
+		if (ret)
+			return ret;
 	}
 
 	if (of_property_read_u32(dn, "bm,pool-long", &long_pool_id)) {
···
 	for_each_present_cpu(cpu) {
 		int rxq_map = 0, txq_map = 0;
 		int rxq, txq;
+		if (!pp->neta_armada3700) {
+			for (rxq = 0; rxq < rxq_number; rxq++)
+				if ((rxq % max_cpu) == cpu)
+					rxq_map |= MVNETA_CPU_RXQ_ACCESS(rxq);
 
-		for (rxq = 0; rxq < rxq_number; rxq++)
-			if ((rxq % max_cpu) == cpu)
-				rxq_map |= MVNETA_CPU_RXQ_ACCESS(rxq);
+			for (txq = 0; txq < txq_number; txq++)
+				if ((txq % max_cpu) == cpu)
+					txq_map |= MVNETA_CPU_TXQ_ACCESS(txq);
 
-		for (txq = 0; txq < txq_number; txq++)
-			if ((txq % max_cpu) == cpu)
-				txq_map |= MVNETA_CPU_TXQ_ACCESS(txq);
+			/* With only one TX queue we configure a special case
+			 * which will allow to get all the irq on a single
+			 * CPU
+			 */
+			if (txq_number == 1)
+				txq_map = (cpu == pp->rxq_def) ?
+					MVNETA_CPU_TXQ_ACCESS(1) : 0;
 
-		/* With only one TX queue we configure a special case
-		 * which will allow to get all the irq on a single
-		 * CPU
-		 */
-		if (txq_number == 1)
-			txq_map = (cpu == pp->rxq_def) ?
-				MVNETA_CPU_TXQ_ACCESS(1) : 0;
+		} else {
+			txq_map = MVNETA_CPU_TXQ_ACCESS_ALL_MASK;
+			rxq_map = MVNETA_CPU_RXQ_ACCESS_ALL_MASK;
+		}
 
 		mvreg_write(pp, MVNETA_CPU_MAP(cpu), rxq_map | txq_map);
 	}
···
 
 /* Handle rx descriptor fill by setting buf_cookie and buf_phys_addr */
 static void mvneta_rx_desc_fill(struct mvneta_rx_desc *rx_desc,
-				u32 phys_addr, u32 cookie)
+				u32 phys_addr, void *virt_addr,
+				struct mvneta_rx_queue *rxq)
 {
-	rx_desc->buf_cookie = cookie;
+	int i;
+
 	rx_desc->buf_phys_addr = phys_addr;
+	i = rx_desc - rxq->descs;
+	rxq->buf_virt_addr[i] = virt_addr;
 }
 
 /* Decrement sent descriptors counter */
···
 
 /* Refill processing for SW buffer management */
 static int mvneta_rx_refill(struct mvneta_port *pp,
-			    struct mvneta_rx_desc *rx_desc)
+			    struct mvneta_rx_desc *rx_desc,
+			    struct mvneta_rx_queue *rxq)
 
 {
 	dma_addr_t phys_addr;
···
 		return -ENOMEM;
 	}
 
-	mvneta_rx_desc_fill(rx_desc, phys_addr, (u32)data);
+	phys_addr += pp->rx_offset_correction;
+	mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq);
 	return 0;
 }
···
 
 	for (i = 0; i < rxq->size; i++) {
 		struct mvneta_rx_desc *rx_desc = rxq->descs + i;
-		void *data = (void *)rx_desc->buf_cookie;
+		void *data = rxq->buf_virt_addr[i];
 
 		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
 				 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
···
 		unsigned char *data;
 		dma_addr_t phys_addr;
 		u32 rx_status, frag_size;
-		int rx_bytes, err;
+		int rx_bytes, err, index;
 
 		rx_done++;
 		rx_status = rx_desc->status;
 		rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
-		data = (unsigned char *)rx_desc->buf_cookie;
+		index = rx_desc - rxq->descs;
+		data = rxq->buf_virt_addr[index];
 		phys_addr = rx_desc->buf_phys_addr;
 
 		if (!mvneta_rxq_desc_is_first_last(rx_status) ||
···
 				goto err_drop_frame;
 
 			dma_sync_single_range_for_cpu(dev->dev.parent,
-						      rx_desc->buf_phys_addr,
+						      phys_addr,
 						      MVNETA_MH_SIZE + NET_SKB_PAD,
 						      rx_bytes,
 						      DMA_FROM_DEVICE);
···
 		}
 
 		/* Refill processing */
-		err = mvneta_rx_refill(pp, rx_desc);
+		err = mvneta_rx_refill(pp, rx_desc, rxq);
 		if (err) {
 			netdev_err(dev, "Linux processing - Can't refill\n");
 			rxq->missed++;
···
 		rx_done++;
 		rx_status = rx_desc->status;
 		rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
-		data = (unsigned char *)rx_desc->buf_cookie;
+		data = (u8 *)(uintptr_t)rx_desc->buf_cookie;
 		phys_addr = rx_desc->buf_phys_addr;
 		pool_id = MVNETA_RX_GET_BM_POOL_ID(rx_desc);
 		bm_pool = &pp->bm_priv->bm_pools[pool_id];
···
 /* Interrupt handling - the callback for request_irq() */
 static irqreturn_t mvneta_isr(int irq, void *dev_id)
 {
+	struct mvneta_port *pp = (struct mvneta_port *)dev_id;
+
+	mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0);
+	napi_schedule(&pp->napi);
+
+	return IRQ_HANDLED;
+}
+
+/* Interrupt handling - the callback for request_percpu_irq() */
+static irqreturn_t mvneta_percpu_isr(int irq, void *dev_id)
+{
 	struct mvneta_pcpu_port *port = (struct mvneta_pcpu_port *)dev_id;
 
 	disable_percpu_irq(port->pp->dev->irq);
···
 	struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
 
 	if (!netif_running(pp->dev)) {
-		napi_complete(&port->napi);
+		napi_complete(napi);
 		return rx_done;
 	}
···
 	 */
 	rx_queue = fls(((cause_rx_tx >> 8) & 0xff));
 
-	cause_rx_tx |= port->cause_rx_tx;
+	cause_rx_tx |= pp->neta_armada3700 ? pp->cause_rx_tx :
+		port->cause_rx_tx;
 
 	if (rx_queue) {
 		rx_queue = rx_queue - 1;
···
 
 	if (budget > 0) {
 		cause_rx_tx = 0;
-		napi_complete(&port->napi);
-		enable_percpu_irq(pp->dev->irq, 0);
+		napi_complete(napi);
+
+		if (pp->neta_armada3700) {
+			unsigned long flags;
+
+			local_irq_save(flags);
+			mvreg_write(pp, MVNETA_INTR_NEW_MASK,
+				    MVNETA_RX_INTR_MASK(rxq_number) |
+				    MVNETA_TX_INTR_MASK(txq_number) |
+				    MVNETA_MISCINTR_INTR_MASK);
+			local_irq_restore(flags);
+		} else {
+			enable_percpu_irq(pp->dev->irq, 0);
+		}
 	}
 
-	port->cause_rx_tx = cause_rx_tx;
+	if (pp->neta_armada3700)
+		pp->cause_rx_tx = cause_rx_tx;
+	else
+		port->cause_rx_tx = cause_rx_tx;
+
 	return rx_done;
 }
···
 
 	for (i = 0; i < num; i++) {
 		memset(rxq->descs + i, 0, sizeof(struct mvneta_rx_desc));
-		if (mvneta_rx_refill(pp, rxq->descs + i) != 0) {
+		if (mvneta_rx_refill(pp, rxq->descs + i, rxq) != 0) {
 			netdev_err(pp->dev, "%s:rxq %d, %d of %d buffs filled\n",
 				   __func__, rxq->id, i, num);
 			break;
···
 	mvreg_write(pp, MVNETA_RXQ_SIZE_REG(rxq->id), rxq->size);
 
 	/* Set Offset */
-	mvneta_rxq_offset_set(pp, rxq, NET_SKB_PAD);
+	mvneta_rxq_offset_set(pp, rxq, NET_SKB_PAD - pp->rx_offset_correction);
 
 	/* Set coalescing pkts and time */
 	mvneta_rx_pkts_coal_set(pp, rxq, rxq->pkts_coal);
···
 		mvneta_rxq_buf_size_set(pp, rxq,
 					MVNETA_RX_BUF_SIZE(pp->pkt_size));
 		mvneta_rxq_bm_disable(pp, rxq);
+		mvneta_rxq_fill(pp, rxq, rxq->size);
 	} else {
 		mvneta_rxq_bm_enable(pp, rxq);
 		mvneta_rxq_long_pool_set(pp, rxq);
 		mvneta_rxq_short_pool_set(pp, rxq);
+		mvneta_rxq_non_occup_desc_add(pp, rxq, rxq->size);
 	}
-
-	mvneta_rxq_fill(pp, rxq, rxq->size);
 
 	return 0;
 }
···
 	/* start the Rx/Tx activity */
 	mvneta_port_enable(pp);
 
-	/* Enable polling on the port */
-	for_each_online_cpu(cpu) {
-		struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);
+	if (!pp->neta_armada3700) {
+		/* Enable polling on the port */
+		for_each_online_cpu(cpu) {
+			struct mvneta_pcpu_port *port =
+				per_cpu_ptr(pp->ports, cpu);
 
-		napi_enable(&port->napi);
+			napi_enable(&port->napi);
+		}
+	} else {
+		napi_enable(&pp->napi);
 	}
 
 	/* Unmask interrupts. It has to be done from each CPU */
···
 
 	phy_stop(ndev->phydev);
 
-	for_each_online_cpu(cpu) {
-		struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);
+	if (!pp->neta_armada3700) {
+		for_each_online_cpu(cpu) {
+			struct mvneta_pcpu_port *port =
+				per_cpu_ptr(pp->ports, cpu);
 
-		napi_disable(&port->napi);
+			napi_disable(&port->napi);
+		}
+	} else {
+		napi_disable(&pp->napi);
 	}
 
 	netif_carrier_off(pp->dev);
···
 		goto err_cleanup_rxqs;
 
 	/* Connect to port interrupt line */
-	ret = request_percpu_irq(pp->dev->irq, mvneta_isr,
-				 MVNETA_DRIVER_NAME, pp->ports);
+	if (pp->neta_armada3700)
+		ret = request_irq(pp->dev->irq, mvneta_isr, 0,
+				  dev->name, pp);
+	else
+		ret = request_percpu_irq(pp->dev->irq, mvneta_percpu_isr,
+					 dev->name, pp->ports);
 	if (ret) {
 		netdev_err(pp->dev, "cannot request irq %d\n", pp->dev->irq);
 		goto err_cleanup_txqs;
 	}
 
-	/* Enable per-CPU interrupt on all the CPU to handle our RX
-	 * queue interrupts
-	 */
-	on_each_cpu(mvneta_percpu_enable, pp, true);
+	if (!pp->neta_armada3700) {
+		/* Enable per-CPU interrupt on all the CPU to handle our RX
+		 * queue interrupts
+		 */
+		on_each_cpu(mvneta_percpu_enable, pp, true);
 
-	pp->is_stopped = false;
-	/* Register a CPU notifier to handle the case where our CPU
-	 * might be taken offline.
-	 */
-	ret = cpuhp_state_add_instance_nocalls(online_hpstate,
-					       &pp->node_online);
-	if (ret)
-		goto err_free_irq;
+		pp->is_stopped = false;
+		/* Register a CPU notifier to handle the case where our CPU
+		 * might be taken offline.
+		 */
+		ret = cpuhp_state_add_instance_nocalls(online_hpstate,
+						       &pp->node_online);
+		if (ret)
+			goto err_free_irq;
 
-	ret = cpuhp_state_add_instance_nocalls(CPUHP_NET_MVNETA_DEAD,
-					       &pp->node_dead);
-	if (ret)
-		goto err_free_online_hp;
+		ret = cpuhp_state_add_instance_nocalls(CPUHP_NET_MVNETA_DEAD,
+						       &pp->node_dead);
+		if (ret)
+			goto err_free_online_hp;
+	}
 
 	/* In default link is down */
 	netif_carrier_off(pp->dev);
···
 	return 0;
 
 err_free_dead_hp:
-	cpuhp_state_remove_instance_nocalls(CPUHP_NET_MVNETA_DEAD,
-					    &pp->node_dead);
+	if (!pp->neta_armada3700)
+		cpuhp_state_remove_instance_nocalls(CPUHP_NET_MVNETA_DEAD,
+						    &pp->node_dead);
err_free_online_hp:
-	cpuhp_state_remove_instance_nocalls(online_hpstate, &pp->node_online);
+	if (!pp->neta_armada3700)
+		cpuhp_state_remove_instance_nocalls(online_hpstate,
+						    &pp->node_online);
err_free_irq:
-	on_each_cpu(mvneta_percpu_disable, pp, true);
-	free_percpu_irq(pp->dev->irq, pp->ports);
+	if (pp->neta_armada3700) {
+		free_irq(pp->dev->irq, pp);
+	} else {
+		on_each_cpu(mvneta_percpu_disable, pp, true);
+		free_percpu_irq(pp->dev->irq, pp->ports);
+	}
err_cleanup_txqs:
 	mvneta_cleanup_txqs(pp);
err_cleanup_rxqs:
···
 {
 	struct mvneta_port *pp = netdev_priv(dev);
 
-	/* Inform that we are stopping so we don't want to setup the
-	 * driver for new CPUs in the notifiers. The code of the
-	 * notifier for CPU online is protected by the same spinlock,
-	 * so when we get the lock, the notifer work is done.
-	 */
-	spin_lock(&pp->lock);
-	pp->is_stopped = true;
-	spin_unlock(&pp->lock);
+	if (!pp->neta_armada3700) {
+		/* Inform that we are stopping so we don't want to setup the
+		 * driver for new CPUs in the notifiers. The code of the
+		 * notifier for CPU online is protected by the same spinlock,
+		 * so when we get the lock, the notifer work is done.
+		 */
+		spin_lock(&pp->lock);
+		pp->is_stopped = true;
+		spin_unlock(&pp->lock);
 
-	mvneta_stop_dev(pp);
-	mvneta_mdio_remove(pp);
+		mvneta_stop_dev(pp);
+		mvneta_mdio_remove(pp);
 
 	cpuhp_state_remove_instance_nocalls(online_hpstate, &pp->node_online);
 	cpuhp_state_remove_instance_nocalls(CPUHP_NET_MVNETA_DEAD,
 					    &pp->node_dead);
-	on_each_cpu(mvneta_percpu_disable, pp, true);
-	free_percpu_irq(dev->irq, pp->ports);
+		on_each_cpu(mvneta_percpu_disable, pp, true);
+		free_percpu_irq(dev->irq, pp->ports);
+	} else {
+		mvneta_stop_dev(pp);
+		mvneta_mdio_remove(pp);
+		free_irq(dev->irq, pp);
+	}
+
 	mvneta_cleanup_rxqs(pp);
 	mvneta_cleanup_txqs(pp);
···
 			  const u8 *key, const u8 hfunc)
 {
 	struct mvneta_port *pp = netdev_priv(dev);
+
+	/* Current code for Armada 3700 doesn't support RSS features yet */
+	if (pp->neta_armada3700)
+		return -EOPNOTSUPP;
+
 	/* We require at least one supported parameter to be changed
 	 * and no change in any of the unsupported parameters
 	 */
···
 			  u8 *hfunc)
 {
 	struct mvneta_port *pp = netdev_priv(dev);
+
+	/* Current code for Armada 3700 doesn't support RSS features yet */
+	if (pp->neta_armada3700)
+		return -EOPNOTSUPP;
 
 	if (hfunc)
 		*hfunc = ETH_RSS_HASH_TOP;
···
 		rxq->size = pp->rx_ring_size;
 		rxq->pkts_coal = MVNETA_RX_COAL_PKTS;
 		rxq->time_coal = MVNETA_RX_COAL_USEC;
+		rxq->buf_virt_addr = devm_kmalloc(pp->dev->dev.parent,
+						  rxq->size * sizeof(void *),
+						  GFP_KERNEL);
+		if (!rxq->buf_virt_addr)
+			return -ENOMEM;
 	}
 
 	return 0;
···
 	win_enable = 0x3f;
 	win_protect = 0;
 
-	for (i = 0; i < dram->num_cs; i++) {
-		const struct mbus_dram_window *cs = dram->cs + i;
-		mvreg_write(pp, MVNETA_WIN_BASE(i), (cs->base & 0xffff0000) |
-			    (cs->mbus_attr << 8) | dram->mbus_dram_target_id);
+	if (dram) {
+		for (i = 0; i < dram->num_cs; i++) {
+			const struct mbus_dram_window *cs = dram->cs + i;
 
-		mvreg_write(pp, MVNETA_WIN_SIZE(i),
-			    (cs->size - 1) & 0xffff0000);
+			mvreg_write(pp, MVNETA_WIN_BASE(i),
+				    (cs->base & 0xffff0000) |
+				    (cs->mbus_attr << 8) |
+				    dram->mbus_dram_target_id);
 
-		win_enable &= ~(1 << i);
-		win_protect |= 3 << (2 * i);
+			mvreg_write(pp, MVNETA_WIN_SIZE(i),
+				    (cs->size - 1) & 0xffff0000);
+
+			win_enable &= ~(1 << i);
+			win_protect |= 3 << (2 * i);
+		}
+	} else {
+		/* For Armada3700 open default 4GB Mbus window, leaving
+		 * arbitration of target/attribute to a different layer
+		 * of configuration.
+		 */
+		mvreg_write(pp, MVNETA_WIN_SIZE(0), 0xffff0000);
+		win_enable &= ~BIT(0);
+		win_protect = 3;
 	}
 
 	mvreg_write(pp, MVNETA_BASE_ADDR_ENABLE, win_enable);
···
 
 	pp->rxq_def = rxq_def;
 
+	/* Set RX packet offset correction for platforms, whose
+	 * NET_SKB_PAD, exceeds 64B. It should be 64B for 64-bit
+	 * platforms and 0B for 32-bit ones.
+	 */
+	pp->rx_offset_correction =
+		max(0, NET_SKB_PAD - MVNETA_RX_PKT_OFFSET_CORRECTION);
+
 	pp->indir[0] = rxq_def;
+
+	/* Get special SoC configurations */
+	if (of_device_is_compatible(dn, "marvell,armada-3700-neta"))
+		pp->neta_armada3700 = true;
 
 	pp->clk = devm_clk_get(&pdev->dev, "core");
 	if (IS_ERR(pp->clk))
···
 	pp->tx_csum_limit = tx_csum_limit;
 
 	dram_target_info = mv_mbus_dram_info();
-	if (dram_target_info)
+	/* Armada3700 requires setting default configuration of Mbus
+	 * windows, however without using filled mbus_dram_target_info
+	 * structure.
+	 */
+	if (dram_target_info || pp->neta_armada3700)
 		mvneta_conf_mbus_windows(pp, dram_target_info);
 
 	pp->tx_ring_size = MVNETA_MAX_TXD;
···
 		goto err_netdev;
 	}
 
-	for_each_present_cpu(cpu) {
-		struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);
+	/* Armada3700 network controller does not support per-cpu
+	 * operation, so only single NAPI should be initialized.
+	 */
+	if (pp->neta_armada3700) {
+		netif_napi_add(dev, &pp->napi, mvneta_poll, NAPI_POLL_WEIGHT);
+	} else {
+		for_each_present_cpu(cpu) {
+			struct mvneta_pcpu_port *port =
+				per_cpu_ptr(pp->ports, cpu);
 
-		netif_napi_add(dev, &port->napi, mvneta_poll, NAPI_POLL_WEIGHT);
-		port->pp = pp;
+			netif_napi_add(dev, &port->napi, mvneta_poll,
+				       NAPI_POLL_WEIGHT);
+			port->pp = pp;
+		}
 	}
 
 	dev->features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO;
···
 static const struct of_device_id mvneta_match[] = {
 	{ .compatible = "marvell,armada-370-neta" },
 	{ .compatible = "marvell,armada-xp-neta" },
+	{ .compatible = "marvell,armada-3700-neta" },
 	{ }
 };
 MODULE_DEVICE_TABLE(of, mvneta_match);