Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'mvpp2-tx-flow-control'

Stefan Chulski says:

====================
net: mvpp2: Add TX Flow Control support

Armada hardware has a pause generation mechanism in the GOP (MAC).
The GOP generates flow control frames based on an indication programmed in the Ports Control 0 Register; there is one bit per port.
However, asserting a PortX Pause bit in the Ports Control 0 Register sends only a single pause frame.
To complement this, the GOP has a mechanism that periodically sends pause control messages based on periodic counters.
This mechanism ensures that the pause remains effective as long as the appropriate PortX Pause bit stays asserted.

The problem is that the Packet Processor, which can actually drop packets due to lack of resources, is not connected to the GOP flow control generation mechanism.
To solve this, Armada has firmware running on a CM3 CPU dedicated to Flow Control support.
The firmware monitors Packet Processor resources and asserts XON/XOFF by writing to the Ports Control 0 Register.

The MSS shared SRAM memory is used to communicate between the CM3 firmware and the PP2 driver.
During init, the PP2 driver informs the firmware about the BM pools and RXQs in use, and about the congestion and depletion thresholds.

Pause frames are generated whenever congestion or depletion of a resource is detected.
The back pressure is stopped once the resource recovers to a sufficient level.
The congestion/depletion and sufficient-level thresholds thus implement a hysteresis that reduces the XON/XOFF toggle frequency.

Packet Processor v23 hardware introduces support for an RX FIFO fill level monitor.
The patch "add PPv23 version definition" distinguishes between v23 and v22 hardware.
The patch "add TX FC firmware check" verifies that the CM3 firmware supports Flow Control monitoring.

v12 --> v13
- Remove bm_underrun_protect module_param

v11 --> v12
- Improve warning message in "net: mvpp2: add TX FC firmware check" patch

v10 --> v11
- Improve "net: mvpp2: add CM3 SRAM memory map" comment
- Move condition check to 'net: mvpp2: always compare hw-version vs MVPP21' patch

v9 --> v10
- Add CM3 SRAM description to PPv2 documentation

v8 --> v9
- Replace generic pool allocation with devm_ioremap_resource

v7 --> v8
- Reorder "always compare hw-version vs MVPP21" and "add PPv23 version definition" commits
- Typo fixes
- Remove condition fix from "add RXQ flow control configurations"

v6 --> v7
- Reduce patch set from 18 to 15 patches
- Documentation change combined into a single patch
- RXQ and BM size change combined into a single patch
- Ring size change check moved into "add RXQ flow control configurations" commit

v5 --> v6
- No change

v4 --> v5
- Add missing Signed-off-by
- Fix warnings in patches 3 and 12
- Add revision requirement to warning message
- Move mss_spinlock into RXQ flow control configurations patch
- Improve FCA RXQ non occupied descriptor threshold commit message

v3 --> v4
- Remove RFC tag

v2 --> v3
- Remove inline functions
- Add PPv2.3 description into marvell-pp2.txt
- Improve mvpp2_interrupts_mask/unmask procedure
- Improve FC enable/disable procedure
- Add priv->sram_pool check
- Remove gen_pool_destroy call
- Reduce Flow Control timer to x100 faster

v1 --> v2
- Add memory requirements information
- Add EPROBE_DEFER if of_gen_pool_get returns NULL
- Move Flow control configuration to mvpp2_mac_link_up callback
====================

Signed-off-by: David S. Miller <davem@davemloft.net>

+599 -49
+4 -2
Documentation/devicetree/bindings/net/marvell-pp2.txt
··· 1 1 * Marvell Armada 375 Ethernet Controller (PPv2.1) 2 2 Marvell Armada 7K/8K Ethernet Controller (PPv2.2) 3 + Marvell CN913X Ethernet Controller (PPv2.3) 3 4 4 5 Required properties: 5 6 ··· 13 12 - common controller registers 14 13 - LMS registers 15 14 - one register area per Ethernet port 16 - For "marvell,armada-7k-pp2", must contain the following register 15 + For "marvell,armada-7k-pp2" used by 7K/8K and CN913X, must contain the following register 17 16 sets: 18 17 - packet processor registers 19 18 - networking interfaces registers 19 + - CM3 address space used for TX Flow Control 20 20 21 21 - clocks: pointers to the reference clocks for this device, consequently: 22 22 - main controller clock (for both armada-375-pp2 and armada-7k-pp2) ··· 83 81 84 82 cpm_ethernet: ethernet@0 { 85 83 compatible = "marvell,armada-7k-pp22"; 86 - reg = <0x0 0x100000>, <0x129000 0xb000>; 84 + reg = <0x0 0x100000>, <0x129000 0xb000>, <0x220000 0x800>; 87 85 clocks = <&cpm_syscon0 1 3>, <&cpm_syscon0 1 9>, 88 86 <&cpm_syscon0 1 5>, <&cpm_syscon0 1 6>, <&cpm_syscon0 1 18>; 89 87 clock-names = "pp_clk", "gop_clk", "mg_clk", "mg_core_clk", "axi_clk";
+1 -1
arch/arm64/boot/dts/marvell/armada-cp11x.dtsi
··· 59 59 60 60 CP11X_LABEL(ethernet): ethernet@0 { 61 61 compatible = "marvell,armada-7k-pp22"; 62 - reg = <0x0 0x100000>, <0x129000 0xb000>; 62 + reg = <0x0 0x100000>, <0x129000 0xb000>, <0x220000 0x800>; 63 63 clocks = <&CP11X_LABEL(clk) 1 3>, <&CP11X_LABEL(clk) 1 9>, 64 64 <&CP11X_LABEL(clk) 1 5>, <&CP11X_LABEL(clk) 1 6>, 65 65 <&CP11X_LABEL(clk) 1 18>;
+110 -14
drivers/net/ethernet/marvell/mvpp2/mvpp2.h
··· 60 60 /* Top Registers */ 61 61 #define MVPP2_MH_REG(port) (0x5040 + 4 * (port)) 62 62 #define MVPP2_DSA_EXTENDED BIT(5) 63 + #define MVPP2_VER_ID_REG 0x50b0 64 + #define MVPP2_VER_PP22 0x10 65 + #define MVPP2_VER_PP23 0x11 63 66 64 67 /* Parser Registers */ 65 68 #define MVPP2_PRS_INIT_LOOKUP_REG 0x1000 ··· 295 292 #define MVPP2_PON_CAUSE_TXP_OCCUP_DESC_ALL_MASK 0x3fc00000 296 293 #define MVPP2_PON_CAUSE_MISC_SUM_MASK BIT(31) 297 294 #define MVPP2_ISR_MISC_CAUSE_REG 0x55b0 295 + #define MVPP2_ISR_RX_ERR_CAUSE_REG(port) (0x5520 + 4 * (port)) 296 + #define MVPP2_ISR_RX_ERR_CAUSE_NONOCC_MASK 0x00ff 298 297 299 298 /* Buffer Manager registers */ 300 299 #define MVPP2_BM_POOL_BASE_REG(pool) (0x6000 + ((pool) * 4)) ··· 324 319 #define MVPP2_BM_HIGH_THRESH_MASK 0x7f0000 325 320 #define MVPP2_BM_HIGH_THRESH_VALUE(val) ((val) << \ 326 321 MVPP2_BM_HIGH_THRESH_OFFS) 322 + #define MVPP2_BM_BPPI_HIGH_THRESH 0x1E 323 + #define MVPP2_BM_BPPI_LOW_THRESH 0x1C 324 + #define MVPP23_BM_BPPI_HIGH_THRESH 0x34 325 + #define MVPP23_BM_BPPI_LOW_THRESH 0x28 327 326 #define MVPP2_BM_INTR_CAUSE_REG(pool) (0x6240 + ((pool) * 4)) 328 327 #define MVPP2_BM_RELEASED_DELAY_MASK BIT(0) 329 328 #define MVPP2_BM_ALLOC_FAILED_MASK BIT(1) ··· 355 346 /* Packet Processor per-port counters */ 356 347 #define MVPP2_OVERRUN_ETH_DROP 0x7000 357 348 #define MVPP2_CLS_ETH_DROP 0x7020 349 + 350 + #define MVPP22_BM_POOL_BASE_ADDR_HIGH_REG 0x6310 351 + #define MVPP22_BM_POOL_BASE_ADDR_HIGH_MASK 0xff 352 + #define MVPP23_BM_8POOL_MODE BIT(8) 358 353 359 354 /* Hit counters registers */ 360 355 #define MVPP2_CTRS_IDX 0x7040 ··· 482 469 #define MVPP22_GMAC_INT_SUM_MASK_LINK_STAT BIT(1) 483 470 #define MVPP22_GMAC_INT_SUM_MASK_PTP BIT(2) 484 471 485 - /* Per-port XGMAC registers. PPv2.2 only, only for GOP port 0, 472 + /* Per-port XGMAC registers. PPv2.2 and PPv2.3, only for GOP port 0, 486 473 * relative to port->base. 
487 474 */ 488 475 #define MVPP22_XLG_CTRL0_REG 0x100 ··· 519 506 #define MVPP22_XLG_CTRL4_MACMODSELECT_GMAC BIT(12) 520 507 #define MVPP22_XLG_CTRL4_EN_IDLE_CHECK BIT(14) 521 508 522 - /* SMI registers. PPv2.2 only, relative to priv->iface_base. */ 509 + /* SMI registers. PPv2.2 and PPv2.3, relative to priv->iface_base. */ 523 510 #define MVPP22_SMI_MISC_CFG_REG 0x1204 524 511 #define MVPP22_SMI_POLLING_EN BIT(10) 525 512 ··· 595 582 #define MVPP2_QUEUE_NEXT_DESC(q, index) \ 596 583 (((index) < (q)->last_desc) ? ((index) + 1) : 0) 597 584 598 - /* XPCS registers. PPv2.2 only */ 585 + /* XPCS registers.PPv2.2 and PPv2.3 */ 599 586 #define MVPP22_MPCS_BASE(port) (0x7000 + (port) * 0x1000) 600 587 #define MVPP22_MPCS_CTRL 0x14 601 588 #define MVPP22_MPCS_CTRL_FWD_ERR_CONN BIT(10) ··· 606 593 #define MVPP22_MPCS_CLK_RESET_DIV_RATIO(n) ((n) << 4) 607 594 #define MVPP22_MPCS_CLK_RESET_DIV_SET BIT(11) 608 595 609 - /* XPCS registers. PPv2.2 only */ 596 + /* FCA registers. PPv2.2 and PPv2.3 */ 597 + #define MVPP22_FCA_BASE(port) (0x7600 + (port) * 0x1000) 598 + #define MVPP22_FCA_REG_SIZE 16 599 + #define MVPP22_FCA_REG_MASK 0xFFFF 600 + #define MVPP22_FCA_CONTROL_REG 0x0 601 + #define MVPP22_FCA_ENABLE_PERIODIC BIT(11) 602 + #define MVPP22_PERIODIC_COUNTER_LSB_REG (0x110) 603 + #define MVPP22_PERIODIC_COUNTER_MSB_REG (0x114) 604 + 605 + /* XPCS registers. 
PPv2.2 and PPv2.3 */ 610 606 #define MVPP22_XPCS_BASE(port) (0x7400 + (port) * 0x1000) 611 607 #define MVPP22_XPCS_CFG0 0x0 612 608 #define MVPP22_XPCS_CFG0_RESET_DIS BIT(0) ··· 734 712 #define MVPP2_PORT_MAX_RXQ 32 735 713 736 714 /* Max number of Rx descriptors */ 737 - #define MVPP2_MAX_RXD_MAX 1024 738 - #define MVPP2_MAX_RXD_DFLT 128 715 + #define MVPP2_MAX_RXD_MAX 2048 716 + #define MVPP2_MAX_RXD_DFLT 1024 739 717 740 718 /* Max number of Tx descriptors */ 741 719 #define MVPP2_MAX_TXD_MAX 2048 ··· 769 747 #define MVPP2_TX_FIFO_THRESHOLD_MIN 256 /* Bytes */ 770 748 #define MVPP2_TX_FIFO_THRESHOLD(kb) \ 771 749 ((kb) * 1024 - MVPP2_TX_FIFO_THRESHOLD_MIN) 750 + 751 + /* RX FIFO threshold in 1KB granularity */ 752 + #define MVPP23_PORT0_FIFO_TRSH (9 * 1024) 753 + #define MVPP23_PORT1_FIFO_TRSH (4 * 1024) 754 + #define MVPP23_PORT2_FIFO_TRSH (2 * 1024) 755 + 756 + /* RX Flow Control Registers */ 757 + #define MVPP2_RX_FC_REG(port) (0x150 + 4 * (port)) 758 + #define MVPP2_RX_FC_EN BIT(24) 759 + #define MVPP2_RX_FC_TRSH_OFFS 16 760 + #define MVPP2_RX_FC_TRSH_MASK (0xFF << MVPP2_RX_FC_TRSH_OFFS) 761 + #define MVPP2_RX_FC_TRSH_UNIT 256 762 + 763 + /* MSS Flow control */ 764 + #define MSS_FC_COM_REG 0 765 + #define FLOW_CONTROL_ENABLE_BIT BIT(0) 766 + #define FLOW_CONTROL_UPDATE_COMMAND_BIT BIT(31) 767 + #define FC_QUANTA 0xFFFF 768 + #define FC_CLK_DIVIDER 100 769 + 770 + #define MSS_RXQ_TRESH_BASE 0x200 771 + #define MSS_RXQ_TRESH_OFFS 4 772 + #define MSS_RXQ_TRESH_REG(q, fq) (MSS_RXQ_TRESH_BASE + (((q) + (fq)) \ 773 + * MSS_RXQ_TRESH_OFFS)) 774 + 775 + #define MSS_BUF_POOL_BASE 0x40 776 + #define MSS_BUF_POOL_OFFS 4 777 + #define MSS_BUF_POOL_REG(id) (MSS_BUF_POOL_BASE \ 778 + + (id) * MSS_BUF_POOL_OFFS) 779 + 780 + #define MSS_BUF_POOL_STOP_MASK 0xFFF 781 + #define MSS_BUF_POOL_START_MASK (0xFFF << MSS_BUF_POOL_START_OFFS) 782 + #define MSS_BUF_POOL_START_OFFS 12 783 + #define MSS_BUF_POOL_PORTS_MASK (0xF << MSS_BUF_POOL_PORTS_OFFS) 784 + #define 
MSS_BUF_POOL_PORTS_OFFS 24 785 + #define MSS_BUF_POOL_PORT_OFFS(id) (0x1 << \ 786 + ((id) + MSS_BUF_POOL_PORTS_OFFS)) 787 + 788 + #define MSS_RXQ_TRESH_START_MASK 0xFFFF 789 + #define MSS_RXQ_TRESH_STOP_MASK (0xFFFF << MSS_RXQ_TRESH_STOP_OFFS) 790 + #define MSS_RXQ_TRESH_STOP_OFFS 16 791 + 792 + #define MSS_RXQ_ASS_BASE 0x80 793 + #define MSS_RXQ_ASS_OFFS 4 794 + #define MSS_RXQ_ASS_PER_REG 4 795 + #define MSS_RXQ_ASS_PER_OFFS 8 796 + #define MSS_RXQ_ASS_PORTID_OFFS 0 797 + #define MSS_RXQ_ASS_PORTID_MASK 0x3 798 + #define MSS_RXQ_ASS_HOSTID_OFFS 2 799 + #define MSS_RXQ_ASS_HOSTID_MASK 0x3F 800 + 801 + #define MSS_RXQ_ASS_Q_BASE(q, fq) ((((q) + (fq)) % MSS_RXQ_ASS_PER_REG) \ 802 + * MSS_RXQ_ASS_PER_OFFS) 803 + #define MSS_RXQ_ASS_PQ_BASE(q, fq) ((((q) + (fq)) / MSS_RXQ_ASS_PER_REG) \ 804 + * MSS_RXQ_ASS_OFFS) 805 + #define MSS_RXQ_ASS_REG(q, fq) (MSS_RXQ_ASS_BASE + MSS_RXQ_ASS_PQ_BASE(q, fq)) 806 + 807 + #define MSS_THRESHOLD_STOP 768 808 + #define MSS_THRESHOLD_START 1024 809 + #define MSS_FC_MAX_TIMEOUT 5000 772 810 773 811 /* RX buffer constants */ 774 812 #define MVPP2_SKB_SHINFO_SIZE \ ··· 927 845 #define MVPP22_PTP_TIMESTAMPQUEUESELECT BIT(18) 928 846 929 847 /* BM constants */ 930 - #define MVPP2_BM_JUMBO_BUF_NUM 512 931 - #define MVPP2_BM_LONG_BUF_NUM 1024 848 + #define MVPP2_BM_JUMBO_BUF_NUM 2048 849 + #define MVPP2_BM_LONG_BUF_NUM 2048 932 850 #define MVPP2_BM_SHORT_BUF_NUM 2048 933 851 #define MVPP2_BM_POOL_SIZE_MAX (16*1024 - MVPP2_BM_POOL_PTR_ALIGN/4) 934 852 #define MVPP2_BM_POOL_PTR_ALIGN 128 ··· 1007 925 /* Shared registers' base addresses */ 1008 926 void __iomem *lms_base; 1009 927 void __iomem *iface_base; 928 + void __iomem *cm3_base; 1010 929 1011 - /* On PPv2.2, each "software thread" can access the base 930 + /* On PPv2.2 and PPv2.3, each "software thread" can access the base 1012 931 * register through a separate address space, each 64 KB apart 1013 932 * from each other. Typically, such address spaces will be 1014 933 * used per CPU. 
1015 934 */ 1016 935 void __iomem *swth_base[MVPP2_MAX_THREADS]; 1017 936 1018 - /* On PPv2.2, some port control registers are located into the system 1019 - * controller space. These registers are accessible through a regmap. 937 + /* On PPv2.2 and PPv2.3, some port control registers are located into 938 + * the system controller space. These registers are accessible 939 + * through a regmap. 1020 940 */ 1021 941 struct regmap *sysctrl_base; 1022 942 ··· 1060 976 u32 tclk; 1061 977 1062 978 /* HW version */ 1063 - enum { MVPP21, MVPP22 } hw_version; 979 + enum { MVPP21, MVPP22, MVPP23 } hw_version; 1064 980 1065 981 /* Maximum number of RXQs per port */ 1066 982 unsigned int max_port_rxqs; ··· 1080 996 1081 997 /* page_pool allocator */ 1082 998 struct page_pool *page_pool[MVPP2_PORT_MAX_RXQ]; 999 + 1000 + /* Global TX Flow Control config */ 1001 + bool global_tx_fc; 1002 + 1003 + /* Spinlocks for CM3 shared memory configuration */ 1004 + spinlock_t mss_spinlock; 1083 1005 }; 1084 1006 1085 1007 struct mvpp2_pcpu_stats { ··· 1248 1158 bool rx_hwtstamp; 1249 1159 enum hwtstamp_tx_types tx_hwtstamp_type; 1250 1160 struct mvpp2_hwtstamp_queue tx_hwtstamp_queue[2]; 1161 + 1162 + /* Firmware TX flow control */ 1163 + bool tx_fc; 1251 1164 }; 1252 1165 1253 1166 /* The mvpp2_tx_desc and mvpp2_rx_desc structures describe the ··· 1313 1220 __le32 reserved8; 1314 1221 }; 1315 1222 1316 - /* HW TX descriptor for PPv2.2 */ 1223 + /* HW TX descriptor for PPv2.2 and PPv2.3 */ 1317 1224 struct mvpp22_tx_desc { 1318 1225 __le32 command; 1319 1226 u8 packet_offset; ··· 1325 1232 __le64 buf_cookie_misc; 1326 1233 }; 1327 1234 1328 - /* HW RX descriptor for PPv2.2 */ 1235 + /* HW RX descriptor for PPv2.2 and PPv2.3 */ 1329 1236 struct mvpp22_rx_desc { 1330 1237 __le32 status; 1331 1238 __le16 reserved1; ··· 1511 1418 1512 1419 void mvpp2_dbgfs_cleanup(struct mvpp2 *priv); 1513 1420 1421 + void mvpp23_rx_fifo_fc_en(struct mvpp2 *priv, int port, bool en); 1422 + 1514 1423 #ifdef 
CONFIG_MVPP2_PTP 1515 1424 int mvpp22_tai_probe(struct device *dev, struct mvpp2 *priv); 1516 1425 void mvpp22_tai_tstamp(struct mvpp2_tai *tai, u32 tstamp, ··· 1545 1450 { 1546 1451 return IS_ENABLED(CONFIG_MVPP2_PTP) && port->rx_hwtstamp; 1547 1452 } 1453 + 1548 1454 #endif
+484 -32
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
··· 91 91 return cpu % priv->nthreads; 92 92 } 93 93 94 + static void mvpp2_cm3_write(struct mvpp2 *priv, u32 offset, u32 data) 95 + { 96 + writel(data, priv->cm3_base + offset); 97 + } 98 + 99 + static u32 mvpp2_cm3_read(struct mvpp2 *priv, u32 offset) 100 + { 101 + return readl(priv->cm3_base + offset); 102 + } 103 + 94 104 static struct page_pool * 95 105 mvpp2_create_page_pool(struct device *dev, int num, int len, 96 106 enum dma_data_direction dma_dir) ··· 329 319 { 330 320 unsigned int nrxqs; 331 321 332 - if (priv->hw_version == MVPP22 && queue_mode == MVPP2_QDIST_SINGLE_MODE) 322 + if (priv->hw_version != MVPP21 && queue_mode == MVPP2_QDIST_SINGLE_MODE) 333 323 return 1; 334 324 335 325 /* According to the PPv2.2 datasheet and our experiments on ··· 394 384 if (!IS_ALIGNED(size, 16)) 395 385 return -EINVAL; 396 386 397 - /* PPv2.1 needs 8 bytes per buffer pointer, PPv2.2 needs 16 387 + /* PPv2.1 needs 8 bytes per buffer pointer, PPv2.2 and PPv2.3 needs 16 398 388 * bytes per buffer pointer 399 389 */ 400 390 if (priv->hw_version == MVPP21) ··· 423 413 424 414 val = mvpp2_read(priv, MVPP2_BM_POOL_CTRL_REG(bm_pool->id)); 425 415 val |= MVPP2_BM_START_MASK; 416 + 417 + val &= ~MVPP2_BM_LOW_THRESH_MASK; 418 + val &= ~MVPP2_BM_HIGH_THRESH_MASK; 419 + 420 + /* Set 8 Pools BPPI threshold for MVPP23 */ 421 + if (priv->hw_version == MVPP23) { 422 + val |= MVPP2_BM_LOW_THRESH_VALUE(MVPP23_BM_BPPI_LOW_THRESH); 423 + val |= MVPP2_BM_HIGH_THRESH_VALUE(MVPP23_BM_BPPI_HIGH_THRESH); 424 + } else { 425 + val |= MVPP2_BM_LOW_THRESH_VALUE(MVPP2_BM_BPPI_LOW_THRESH); 426 + val |= MVPP2_BM_HIGH_THRESH_VALUE(MVPP2_BM_BPPI_HIGH_THRESH); 427 + } 428 + 426 429 mvpp2_write(priv, MVPP2_BM_POOL_CTRL_REG(bm_pool->id), val); 427 430 428 431 bm_pool->size = size; ··· 469 446 MVPP2_BM_PHY_ALLOC_REG(bm_pool->id)); 470 447 *phys_addr = mvpp2_thread_read(priv, thread, MVPP2_BM_VIRT_ALLOC_REG); 471 448 472 - if (priv->hw_version == MVPP22) { 449 + if (priv->hw_version != MVPP21) { 473 450 u32 
val; 474 451 u32 dma_addr_highbits, phys_addr_highbits; 475 452 ··· 604 581 return err; 605 582 } 606 583 584 + /* Routine enable PPv23 8 pool mode */ 585 + static void mvpp23_bm_set_8pool_mode(struct mvpp2 *priv) 586 + { 587 + int val; 588 + 589 + val = mvpp2_read(priv, MVPP22_BM_POOL_BASE_ADDR_HIGH_REG); 590 + val |= MVPP23_BM_8POOL_MODE; 591 + mvpp2_write(priv, MVPP22_BM_POOL_BASE_ADDR_HIGH_REG, val); 592 + } 593 + 607 594 static int mvpp2_bm_init(struct device *dev, struct mvpp2 *priv) 608 595 { 609 596 enum dma_data_direction dma_dir = DMA_FROM_DEVICE; ··· 666 633 sizeof(*priv->bm_pools), GFP_KERNEL); 667 634 if (!priv->bm_pools) 668 635 return -ENOMEM; 636 + 637 + if (priv->hw_version == MVPP23) 638 + mvpp23_bm_set_8pool_mode(priv); 669 639 670 640 err = mvpp2_bm_pools_init(dev, priv); 671 641 if (err < 0) ··· 767 731 return data; 768 732 } 769 733 734 + /* Routine enable flow control for RXQs condition */ 735 + static void mvpp2_rxq_enable_fc(struct mvpp2_port *port) 736 + { 737 + int val, cm3_state, host_id, q; 738 + int fq = port->first_rxq; 739 + unsigned long flags; 740 + 741 + spin_lock_irqsave(&port->priv->mss_spinlock, flags); 742 + 743 + /* Remove Flow control enable bit to prevent race between FW and Kernel 744 + * If Flow control was enabled, it would be re-enabled. 
745 + */ 746 + val = mvpp2_cm3_read(port->priv, MSS_FC_COM_REG); 747 + cm3_state = (val & FLOW_CONTROL_ENABLE_BIT); 748 + val &= ~FLOW_CONTROL_ENABLE_BIT; 749 + mvpp2_cm3_write(port->priv, MSS_FC_COM_REG, val); 750 + 751 + /* Set same Flow control for all RXQs */ 752 + for (q = 0; q < port->nrxqs; q++) { 753 + /* Set stop and start Flow control RXQ thresholds */ 754 + val = MSS_THRESHOLD_START; 755 + val |= (MSS_THRESHOLD_STOP << MSS_RXQ_TRESH_STOP_OFFS); 756 + mvpp2_cm3_write(port->priv, MSS_RXQ_TRESH_REG(q, fq), val); 757 + 758 + val = mvpp2_cm3_read(port->priv, MSS_RXQ_ASS_REG(q, fq)); 759 + /* Set RXQ port ID */ 760 + val &= ~(MSS_RXQ_ASS_PORTID_MASK << MSS_RXQ_ASS_Q_BASE(q, fq)); 761 + val |= (port->id << MSS_RXQ_ASS_Q_BASE(q, fq)); 762 + val &= ~(MSS_RXQ_ASS_HOSTID_MASK << (MSS_RXQ_ASS_Q_BASE(q, fq) 763 + + MSS_RXQ_ASS_HOSTID_OFFS)); 764 + 765 + /* Calculate RXQ host ID: 766 + * In Single queue mode: Host ID equal to Host ID used for 767 + * shared RX interrupt 768 + * In Multi queue mode: Host ID equal to number of 769 + * RXQ ID / number of CoS queues 770 + * In Single resource mode: Host ID always equal to 0 771 + */ 772 + if (queue_mode == MVPP2_QDIST_SINGLE_MODE) 773 + host_id = port->nqvecs; 774 + else if (queue_mode == MVPP2_QDIST_MULTI_MODE) 775 + host_id = q; 776 + else 777 + host_id = 0; 778 + 779 + /* Set RXQ host ID */ 780 + val |= (host_id << (MSS_RXQ_ASS_Q_BASE(q, fq) 781 + + MSS_RXQ_ASS_HOSTID_OFFS)); 782 + 783 + mvpp2_cm3_write(port->priv, MSS_RXQ_ASS_REG(q, fq), val); 784 + } 785 + 786 + /* Notify Firmware that Flow control config space ready for update */ 787 + val = mvpp2_cm3_read(port->priv, MSS_FC_COM_REG); 788 + val |= FLOW_CONTROL_UPDATE_COMMAND_BIT; 789 + val |= cm3_state; 790 + mvpp2_cm3_write(port->priv, MSS_FC_COM_REG, val); 791 + 792 + spin_unlock_irqrestore(&port->priv->mss_spinlock, flags); 793 + } 794 + 795 + /* Routine disable flow control for RXQs condition */ 796 + static void mvpp2_rxq_disable_fc(struct mvpp2_port *port) 797 
+ { 798 + int val, cm3_state, q; 799 + unsigned long flags; 800 + int fq = port->first_rxq; 801 + 802 + spin_lock_irqsave(&port->priv->mss_spinlock, flags); 803 + 804 + /* Remove Flow control enable bit to prevent race between FW and Kernel 805 + * If Flow control was enabled, it would be re-enabled. 806 + */ 807 + val = mvpp2_cm3_read(port->priv, MSS_FC_COM_REG); 808 + cm3_state = (val & FLOW_CONTROL_ENABLE_BIT); 809 + val &= ~FLOW_CONTROL_ENABLE_BIT; 810 + mvpp2_cm3_write(port->priv, MSS_FC_COM_REG, val); 811 + 812 + /* Disable Flow control for all RXQs */ 813 + for (q = 0; q < port->nrxqs; q++) { 814 + /* Set threshold 0 to disable Flow control */ 815 + val = 0; 816 + val |= (0 << MSS_RXQ_TRESH_STOP_OFFS); 817 + mvpp2_cm3_write(port->priv, MSS_RXQ_TRESH_REG(q, fq), val); 818 + 819 + val = mvpp2_cm3_read(port->priv, MSS_RXQ_ASS_REG(q, fq)); 820 + 821 + val &= ~(MSS_RXQ_ASS_PORTID_MASK << MSS_RXQ_ASS_Q_BASE(q, fq)); 822 + 823 + val &= ~(MSS_RXQ_ASS_HOSTID_MASK << (MSS_RXQ_ASS_Q_BASE(q, fq) 824 + + MSS_RXQ_ASS_HOSTID_OFFS)); 825 + 826 + mvpp2_cm3_write(port->priv, MSS_RXQ_ASS_REG(q, fq), val); 827 + } 828 + 829 + /* Notify Firmware that Flow control config space ready for update */ 830 + val = mvpp2_cm3_read(port->priv, MSS_FC_COM_REG); 831 + val |= FLOW_CONTROL_UPDATE_COMMAND_BIT; 832 + val |= cm3_state; 833 + mvpp2_cm3_write(port->priv, MSS_FC_COM_REG, val); 834 + 835 + spin_unlock_irqrestore(&port->priv->mss_spinlock, flags); 836 + } 837 + 838 + /* Routine disable/enable flow control for BM pool condition */ 839 + static void mvpp2_bm_pool_update_fc(struct mvpp2_port *port, 840 + struct mvpp2_bm_pool *pool, 841 + bool en) 842 + { 843 + int val, cm3_state; 844 + unsigned long flags; 845 + 846 + spin_lock_irqsave(&port->priv->mss_spinlock, flags); 847 + 848 + /* Remove Flow control enable bit to prevent race between FW and Kernel 849 + * If Flow control were enabled, it would be re-enabled. 
850 + */ 851 + val = mvpp2_cm3_read(port->priv, MSS_FC_COM_REG); 852 + cm3_state = (val & FLOW_CONTROL_ENABLE_BIT); 853 + val &= ~FLOW_CONTROL_ENABLE_BIT; 854 + mvpp2_cm3_write(port->priv, MSS_FC_COM_REG, val); 855 + 856 + /* Check if BM pool should be enabled/disable */ 857 + if (en) { 858 + /* Set BM pool start and stop thresholds per port */ 859 + val = mvpp2_cm3_read(port->priv, MSS_BUF_POOL_REG(pool->id)); 860 + val |= MSS_BUF_POOL_PORT_OFFS(port->id); 861 + val &= ~MSS_BUF_POOL_START_MASK; 862 + val |= (MSS_THRESHOLD_START << MSS_BUF_POOL_START_OFFS); 863 + val &= ~MSS_BUF_POOL_STOP_MASK; 864 + val |= MSS_THRESHOLD_STOP; 865 + mvpp2_cm3_write(port->priv, MSS_BUF_POOL_REG(pool->id), val); 866 + } else { 867 + /* Remove BM pool from the port */ 868 + val = mvpp2_cm3_read(port->priv, MSS_BUF_POOL_REG(pool->id)); 869 + val &= ~MSS_BUF_POOL_PORT_OFFS(port->id); 870 + 871 + /* Zero BM pool start and stop thresholds to disable pool 872 + * flow control if pool empty (not used by any port) 873 + */ 874 + if (!pool->buf_num) { 875 + val &= ~MSS_BUF_POOL_START_MASK; 876 + val &= ~MSS_BUF_POOL_STOP_MASK; 877 + } 878 + 879 + mvpp2_cm3_write(port->priv, MSS_BUF_POOL_REG(pool->id), val); 880 + } 881 + 882 + /* Notify Firmware that Flow control config space ready for update */ 883 + val = mvpp2_cm3_read(port->priv, MSS_FC_COM_REG); 884 + val |= FLOW_CONTROL_UPDATE_COMMAND_BIT; 885 + val |= cm3_state; 886 + mvpp2_cm3_write(port->priv, MSS_FC_COM_REG, val); 887 + 888 + spin_unlock_irqrestore(&port->priv->mss_spinlock, flags); 889 + } 890 + 891 + static int mvpp2_enable_global_fc(struct mvpp2 *priv) 892 + { 893 + int val, timeout = 0; 894 + 895 + /* Enable global flow control. In this stage global 896 + * flow control enabled, but still disabled per port. 
897 + */ 898 + val = mvpp2_cm3_read(priv, MSS_FC_COM_REG); 899 + val |= FLOW_CONTROL_ENABLE_BIT; 900 + mvpp2_cm3_write(priv, MSS_FC_COM_REG, val); 901 + 902 + /* Check if Firmware running and disable FC if not*/ 903 + val |= FLOW_CONTROL_UPDATE_COMMAND_BIT; 904 + mvpp2_cm3_write(priv, MSS_FC_COM_REG, val); 905 + 906 + while (timeout < MSS_FC_MAX_TIMEOUT) { 907 + val = mvpp2_cm3_read(priv, MSS_FC_COM_REG); 908 + 909 + if (!(val & FLOW_CONTROL_UPDATE_COMMAND_BIT)) 910 + return 0; 911 + usleep_range(10, 20); 912 + timeout++; 913 + } 914 + 915 + priv->global_tx_fc = false; 916 + return -EOPNOTSUPP; 917 + } 918 + 770 919 /* Release buffer to BM */ 771 920 static inline void mvpp2_bm_pool_put(struct mvpp2_port *port, int pool, 772 921 dma_addr_t buf_dma_addr, ··· 963 742 if (test_bit(thread, &port->priv->lock_map)) 964 743 spin_lock_irqsave(&port->bm_lock[thread], flags); 965 744 966 - if (port->priv->hw_version == MVPP22) { 745 + if (port->priv->hw_version != MVPP21) { 967 746 u32 val = 0; 968 747 969 748 if (sizeof(dma_addr_t) == 8) ··· 1282 1061 new_long_pool = MVPP2_BM_LONG; 1283 1062 1284 1063 if (new_long_pool != port->pool_long->id) { 1064 + if (port->tx_fc) { 1065 + if (pkt_size > MVPP2_BM_LONG_PKT_SIZE) 1066 + mvpp2_bm_pool_update_fc(port, 1067 + port->pool_short, 1068 + false); 1069 + else 1070 + mvpp2_bm_pool_update_fc(port, port->pool_long, 1071 + false); 1072 + } 1073 + 1285 1074 /* Remove port from old short & long pool */ 1286 1075 port->pool_long = mvpp2_bm_pool_use(port, port->pool_long->id, 1287 1076 port->pool_long->pkt_size); ··· 1309 1078 mvpp2_swf_bm_pool_init(port); 1310 1079 1311 1080 mvpp2_set_hw_csum(port, new_long_pool); 1081 + 1082 + if (port->tx_fc) { 1083 + if (pkt_size > MVPP2_BM_LONG_PKT_SIZE) 1084 + mvpp2_bm_pool_update_fc(port, port->pool_long, 1085 + true); 1086 + else 1087 + mvpp2_bm_pool_update_fc(port, port->pool_short, 1088 + true); 1089 + } 1090 + 1091 + /* Update L4 checksum when jumbo enable/disable on port */ 1092 + if 
(new_long_pool == MVPP2_BM_JUMBO && port->id != 0) { 1093 + dev->features &= ~(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM); 1094 + dev->hw_features &= ~(NETIF_F_IP_CSUM | 1095 + NETIF_F_IPV6_CSUM); 1096 + } else { 1097 + dev->features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM; 1098 + dev->hw_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM; 1099 + } 1312 1100 } 1313 1101 1314 1102 out_set: ··· 1383 1133 static void mvpp2_interrupts_mask(void *arg) 1384 1134 { 1385 1135 struct mvpp2_port *port = arg; 1136 + int cpu = smp_processor_id(); 1137 + u32 thread; 1386 1138 1387 1139 /* If the thread isn't used, don't do anything */ 1388 - if (smp_processor_id() > port->priv->nthreads) 1140 + if (cpu > port->priv->nthreads) 1389 1141 return; 1390 1142 1391 - mvpp2_thread_write(port->priv, 1392 - mvpp2_cpu_to_thread(port->priv, smp_processor_id()), 1143 + thread = mvpp2_cpu_to_thread(port->priv, cpu); 1144 + 1145 + mvpp2_thread_write(port->priv, thread, 1393 1146 MVPP2_ISR_RX_TX_MASK_REG(port->id), 0); 1147 + mvpp2_thread_write(port->priv, thread, 1148 + MVPP2_ISR_RX_ERR_CAUSE_REG(port->id), 0); 1394 1149 } 1395 1150 1396 1151 /* Unmask the current thread's Rx/Tx interrupts. 
··· 1405 1150 static void mvpp2_interrupts_unmask(void *arg) 1406 1151 { 1407 1152 struct mvpp2_port *port = arg; 1408 - u32 val; 1153 + int cpu = smp_processor_id(); 1154 + u32 val, thread; 1409 1155 1410 1156 /* If the thread isn't used, don't do anything */ 1411 - if (smp_processor_id() > port->priv->nthreads) 1157 + if (cpu > port->priv->nthreads) 1412 1158 return; 1159 + 1160 + thread = mvpp2_cpu_to_thread(port->priv, cpu); 1413 1161 1414 1162 val = MVPP2_CAUSE_MISC_SUM_MASK | 1415 1163 MVPP2_CAUSE_RXQ_OCCUP_DESC_ALL_MASK(port->priv->hw_version); 1416 1164 if (port->has_tx_irqs) 1417 1165 val |= MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_MASK; 1418 1166 1419 - mvpp2_thread_write(port->priv, 1420 - mvpp2_cpu_to_thread(port->priv, smp_processor_id()), 1167 + mvpp2_thread_write(port->priv, thread, 1421 1168 MVPP2_ISR_RX_TX_MASK_REG(port->id), val); 1169 + mvpp2_thread_write(port->priv, thread, 1170 + MVPP2_ISR_RX_ERR_CAUSE_REG(port->id), 1171 + MVPP2_ISR_RX_ERR_CAUSE_NONOCC_MASK); 1422 1172 } 1423 1173 1424 1174 static void ··· 1432 1172 u32 val; 1433 1173 int i; 1434 1174 1435 - if (port->priv->hw_version != MVPP22) 1175 + if (port->priv->hw_version == MVPP21) 1436 1176 return; 1437 1177 1438 1178 if (mask) ··· 1448 1188 1449 1189 mvpp2_thread_write(port->priv, v->sw_thread_id, 1450 1190 MVPP2_ISR_RX_TX_MASK_REG(port->id), val); 1191 + mvpp2_thread_write(port->priv, v->sw_thread_id, 1192 + MVPP2_ISR_RX_ERR_CAUSE_REG(port->id), 1193 + MVPP2_ISR_RX_ERR_CAUSE_NONOCC_MASK); 1451 1194 } 1452 1195 } 1453 1196 ··· 1462 1199 1463 1200 static bool mvpp2_port_supports_rgmii(struct mvpp2_port *port) 1464 1201 { 1465 - return !(port->priv->hw_version == MVPP22 && port->gop_id == 0); 1202 + return !(port->priv->hw_version != MVPP21 && port->gop_id == 0); 1466 1203 } 1467 1204 1468 1205 /* Port configuration routines */ ··· 1543 1280 writel(val, mpcs + MVPP22_MPCS_CLK_RESET); 1544 1281 } 1545 1282 1283 + static void mvpp22_gop_fca_enable_periodic(struct mvpp2_port *port, bool en) 1284 + 
{ 1285 + struct mvpp2 *priv = port->priv; 1286 + void __iomem *fca = priv->iface_base + MVPP22_FCA_BASE(port->gop_id); 1287 + u32 val; 1288 + 1289 + val = readl(fca + MVPP22_FCA_CONTROL_REG); 1290 + val &= ~MVPP22_FCA_ENABLE_PERIODIC; 1291 + if (en) 1292 + val |= MVPP22_FCA_ENABLE_PERIODIC; 1293 + writel(val, fca + MVPP22_FCA_CONTROL_REG); 1294 + } 1295 + 1296 + static void mvpp22_gop_fca_set_timer(struct mvpp2_port *port, u32 timer) 1297 + { 1298 + struct mvpp2 *priv = port->priv; 1299 + void __iomem *fca = priv->iface_base + MVPP22_FCA_BASE(port->gop_id); 1300 + u32 lsb, msb; 1301 + 1302 + lsb = timer & MVPP22_FCA_REG_MASK; 1303 + msb = timer >> MVPP22_FCA_REG_SIZE; 1304 + 1305 + writel(lsb, fca + MVPP22_PERIODIC_COUNTER_LSB_REG); 1306 + writel(msb, fca + MVPP22_PERIODIC_COUNTER_MSB_REG); 1307 + } 1308 + 1309 + /* Set Flow Control timer x100 faster than pause quanta to ensure that link 1310 + * partner won't send traffic if port is in XOFF mode. 1311 + */ 1312 + static void mvpp22_gop_fca_set_periodic_timer(struct mvpp2_port *port) 1313 + { 1314 + u32 timer; 1315 + 1316 + timer = (port->priv->tclk / (USEC_PER_SEC * FC_CLK_DIVIDER)) 1317 + * FC_QUANTA; 1318 + 1319 + mvpp22_gop_fca_enable_periodic(port, false); 1320 + 1321 + mvpp22_gop_fca_set_timer(port, timer); 1322 + 1323 + mvpp22_gop_fca_enable_periodic(port, true); 1324 + } 1325 + 1546 1326 static int mvpp22_gop_init(struct mvpp2_port *port) 1547 1327 { 1548 1328 struct mvpp2 *priv = port->priv; ··· 1629 1323 regmap_read(priv->sysctrl_base, GENCONF_SOFT_RESET1, &val); 1630 1324 val |= GENCONF_SOFT_RESET1_GOP; 1631 1325 regmap_write(priv->sysctrl_base, GENCONF_SOFT_RESET1, val); 1326 + 1327 + mvpp22_gop_fca_set_periodic_timer(port); 1632 1328 1633 1329 unsupported_conf: 1634 1330 return 0; ··· 2125 1817 MVPP2_GMAC_PORT_RESET_MASK; 2126 1818 writel(val, port->base + MVPP2_GMAC_CTRL_2_REG); 2127 1819 2128 - if (port->priv->hw_version == MVPP22 && port->gop_id == 0) { 1820 + if (port->priv->hw_version != MVPP21 && 
port->gop_id == 0) { 2129 1821 val = readl(port->base + MVPP22_XLG_CTRL0_REG) & 2130 1822 ~MVPP22_XLG_CTRL0_MAC_RESET_DIS; 2131 1823 writel(val, port->base + MVPP22_XLG_CTRL0_REG); ··· 2138 1830 void __iomem *mpcs, *xpcs; 2139 1831 u32 val; 2140 1832 2141 - if (port->priv->hw_version != MVPP22 || port->gop_id != 0) 1833 + if (port->priv->hw_version == MVPP21 || port->gop_id != 0) 2142 1834 return; 2143 1835 2144 1836 mpcs = priv->iface_base + MVPP22_MPCS_BASE(port->gop_id); ··· 2159 1851 void __iomem *mpcs, *xpcs; 2160 1852 u32 val; 2161 1853 2162 - if (port->priv->hw_version != MVPP22 || port->gop_id != 0) 1854 + if (port->priv->hw_version == MVPP21 || port->gop_id != 0) 2163 1855 return; 2164 1856 2165 1857 mpcs = priv->iface_base + MVPP22_MPCS_BASE(port->gop_id); ··· 2656 2348 } 2657 2349 } 2658 2350 2351 + /* Set the number of non-occupied descriptors threshold */ 2352 + static void mvpp2_set_rxq_free_tresh(struct mvpp2_port *port, 2353 + struct mvpp2_rx_queue *rxq) 2354 + { 2355 + u32 val; 2356 + 2357 + mvpp2_write(port->priv, MVPP2_RXQ_NUM_REG, rxq->id); 2358 + 2359 + val = mvpp2_read(port->priv, MVPP2_RXQ_THRESH_REG); 2360 + val &= ~MVPP2_RXQ_NON_OCCUPIED_MASK; 2361 + val |= MSS_THRESHOLD_STOP << MVPP2_RXQ_NON_OCCUPIED_OFFSET; 2362 + mvpp2_write(port->priv, MVPP2_RXQ_THRESH_REG, val); 2363 + } 2364 + 2659 2365 /* Set the number of packets that will be received before Rx interrupt 2660 2366 * will be generated by HW. 
  */
···
 	/* Set coalescing pkts and time */
 	mvpp2_rx_pkts_coal_set(port, rxq);
 	mvpp2_rx_time_coal_set(port, rxq);
+
+	/* Set the number of non-occupied descriptors threshold */
+	mvpp2_set_rxq_free_tresh(port, rxq);
 
 	/* Add number of descriptors ready for receiving packets */
 	mvpp2_rxq_status_update(port, rxq->id, 0, rxq->size);
···
 	for (queue = 0; queue < port->nrxqs; queue++)
 		mvpp2_rxq_deinit(port, port->rxqs[queue]);
+
+	if (port->tx_fc)
+		mvpp2_rxq_disable_fc(port);
 }
 
 /* Init all Rx queues for port */
···
 		if (err)
 			goto err_cleanup;
 	}
+
+	if (port->tx_fc)
+		mvpp2_rxq_enable_fc(port);
+
 	return 0;
 
 err_cleanup:
···
 	/* Enable interrupts on all threads */
 	mvpp2_interrupts_enable(port);
 
-	if (port->priv->hw_version == MVPP22)
+	if (port->priv->hw_version != MVPP21)
 		mvpp22_mode_reconfigure(port);
 
 	if (port->phylink) {
···
 	if (ring->rx_pending > MVPP2_MAX_RXD_MAX)
 		new_rx_pending = MVPP2_MAX_RXD_MAX;
+	else if (ring->rx_pending < MSS_THRESHOLD_START)
+		new_rx_pending = MSS_THRESHOLD_START;
 	else if (!IS_ALIGNED(ring->rx_pending, 16))
 		new_rx_pending = ALIGN(ring->rx_pending, 16);
···
 		valid = true;
 	}
 
-	if (priv->hw_version == MVPP22 && port->port_irq) {
+	if (priv->hw_version != MVPP21 && port->port_irq) {
 		err = request_irq(port->port_irq, mvpp2_port_isr, 0,
 				  dev->name, port);
 		if (err) {
···
 		return;
 	}
 
-	/* Handle the more complicated PPv2.2 case */
+	/* Handle the more complicated PPv2.2 and PPv2.3 case */
 	for (i = 0; i < port->nqvecs; i++) {
 		struct mvpp2_queue_vector *qv = port->qvecs + i;
···
 /* Checks if the port dt description has the required Tx interrupts:
  * - PPv2.1: there are no such interrupts.
- * - PPv2.2:
+ * - PPv2.2 and PPv2.3:
  *   - The old DTs have: "rx-shared", "tx-cpuX" with X in [0...3]
  *   - The new ones have: "hifX" with X in [0..8]
  *
···
 	phylink_set(mask, Autoneg);
 	phylink_set_port_modes(mask);
 
+	if (port->priv->global_tx_fc) {
+		phylink_set(mask, Pause);
+		phylink_set(mask, Asym_Pause);
+	}
+
 	switch (state->interface) {
 	case PHY_INTERFACE_MODE_10GBASER:
 	case PHY_INTERFACE_MODE_XAUI:
···
 	old_ctrl4 = ctrl4 = readl(port->base + MVPP22_GMAC_CTRL_4_REG);
 
 	ctrl0 &= ~MVPP2_GMAC_PORT_TYPE_MASK;
-	ctrl2 &= ~(MVPP2_GMAC_INBAND_AN_MASK | MVPP2_GMAC_PCS_ENABLE_MASK);
+	ctrl2 &= ~(MVPP2_GMAC_INBAND_AN_MASK | MVPP2_GMAC_PCS_ENABLE_MASK |
+		   MVPP2_GMAC_FLOW_CTRL_MASK);
 
 	/* Configure port type */
 	if (phy_interface_mode_is_8023z(state->interface)) {
···
 			    MVPP2_GMAC_PORT_RESET_MASK,
 			    MVPP2_GMAC_PORT_RESET_MASK);
 
-	if (port->priv->hw_version == MVPP22) {
+	if (port->priv->hw_version != MVPP21) {
 		mvpp22_gop_mask_irq(port);
 
 		phy_power_off(port->comphy);
···
 {
 	struct mvpp2_port *port = mvpp2_phylink_to_port(config);
 
-	if (port->priv->hw_version == MVPP22 &&
+	if (port->priv->hw_version != MVPP21 &&
 	    port->phy_interface != interface) {
 		port->phy_interface = interface;
···
 {
 	struct mvpp2_port *port = mvpp2_phylink_to_port(config);
 	u32 val;
+	int i;
 
 	if (mvpp2_is_xlg(interface)) {
 		if (!phylink_autoneg_inband(mode)) {
···
 		mvpp2_modify(port->base + MVPP22_GMAC_CTRL_4_REG,
 			     MVPP22_CTRL4_RX_FC_EN |
			     MVPP22_CTRL4_TX_FC_EN,
 			     val);
+	}
+
+	if (port->priv->global_tx_fc) {
+		port->tx_fc = tx_pause;
+		if (tx_pause)
+			mvpp2_rxq_enable_fc(port);
+		else
+			mvpp2_rxq_disable_fc(port);
+		if (port->priv->percpu_pools) {
+			for (i = 0; i < port->nrxqs; i++)
+				mvpp2_bm_pool_update_fc(port, &port->priv->bm_pools[i], tx_pause);
+		} else {
+			mvpp2_bm_pool_update_fc(port, port->pool_long, tx_pause);
+			mvpp2_bm_pool_update_fc(port, port->pool_short, tx_pause);
+		}
+		if (port->priv->hw_version == MVPP23)
+			mvpp23_rx_fifo_fc_en(port->priv, port->id, tx_pause);
 	}
 
 	mvpp2_port_enable(port);
···
 	mvpp2_write(priv, MVPP2_RX_ATTR_FIFO_SIZE_REG(port), attr_size);
 }
 
-/* Initialize Rx FIFO's: the total FIFO size is 48kB on PPv2.2.
+/* Initialize Rx FIFO's: the total FIFO size is 48kB on PPv2.2 and PPv2.3.
  * 4kB fixed space must be assigned for the loopback port.
  * Redistribute remaining available 44kB space among all active ports.
  * Guarantee minimum 32kB for 10G port and 8kB for port 1, capable of 2.5G
···
 	mvpp2_write(priv, MVPP2_RX_FIFO_INIT_REG, 0x1);
 }
 
+/* Configure Rx FIFO Flow control thresholds */
+static void mvpp23_rx_fifo_fc_set_tresh(struct mvpp2 *priv)
+{
+	int port, val;
+
+	/* Port 0: maximum speed - 10Gb/s port
+	 *         required by spec RX FIFO threshold 9KB
+	 * Port 1: maximum speed - 5Gb/s port
+	 *         required by spec RX FIFO threshold 4KB
+	 * Port 2: maximum speed - 1Gb/s port
+	 *         required by spec RX FIFO threshold 2KB
+	 */
+
+	/* Without loopback port */
+	for (port = 0; port < (MVPP2_MAX_PORTS - 1); port++) {
+		if (port == 0) {
+			val = (MVPP23_PORT0_FIFO_TRSH / MVPP2_RX_FC_TRSH_UNIT)
+			      << MVPP2_RX_FC_TRSH_OFFS;
+			val &= MVPP2_RX_FC_TRSH_MASK;
+			mvpp2_write(priv, MVPP2_RX_FC_REG(port), val);
+		} else if (port == 1) {
+			val = (MVPP23_PORT1_FIFO_TRSH / MVPP2_RX_FC_TRSH_UNIT)
+			      << MVPP2_RX_FC_TRSH_OFFS;
+			val &= MVPP2_RX_FC_TRSH_MASK;
+			mvpp2_write(priv, MVPP2_RX_FC_REG(port), val);
+		} else {
+			val = (MVPP23_PORT2_FIFO_TRSH / MVPP2_RX_FC_TRSH_UNIT)
+			      << MVPP2_RX_FC_TRSH_OFFS;
+			val &= MVPP2_RX_FC_TRSH_MASK;
+			mvpp2_write(priv, MVPP2_RX_FC_REG(port), val);
+		}
+	}
+}
+
+/* Enable/disable Rx FIFO Flow control */
+void mvpp23_rx_fifo_fc_en(struct mvpp2 *priv, int port, bool en)
+{
+	int val;
+
+	val = mvpp2_read(priv, MVPP2_RX_FC_REG(port));
+
+	if (en)
+		val |= MVPP2_RX_FC_EN;
+	else
+		val &= ~MVPP2_RX_FC_EN;
+
+	mvpp2_write(priv, MVPP2_RX_FC_REG(port), val);
+}
+
 static void mvpp22_tx_fifo_set_hw(struct mvpp2 *priv, int port, int size)
 {
 	int threshold = MVPP2_TX_FIFO_THRESHOLD(size);
···
 	mvpp2_write(priv, MVPP22_TX_FIFO_THRESH_REG(port), threshold);
 }
 
-/* Initialize TX FIFO's: the total FIFO size is 19kB on PPv2.2.
+/* Initialize TX FIFO's: the total FIFO size is 19kB on PPv2.2 and PPv2.3.
  * 3kB fixed space must be assigned for the loopback port.
  * Redistribute remaining available 16kB space among all active ports.
  * The 10G interface should use 10kB (which is maximum possible size
···
 	if (dram_target_info)
 		mvpp2_conf_mbus_windows(dram_target_info, priv);
 
-	if (priv->hw_version == MVPP22)
+	if (priv->hw_version != MVPP21)
 		mvpp2_axi_init(priv);
 
 	/* Disable HW PHY polling */
···
 	} else {
 		mvpp22_rx_fifo_init(priv);
 		mvpp22_tx_fifo_init(priv);
+		if (priv->hw_version == MVPP23)
+			mvpp23_rx_fifo_fc_set_tresh(priv);
 	}
 
 	if (priv->hw_version == MVPP21)
···
 	/* Classifier default initialization */
 	mvpp2_cls_init(priv);
+
+	return 0;
+}
+
+static int mvpp2_get_sram(struct platform_device *pdev,
+			  struct mvpp2 *priv)
+{
+	struct resource *res;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
+	if (!res) {
+		if (has_acpi_companion(&pdev->dev))
+			dev_warn(&pdev->dev, "ACPI is too old, Flow control not supported\n");
+		else
+			dev_warn(&pdev->dev, "DT is too old, Flow control not supported\n");
+		return 0;
+	}
+
+	priv->cm3_base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(priv->cm3_base))
+		return PTR_ERR(priv->cm3_base);
 
 	return 0;
 }
···
 		priv->iface_base = devm_ioremap_resource(&pdev->dev, res);
 		if (IS_ERR(priv->iface_base))
 			return PTR_ERR(priv->iface_base);
+
+		/* Map CM3 SRAM */
+		err = mvpp2_get_sram(pdev, priv);
+		if (err)
+			dev_warn(&pdev->dev, "Fail to alloc CM3 SRAM\n");
+
+		/* Enable global Flow Control only if handler to SRAM not NULL */
+		if (priv->cm3_base)
+			priv->global_tx_fc = true;
 	}
 
-	if (priv->hw_version == MVPP22 && dev_of_node(&pdev->dev)) {
+	if (priv->hw_version != MVPP21 && dev_of_node(&pdev->dev)) {
 		priv->sysctrl_base =
 			syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
 							"marvell,system-controller");
···
 		priv->sysctrl_base = NULL;
 	}
 
-	if (priv->hw_version == MVPP22 &&
+	if (priv->hw_version != MVPP21 &&
 	    mvpp2_get_nrxqs(priv) * 2 <= MVPP2_BM_MAX_POOLS)
 		priv->percpu_pools = 1;
···
 	if (err < 0)
 		goto err_pp_clk;
 
-	if (priv->hw_version == MVPP22) {
+	if (priv->hw_version != MVPP21) {
 		priv->mg_clk = devm_clk_get(&pdev->dev, "mg_clk");
 		if (IS_ERR(priv->mg_clk)) {
 			err = PTR_ERR(priv->mg_clk);
···
 		return -EINVAL;
 	}
 
-	if (priv->hw_version == MVPP22) {
+	if (priv->hw_version != MVPP21) {
 		err = dma_set_mask(&pdev->dev, MVPP2_DESC_DMA_MASK);
 		if (err)
 			goto err_axi_clk;
···
 		if (!fwnode_property_read_u32(port_fwnode, "port-id", &i))
 			priv->port_map |= BIT(i);
 	}
+
+	if (priv->hw_version != MVPP21) {
+		if (mvpp2_read(priv, MVPP2_VER_ID_REG) == MVPP2_VER_PP23)
+			priv->hw_version = MVPP23;
+	}
+
+	/* Init mss lock */
+	spin_lock_init(&priv->mss_spinlock);
 
 	/* Initialize network controller */
 	err = mvpp2_init(pdev, priv);
···
 		goto err_port_probe;
 	}
 
+	if (priv->global_tx_fc && priv->hw_version != MVPP21) {
+		err = mvpp2_enable_global_fc(priv);
+		if (err)
+			dev_warn(&pdev->dev, "Minimum of CM3 firmware 18.09 and chip revision B0 required for flow control\n");
+	}
+
 	mvpp2_dbgfs_init(priv, pdev->name);
 
 	platform_set_drvdata(pdev, priv);
···
 	clk_disable_unprepare(priv->axi_clk);
 
err_mg_core_clk:
-	if (priv->hw_version == MVPP22)
+	if (priv->hw_version != MVPP21)
 		clk_disable_unprepare(priv->mg_core_clk);
err_mg_clk:
-	if (priv->hw_version == MVPP22)
+	if (priv->hw_version != MVPP21)
 		clk_disable_unprepare(priv->mg_clk);
err_gop_clk:
 	clk_disable_unprepare(priv->gop_clk);