Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-6.3-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
"Including fixes from netfilter and bpf.

Current release - regressions:

- core: avoid skb end_offset change in __skb_unclone_keeptruesize()

- sched:
- act_connmark: handle errno on tcf_idr_check_alloc
- flower: fix fl_change() error recovery path

- ieee802154: prevent user from crashing the host

Current release - new code bugs:

- eth: bnxt_en: fix the double free during device removal

- tools: ynl:
- fix enum-as-flags in the generic CLI
- fully inherit attrs in subsets
- re-license uniformly under GPL-2.0 or BSD-3-clause

Previous releases - regressions:

- core: use indirect calls helpers for sk_exit_memory_pressure()

- tls:
- fix return value for async crypto
- avoid hanging tasks on the tx_lock

- eth: ice: copy last block omitted in ice_get_module_eeprom()

Previous releases - always broken:

- core: avoid double iput when sock_alloc_file fails

- af_unix: fix struct pid leaks in OOB support

- tls:
- fix possible race condition
- fix device-offloaded sendpage straddling records

- bpf:
- sockmap: fix an infinite loop error
- test_run: fix &xdp_frame misplacement for LIVE_FRAMES
- fix resolving BTF_KIND_VAR after ARRAY, STRUCT, UNION, PTR

- netfilter: tproxy: fix deadlock due to missing BH disable

- phylib: get rid of unnecessary locking

- eth: bgmac: fix *initial* chip reset to support BCM5358

- eth: nfp: fix csum for ipsec offload

- eth: mtk_eth_soc: fix RX data corruption issue

Misc:

- usb: qmi_wwan: add telit 0x1080 composition"

* tag 'net-6.3-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (64 commits)
tools: ynl: fix enum-as-flags in the generic CLI
tools: ynl: move the enum classes to shared code
net: avoid double iput when sock_alloc_file fails
af_unix: fix struct pid leaks in OOB support
eth: fealnx: bring back this old driver
net: dsa: mt7530: permit port 5 to work without port 6 on MT7621 SoC
net: microchip: sparx5: fix deletion of existing DSCP mappings
octeontx2-af: Unlock contexts in the queue context cache in case of fault detection
net/smc: fix fallback failed while sendmsg with fastopen
ynl: re-license uniformly under GPL-2.0 OR BSD-3-Clause
mailmap: update entries for Stephen Hemminger
mailmap: add entry for Maxim Mikityanskiy
nfc: change order inside nfc_se_io error path
ethernet: ice: avoid gcc-9 integer overflow warning
ice: don't ignore return codes in VSI related code
ice: Fix DSCP PFC TLV creation
net: usb: qmi_wwan: add Telit 0x1080 composition
net: usb: cdc_mbim: avoid altsetting toggling for Telit FE990
netfilter: conntrack: adopt safer max chain length
net: tls: fix device-offloaded sendpage straddling records
...

+2598 -355
+6 -1
.mailmap
··· 306 306 Mauro Carvalho Chehab <mchehab@kernel.org> <mchehab@redhat.com> 307 307 Mauro Carvalho Chehab <mchehab@kernel.org> <m.chehab@samsung.com> 308 308 Mauro Carvalho Chehab <mchehab@kernel.org> <mchehab@s-opensource.com> 309 + Maxim Mikityanskiy <maxtram95@gmail.com> <maximmi@mellanox.com> 310 + Maxim Mikityanskiy <maxtram95@gmail.com> <maximmi@nvidia.com> 309 311 Maxime Ripard <mripard@kernel.org> <maxime.ripard@bootlin.com> 310 312 Maxime Ripard <mripard@kernel.org> <maxime.ripard@free-electrons.com> 311 313 Mayuresh Janorkar <mayur@ti.com> ··· 413 411 Simon Arlott <simon@octiron.net> <simon@fire.lp0.eu> 414 412 Simon Kelley <simon@thekelleys.org.uk> 415 413 Stéphane Witzmann <stephane.witzmann@ubpmes.univ-bpclermont.fr> 416 - Stephen Hemminger <shemminger@osdl.org> 414 + Stephen Hemminger <stephen@networkplumber.org> <shemminger@linux-foundation.org> 415 + Stephen Hemminger <stephen@networkplumber.org> <shemminger@osdl.org> 416 + Stephen Hemminger <stephen@networkplumber.org> <sthemmin@microsoft.com> 417 + Stephen Hemminger <stephen@networkplumber.org> <sthemmin@vyatta.com> 417 418 Steve Wise <larrystevenwise@gmail.com> <swise@chelsio.com> 418 419 Steve Wise <larrystevenwise@gmail.com> <swise@opengridcomputing.com> 419 420 Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
+5 -9
Documentation/bpf/bpf_devel_QA.rst
··· 7 7 patches for stable kernels. 8 8 9 9 For general information about submitting patches, please refer to 10 - `Documentation/process/`_. This document only describes additional specifics 11 - related to BPF. 10 + Documentation/process/submitting-patches.rst. This document only describes 11 + additional specifics related to BPF. 12 12 13 13 .. contents:: 14 14 :local: ··· 461 461 462 462 $ sudo make run_tests 463 463 464 - See the kernels selftest `Documentation/dev-tools/kselftest.rst`_ 465 - document for further documentation. 464 + See :doc:`kernel selftest documentation </dev-tools/kselftest>` 465 + for details. 466 466 467 467 To maximize the number of tests passing, the .config of the kernel 468 468 under test should match the config file fragment in 469 469 tools/testing/selftests/bpf as closely as possible. 470 470 471 471 Finally to ensure support for latest BPF Type Format features - 472 - discussed in `Documentation/bpf/btf.rst`_ - pahole version 1.16 472 + discussed in Documentation/bpf/btf.rst - pahole version 1.16 473 473 is required for kernels built with CONFIG_DEBUG_INFO_BTF=y. 474 474 pahole is delivered in the dwarves package or can be built 475 475 from source at ··· 684 684 685 685 686 686 .. Links 687 - .. _Documentation/process/: https://www.kernel.org/doc/html/latest/process/ 688 687 .. _netdev-FAQ: Documentation/process/maintainer-netdev.rst 689 688 .. _selftests: 690 689 https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/bpf/ 691 - .. _Documentation/dev-tools/kselftest.rst: 692 - https://www.kernel.org/doc/html/latest/dev-tools/kselftest.html 693 - .. _Documentation/bpf/btf.rst: btf.rst 694 690 695 691 Happy BPF hacking!
+1 -1
Documentation/netlink/genetlink.yaml
··· 1 - # SPDX-License-Identifier: GPL-2.0 1 + # SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 %YAML 1.2 3 3 --- 4 4 $id: http://kernel.org/schemas/netlink/genetlink-legacy.yaml#
+2 -15
Documentation/netlink/specs/ethtool.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 + 1 3 name: ethtool 2 4 3 5 protocol: genetlink-legacy ··· 13 11 - 14 12 name: dev-index 15 13 type: u32 16 - value: 1 17 14 - 18 15 name: dev-name 19 16 type: string ··· 26 25 - 27 26 name: index 28 27 type: u32 29 - value: 1 30 28 - 31 29 name: name 32 30 type: string ··· 39 39 name: bit 40 40 type: nest 41 41 nested-attributes: bitset-bit 42 - value: 1 43 42 - 44 43 name: bitset 45 44 attributes: 46 45 - 47 46 name: nomask 48 47 type: flag 49 - value: 1 50 48 - 51 49 name: size 52 50 type: u32 ··· 59 61 - 60 62 name: index 61 63 type: u32 62 - value: 1 63 64 - 64 65 name: value 65 66 type: string ··· 68 71 - 69 72 name: string 70 73 type: nest 71 - value: 1 72 74 multi-attr: true 73 75 nested-attributes: string 74 76 - ··· 76 80 - 77 81 name: id 78 82 type: u32 79 - value: 1 80 83 - 81 84 name: count 82 85 type: u32 ··· 91 96 name: stringset 92 97 type: nest 93 98 multi-attr: true 94 - value: 1 95 99 nested-attributes: stringset 96 100 - 97 101 name: strset 98 102 attributes: 99 103 - 100 104 name: header 101 - value: 1 102 105 type: nest 103 106 nested-attributes: header 104 107 - ··· 112 119 attributes: 113 120 - 114 121 name: header 115 - value: 1 116 122 type: nest 117 123 nested-attributes: header 118 124 - ··· 124 132 attributes: 125 133 - 126 134 name: header 127 - value: 1 128 135 type: nest 129 136 nested-attributes: header 130 137 - ··· 171 180 attributes: 172 181 - 173 182 name: pad 174 - value: 1 175 183 type: pad 176 184 - 177 185 name: reassembly-errors ··· 195 205 attributes: 196 206 - 197 207 name: header 198 - value: 1 199 208 type: nest 200 209 nested-attributes: header 201 210 - ··· 240 251 241 252 do: &strset-get-op 242 253 request: 243 - value: 1 244 254 attributes: 245 255 - header 246 256 - stringsets 247 257 - counts-only 248 258 reply: 249 - value: 1 250 259 attributes: 251 260 - header 252 261 - stringsets
+4
Documentation/netlink/specs/fou.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 + 1 3 name: fou 2 4 3 5 protocol: genetlink-legacy ··· 28 26 - 29 27 name: unspec 30 28 type: unused 29 + value: 0 31 30 - 32 31 name: port 33 32 type: u16 ··· 74 71 - 75 72 name: unspec 76 73 doc: unused 74 + value: 0 77 75 78 76 - 79 77 name: add
+2 -2
Documentation/netlink/specs/netdev.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 + 1 3 name: netdev 2 4 3 5 doc: ··· 50 48 name: ifindex 51 49 doc: netdev ifindex 52 50 type: u32 53 - value: 1 54 51 checks: 55 52 min: 1 56 53 - ··· 67 66 - 68 67 name: dev-get 69 68 doc: Get / dump information about a netdev. 70 - value: 1 71 69 attribute-set: dev 72 70 do: 73 71 request:
+11 -2
Documentation/userspace-api/netlink/specs.rst
··· 24 24 This document describes details of the schema. 25 25 See :doc:`intro-specs` for a practical starting guide. 26 26 27 + All specs must be licensed under ``GPL-2.0-only OR BSD-3-Clause`` 28 + to allow for easy adoption in user space code. 29 + 27 30 Compatibility levels 28 31 ==================== 29 32 ··· 200 197 Numerical attribute ID, used in serialized Netlink messages. 201 198 The ``value`` property can be skipped, in which case the attribute ID 202 199 will be the value of the previous attribute plus one (recursively) 203 - and ``0`` for the first attribute in the attribute set. 200 + and ``1`` for the first attribute in the attribute set. 204 201 205 - Note that the ``value`` of an attribute is defined only in its main set. 202 + Attributes (and operations) use ``1`` as the default value for the first 203 + entry (unlike enums in definitions which start from ``0``) because 204 + entry ``0`` is almost always reserved as undefined. Spec can explicitly 205 + set value to ``0`` if needed. 206 + 207 + Note that the ``value`` of an attribute is defined only in its main set 208 + (not in subsets). 206 209 207 210 enum 208 211 ~~~~
+1
arch/mips/configs/mtx1_defconfig
··· 284 284 CONFIG_SKGE=m 285 285 CONFIG_SKY2=m 286 286 CONFIG_MYRI10GE=m 287 + CONFIG_FEALNX=m 287 288 CONFIG_NATSEMI=m 288 289 CONFIG_NS83820=m 289 290 CONFIG_S2IO=m
-1
arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts
··· 10 10 11 11 / { 12 12 model = "fsl,T1040RDB-REV-A"; 13 - compatible = "fsl,T1040RDB-REV-A"; 14 13 }; 15 14 16 15 &seville_port0 {
+4 -1
arch/powerpc/boot/dts/fsl/t1040rdb.dts
··· 180 180 }; 181 181 182 182 &seville_port8 { 183 - ethernet = <&enet0>; 183 + status = "okay"; 184 + }; 185 + 186 + &seville_port9 { 184 187 status = "okay"; 185 188 };
+2
arch/powerpc/boot/dts/fsl/t1040si-post.dtsi
··· 686 686 seville_port8: port@8 { 687 687 reg = <8>; 688 688 phy-mode = "internal"; 689 + ethernet = <&enet0>; 689 690 status = "disabled"; 690 691 691 692 fixed-link { ··· 698 697 seville_port9: port@9 { 699 698 reg = <9>; 700 699 phy-mode = "internal"; 700 + ethernet = <&enet1>; 701 701 status = "disabled"; 702 702 703 703 fixed-link {
+1
arch/powerpc/configs/ppc6xx_defconfig
··· 461 461 CONFIG_SKGE=m 462 462 CONFIG_SKY2=m 463 463 CONFIG_MYRI10GE=m 464 + CONFIG_FEALNX=m 464 465 CONFIG_NATSEMI=m 465 466 CONFIG_NS83820=m 466 467 CONFIG_PCMCIA_AXNET=m
+1
arch/riscv/net/bpf_jit_comp64.c
··· 10 10 #include <linux/filter.h> 11 11 #include <linux/memory.h> 12 12 #include <linux/stop_machine.h> 13 + #include <asm/patch.h> 13 14 #include "bpf_jit.h" 14 15 15 16 #define RV_REG_TCC RV_REG_A6
+20 -15
drivers/net/dsa/mt7530.c
··· 393 393 mt7530_write(priv, MT7530_ATA1 + (i * 4), reg[i]); 394 394 } 395 395 396 + /* Set up switch core clock for MT7530 */ 397 + static void mt7530_pll_setup(struct mt7530_priv *priv) 398 + { 399 + /* Disable PLL */ 400 + core_write(priv, CORE_GSWPLL_GRP1, 0); 401 + 402 + /* Set core clock into 500Mhz */ 403 + core_write(priv, CORE_GSWPLL_GRP2, 404 + RG_GSWPLL_POSDIV_500M(1) | 405 + RG_GSWPLL_FBKDIV_500M(25)); 406 + 407 + /* Enable PLL */ 408 + core_write(priv, CORE_GSWPLL_GRP1, 409 + RG_GSWPLL_EN_PRE | 410 + RG_GSWPLL_POSDIV_200M(2) | 411 + RG_GSWPLL_FBKDIV_200M(32)); 412 + } 413 + 396 414 /* Setup TX circuit including relevant PAD and driving */ 397 415 static int 398 416 mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface) ··· 470 452 /* Disable MT7530 core and TRGMII Tx clocks */ 471 453 core_clear(priv, CORE_TRGMII_GSW_CLK_CG, 472 454 REG_GSWCK_EN | REG_TRGMIICK_EN); 473 - 474 - /* Setup core clock for MT7530 */ 475 - /* Disable PLL */ 476 - core_write(priv, CORE_GSWPLL_GRP1, 0); 477 - 478 - /* Set core clock into 500Mhz */ 479 - core_write(priv, CORE_GSWPLL_GRP2, 480 - RG_GSWPLL_POSDIV_500M(1) | 481 - RG_GSWPLL_FBKDIV_500M(25)); 482 - 483 - /* Enable PLL */ 484 - core_write(priv, CORE_GSWPLL_GRP1, 485 - RG_GSWPLL_EN_PRE | 486 - RG_GSWPLL_POSDIV_200M(2) | 487 - RG_GSWPLL_FBKDIV_200M(32)); 488 455 489 456 /* Setup the MT7530 TRGMII Tx Clock */ 490 457 core_write(priv, CORE_PLL_GROUP5, RG_LCDDS_PCW_NCPO1(ncpo1)); ··· 2198 2195 mt7530_write(priv, MT7530_SYS_CTRL, 2199 2196 SYS_CTRL_PHY_RST | SYS_CTRL_SW_RST | 2200 2197 SYS_CTRL_REG_RST); 2198 + 2199 + mt7530_pll_setup(priv); 2201 2200 2202 2201 /* Enable Port 6 only; P5 as GMAC5 which currently is not supported */ 2203 2202 val = mt7530_read(priv, MT7530_MHWTRAP);
+10
drivers/net/ethernet/Kconfig
··· 132 132 source "drivers/net/ethernet/microsoft/Kconfig" 133 133 source "drivers/net/ethernet/moxa/Kconfig" 134 134 source "drivers/net/ethernet/myricom/Kconfig" 135 + 136 + config FEALNX 137 + tristate "Myson MTD-8xx PCI Ethernet support" 138 + depends on PCI 139 + select CRC32 140 + select MII 141 + help 142 + Say Y here to support the Myson MTD-800 family of PCI-based Ethernet 143 + cards. <http://www.myson.com.tw/> 144 + 135 145 source "drivers/net/ethernet/ni/Kconfig" 136 146 source "drivers/net/ethernet/natsemi/Kconfig" 137 147 source "drivers/net/ethernet/neterion/Kconfig"
+1
drivers/net/ethernet/Makefile
··· 64 64 obj-$(CONFIG_NET_VENDOR_MICROSEMI) += mscc/ 65 65 obj-$(CONFIG_NET_VENDOR_MOXART) += moxa/ 66 66 obj-$(CONFIG_NET_VENDOR_MYRI) += myricom/ 67 + obj-$(CONFIG_FEALNX) += fealnx.o 67 68 obj-$(CONFIG_NET_VENDOR_NATSEMI) += natsemi/ 68 69 obj-$(CONFIG_NET_VENDOR_NETERION) += neterion/ 69 70 obj-$(CONFIG_NET_VENDOR_NETRONOME) += netronome/
+6 -2
drivers/net/ethernet/broadcom/bgmac.c
··· 890 890 891 891 if (iost & BGMAC_BCMA_IOST_ATTACHED) { 892 892 flags = BGMAC_BCMA_IOCTL_SW_CLKEN; 893 - if (!bgmac->has_robosw) 893 + if (bgmac->in_init || !bgmac->has_robosw) 894 894 flags |= BGMAC_BCMA_IOCTL_SW_RESET; 895 895 } 896 896 bgmac_clk_enable(bgmac, flags); 897 897 } 898 898 899 - if (iost & BGMAC_BCMA_IOST_ATTACHED && !bgmac->has_robosw) 899 + if (iost & BGMAC_BCMA_IOST_ATTACHED && (bgmac->in_init || !bgmac->has_robosw)) 900 900 bgmac_idm_write(bgmac, BCMA_IOCTL, 901 901 bgmac_idm_read(bgmac, BCMA_IOCTL) & 902 902 ~BGMAC_BCMA_IOCTL_SW_RESET); ··· 1490 1490 struct net_device *net_dev = bgmac->net_dev; 1491 1491 int err; 1492 1492 1493 + bgmac->in_init = true; 1494 + 1493 1495 bgmac_chip_intrs_off(bgmac); 1494 1496 1495 1497 net_dev->irq = bgmac->irq; ··· 1543 1541 1544 1542 /* Omit FCS from max MTU size */ 1545 1543 net_dev->max_mtu = BGMAC_RX_MAX_FRAME_SIZE - ETH_FCS_LEN; 1544 + 1545 + bgmac->in_init = false; 1546 1546 1547 1547 err = register_netdev(bgmac->net_dev); 1548 1548 if (err) {
+2
drivers/net/ethernet/broadcom/bgmac.h
··· 472 472 int irq; 473 473 u32 int_mask; 474 474 475 + bool in_init; 476 + 475 477 /* Current MAC state */ 476 478 int mac_speed; 477 479 int mac_duplex;
+12 -13
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 3145 3145 3146 3146 static void bnxt_free_tpa_info(struct bnxt *bp) 3147 3147 { 3148 - int i; 3148 + int i, j; 3149 3149 3150 3150 for (i = 0; i < bp->rx_nr_rings; i++) { 3151 3151 struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i]; ··· 3153 3153 kfree(rxr->rx_tpa_idx_map); 3154 3154 rxr->rx_tpa_idx_map = NULL; 3155 3155 if (rxr->rx_tpa) { 3156 - kfree(rxr->rx_tpa[0].agg_arr); 3157 - rxr->rx_tpa[0].agg_arr = NULL; 3156 + for (j = 0; j < bp->max_tpa; j++) { 3157 + kfree(rxr->rx_tpa[j].agg_arr); 3158 + rxr->rx_tpa[j].agg_arr = NULL; 3159 + } 3158 3160 } 3159 3161 kfree(rxr->rx_tpa); 3160 3162 rxr->rx_tpa = NULL; ··· 3165 3163 3166 3164 static int bnxt_alloc_tpa_info(struct bnxt *bp) 3167 3165 { 3168 - int i, j, total_aggs = 0; 3166 + int i, j; 3169 3167 3170 3168 bp->max_tpa = MAX_TPA; 3171 3169 if (bp->flags & BNXT_FLAG_CHIP_P5) { 3172 3170 if (!bp->max_tpa_v2) 3173 3171 return 0; 3174 3172 bp->max_tpa = max_t(u16, bp->max_tpa_v2, MAX_TPA_P5); 3175 - total_aggs = bp->max_tpa * MAX_SKB_FRAGS; 3176 3173 } 3177 3174 3178 3175 for (i = 0; i < bp->rx_nr_rings; i++) { ··· 3185 3184 3186 3185 if (!(bp->flags & BNXT_FLAG_CHIP_P5)) 3187 3186 continue; 3188 - agg = kcalloc(total_aggs, sizeof(*agg), GFP_KERNEL); 3189 - rxr->rx_tpa[0].agg_arr = agg; 3190 - if (!agg) 3191 - return -ENOMEM; 3192 - for (j = 1; j < bp->max_tpa; j++) 3193 - rxr->rx_tpa[j].agg_arr = agg + j * MAX_SKB_FRAGS; 3187 + for (j = 0; j < bp->max_tpa; j++) { 3188 + agg = kcalloc(MAX_SKB_FRAGS, sizeof(*agg), GFP_KERNEL); 3189 + if (!agg) 3190 + return -ENOMEM; 3191 + rxr->rx_tpa[j].agg_arr = agg; 3192 + } 3194 3193 rxr->rx_tpa_idx_map = kzalloc(sizeof(*rxr->rx_tpa_idx_map), 3195 3194 GFP_KERNEL); 3196 3195 if (!rxr->rx_tpa_idx_map) ··· 13205 13204 bnxt_free_hwrm_resources(bp); 13206 13205 bnxt_ethtool_free(bp); 13207 13206 bnxt_dcb_free(bp); 13208 - kfree(bp->edev); 13209 - bp->edev = NULL; 13210 13207 kfree(bp->ptp_cfg); 13211 13208 bp->ptp_cfg = NULL; 13212 13209 kfree(bp->fw_health);
+2
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
··· 317 317 { 318 318 struct bnxt_aux_priv *aux_priv = 319 319 container_of(dev, struct bnxt_aux_priv, aux_dev.dev); 320 + struct bnxt *bp = netdev_priv(aux_priv->edev->net); 320 321 321 322 ida_free(&bnxt_aux_dev_ids, aux_priv->id); 322 323 kfree(aux_priv->edev->ulp_tbl); 324 + bp->edev = NULL; 323 325 kfree(aux_priv->edev); 324 326 kfree(aux_priv); 325 327 }
+1953
drivers/net/ethernet/fealnx.c
··· 1 + /* 2 + Written 1998-2000 by Donald Becker. 3 + 4 + This software may be used and distributed according to the terms of 5 + the GNU General Public License (GPL), incorporated herein by reference. 6 + Drivers based on or derived from this code fall under the GPL and must 7 + retain the authorship, copyright and license notice. This file is not 8 + a complete program and may only be used when the entire operating 9 + system is licensed under the GPL. 10 + 11 + The author may be reached as becker@scyld.com, or C/O 12 + Scyld Computing Corporation 13 + 410 Severn Ave., Suite 210 14 + Annapolis MD 21403 15 + 16 + Support information and updates available at 17 + http://www.scyld.com/network/pci-skeleton.html 18 + 19 + Linux kernel updates: 20 + 21 + Version 2.51, Nov 17, 2001 (jgarzik): 22 + - Add ethtool support 23 + - Replace some MII-related magic numbers with constants 24 + 25 + */ 26 + 27 + #define DRV_NAME "fealnx" 28 + 29 + static int debug; /* 1-> print debug message */ 30 + static int max_interrupt_work = 20; 31 + 32 + /* Maximum number of multicast addresses to filter (vs. Rx-all-multicast). */ 33 + static int multicast_filter_limit = 32; 34 + 35 + /* Set the copy breakpoint for the copy-only-tiny-frames scheme. */ 36 + /* Setting to > 1518 effectively disables this feature. */ 37 + static int rx_copybreak; 38 + 39 + /* Used to pass the media type, etc. */ 40 + /* Both 'options[]' and 'full_duplex[]' should exist for driver */ 41 + /* interoperability. */ 42 + /* The media type is usually passed in 'options[]'. */ 43 + #define MAX_UNITS 8 /* More are supported, limit only on options */ 44 + static int options[MAX_UNITS] = { -1, -1, -1, -1, -1, -1, -1, -1 }; 45 + static int full_duplex[MAX_UNITS] = { -1, -1, -1, -1, -1, -1, -1, -1 }; 46 + 47 + /* Operational parameters that are set at compile time. */ 48 + /* Keep the ring sizes a power of two for compile efficiency. */ 49 + /* The compiler will convert <unsigned>'%'<2^N> into a bit mask. */ 50 + /* Making the Tx ring too large decreases the effectiveness of channel */ 51 + /* bonding and packet priority. */ 52 + /* There are no ill effects from too-large receive rings. */ 53 + // 88-12-9 modify, 54 + // #define TX_RING_SIZE 16 55 + // #define RX_RING_SIZE 32 56 + #define TX_RING_SIZE 6 57 + #define RX_RING_SIZE 12 58 + #define TX_TOTAL_SIZE TX_RING_SIZE*sizeof(struct fealnx_desc) 59 + #define RX_TOTAL_SIZE RX_RING_SIZE*sizeof(struct fealnx_desc) 60 + 61 + /* Operational parameters that usually are not changed. */ 62 + /* Time in jiffies before concluding the transmitter is hung. */ 63 + #define TX_TIMEOUT (2*HZ) 64 + 65 + #define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer. */ 66 + 67 + 68 + /* Include files, designed to support most kernel versions 2.0.0 and later. */ 69 + #include <linux/module.h> 70 + #include <linux/kernel.h> 71 + #include <linux/string.h> 72 + #include <linux/timer.h> 73 + #include <linux/errno.h> 74 + #include <linux/ioport.h> 75 + #include <linux/interrupt.h> 76 + #include <linux/pci.h> 77 + #include <linux/netdevice.h> 78 + #include <linux/etherdevice.h> 79 + #include <linux/skbuff.h> 80 + #include <linux/init.h> 81 + #include <linux/mii.h> 82 + #include <linux/ethtool.h> 83 + #include <linux/crc32.h> 84 + #include <linux/delay.h> 85 + #include <linux/bitops.h> 86 + 87 + #include <asm/processor.h> /* Processor type for cache alignment. */ 88 + #include <asm/io.h> 89 + #include <linux/uaccess.h> 90 + #include <asm/byteorder.h> 91 + 92 + /* This driver was written to use PCI memory space, however some x86 systems 93 + work only with I/O space accesses. */ 94 + #ifndef __alpha__ 95 + #define USE_IO_OPS 96 + #endif 97 + 98 + /* Kernel compatibility defines, some common to David Hinds' PCMCIA package. */ 99 + /* This is only in the support-all-kernels source code. */ 100 + 101 + #define RUN_AT(x) (jiffies + (x)) 102 + 103 + MODULE_AUTHOR("Myson or whoever"); 104 + MODULE_DESCRIPTION("Myson MTD-8xx 100/10M Ethernet PCI Adapter Driver"); 105 + MODULE_LICENSE("GPL"); 106 + module_param(max_interrupt_work, int, 0); 107 + module_param(debug, int, 0); 108 + module_param(rx_copybreak, int, 0); 109 + module_param(multicast_filter_limit, int, 0); 110 + module_param_array(options, int, NULL, 0); 111 + module_param_array(full_duplex, int, NULL, 0); 112 + MODULE_PARM_DESC(max_interrupt_work, "fealnx maximum events handled per interrupt"); 113 + MODULE_PARM_DESC(debug, "fealnx enable debugging (0-1)"); 114 + MODULE_PARM_DESC(rx_copybreak, "fealnx copy breakpoint for copy-only-tiny-frames"); 115 + MODULE_PARM_DESC(multicast_filter_limit, "fealnx maximum number of filtered multicast addresses"); 116 + MODULE_PARM_DESC(options, "fealnx: Bits 0-3: media type, bit 17: full duplex"); 117 + MODULE_PARM_DESC(full_duplex, "fealnx full duplex setting(s) (1)"); 118 + 119 + enum { 120 + MIN_REGION_SIZE = 136, 121 + }; 122 + 123 + /* A chip capabilities table, matching the entries in pci_tbl[] above. */ 124 + enum chip_capability_flags { 125 + HAS_MII_XCVR, 126 + HAS_CHIP_XCVR, 127 + }; 128 + 129 + /* 89/6/13 add, */ 130 + /* for different PHY */ 131 + enum phy_type_flags { 132 + MysonPHY = 1, 133 + AhdocPHY = 2, 134 + SeeqPHY = 3, 135 + MarvellPHY = 4, 136 + Myson981 = 5, 137 + LevelOnePHY = 6, 138 + OtherPHY = 10, 139 + }; 140 + 141 + struct chip_info { 142 + char *chip_name; 143 + int flags; 144 + }; 145 + 146 + static const struct chip_info skel_netdrv_tbl[] = { 147 + { "100/10M Ethernet PCI Adapter", HAS_MII_XCVR }, 148 + { "100/10M Ethernet PCI Adapter", HAS_CHIP_XCVR }, 149 + { "1000/100/10M Ethernet PCI Adapter", HAS_MII_XCVR }, 150 + }; 151 + 152 + /* Offsets to the Command and Status Registers. */ 153 + enum fealnx_offsets { 154 + PAR0 = 0x0, /* physical address 0-3 */ 155 + PAR1 = 0x04, /* physical address 4-5 */ 156 + MAR0 = 0x08, /* multicast address 0-3 */ 157 + MAR1 = 0x0C, /* multicast address 4-7 */ 158 + FAR0 = 0x10, /* flow-control address 0-3 */ 159 + FAR1 = 0x14, /* flow-control address 4-5 */ 160 + TCRRCR = 0x18, /* receive & transmit configuration */ 161 + BCR = 0x1C, /* bus command */ 162 + TXPDR = 0x20, /* transmit polling demand */ 163 + RXPDR = 0x24, /* receive polling demand */ 164 + RXCWP = 0x28, /* receive current word pointer */ 165 + TXLBA = 0x2C, /* transmit list base address */ 166 + RXLBA = 0x30, /* receive list base address */ 167 + ISR = 0x34, /* interrupt status */ 168 + IMR = 0x38, /* interrupt mask */ 169 + FTH = 0x3C, /* flow control high/low threshold */ 170 + MANAGEMENT = 0x40, /* bootrom/eeprom and mii management */ 171 + TALLY = 0x44, /* tally counters for crc and mpa */ 172 + TSR = 0x48, /* tally counter for transmit status */ 173 + BMCRSR = 0x4c, /* basic mode control and status */ 174 + PHYIDENTIFIER = 0x50, /* phy identifier */ 175 + ANARANLPAR = 0x54, /* auto-negotiation advertisement and link 176 + partner ability */ 177 + ANEROCR = 0x58, /* auto-negotiation expansion and pci conf. */ 178 + BPREMRPSR = 0x5c, /* bypass & receive error mask and phy status */ 179 + }; 180 + 181 + /* Bits in the interrupt status/enable registers. */ 182 + /* The bits in the Intr Status/Enable registers, mostly interrupt sources. */ 183 + enum intr_status_bits { 184 + RFCON = 0x00020000, /* receive flow control xon packet */ 185 + RFCOFF = 0x00010000, /* receive flow control xoff packet */ 186 + LSCStatus = 0x00008000, /* link status change */ 187 + ANCStatus = 0x00004000, /* autonegotiation completed */ 188 + FBE = 0x00002000, /* fatal bus error */ 189 + FBEMask = 0x00001800, /* mask bit12-11 */ 190 + ParityErr = 0x00000000, /* parity error */ 191 + TargetErr = 0x00001000, /* target abort */ 192 + MasterErr = 0x00000800, /* master error */ 193 + TUNF = 0x00000400, /* transmit underflow */ 194 + ROVF = 0x00000200, /* receive overflow */ 195 + ETI = 0x00000100, /* transmit early int */ 196 + ERI = 0x00000080, /* receive early int */ 197 + CNTOVF = 0x00000040, /* counter overflow */ 198 + RBU = 0x00000020, /* receive buffer unavailable */ 199 + TBU = 0x00000010, /* transmit buffer unavilable */ 200 + TI = 0x00000008, /* transmit interrupt */ 201 + RI = 0x00000004, /* receive interrupt */ 202 + RxErr = 0x00000002, /* receive error */ 203 + }; 204 + 205 + /* Bits in the NetworkConfig register, W for writing, R for reading */ 206 + /* FIXME: some names are invented by me. Marked with (name?) */ 207 + /* If you have docs and know bit names, please fix 'em */ 208 + enum rx_mode_bits { 209 + CR_W_ENH = 0x02000000, /* enhanced mode (name?) */ 210 + CR_W_FD = 0x00100000, /* full duplex */ 211 + CR_W_PS10 = 0x00080000, /* 10 mbit */ 212 + CR_W_TXEN = 0x00040000, /* tx enable (name?) */ 213 + CR_W_PS1000 = 0x00010000, /* 1000 mbit */ 214 + /* CR_W_RXBURSTMASK= 0x00000e00, Im unsure about this */ 215 + CR_W_RXMODEMASK = 0x000000e0, 216 + CR_W_PROM = 0x00000080, /* promiscuous mode */ 217 + CR_W_AB = 0x00000040, /* accept broadcast */ 218 + CR_W_AM = 0x00000020, /* accept mutlicast */ 219 + CR_W_ARP = 0x00000008, /* receive runt pkt */ 220 + CR_W_ALP = 0x00000004, /* receive long pkt */ 221 + CR_W_SEP = 0x00000002, /* receive error pkt */ 222 + CR_W_RXEN = 0x00000001, /* rx enable (unicast?) (name?) */ 223 + 224 + CR_R_TXSTOP = 0x04000000, /* tx stopped (name?) */ 225 + CR_R_FD = 0x00100000, /* full duplex detected */ 226 + CR_R_PS10 = 0x00080000, /* 10 mbit detected */ 227 + CR_R_RXSTOP = 0x00008000, /* rx stopped (name?) */ 228 + }; 229 + 230 + /* The Tulip Rx and Tx buffer descriptors. */ 231 + struct fealnx_desc { 232 + s32 status; 233 + s32 control; 234 + u32 buffer; 235 + u32 next_desc; 236 + struct fealnx_desc *next_desc_logical; 237 + struct sk_buff *skbuff; 238 + u32 reserved1; 239 + u32 reserved2; 240 + }; 241 + 242 + /* Bits in network_desc.status */ 243 + enum rx_desc_status_bits { 244 + RXOWN = 0x80000000, /* own bit */ 245 + FLNGMASK = 0x0fff0000, /* frame length */ 246 + FLNGShift = 16, 247 + MARSTATUS = 0x00004000, /* multicast address received */ 248 + BARSTATUS = 0x00002000, /* broadcast address received */ 249 + PHYSTATUS = 0x00001000, /* physical address received */ 250 + RXFSD = 0x00000800, /* first descriptor */ 251 + RXLSD = 0x00000400, /* last descriptor */ 252 + ErrorSummary = 0x80, /* error summary */ 253 + RUNTPKT = 0x40, /* runt packet received */ 254 + LONGPKT = 0x20, /* long packet received */ 255 + FAE = 0x10, /* frame align error */ 256 + CRC = 0x08, /* crc error */ 257 + RXER = 0x04, /* receive error */ 258 + }; 259 + 260 + enum rx_desc_control_bits { 261 + RXIC = 0x00800000, /* interrupt control */ 262 + RBSShift = 0, 263 + }; 264 + 265 + enum tx_desc_status_bits { 266 + TXOWN = 0x80000000, /* own bit */ 267 + JABTO = 0x00004000, /* jabber timeout */ 268 + CSL = 0x00002000, /* carrier sense lost */ 269 + LC = 0x00001000, /* late collision */ 270 + EC = 0x00000800, /* excessive collision */ 271 + UDF = 0x00000400, /* fifo underflow */ 272 + DFR = 0x00000200, /* deferred */ 273 + HF = 0x00000100, /* heartbeat fail */ 274 + NCRMask = 0x000000ff, /* collision retry count */ 275 + NCRShift = 0, 276 + }; 277 + 278 + enum tx_desc_control_bits { 279 + TXIC = 0x80000000, /* interrupt control */ 280 + ETIControl = 0x40000000, /* early transmit interrupt */ 281 + TXLD = 0x20000000, /* last descriptor */ 282 + TXFD = 0x10000000, /* first descriptor */ 283 + CRCEnable = 0x08000000, /* crc control */ 284 + PADEnable = 0x04000000, /* padding control */ 285 + RetryTxLC = 0x02000000, /* retry late collision */ 286 + PKTSMask = 0x3ff800, /* packet size bit21-11 */ 287 + PKTSShift = 11, 288 + TBSMask = 0x000007ff, /* transmit buffer bit 10-0 */ 289 + TBSShift = 0, 290 + }; 291 + 292 + /* BootROM/EEPROM/MII Management Register */ 293 + #define MASK_MIIR_MII_READ 0x00000000 294 + #define MASK_MIIR_MII_WRITE 0x00000008 295 + #define MASK_MIIR_MII_MDO 0x00000004 296 + #define MASK_MIIR_MII_MDI 0x00000002 297 + #define MASK_MIIR_MII_MDC 0x00000001 298 + 299 + /* ST+OP+PHYAD+REGAD+TA */ 300 + #define OP_READ 0x6000 /* ST:01+OP:10+PHYAD+REGAD+TA:Z0 */ 301 + #define OP_WRITE 0x5002 /* ST:01+OP:01+PHYAD+REGAD+TA:10 */ 302 + 303 + /* ------------------------------------------------------------------------- */ 304 + /* Constants for Myson PHY */ 305 + /* ------------------------------------------------------------------------- */ 306 + #define MysonPHYID 0xd0000302 307 + /* 89-7-27 add, (begin) */ 308 + #define MysonPHYID0 0x0302 309 + #define StatusRegister 18 310 + #define SPEED100 0x0400 // bit10 311 + #define FULLMODE 0x0800 // bit11 312 + /* 89-7-27 add, (end) */ 313 + 314 + /* ------------------------------------------------------------------------- */ 315 + /* Constants for Seeq 80225 PHY */ 316 + /* ------------------------------------------------------------------------- */ 317 + #define SeeqPHYID0 0x0016 318 + 319 + #define MIIRegister18 18 320 + #define SPD_DET_100 0x80 321 + #define DPLX_DET_FULL 0x40 322 + 323 + /* ------------------------------------------------------------------------- */ 324 + /* Constants for Ahdoc 101 PHY */ 325 + /* ------------------------------------------------------------------------- */ 326 + #define AhdocPHYID0 0x0022 327 + 328 + #define DiagnosticReg 18 329 + #define DPLX_FULL 0x0800 330 + #define Speed_100 0x0400 331 + 332 + /* 89/6/13 add, */ 333 + /* -------------------------------------------------------------------------- */ 334 + /* Constants */ 335 + /* -------------------------------------------------------------------------- */ 336 + #define MarvellPHYID0 0x0141 337 + #define LevelOnePHYID0 0x0013 338 + 339 + #define MII1000BaseTControlReg 9 340 + #define MII1000BaseTStatusReg 10 341 + #define SpecificReg 17 342 + 343 + /* for 1000BaseT Control Register */ 344 + #define PHYAbletoPerform1000FullDuplex 0x0200 345 + #define PHYAbletoPerform1000HalfDuplex 0x0100 346 + #define PHY1000AbilityMask 0x300 347 + 348 + // for phy specific status register, marvell phy. 349 + #define SpeedMask 0x0c000 350 + #define Speed_1000M 0x08000 351 + #define Speed_100M 0x4000 352 + #define Speed_10M 0 353 + #define Full_Duplex 0x2000 354 + 355 + // 89/12/29 add, for phy specific status register, levelone phy, (begin) 356 + #define LXT1000_100M 0x08000 357 + #define LXT1000_1000M 0x0c000 358 + #define LXT1000_Full 0x200 359 + // 89/12/29 add, for phy specific status register, levelone phy, (end) 360 + 361 + /* for 3-in-1 case, BMCRSR register */ 362 + #define LinkIsUp2 0x00040000 363 + 364 + /* for PHY */ 365 + #define LinkIsUp 0x0004 366 + 367 + 368 + struct netdev_private { 369 + /* Descriptor rings first for alignment. */ 370 + struct fealnx_desc *rx_ring; 371 + struct fealnx_desc *tx_ring; 372 + 373 + dma_addr_t rx_ring_dma; 374 + dma_addr_t tx_ring_dma; 375 + 376 + spinlock_t lock; 377 + 378 + /* Media monitoring timer. */ 379 + struct timer_list timer; 380 + 381 + /* Reset timer */ 382 + struct timer_list reset_timer; 383 + int reset_timer_armed; 384 + unsigned long crvalue_sv; 385 + unsigned long imrvalue_sv; 386 + 387 + /* Frequently used values: keep some adjacent for cache effect. */ 388 + int flags; 389 + struct pci_dev *pci_dev; 390 + unsigned long crvalue; 391 + unsigned long bcrvalue; 392 + unsigned long imrvalue; 393 + struct fealnx_desc *cur_rx; 394 + struct fealnx_desc *lack_rxbuf; 395 + int really_rx_count; 396 + struct fealnx_desc *cur_tx; 397 + struct fealnx_desc *cur_tx_copy; 398 + int really_tx_count; 399 + int free_tx_count; 400 + unsigned int rx_buf_sz; /* Based on MTU+slack. */ 401 + 402 + /* These values are keep track of the transceiver/media in use. */ 403 + unsigned int linkok; 404 + unsigned int line_speed; 405 + unsigned int duplexmode; 406 + unsigned int default_port:4; /* Last dev->if_port value. */ 407 + unsigned int PHYType; 408 + 409 + /* MII transceiver section. */ 410 + int mii_cnt; /* MII device addresses. */ 411 + unsigned char phys[2]; /* MII device addresses. */ 412 + struct mii_if_info mii; 413 + void __iomem *mem; 414 + }; 415 + 416 + 417 + static int mdio_read(struct net_device *dev, int phy_id, int location); 418 + static void mdio_write(struct net_device *dev, int phy_id, int location, int value); 419 + static int netdev_open(struct net_device *dev); 420 + static void getlinktype(struct net_device *dev); 421 + static void getlinkstatus(struct net_device *dev); 422 + static void netdev_timer(struct timer_list *t); 423 + static void reset_timer(struct timer_list *t); 424 + static void fealnx_tx_timeout(struct net_device *dev, unsigned int txqueue); 425 + static void init_ring(struct net_device *dev); 426 + static netdev_tx_t start_tx(struct sk_buff *skb, struct net_device *dev); 427 + static irqreturn_t intr_handler(int irq, void *dev_instance); 428 + static int netdev_rx(struct net_device *dev); 429 + static void set_rx_mode(struct net_device *dev); 430 + static void __set_rx_mode(struct net_device *dev); 431 + static struct net_device_stats *get_stats(struct net_device *dev); 432 + static int mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd); 433 + static const struct ethtool_ops netdev_ethtool_ops; 434 + static int netdev_close(struct net_device *dev); 435 + static void reset_rx_descriptors(struct net_device *dev); 436 + static void reset_tx_descriptors(struct net_device *dev); 437 + 438 + static void stop_nic_rx(void __iomem *ioaddr, long crvalue) 439 + { 440 + int delay = 0x1000; 441 + iowrite32(crvalue & ~(CR_W_RXEN), ioaddr + TCRRCR); 442 + while (--delay) { 443 + if ( (ioread32(ioaddr + TCRRCR) & CR_R_RXSTOP) == CR_R_RXSTOP) 444 + break; 445 + } 446 + } 447 + 448 + 449 + static void stop_nic_rxtx(void __iomem *ioaddr, long crvalue) 450 + { 451 + int delay = 0x1000; 452 + iowrite32(crvalue & ~(CR_W_RXEN+CR_W_TXEN), ioaddr + TCRRCR); 453 + while (--delay) { 454 + if ( (ioread32(ioaddr + TCRRCR) & (CR_R_RXSTOP+CR_R_TXSTOP)) 455 + == (CR_R_RXSTOP+CR_R_TXSTOP) ) 456 + break; 457 + } 458 + } 459 + 460 + static const struct net_device_ops netdev_ops = { 461 + .ndo_open = netdev_open, 462 + .ndo_stop = netdev_close, 463 + .ndo_start_xmit = start_tx, 464 + .ndo_get_stats = get_stats, 465 + .ndo_set_rx_mode = set_rx_mode, 466 + .ndo_eth_ioctl = mii_ioctl, 467 + .ndo_tx_timeout = fealnx_tx_timeout, 468 + .ndo_set_mac_address = eth_mac_addr, 469 + .ndo_validate_addr = eth_validate_addr, 470 + }; 471 + 472 + static int fealnx_init_one(struct pci_dev *pdev, 473 + const struct pci_device_id *ent) 474 + { 475 + struct netdev_private *np; 476 + int i, option, err, irq; 477 + static int card_idx = -1; 478 + char boardname[12]; 479 + void __iomem *ioaddr; 480 + unsigned long len; 481 + unsigned int chip_id = ent->driver_data; 482 + struct net_device *dev; 483 + void *ring_space; 484 + dma_addr_t ring_dma; 485 + u8 addr[ETH_ALEN]; 486 + #ifdef USE_IO_OPS 487 + int bar = 0; 488 + #else 489 + int bar = 1; 490 + #endif 491 + 492 + card_idx++; 493 + sprintf(boardname, "fealnx%d", card_idx); 494 + 495 + option = card_idx < MAX_UNITS ?
options[card_idx] : 0; 496 + 497 + i = pci_enable_device(pdev); 498 + if (i) return i; 499 + pci_set_master(pdev); 500 + 501 + len = pci_resource_len(pdev, bar); 502 + if (len < MIN_REGION_SIZE) { 503 + dev_err(&pdev->dev, 504 + "region size %ld too small, aborting\n", len); 505 + return -ENODEV; 506 + } 507 + 508 + i = pci_request_regions(pdev, boardname); 509 + if (i) 510 + return i; 511 + 512 + irq = pdev->irq; 513 + 514 + ioaddr = pci_iomap(pdev, bar, len); 515 + if (!ioaddr) { 516 + err = -ENOMEM; 517 + goto err_out_res; 518 + } 519 + 520 + dev = alloc_etherdev(sizeof(struct netdev_private)); 521 + if (!dev) { 522 + err = -ENOMEM; 523 + goto err_out_unmap; 524 + } 525 + SET_NETDEV_DEV(dev, &pdev->dev); 526 + 527 + /* read ethernet id */ 528 + for (i = 0; i < 6; ++i) 529 + addr[i] = ioread8(ioaddr + PAR0 + i); 530 + eth_hw_addr_set(dev, addr); 531 + 532 + /* Reset the chip to erase previous misconfiguration. */ 533 + iowrite32(0x00000001, ioaddr + BCR); 534 + 535 + /* Make certain the descriptor lists are aligned. 
*/ 536 + np = netdev_priv(dev); 537 + np->mem = ioaddr; 538 + spin_lock_init(&np->lock); 539 + np->pci_dev = pdev; 540 + np->flags = skel_netdrv_tbl[chip_id].flags; 541 + pci_set_drvdata(pdev, dev); 542 + np->mii.dev = dev; 543 + np->mii.mdio_read = mdio_read; 544 + np->mii.mdio_write = mdio_write; 545 + np->mii.phy_id_mask = 0x1f; 546 + np->mii.reg_num_mask = 0x1f; 547 + 548 + ring_space = dma_alloc_coherent(&pdev->dev, RX_TOTAL_SIZE, &ring_dma, 549 + GFP_KERNEL); 550 + if (!ring_space) { 551 + err = -ENOMEM; 552 + goto err_out_free_dev; 553 + } 554 + np->rx_ring = ring_space; 555 + np->rx_ring_dma = ring_dma; 556 + 557 + ring_space = dma_alloc_coherent(&pdev->dev, TX_TOTAL_SIZE, &ring_dma, 558 + GFP_KERNEL); 559 + if (!ring_space) { 560 + err = -ENOMEM; 561 + goto err_out_free_rx; 562 + } 563 + np->tx_ring = ring_space; 564 + np->tx_ring_dma = ring_dma; 565 + 566 + /* find the connected MII xcvrs */ 567 + if (np->flags == HAS_MII_XCVR) { 568 + int phy, phy_idx = 0; 569 + 570 + for (phy = 1; phy < 32 && phy_idx < ARRAY_SIZE(np->phys); 571 + phy++) { 572 + int mii_status = mdio_read(dev, phy, 1); 573 + 574 + if (mii_status != 0xffff && mii_status != 0x0000) { 575 + np->phys[phy_idx++] = phy; 576 + dev_info(&pdev->dev, 577 + "MII PHY found at address %d, status " 578 + "0x%4.4x.\n", phy, mii_status); 579 + /* get phy type */ 580 + { 581 + unsigned int data; 582 + 583 + data = mdio_read(dev, np->phys[0], 2); 584 + if (data == SeeqPHYID0) 585 + np->PHYType = SeeqPHY; 586 + else if (data == AhdocPHYID0) 587 + np->PHYType = AhdocPHY; 588 + else if (data == MarvellPHYID0) 589 + np->PHYType = MarvellPHY; 590 + else if (data == MysonPHYID0) 591 + np->PHYType = Myson981; 592 + else if (data == LevelOnePHYID0) 593 + np->PHYType = LevelOnePHY; 594 + else 595 + np->PHYType = OtherPHY; 596 + } 597 + } 598 + } 599 + 600 + np->mii_cnt = phy_idx; 601 + if (phy_idx == 0) 602 + dev_warn(&pdev->dev, 603 + "MII PHY not found -- this device may " 604 + "not operate correctly.\n"); 605 
+ } else { 606 + np->phys[0] = 32; 607 + /* 89/6/23 add, (begin) */ 608 + /* get phy type */ 609 + if (ioread32(ioaddr + PHYIDENTIFIER) == MysonPHYID) 610 + np->PHYType = MysonPHY; 611 + else 612 + np->PHYType = OtherPHY; 613 + } 614 + np->mii.phy_id = np->phys[0]; 615 + 616 + if (dev->mem_start) 617 + option = dev->mem_start; 618 + 619 + /* The lower four bits are the media type. */ 620 + if (option > 0) { 621 + if (option & 0x200) 622 + np->mii.full_duplex = 1; 623 + np->default_port = option & 15; 624 + } 625 + 626 + if (card_idx < MAX_UNITS && full_duplex[card_idx] > 0) 627 + np->mii.full_duplex = full_duplex[card_idx]; 628 + 629 + if (np->mii.full_duplex) { 630 + dev_info(&pdev->dev, "Media type forced to Full Duplex.\n"); 631 + /* 89/6/13 add, (begin) */ 632 + // if (np->PHYType==MarvellPHY) 633 + if ((np->PHYType == MarvellPHY) || (np->PHYType == LevelOnePHY)) { 634 + unsigned int data; 635 + 636 + data = mdio_read(dev, np->phys[0], 9); 637 + data = (data & 0xfcff) | 0x0200; 638 + mdio_write(dev, np->phys[0], 9, data); 639 + } 640 + /* 89/6/13 add, (end) */ 641 + if (np->flags == HAS_MII_XCVR) 642 + mdio_write(dev, np->phys[0], MII_ADVERTISE, ADVERTISE_FULL); 643 + else 644 + iowrite32(ADVERTISE_FULL, ioaddr + ANARANLPAR); 645 + np->mii.force_media = 1; 646 + } 647 + 648 + dev->netdev_ops = &netdev_ops; 649 + dev->ethtool_ops = &netdev_ethtool_ops; 650 + dev->watchdog_timeo = TX_TIMEOUT; 651 + 652 + err = register_netdev(dev); 653 + if (err) 654 + goto err_out_free_tx; 655 + 656 + printk(KERN_INFO "%s: %s at %p, %pM, IRQ %d.\n", 657 + dev->name, skel_netdrv_tbl[chip_id].chip_name, ioaddr, 658 + dev->dev_addr, irq); 659 + 660 + return 0; 661 + 662 + err_out_free_tx: 663 + dma_free_coherent(&pdev->dev, TX_TOTAL_SIZE, np->tx_ring, 664 + np->tx_ring_dma); 665 + err_out_free_rx: 666 + dma_free_coherent(&pdev->dev, RX_TOTAL_SIZE, np->rx_ring, 667 + np->rx_ring_dma); 668 + err_out_free_dev: 669 + free_netdev(dev); 670 + err_out_unmap: 671 + pci_iounmap(pdev, 
ioaddr); 672 + err_out_res: 673 + pci_release_regions(pdev); 674 + return err; 675 + } 676 + 677 + 678 + static void fealnx_remove_one(struct pci_dev *pdev) 679 + { 680 + struct net_device *dev = pci_get_drvdata(pdev); 681 + 682 + if (dev) { 683 + struct netdev_private *np = netdev_priv(dev); 684 + 685 + dma_free_coherent(&pdev->dev, TX_TOTAL_SIZE, np->tx_ring, 686 + np->tx_ring_dma); 687 + dma_free_coherent(&pdev->dev, RX_TOTAL_SIZE, np->rx_ring, 688 + np->rx_ring_dma); 689 + unregister_netdev(dev); 690 + pci_iounmap(pdev, np->mem); 691 + free_netdev(dev); 692 + pci_release_regions(pdev); 693 + } else 694 + printk(KERN_ERR "fealnx: remove for unknown device\n"); 695 + } 696 + 697 + 698 + static ulong m80x_send_cmd_to_phy(void __iomem *miiport, int opcode, int phyad, int regad) 699 + { 700 + ulong miir; 701 + int i; 702 + unsigned int mask, data; 703 + 704 + /* enable MII output */ 705 + miir = (ulong) ioread32(miiport); 706 + miir &= 0xfffffff0; 707 + 708 + miir |= MASK_MIIR_MII_WRITE + MASK_MIIR_MII_MDO; 709 + 710 + /* send 32 1's preamble */ 711 + for (i = 0; i < 32; i++) { 712 + /* low MDC; MDO is already high (miir) */ 713 + miir &= ~MASK_MIIR_MII_MDC; 714 + iowrite32(miir, miiport); 715 + 716 + /* high MDC */ 717 + miir |= MASK_MIIR_MII_MDC; 718 + iowrite32(miir, miiport); 719 + } 720 + 721 + /* calculate ST+OP+PHYAD+REGAD+TA */ 722 + data = opcode | (phyad << 7) | (regad << 2); 723 + 724 + /* sent out */ 725 + mask = 0x8000; 726 + while (mask) { 727 + /* low MDC, prepare MDO */ 728 + miir &= ~(MASK_MIIR_MII_MDC + MASK_MIIR_MII_MDO); 729 + if (mask & data) 730 + miir |= MASK_MIIR_MII_MDO; 731 + 732 + iowrite32(miir, miiport); 733 + /* high MDC */ 734 + miir |= MASK_MIIR_MII_MDC; 735 + iowrite32(miir, miiport); 736 + udelay(30); 737 + 738 + /* next */ 739 + mask >>= 1; 740 + if (mask == 0x2 && opcode == OP_READ) 741 + miir &= ~MASK_MIIR_MII_WRITE; 742 + } 743 + return miir; 744 + } 745 + 746 + 747 + static int mdio_read(struct net_device *dev, int phyad, int 
regad) 748 + { 749 + struct netdev_private *np = netdev_priv(dev); 750 + void __iomem *miiport = np->mem + MANAGEMENT; 751 + ulong miir; 752 + unsigned int mask, data; 753 + 754 + miir = m80x_send_cmd_to_phy(miiport, OP_READ, phyad, regad); 755 + 756 + /* read data */ 757 + mask = 0x8000; 758 + data = 0; 759 + while (mask) { 760 + /* low MDC */ 761 + miir &= ~MASK_MIIR_MII_MDC; 762 + iowrite32(miir, miiport); 763 + 764 + /* read MDI */ 765 + miir = ioread32(miiport); 766 + if (miir & MASK_MIIR_MII_MDI) 767 + data |= mask; 768 + 769 + /* high MDC, and wait */ 770 + miir |= MASK_MIIR_MII_MDC; 771 + iowrite32(miir, miiport); 772 + udelay(30); 773 + 774 + /* next */ 775 + mask >>= 1; 776 + } 777 + 778 + /* low MDC */ 779 + miir &= ~MASK_MIIR_MII_MDC; 780 + iowrite32(miir, miiport); 781 + 782 + return data & 0xffff; 783 + } 784 + 785 + 786 + static void mdio_write(struct net_device *dev, int phyad, int regad, int data) 787 + { 788 + struct netdev_private *np = netdev_priv(dev); 789 + void __iomem *miiport = np->mem + MANAGEMENT; 790 + ulong miir; 791 + unsigned int mask; 792 + 793 + miir = m80x_send_cmd_to_phy(miiport, OP_WRITE, phyad, regad); 794 + 795 + /* write data */ 796 + mask = 0x8000; 797 + while (mask) { 798 + /* low MDC, prepare MDO */ 799 + miir &= ~(MASK_MIIR_MII_MDC + MASK_MIIR_MII_MDO); 800 + if (mask & data) 801 + miir |= MASK_MIIR_MII_MDO; 802 + iowrite32(miir, miiport); 803 + 804 + /* high MDC */ 805 + miir |= MASK_MIIR_MII_MDC; 806 + iowrite32(miir, miiport); 807 + 808 + /* next */ 809 + mask >>= 1; 810 + } 811 + 812 + /* low MDC */ 813 + miir &= ~MASK_MIIR_MII_MDC; 814 + iowrite32(miir, miiport); 815 + } 816 + 817 + 818 + static int netdev_open(struct net_device *dev) 819 + { 820 + struct netdev_private *np = netdev_priv(dev); 821 + void __iomem *ioaddr = np->mem; 822 + const int irq = np->pci_dev->irq; 823 + int rc, i; 824 + 825 + iowrite32(0x00000001, ioaddr + BCR); /* Reset */ 826 + 827 + rc = request_irq(irq, intr_handler, IRQF_SHARED, dev->name, 
dev); 828 + if (rc) 829 + return -EAGAIN; 830 + 831 + for (i = 0; i < 3; i++) 832 + iowrite16(((const unsigned short *)dev->dev_addr)[i], 833 + ioaddr + PAR0 + i*2); 834 + 835 + init_ring(dev); 836 + 837 + iowrite32(np->rx_ring_dma, ioaddr + RXLBA); 838 + iowrite32(np->tx_ring_dma, ioaddr + TXLBA); 839 + 840 + /* Initialize other registers. */ 841 + /* Configure the PCI bus bursts and FIFO thresholds. 842 + 486: Set 8 longword burst. 843 + 586: no burst limit. 844 + Burst length 5:3 845 + 0 0 0 1 846 + 0 0 1 4 847 + 0 1 0 8 848 + 0 1 1 16 849 + 1 0 0 32 850 + 1 0 1 64 851 + 1 1 0 128 852 + 1 1 1 256 853 + Wait the specified 50 PCI cycles after a reset by initializing 854 + Tx and Rx queues and the address filter list. 855 + FIXME (Ueimor): optimistic for alpha + posted writes ? */ 856 + 857 + np->bcrvalue = 0x10; /* little-endian, 8 burst length */ 858 + #ifdef __BIG_ENDIAN 859 + np->bcrvalue |= 0x04; /* big-endian */ 860 + #endif 861 + 862 + #if defined(__i386__) && !defined(MODULE) && !defined(CONFIG_UML) 863 + if (boot_cpu_data.x86 <= 4) 864 + np->crvalue = 0xa00; 865 + else 866 + #endif 867 + np->crvalue = 0xe00; /* rx 128 burst length */ 868 + 869 + 870 + // 89/12/29 add, 871 + // 90/1/16 modify, 872 + // np->imrvalue=FBE|TUNF|CNTOVF|RBU|TI|RI; 873 + np->imrvalue = TUNF | CNTOVF | RBU | TI | RI; 874 + if (np->pci_dev->device == 0x891) { 875 + np->bcrvalue |= 0x200; /* set PROG bit */ 876 + np->crvalue |= CR_W_ENH; /* set enhanced bit */ 877 + np->imrvalue |= ETI; 878 + } 879 + iowrite32(np->bcrvalue, ioaddr + BCR); 880 + 881 + if (dev->if_port == 0) 882 + dev->if_port = np->default_port; 883 + 884 + iowrite32(0, ioaddr + RXPDR); 885 + // 89/9/1 modify, 886 + // np->crvalue = 0x00e40001; /* tx store and forward, tx/rx enable */ 887 + np->crvalue |= 0x00e40001; /* tx store and forward, tx/rx enable */ 888 + np->mii.full_duplex = np->mii.force_media; 889 + getlinkstatus(dev); 890 + if (np->linkok) 891 + getlinktype(dev); 892 + __set_rx_mode(dev); 893 + 894 + 
netif_start_queue(dev); 895 + 896 + /* Clear and Enable interrupts by setting the interrupt mask. */ 897 + iowrite32(FBE | TUNF | CNTOVF | RBU | TI | RI, ioaddr + ISR); 898 + iowrite32(np->imrvalue, ioaddr + IMR); 899 + 900 + if (debug) 901 + printk(KERN_DEBUG "%s: Done netdev_open().\n", dev->name); 902 + 903 + /* Set the timer to check for link beat. */ 904 + timer_setup(&np->timer, netdev_timer, 0); 905 + np->timer.expires = RUN_AT(3 * HZ); 906 + 907 + /* timer handler */ 908 + add_timer(&np->timer); 909 + 910 + timer_setup(&np->reset_timer, reset_timer, 0); 911 + np->reset_timer_armed = 0; 912 + return rc; 913 + } 914 + 915 + 916 + static void getlinkstatus(struct net_device *dev) 917 + /* function: Routine will read MII Status Register to get link status. */ 918 + /* input : dev... pointer to the adapter block. */ 919 + /* output : none. */ 920 + { 921 + struct netdev_private *np = netdev_priv(dev); 922 + unsigned int i, DelayTime = 0x1000; 923 + 924 + np->linkok = 0; 925 + 926 + if (np->PHYType == MysonPHY) { 927 + for (i = 0; i < DelayTime; ++i) { 928 + if (ioread32(np->mem + BMCRSR) & LinkIsUp2) { 929 + np->linkok = 1; 930 + return; 931 + } 932 + udelay(100); 933 + } 934 + } else { 935 + for (i = 0; i < DelayTime; ++i) { 936 + if (mdio_read(dev, np->phys[0], MII_BMSR) & BMSR_LSTATUS) { 937 + np->linkok = 1; 938 + return; 939 + } 940 + udelay(100); 941 + } 942 + } 943 + } 944 + 945 + 946 + static void getlinktype(struct net_device *dev) 947 + { 948 + struct netdev_private *np = netdev_priv(dev); 949 + 950 + if (np->PHYType == MysonPHY) { /* 3-in-1 case */ 951 + if (ioread32(np->mem + TCRRCR) & CR_R_FD) 952 + np->duplexmode = 2; /* full duplex */ 953 + else 954 + np->duplexmode = 1; /* half duplex */ 955 + if (ioread32(np->mem + TCRRCR) & CR_R_PS10) 956 + np->line_speed = 1; /* 10M */ 957 + else 958 + np->line_speed = 2; /* 100M */ 959 + } else { 960 + if (np->PHYType == SeeqPHY) { /* this PHY is SEEQ 80225 */ 961 + unsigned int data; 962 + 963 + data = 
mdio_read(dev, np->phys[0], MIIRegister18); 964 + if (data & SPD_DET_100) 965 + np->line_speed = 2; /* 100M */ 966 + else 967 + np->line_speed = 1; /* 10M */ 968 + if (data & DPLX_DET_FULL) 969 + np->duplexmode = 2; /* full duplex mode */ 970 + else 971 + np->duplexmode = 1; /* half duplex mode */ 972 + } else if (np->PHYType == AhdocPHY) { 973 + unsigned int data; 974 + 975 + data = mdio_read(dev, np->phys[0], DiagnosticReg); 976 + if (data & Speed_100) 977 + np->line_speed = 2; /* 100M */ 978 + else 979 + np->line_speed = 1; /* 10M */ 980 + if (data & DPLX_FULL) 981 + np->duplexmode = 2; /* full duplex mode */ 982 + else 983 + np->duplexmode = 1; /* half duplex mode */ 984 + } 985 + /* 89/6/13 add, (begin) */ 986 + else if (np->PHYType == MarvellPHY) { 987 + unsigned int data; 988 + 989 + data = mdio_read(dev, np->phys[0], SpecificReg); 990 + if (data & Full_Duplex) 991 + np->duplexmode = 2; /* full duplex mode */ 992 + else 993 + np->duplexmode = 1; /* half duplex mode */ 994 + data &= SpeedMask; 995 + if (data == Speed_1000M) 996 + np->line_speed = 3; /* 1000M */ 997 + else if (data == Speed_100M) 998 + np->line_speed = 2; /* 100M */ 999 + else 1000 + np->line_speed = 1; /* 10M */ 1001 + } 1002 + /* 89/6/13 add, (end) */ 1003 + /* 89/7/27 add, (begin) */ 1004 + else if (np->PHYType == Myson981) { 1005 + unsigned int data; 1006 + 1007 + data = mdio_read(dev, np->phys[0], StatusRegister); 1008 + 1009 + if (data & SPEED100) 1010 + np->line_speed = 2; 1011 + else 1012 + np->line_speed = 1; 1013 + 1014 + if (data & FULLMODE) 1015 + np->duplexmode = 2; 1016 + else 1017 + np->duplexmode = 1; 1018 + } 1019 + /* 89/7/27 add, (end) */ 1020 + /* 89/12/29 add */ 1021 + else if (np->PHYType == LevelOnePHY) { 1022 + unsigned int data; 1023 + 1024 + data = mdio_read(dev, np->phys[0], SpecificReg); 1025 + if (data & LXT1000_Full) 1026 + np->duplexmode = 2; /* full duplex mode */ 1027 + else 1028 + np->duplexmode = 1; /* half duplex mode */ 1029 + data &= SpeedMask; 1030 + if 
(data == LXT1000_1000M) 1031 + np->line_speed = 3; /* 1000M */ 1032 + else if (data == LXT1000_100M) 1033 + np->line_speed = 2; /* 100M */ 1034 + else 1035 + np->line_speed = 1; /* 10M */ 1036 + } 1037 + np->crvalue &= (~CR_W_PS10) & (~CR_W_FD) & (~CR_W_PS1000); 1038 + if (np->line_speed == 1) 1039 + np->crvalue |= CR_W_PS10; 1040 + else if (np->line_speed == 3) 1041 + np->crvalue |= CR_W_PS1000; 1042 + if (np->duplexmode == 2) 1043 + np->crvalue |= CR_W_FD; 1044 + } 1045 + } 1046 + 1047 + 1048 + /* Take lock before calling this */ 1049 + static void allocate_rx_buffers(struct net_device *dev) 1050 + { 1051 + struct netdev_private *np = netdev_priv(dev); 1052 + 1053 + /* allocate skb for rx buffers */ 1054 + while (np->really_rx_count != RX_RING_SIZE) { 1055 + struct sk_buff *skb; 1056 + 1057 + skb = netdev_alloc_skb(dev, np->rx_buf_sz); 1058 + if (skb == NULL) 1059 + break; /* Better luck next round. */ 1060 + 1061 + while (np->lack_rxbuf->skbuff) 1062 + np->lack_rxbuf = np->lack_rxbuf->next_desc_logical; 1063 + 1064 + np->lack_rxbuf->skbuff = skb; 1065 + np->lack_rxbuf->buffer = dma_map_single(&np->pci_dev->dev, 1066 + skb->data, 1067 + np->rx_buf_sz, 1068 + DMA_FROM_DEVICE); 1069 + np->lack_rxbuf->status = RXOWN; 1070 + ++np->really_rx_count; 1071 + } 1072 + } 1073 + 1074 + 1075 + static void netdev_timer(struct timer_list *t) 1076 + { 1077 + struct netdev_private *np = from_timer(np, t, timer); 1078 + struct net_device *dev = np->mii.dev; 1079 + void __iomem *ioaddr = np->mem; 1080 + int old_crvalue = np->crvalue; 1081 + unsigned int old_linkok = np->linkok; 1082 + unsigned long flags; 1083 + 1084 + if (debug) 1085 + printk(KERN_DEBUG "%s: Media selection timer tick, status %8.8x " 1086 + "config %8.8x.\n", dev->name, ioread32(ioaddr + ISR), 1087 + ioread32(ioaddr + TCRRCR)); 1088 + 1089 + spin_lock_irqsave(&np->lock, flags); 1090 + 1091 + if (np->flags == HAS_MII_XCVR) { 1092 + getlinkstatus(dev); 1093 + if ((old_linkok == 0) && (np->linkok == 1)) { /* we need 
to detect the media type again */ 1094 + getlinktype(dev); 1095 + if (np->crvalue != old_crvalue) { 1096 + stop_nic_rxtx(ioaddr, np->crvalue); 1097 + iowrite32(np->crvalue, ioaddr + TCRRCR); 1098 + } 1099 + } 1100 + } 1101 + 1102 + allocate_rx_buffers(dev); 1103 + 1104 + spin_unlock_irqrestore(&np->lock, flags); 1105 + 1106 + np->timer.expires = RUN_AT(10 * HZ); 1107 + add_timer(&np->timer); 1108 + } 1109 + 1110 + 1111 + /* Take lock before calling */ 1112 + /* Reset chip and disable rx, tx and interrupts */ 1113 + static void reset_and_disable_rxtx(struct net_device *dev) 1114 + { 1115 + struct netdev_private *np = netdev_priv(dev); 1116 + void __iomem *ioaddr = np->mem; 1117 + int delay=51; 1118 + 1119 + /* Reset the chip's Tx and Rx processes. */ 1120 + stop_nic_rxtx(ioaddr, 0); 1121 + 1122 + /* Disable interrupts by clearing the interrupt mask. */ 1123 + iowrite32(0, ioaddr + IMR); 1124 + 1125 + /* Reset the chip to erase previous misconfiguration. */ 1126 + iowrite32(0x00000001, ioaddr + BCR); 1127 + 1128 + /* Ueimor: wait for 50 PCI cycles (and flush posted writes btw). 1129 + We surely wait too long (address+data phase). Who cares? */ 1130 + while (--delay) { 1131 + ioread32(ioaddr + BCR); 1132 + rmb(); 1133 + } 1134 + } 1135 + 1136 + 1137 + /* Take lock before calling */ 1138 + /* Restore chip after reset */ 1139 + static void enable_rxtx(struct net_device *dev) 1140 + { 1141 + struct netdev_private *np = netdev_priv(dev); 1142 + void __iomem *ioaddr = np->mem; 1143 + 1144 + reset_rx_descriptors(dev); 1145 + 1146 + iowrite32(np->tx_ring_dma + ((char*)np->cur_tx - (char*)np->tx_ring), 1147 + ioaddr + TXLBA); 1148 + iowrite32(np->rx_ring_dma + ((char*)np->cur_rx - (char*)np->rx_ring), 1149 + ioaddr + RXLBA); 1150 + 1151 + iowrite32(np->bcrvalue, ioaddr + BCR); 1152 + 1153 + iowrite32(0, ioaddr + RXPDR); 1154 + __set_rx_mode(dev); /* changes np->crvalue, writes it into TCRRCR */ 1155 + 1156 + /* Clear and Enable interrupts by setting the interrupt mask. 
*/ 1157 + iowrite32(FBE | TUNF | CNTOVF | RBU | TI | RI, ioaddr + ISR); 1158 + iowrite32(np->imrvalue, ioaddr + IMR); 1159 + 1160 + iowrite32(0, ioaddr + TXPDR); 1161 + } 1162 + 1163 + 1164 + static void reset_timer(struct timer_list *t) 1165 + { 1166 + struct netdev_private *np = from_timer(np, t, reset_timer); 1167 + struct net_device *dev = np->mii.dev; 1168 + unsigned long flags; 1169 + 1170 + printk(KERN_WARNING "%s: resetting tx and rx machinery\n", dev->name); 1171 + 1172 + spin_lock_irqsave(&np->lock, flags); 1173 + np->crvalue = np->crvalue_sv; 1174 + np->imrvalue = np->imrvalue_sv; 1175 + 1176 + reset_and_disable_rxtx(dev); 1177 + /* works for me without this: 1178 + reset_tx_descriptors(dev); */ 1179 + enable_rxtx(dev); 1180 + netif_start_queue(dev); /* FIXME: or netif_wake_queue(dev); ? */ 1181 + 1182 + np->reset_timer_armed = 0; 1183 + 1184 + spin_unlock_irqrestore(&np->lock, flags); 1185 + } 1186 + 1187 + 1188 + static void fealnx_tx_timeout(struct net_device *dev, unsigned int txqueue) 1189 + { 1190 + struct netdev_private *np = netdev_priv(dev); 1191 + void __iomem *ioaddr = np->mem; 1192 + unsigned long flags; 1193 + int i; 1194 + 1195 + printk(KERN_WARNING 1196 + "%s: Transmit timed out, status %8.8x, resetting...\n", 1197 + dev->name, ioread32(ioaddr + ISR)); 1198 + 1199 + { 1200 + printk(KERN_DEBUG " Rx ring %p: ", np->rx_ring); 1201 + for (i = 0; i < RX_RING_SIZE; i++) 1202 + printk(KERN_CONT " %8.8x", 1203 + (unsigned int) np->rx_ring[i].status); 1204 + printk(KERN_CONT "\n"); 1205 + printk(KERN_DEBUG " Tx ring %p: ", np->tx_ring); 1206 + for (i = 0; i < TX_RING_SIZE; i++) 1207 + printk(KERN_CONT " %4.4x", np->tx_ring[i].status); 1208 + printk(KERN_CONT "\n"); 1209 + } 1210 + 1211 + spin_lock_irqsave(&np->lock, flags); 1212 + 1213 + reset_and_disable_rxtx(dev); 1214 + reset_tx_descriptors(dev); 1215 + enable_rxtx(dev); 1216 + 1217 + spin_unlock_irqrestore(&np->lock, flags); 1218 + 1219 + netif_trans_update(dev); /* prevent tx timeout */ 1220 + 
dev->stats.tx_errors++; 1221 + netif_wake_queue(dev); /* or .._start_.. ?? */ 1222 + } 1223 + 1224 + 1225 + /* Initialize the Rx and Tx rings, along with various 'dev' bits. */ 1226 + static void init_ring(struct net_device *dev) 1227 + { 1228 + struct netdev_private *np = netdev_priv(dev); 1229 + int i; 1230 + 1231 + /* initialize rx variables */ 1232 + np->rx_buf_sz = (dev->mtu <= 1500 ? PKT_BUF_SZ : dev->mtu + 32); 1233 + np->cur_rx = &np->rx_ring[0]; 1234 + np->lack_rxbuf = np->rx_ring; 1235 + np->really_rx_count = 0; 1236 + 1237 + /* initial rx descriptors. */ 1238 + for (i = 0; i < RX_RING_SIZE; i++) { 1239 + np->rx_ring[i].status = 0; 1240 + np->rx_ring[i].control = np->rx_buf_sz << RBSShift; 1241 + np->rx_ring[i].next_desc = np->rx_ring_dma + 1242 + (i + 1)*sizeof(struct fealnx_desc); 1243 + np->rx_ring[i].next_desc_logical = &np->rx_ring[i + 1]; 1244 + np->rx_ring[i].skbuff = NULL; 1245 + } 1246 + 1247 + /* for the last rx descriptor */ 1248 + np->rx_ring[i - 1].next_desc = np->rx_ring_dma; 1249 + np->rx_ring[i - 1].next_desc_logical = np->rx_ring; 1250 + 1251 + /* allocate skb for rx buffers */ 1252 + for (i = 0; i < RX_RING_SIZE; i++) { 1253 + struct sk_buff *skb = netdev_alloc_skb(dev, np->rx_buf_sz); 1254 + 1255 + if (skb == NULL) { 1256 + np->lack_rxbuf = &np->rx_ring[i]; 1257 + break; 1258 + } 1259 + 1260 + ++np->really_rx_count; 1261 + np->rx_ring[i].skbuff = skb; 1262 + np->rx_ring[i].buffer = dma_map_single(&np->pci_dev->dev, 1263 + skb->data, 1264 + np->rx_buf_sz, 1265 + DMA_FROM_DEVICE); 1266 + np->rx_ring[i].status = RXOWN; 1267 + np->rx_ring[i].control |= RXIC; 1268 + } 1269 + 1270 + /* initialize tx variables */ 1271 + np->cur_tx = &np->tx_ring[0]; 1272 + np->cur_tx_copy = &np->tx_ring[0]; 1273 + np->really_tx_count = 0; 1274 + np->free_tx_count = TX_RING_SIZE; 1275 + 1276 + for (i = 0; i < TX_RING_SIZE; i++) { 1277 + np->tx_ring[i].status = 0; 1278 + /* do we need np->tx_ring[i].control = XXX; ?? 
*/ 1279 + np->tx_ring[i].next_desc = np->tx_ring_dma + 1280 + (i + 1)*sizeof(struct fealnx_desc); 1281 + np->tx_ring[i].next_desc_logical = &np->tx_ring[i + 1]; 1282 + np->tx_ring[i].skbuff = NULL; 1283 + } 1284 + 1285 + /* for the last tx descriptor */ 1286 + np->tx_ring[i - 1].next_desc = np->tx_ring_dma; 1287 + np->tx_ring[i - 1].next_desc_logical = &np->tx_ring[0]; 1288 + } 1289 + 1290 + 1291 + static netdev_tx_t start_tx(struct sk_buff *skb, struct net_device *dev) 1292 + { 1293 + struct netdev_private *np = netdev_priv(dev); 1294 + unsigned long flags; 1295 + 1296 + spin_lock_irqsave(&np->lock, flags); 1297 + 1298 + np->cur_tx_copy->skbuff = skb; 1299 + 1300 + #define one_buffer 1301 + #define BPT 1022 1302 + #if defined(one_buffer) 1303 + np->cur_tx_copy->buffer = dma_map_single(&np->pci_dev->dev, skb->data, 1304 + skb->len, DMA_TO_DEVICE); 1305 + np->cur_tx_copy->control = TXIC | TXLD | TXFD | CRCEnable | PADEnable; 1306 + np->cur_tx_copy->control |= (skb->len << PKTSShift); /* pkt size */ 1307 + np->cur_tx_copy->control |= (skb->len << TBSShift); /* buffer size */ 1308 + // 89/12/29 add, 1309 + if (np->pci_dev->device == 0x891) 1310 + np->cur_tx_copy->control |= ETIControl | RetryTxLC; 1311 + np->cur_tx_copy->status = TXOWN; 1312 + np->cur_tx_copy = np->cur_tx_copy->next_desc_logical; 1313 + --np->free_tx_count; 1314 + #elif defined(two_buffer) 1315 + if (skb->len > BPT) { 1316 + struct fealnx_desc *next; 1317 + 1318 + /* for the first descriptor */ 1319 + np->cur_tx_copy->buffer = dma_map_single(&np->pci_dev->dev, 1320 + skb->data, BPT, 1321 + DMA_TO_DEVICE); 1322 + np->cur_tx_copy->control = TXIC | TXFD | CRCEnable | PADEnable; 1323 + np->cur_tx_copy->control |= (skb->len << PKTSShift); /* pkt size */ 1324 + np->cur_tx_copy->control |= (BPT << TBSShift); /* buffer size */ 1325 + 1326 + /* for the last descriptor */ 1327 + next = np->cur_tx_copy->next_desc_logical; 1328 + next->skbuff = skb; 1329 + next->control = TXIC | TXLD | CRCEnable | PADEnable; 1330 
+         next->control |= (skb->len << PKTSShift);	/* pkt size */
+         next->control |= ((skb->len - BPT) << TBSShift);	/* buf size */
+         // 89/12/29 add,
+         if (np->pci_dev->device == 0x891)
+             np->cur_tx_copy->control |= ETIControl | RetryTxLC;
+         next->buffer = dma_map_single(&ep->pci_dev->dev,
+                                       skb->data + BPT, skb->len - BPT,
+                                       DMA_TO_DEVICE);
+
+         next->status = TXOWN;
+         np->cur_tx_copy->status = TXOWN;
+
+         np->cur_tx_copy = next->next_desc_logical;
+         np->free_tx_count -= 2;
+     } else {
+         np->cur_tx_copy->buffer = dma_map_single(&np->pci_dev->dev,
+                                                  skb->data, skb->len,
+                                                  DMA_TO_DEVICE);
+         np->cur_tx_copy->control = TXIC | TXLD | TXFD | CRCEnable | PADEnable;
+         np->cur_tx_copy->control |= (skb->len << PKTSShift);	/* pkt size */
+         np->cur_tx_copy->control |= (skb->len << TBSShift);	/* buffer size */
+         // 89/12/29 add,
+         if (np->pci_dev->device == 0x891)
+             np->cur_tx_copy->control |= ETIControl | RetryTxLC;
+         np->cur_tx_copy->status = TXOWN;
+         np->cur_tx_copy = np->cur_tx_copy->next_desc_logical;
+         --np->free_tx_count;
+     }
+ #endif
+
+     if (np->free_tx_count < 2)
+         netif_stop_queue(dev);
+     ++np->really_tx_count;
+     iowrite32(0, np->mem + TXPDR);
+
+     spin_unlock_irqrestore(&np->lock, flags);
+     return NETDEV_TX_OK;
+ }
+
+
+ /* Take lock before calling */
+ /* Chip probably hosed tx ring. Clean up. */
+ static void reset_tx_descriptors(struct net_device *dev)
+ {
+     struct netdev_private *np = netdev_priv(dev);
+     struct fealnx_desc *cur;
+     int i;
+
+     /* initialize tx variables */
+     np->cur_tx = &np->tx_ring[0];
+     np->cur_tx_copy = &np->tx_ring[0];
+     np->really_tx_count = 0;
+     np->free_tx_count = TX_RING_SIZE;
+
+     for (i = 0; i < TX_RING_SIZE; i++) {
+         cur = &np->tx_ring[i];
+         if (cur->skbuff) {
+             dma_unmap_single(&np->pci_dev->dev, cur->buffer,
+                              cur->skbuff->len, DMA_TO_DEVICE);
+             dev_kfree_skb_any(cur->skbuff);
+             cur->skbuff = NULL;
+         }
+         cur->status = 0;
+         cur->control = 0;	/* needed? */
+         /* probably not needed. We do it for purely paranoid reasons */
+         cur->next_desc = np->tx_ring_dma +
+             (i + 1)*sizeof(struct fealnx_desc);
+         cur->next_desc_logical = &np->tx_ring[i + 1];
+     }
+     /* for the last tx descriptor */
+     np->tx_ring[TX_RING_SIZE - 1].next_desc = np->tx_ring_dma;
+     np->tx_ring[TX_RING_SIZE - 1].next_desc_logical = &np->tx_ring[0];
+ }
+
+
+ /* Take lock and stop rx before calling this */
+ static void reset_rx_descriptors(struct net_device *dev)
+ {
+     struct netdev_private *np = netdev_priv(dev);
+     struct fealnx_desc *cur = np->cur_rx;
+     int i;
+
+     allocate_rx_buffers(dev);
+
+     for (i = 0; i < RX_RING_SIZE; i++) {
+         if (cur->skbuff)
+             cur->status = RXOWN;
+         cur = cur->next_desc_logical;
+     }
+
+     iowrite32(np->rx_ring_dma + ((char*)np->cur_rx - (char*)np->rx_ring),
+               np->mem + RXLBA);
+ }
+
+
+ /* The interrupt handler does all of the Rx thread work and cleans up
+    after the Tx thread. */
+ static irqreturn_t intr_handler(int irq, void *dev_instance)
+ {
+     struct net_device *dev = (struct net_device *) dev_instance;
+     struct netdev_private *np = netdev_priv(dev);
+     void __iomem *ioaddr = np->mem;
+     long boguscnt = max_interrupt_work;
+     unsigned int num_tx = 0;
+     int handled = 0;
+
+     spin_lock(&np->lock);
+
+     iowrite32(0, ioaddr + IMR);
+
+     do {
+         u32 intr_status = ioread32(ioaddr + ISR);
+
+         /* Acknowledge all of the current interrupt sources ASAP. */
+         iowrite32(intr_status, ioaddr + ISR);
+
+         if (debug)
+             printk(KERN_DEBUG "%s: Interrupt, status %4.4x.\n", dev->name,
+                    intr_status);
+
+         if (!(intr_status & np->imrvalue))
+             break;
+
+         handled = 1;
+
+         // 90/1/16 delete,
+         //
+         // if (intr_status & FBE)
+         // {   /* fatal error */
+         //     stop_nic_tx(ioaddr, 0);
+         //     stop_nic_rx(ioaddr, 0);
+         //     break;
+         // };
+
+         if (intr_status & TUNF)
+             iowrite32(0, ioaddr + TXPDR);
+
+         if (intr_status & CNTOVF) {
+             /* missed pkts */
+             dev->stats.rx_missed_errors +=
+                 ioread32(ioaddr + TALLY) & 0x7fff;
+
+             /* crc error */
+             dev->stats.rx_crc_errors +=
+                 (ioread32(ioaddr + TALLY) & 0x7fff0000) >> 16;
+         }
+
+         if (intr_status & (RI | RBU)) {
+             if (intr_status & RI)
+                 netdev_rx(dev);
+             else {
+                 stop_nic_rx(ioaddr, np->crvalue);
+                 reset_rx_descriptors(dev);
+                 iowrite32(np->crvalue, ioaddr + TCRRCR);
+             }
+         }
+
+         while (np->really_tx_count) {
+             long tx_status = np->cur_tx->status;
+             long tx_control = np->cur_tx->control;
+
+             if (!(tx_control & TXLD)) {	/* this pkt is combined by two tx descriptors */
+                 struct fealnx_desc *next;
+
+                 next = np->cur_tx->next_desc_logical;
+                 tx_status = next->status;
+                 tx_control = next->control;
+             }
+
+             if (tx_status & TXOWN)
+                 break;
+
+             if (!(np->crvalue & CR_W_ENH)) {
+                 if (tx_status & (CSL | LC | EC | UDF | HF)) {
+                     dev->stats.tx_errors++;
+                     if (tx_status & EC)
+                         dev->stats.tx_aborted_errors++;
+                     if (tx_status & CSL)
+                         dev->stats.tx_carrier_errors++;
+                     if (tx_status & LC)
+                         dev->stats.tx_window_errors++;
+                     if (tx_status & UDF)
+                         dev->stats.tx_fifo_errors++;
+                     if ((tx_status & HF) && np->mii.full_duplex == 0)
+                         dev->stats.tx_heartbeat_errors++;
+
+                 } else {
+                     dev->stats.tx_bytes +=
+                         ((tx_control & PKTSMask) >> PKTSShift);
+
+                     dev->stats.collisions +=
+                         ((tx_status & NCRMask) >> NCRShift);
+                     dev->stats.tx_packets++;
+                 }
+             } else {
+                 dev->stats.tx_bytes +=
+                     ((tx_control & PKTSMask) >> PKTSShift);
+                 dev->stats.tx_packets++;
+             }
+
+             /* Free the original skb. */
+             dma_unmap_single(&np->pci_dev->dev,
+                              np->cur_tx->buffer,
+                              np->cur_tx->skbuff->len,
+                              DMA_TO_DEVICE);
+             dev_consume_skb_irq(np->cur_tx->skbuff);
+             np->cur_tx->skbuff = NULL;
+             --np->really_tx_count;
+             if (np->cur_tx->control & TXLD) {
+                 np->cur_tx = np->cur_tx->next_desc_logical;
+                 ++np->free_tx_count;
+             } else {
+                 np->cur_tx = np->cur_tx->next_desc_logical;
+                 np->cur_tx = np->cur_tx->next_desc_logical;
+                 np->free_tx_count += 2;
+             }
+             num_tx++;
+         } /* end of for loop */
+
+         if (num_tx && np->free_tx_count >= 2)
+             netif_wake_queue(dev);
+
+         /* read transmit status for enhanced mode only */
+         if (np->crvalue & CR_W_ENH) {
+             long data;
+
+             data = ioread32(ioaddr + TSR);
+             dev->stats.tx_errors += (data & 0xff000000) >> 24;
+             dev->stats.tx_aborted_errors +=
+                 (data & 0xff000000) >> 24;
+             dev->stats.tx_window_errors +=
+                 (data & 0x00ff0000) >> 16;
+             dev->stats.collisions += (data & 0x0000ffff);
+         }
+
+         if (--boguscnt < 0) {
+             printk(KERN_WARNING "%s: Too much work at interrupt, "
+                    "status=0x%4.4x.\n", dev->name, intr_status);
+             if (!np->reset_timer_armed) {
+                 np->reset_timer_armed = 1;
+                 np->reset_timer.expires = RUN_AT(HZ/2);
+                 add_timer(&np->reset_timer);
+                 stop_nic_rxtx(ioaddr, 0);
+                 netif_stop_queue(dev);
+                 /* or netif_tx_disable(dev); ?? */
+                 /* Prevent other paths from enabling tx,rx,intrs */
+                 np->crvalue_sv = np->crvalue;
+                 np->imrvalue_sv = np->imrvalue;
+                 np->crvalue &= ~(CR_W_TXEN | CR_W_RXEN); /* or simply = 0? */
+                 np->imrvalue = 0;
+             }
+
+             break;
+         }
+     } while (1);
+
+     /* read the tally counters */
+     /* missed pkts */
+     dev->stats.rx_missed_errors += ioread32(ioaddr + TALLY) & 0x7fff;
+
+     /* crc error */
+     dev->stats.rx_crc_errors +=
+         (ioread32(ioaddr + TALLY) & 0x7fff0000) >> 16;
+
+     if (debug)
+         printk(KERN_DEBUG "%s: exiting interrupt, status=%#4.4x.\n",
+                dev->name, ioread32(ioaddr + ISR));
+
+     iowrite32(np->imrvalue, ioaddr + IMR);
+
+     spin_unlock(&np->lock);
+
+     return IRQ_RETVAL(handled);
+ }
+
+
+ /* This routine is logically part of the interrupt handler, but separated
+    for clarity and better register allocation. */
+ static int netdev_rx(struct net_device *dev)
+ {
+     struct netdev_private *np = netdev_priv(dev);
+     void __iomem *ioaddr = np->mem;
+
+     /* If EOP is set on the next entry, it's a new packet. Send it up. */
+     while (!(np->cur_rx->status & RXOWN) && np->cur_rx->skbuff) {
+         s32 rx_status = np->cur_rx->status;
+
+         if (np->really_rx_count == 0)
+             break;
+
+         if (debug)
+             printk(KERN_DEBUG "  netdev_rx() status was %8.8x.\n", rx_status);
+
+         if ((!((rx_status & RXFSD) && (rx_status & RXLSD))) ||
+             (rx_status & ErrorSummary)) {
+             if (rx_status & ErrorSummary) {	/* there was a fatal error */
+                 if (debug)
+                     printk(KERN_DEBUG
+                            "%s: Receive error, Rx status %8.8x.\n",
+                            dev->name, rx_status);
+
+                 dev->stats.rx_errors++;	/* end of a packet. */
+                 if (rx_status & (LONGPKT | RUNTPKT))
+                     dev->stats.rx_length_errors++;
+                 if (rx_status & RXER)
+                     dev->stats.rx_frame_errors++;
+                 if (rx_status & CRC)
+                     dev->stats.rx_crc_errors++;
+             } else {
+                 int need_to_reset = 0;
+                 int desno = 0;
+
+                 if (rx_status & RXFSD) {	/* this pkt is too long, over one rx buffer */
+                     struct fealnx_desc *cur;
+
+                     /* check this packet is received completely? */
+                     cur = np->cur_rx;
+                     while (desno <= np->really_rx_count) {
+                         ++desno;
+                         if ((!(cur->status & RXOWN)) &&
+                             (cur->status & RXLSD))
+                             break;
+                         /* goto next rx descriptor */
+                         cur = cur->next_desc_logical;
+                     }
+                     if (desno > np->really_rx_count)
+                         need_to_reset = 1;
+                 } else		/* RXLSD did not find, something error */
+                     need_to_reset = 1;
+
+                 if (need_to_reset == 0) {
+                     int i;
+
+                     dev->stats.rx_length_errors++;
+
+                     /* free all rx descriptors related this long pkt */
+                     for (i = 0; i < desno; ++i) {
+                         if (!np->cur_rx->skbuff) {
+                             printk(KERN_DEBUG
+                                    "%s: I'm scared\n", dev->name);
+                             break;
+                         }
+                         np->cur_rx->status = RXOWN;
+                         np->cur_rx = np->cur_rx->next_desc_logical;
+                     }
+                     continue;
+                 } else {	/* rx error, need to reset this chip */
+                     stop_nic_rx(ioaddr, np->crvalue);
+                     reset_rx_descriptors(dev);
+                     iowrite32(np->crvalue, ioaddr + TCRRCR);
+                 }
+                 break;	/* exit the while loop */
+             }
+         } else {	/* this received pkt is ok */
+
+             struct sk_buff *skb;
+             /* Omit the four octet CRC from the length. */
+             short pkt_len = ((rx_status & FLNGMASK) >> FLNGShift) - 4;
+
+ #ifndef final_version
+             if (debug)
+                 printk(KERN_DEBUG "  netdev_rx() normal Rx pkt length %d"
+                        " status %x.\n", pkt_len, rx_status);
+ #endif
+
+             /* Check if the packet is long enough to accept without copying
+                to a minimally-sized skbuff. */
+             if (pkt_len < rx_copybreak &&
+                 (skb = netdev_alloc_skb(dev, pkt_len + 2)) != NULL) {
+                 skb_reserve(skb, 2);	/* 16 byte align the IP header */
+                 dma_sync_single_for_cpu(&np->pci_dev->dev,
+                                         np->cur_rx->buffer,
+                                         np->rx_buf_sz,
+                                         DMA_FROM_DEVICE);
+                 /* Call copy + cksum if available. */
+
+ #if ! defined(__alpha__)
+                 skb_copy_to_linear_data(skb,
+                     np->cur_rx->skbuff->data, pkt_len);
+                 skb_put(skb, pkt_len);
+ #else
+                 skb_put_data(skb, np->cur_rx->skbuff->data,
+                              pkt_len);
+ #endif
+                 dma_sync_single_for_device(&np->pci_dev->dev,
+                                            np->cur_rx->buffer,
+                                            np->rx_buf_sz,
+                                            DMA_FROM_DEVICE);
+             } else {
+                 dma_unmap_single(&np->pci_dev->dev,
+                                  np->cur_rx->buffer,
+                                  np->rx_buf_sz,
+                                  DMA_FROM_DEVICE);
+                 skb_put(skb = np->cur_rx->skbuff, pkt_len);
+                 np->cur_rx->skbuff = NULL;
+                 --np->really_rx_count;
+             }
+             skb->protocol = eth_type_trans(skb, dev);
+             netif_rx(skb);
+             dev->stats.rx_packets++;
+             dev->stats.rx_bytes += pkt_len;
+         }
+
+         np->cur_rx = np->cur_rx->next_desc_logical;
+     } /* end of while loop */
+
+     /* allocate skb for rx buffers */
+     allocate_rx_buffers(dev);
+
+     return 0;
+ }
+
+
+ static struct net_device_stats *get_stats(struct net_device *dev)
+ {
+     struct netdev_private *np = netdev_priv(dev);
+     void __iomem *ioaddr = np->mem;
+
+     /* The chip only need report frame silently dropped. */
+     if (netif_running(dev)) {
+         dev->stats.rx_missed_errors +=
+             ioread32(ioaddr + TALLY) & 0x7fff;
+         dev->stats.rx_crc_errors +=
+             (ioread32(ioaddr + TALLY) & 0x7fff0000) >> 16;
+     }
+
+     return &dev->stats;
+ }
+
+
+ /* for dev->set_multicast_list */
+ static void set_rx_mode(struct net_device *dev)
+ {
+     spinlock_t *lp = &((struct netdev_private *)netdev_priv(dev))->lock;
+     unsigned long flags;
+     spin_lock_irqsave(lp, flags);
+     __set_rx_mode(dev);
+     spin_unlock_irqrestore(lp, flags);
+ }
+
+
+ /* Take lock before calling */
+ static void __set_rx_mode(struct net_device *dev)
+ {
+     struct netdev_private *np = netdev_priv(dev);
+     void __iomem *ioaddr = np->mem;
+     u32 mc_filter[2];	/* Multicast hash filter */
+     u32 rx_mode;
+
+     if (dev->flags & IFF_PROMISC) {	/* Set promiscuous. */
+         memset(mc_filter, 0xff, sizeof(mc_filter));
+         rx_mode = CR_W_PROM | CR_W_AB | CR_W_AM;
+     } else if ((netdev_mc_count(dev) > multicast_filter_limit) ||
+                (dev->flags & IFF_ALLMULTI)) {
+         /* Too many to match, or accept all multicasts. */
+         memset(mc_filter, 0xff, sizeof(mc_filter));
+         rx_mode = CR_W_AB | CR_W_AM;
+     } else {
+         struct netdev_hw_addr *ha;
+
+         memset(mc_filter, 0, sizeof(mc_filter));
+         netdev_for_each_mc_addr(ha, dev) {
+             unsigned int bit;
+             bit = (ether_crc(ETH_ALEN, ha->addr) >> 26) ^ 0x3F;
+             mc_filter[bit >> 5] |= (1 << bit);
+         }
+         rx_mode = CR_W_AB | CR_W_AM;
+     }
+
+     stop_nic_rxtx(ioaddr, np->crvalue);
+
+     iowrite32(mc_filter[0], ioaddr + MAR0);
+     iowrite32(mc_filter[1], ioaddr + MAR1);
+     np->crvalue &= ~CR_W_RXMODEMASK;
+     np->crvalue |= rx_mode;
+     iowrite32(np->crvalue, ioaddr + TCRRCR);
+ }
+
+ static void netdev_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
+ {
+     struct netdev_private *np = netdev_priv(dev);
+
+     strscpy(info->driver, DRV_NAME, sizeof(info->driver));
+     strscpy(info->bus_info, pci_name(np->pci_dev), sizeof(info->bus_info));
+ }
+
+ static int netdev_get_link_ksettings(struct net_device *dev,
+                                      struct ethtool_link_ksettings *cmd)
+ {
+     struct netdev_private *np = netdev_priv(dev);
+
+     spin_lock_irq(&np->lock);
+     mii_ethtool_get_link_ksettings(&np->mii, cmd);
+     spin_unlock_irq(&np->lock);
+
+     return 0;
+ }
+
+ static int netdev_set_link_ksettings(struct net_device *dev,
+                                      const struct ethtool_link_ksettings *cmd)
+ {
+     struct netdev_private *np = netdev_priv(dev);
+     int rc;
+
+     spin_lock_irq(&np->lock);
+     rc = mii_ethtool_set_link_ksettings(&np->mii, cmd);
+     spin_unlock_irq(&np->lock);
+
+     return rc;
+ }
+
+ static int netdev_nway_reset(struct net_device *dev)
+ {
+     struct netdev_private *np = netdev_priv(dev);
+     return mii_nway_restart(&np->mii);
+ }
+
+ static u32 netdev_get_link(struct net_device *dev)
+ {
+     struct netdev_private *np = netdev_priv(dev);
+     return mii_link_ok(&np->mii);
+ }
+
+ static u32 netdev_get_msglevel(struct net_device *dev)
+ {
+     return debug;
+ }
+
+ static void netdev_set_msglevel(struct net_device *dev, u32 value)
+ {
+     debug = value;
+ }
+
+ static const struct ethtool_ops netdev_ethtool_ops = {
+     .get_drvinfo = netdev_get_drvinfo,
+     .nway_reset = netdev_nway_reset,
+     .get_link = netdev_get_link,
+     .get_msglevel = netdev_get_msglevel,
+     .set_msglevel = netdev_set_msglevel,
+     .get_link_ksettings = netdev_get_link_ksettings,
+     .set_link_ksettings = netdev_set_link_ksettings,
+ };
+
+ static int mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+ {
+     struct netdev_private *np = netdev_priv(dev);
+     int rc;
+
+     if (!netif_running(dev))
+         return -EINVAL;
+
+     spin_lock_irq(&np->lock);
+     rc = generic_mii_ioctl(&np->mii, if_mii(rq), cmd, NULL);
+     spin_unlock_irq(&np->lock);
+
+     return rc;
+ }
+
+
+ static int netdev_close(struct net_device *dev)
+ {
+     struct netdev_private *np = netdev_priv(dev);
+     void __iomem *ioaddr = np->mem;
+     int i;
+
+     netif_stop_queue(dev);
+
+     /* Disable interrupts by clearing the interrupt mask. */
+     iowrite32(0x0000, ioaddr + IMR);
+
+     /* Stop the chip's Tx and Rx processes. */
+     stop_nic_rxtx(ioaddr, 0);
+
+     del_timer_sync(&np->timer);
+     del_timer_sync(&np->reset_timer);
+
+     free_irq(np->pci_dev->irq, dev);
+
+     /* Free all the skbuffs in the Rx queue. */
+     for (i = 0; i < RX_RING_SIZE; i++) {
+         struct sk_buff *skb = np->rx_ring[i].skbuff;
+
+         np->rx_ring[i].status = 0;
+         if (skb) {
+             dma_unmap_single(&np->pci_dev->dev,
+                              np->rx_ring[i].buffer, np->rx_buf_sz,
+                              DMA_FROM_DEVICE);
+             dev_kfree_skb(skb);
+             np->rx_ring[i].skbuff = NULL;
+         }
+     }
+
+     for (i = 0; i < TX_RING_SIZE; i++) {
+         struct sk_buff *skb = np->tx_ring[i].skbuff;
+
+         if (skb) {
+             dma_unmap_single(&np->pci_dev->dev,
+                              np->tx_ring[i].buffer, skb->len,
+                              DMA_TO_DEVICE);
+             dev_kfree_skb(skb);
+             np->tx_ring[i].skbuff = NULL;
+         }
+     }
+
+     return 0;
+ }
+
+ static const struct pci_device_id fealnx_pci_tbl[] = {
+     {0x1516, 0x0800, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
+     {0x1516, 0x0803, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1},
+     {0x1516, 0x0891, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 2},
+     {} /* terminate list */
+ };
+ MODULE_DEVICE_TABLE(pci, fealnx_pci_tbl);
+
+
+ static struct pci_driver fealnx_driver = {
+     .name = "fealnx",
+     .id_table = fealnx_pci_tbl,
+     .probe = fealnx_init_one,
+     .remove = fealnx_remove_one,
+ };
+
+ module_pci_driver(fealnx_driver);
+1 -1
drivers/net/ethernet/intel/ice/ice_dcb.c
···
  tlv->ouisubtype = htonl(ouisubtype);

  buf[0] = dcbcfg->pfc.pfccap & 0xF;
- buf[1] = dcbcfg->pfc.pfcena & 0xF;
+ buf[1] = dcbcfg->pfc.pfcena;
  }

  /**
+4 -2
drivers/net/ethernet/intel/ice/ice_ethtool.c
···
   * SFP modules only ever use page 0.
   */
  if (page == 0 || !(data[0x2] & 0x4)) {
+ 	u32 copy_len;
+
  	/* If i2c bus is busy due to slow page change or
  	 * link management access, call can fail. This is normal.
  	 * So we retry this a few times.
···
  	}

  	/* Make sure we have enough room for the new block */
- 	if ((i + SFF_READ_BLOCK_SIZE) < ee->len)
- 		memcpy(data + i, value, SFF_READ_BLOCK_SIZE);
+ 	copy_len = min_t(u32, SFF_READ_BLOCK_SIZE, ee->len - i);
+ 	memcpy(data + i, value, copy_len);
  }
  }
  return 0;
+10 -7
drivers/net/ethernet/intel/ice/ice_lib.c
···
  ice_for_each_rxq(vsi, i)
  	ice_tx_xsk_pool(vsi, i);

- return ret;
+ return 0;
  }

  /**
···
  	return ret;

  /* allocate memory for Tx/Rx ring stat pointers */
- if (ice_vsi_alloc_stat_arrays(vsi))
+ ret = ice_vsi_alloc_stat_arrays(vsi);
+ if (ret)
  	goto unroll_vsi_alloc;

  ice_alloc_fd_res(vsi);

- if (ice_vsi_get_qs(vsi)) {
+ ret = ice_vsi_get_qs(vsi);
+ if (ret) {
  	dev_err(dev, "Failed to allocate queues. vsi->idx = %d\n",
  		vsi->idx);
  	goto unroll_vsi_alloc_stat;
···
  	break;
  default:
  	/* clean up the resources and exit */
+ 	ret = -EINVAL;
  	goto unroll_vsi_init;
  }

···
  if (vsi_flags & ICE_VSI_FLAG_INIT) {
  	ret = -EIO;
  	goto err_vsi_cfg_tc_lan;
- } else {
- 	kfree(coalesce);
- 	return ice_schedule_reset(pf, ICE_RESET_PFR);
  }
+
+ kfree(coalesce);
+ return ice_schedule_reset(pf, ICE_RESET_PFR);
  }

  ice_vsi_realloc_stat_arrays(vsi, prev_txq, prev_rxq);
···
  dev = ice_pf_to_dev(pf);
  if (vsi->tc_cfg.ena_tc == ena_tc &&
      vsi->mqprio_qopt.mode != TC_MQPRIO_MODE_CHANNEL)
- 	return ret;
+ 	return 0;

  ice_for_each_traffic_class(i) {
  	/* build bitmap of enabled TCs */
+4 -4
drivers/net/ethernet/intel/ice/ice_tc_lib.c
···
  if (match.mask->vlan_priority) {
  	fltr->flags |= ICE_TC_FLWR_FIELD_VLAN_PRIO;
  	headers->vlan_hdr.vlan_prio =
- 		cpu_to_be16((match.key->vlan_priority <<
- 			     VLAN_PRIO_SHIFT) & VLAN_PRIO_MASK);
+ 		be16_encode_bits(match.key->vlan_priority,
+ 				 VLAN_PRIO_MASK);
  }

  if (match.mask->vlan_tpid)
···
  if (match.mask->vlan_priority) {
  	fltr->flags |= ICE_TC_FLWR_FIELD_CVLAN_PRIO;
  	headers->cvlan_hdr.vlan_prio =
- 		cpu_to_be16((match.key->vlan_priority <<
- 			     VLAN_PRIO_SHIFT) & VLAN_PRIO_MASK);
+ 		be16_encode_bits(match.key->vlan_priority,
+ 				 VLAN_PRIO_MASK);
  }
  }
+5
drivers/net/ethernet/marvell/octeontx2/af/rvu.h
···
  int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc);
  int rvu_cpt_init(struct rvu *rvu);

+ #define NDC_AF_BANK_MASK	GENMASK_ULL(7, 0)
+ #define NDC_AF_BANK_LINE_MASK	GENMASK_ULL(31, 16)
+
  /* CN10K RVU */
  int rvu_set_channels_base(struct rvu *rvu);
  void rvu_program_channels(struct rvu *rvu);
···
  static inline void rvu_dbg_init(struct rvu *rvu) {}
  static inline void rvu_dbg_exit(struct rvu *rvu) {}
  #endif
+
+ int rvu_ndc_fix_locked_cacheline(struct rvu *rvu, int blkaddr);

  /* RVU Switch */
  void rvu_switch_enable(struct rvu *rvu);
+3 -4
drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
···
  CPT_IE_TYPE = 3,
  };

- #define NDC_MAX_BANK(rvu, blk_addr) (rvu_read64(rvu, \
- 		blk_addr, NDC_AF_CONST) & 0xFF)
-
  #define rvu_dbg_NULL NULL
  #define rvu_dbg_open_NULL NULL
···
  struct nix_hw *nix_hw;
  struct rvu *rvu;
  int bank, max_bank;
+ u64 ndc_af_const;

  if (blk_addr == BLKADDR_NDC_NPA0) {
  	rvu = s->private;
···
  	rvu = nix_hw->rvu;
  }

- max_bank = NDC_MAX_BANK(rvu, blk_addr);
+ ndc_af_const = rvu_read64(rvu, blk_addr, NDC_AF_CONST);
+ max_bank = FIELD_GET(NDC_AF_BANK_MASK, ndc_af_const);
  for (bank = 0; bank < max_bank; bank++) {
  	seq_printf(s, "BANK:%d\n", bank);
  	seq_printf(s, "\tHits:\t%lld\n",
+15 -1
drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
···
  struct nix_aq_res_s *result;
  int timeout = 1000;
  u64 reg, head;
+ int ret;

  result = (struct nix_aq_res_s *)aq->res->base;
···
  	return -EBUSY;
  }

- if (result->compcode != NIX_AQ_COMP_GOOD)
+ if (result->compcode != NIX_AQ_COMP_GOOD) {
  	/* TODO: Replace this with some error code */
+ 	if (result->compcode == NIX_AQ_COMP_CTX_FAULT ||
+ 	    result->compcode == NIX_AQ_COMP_LOCKERR ||
+ 	    result->compcode == NIX_AQ_COMP_CTX_POISON) {
+ 		ret = rvu_ndc_fix_locked_cacheline(rvu, BLKADDR_NDC_NIX0_RX);
+ 		ret |= rvu_ndc_fix_locked_cacheline(rvu, BLKADDR_NDC_NIX0_TX);
+ 		ret |= rvu_ndc_fix_locked_cacheline(rvu, BLKADDR_NDC_NIX1_RX);
+ 		ret |= rvu_ndc_fix_locked_cacheline(rvu, BLKADDR_NDC_NIX1_TX);
+ 		if (ret)
+ 			dev_err(rvu->dev,
+ 				"%s: Not able to unlock cachelines\n", __func__);
+ 	}
+
  	return -EBUSY;
+ }

  return 0;
  }
+56 -2
drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c
···
   * Copyright (C) 2018 Marvell.
   *
   */
-
+ #include <linux/bitfield.h>
  #include <linux/module.h>
  #include <linux/pci.h>
···
  	return -EBUSY;
  }

- if (result->compcode != NPA_AQ_COMP_GOOD)
+ if (result->compcode != NPA_AQ_COMP_GOOD) {
  	/* TODO: Replace this with some error code */
+ 	if (result->compcode == NPA_AQ_COMP_CTX_FAULT ||
+ 	    result->compcode == NPA_AQ_COMP_LOCKERR ||
+ 	    result->compcode == NPA_AQ_COMP_CTX_POISON) {
+ 		if (rvu_ndc_fix_locked_cacheline(rvu, BLKADDR_NDC_NPA0))
+ 			dev_err(rvu->dev,
+ 				"%s: Not able to unlock cachelines\n", __func__);
+ 	}
+
  	return -EBUSY;
+ }

  return 0;
  }
···
  npa_lf_hwctx_disable(rvu, &ctx_req);

  npa_ctx_free(rvu, pfvf);
+ }
+
+ /* Due to an Hardware errata, in some corner cases, AQ context lock
+  * operations can result in a NDC way getting into an illegal state
+  * of not valid but locked.
+  *
+  * This API solves the problem by clearing the lock bit of the NDC block.
+  * The operation needs to be done for each line of all the NDC banks.
+  */
+ int rvu_ndc_fix_locked_cacheline(struct rvu *rvu, int blkaddr)
+ {
+ 	int bank, max_bank, line, max_line, err;
+ 	u64 reg, ndc_af_const;
+
+ 	/* Set the ENABLE bit(63) to '0' */
+ 	reg = rvu_read64(rvu, blkaddr, NDC_AF_CAMS_RD_INTERVAL);
+ 	rvu_write64(rvu, blkaddr, NDC_AF_CAMS_RD_INTERVAL, reg & GENMASK_ULL(62, 0));
+
+ 	/* Poll until the BUSY bits(47:32) are set to '0' */
+ 	err = rvu_poll_reg(rvu, blkaddr, NDC_AF_CAMS_RD_INTERVAL, GENMASK_ULL(47, 32), true);
+ 	if (err) {
+ 		dev_err(rvu->dev, "Timed out while polling for NDC CAM busy bits.\n");
+ 		return err;
+ 	}
+
+ 	ndc_af_const = rvu_read64(rvu, blkaddr, NDC_AF_CONST);
+ 	max_bank = FIELD_GET(NDC_AF_BANK_MASK, ndc_af_const);
+ 	max_line = FIELD_GET(NDC_AF_BANK_LINE_MASK, ndc_af_const);
+ 	for (bank = 0; bank < max_bank; bank++) {
+ 		for (line = 0; line < max_line; line++) {
+ 			/* Check if 'cache line valid bit(63)' is not set
+ 			 * but 'cache line lock bit(60)' is set and on
+ 			 * success, reset the lock bit(60).
+ 			 */
+ 			reg = rvu_read64(rvu, blkaddr,
+ 					 NDC_AF_BANKX_LINEX_METADATA(bank, line));
+ 			if (!(reg & BIT_ULL(63)) && (reg & BIT_ULL(60))) {
+ 				rvu_write64(rvu, blkaddr,
+ 					    NDC_AF_BANKX_LINEX_METADATA(bank, line),
+ 					    reg & ~BIT_ULL(60));
+ 			}
+ 		}
+ 	}
+
+ 	return 0;
  }
+3
drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
···
  #define NDC_AF_INTR_ENA_W1S		(0x00068)
  #define NDC_AF_INTR_ENA_W1C		(0x00070)
  #define NDC_AF_ACTIVE_PC		(0x00078)
+ #define NDC_AF_CAMS_RD_INTERVAL		(0x00080)
  #define NDC_AF_BP_TEST_ENABLE		(0x001F8)
  #define NDC_AF_BP_TEST(a)		(0x00200 | (a) << 3)
  #define NDC_AF_BLK_RST			(0x002F0)
···
  		(0x00F00 | (a) << 5 | (b) << 4)
  #define NDC_AF_BANKX_HIT_PC(a)		(0x01000 | (a) << 3)
  #define NDC_AF_BANKX_MISS_PC(a)		(0x01100 | (a) << 3)
+ #define NDC_AF_BANKX_LINEX_METADATA(a, b) \
+ 		(0x10000 | (a) << 12 | (b) << 3)

  /* LBK */
  #define LBK_CONST			(0x10ull)
+2 -1
drivers/net/ethernet/mediatek/mtk_eth_soc.c
···
  mcr_cur = mtk_r32(mac->hw, MTK_MAC_MCR(mac->id));
  mcr_new = mcr_cur;
  mcr_new |= MAC_MCR_IPG_CFG | MAC_MCR_FORCE_MODE |
- 	   MAC_MCR_BACKOFF_EN | MAC_MCR_BACKPR_EN | MAC_MCR_FORCE_LINK;
+ 	   MAC_MCR_BACKOFF_EN | MAC_MCR_BACKPR_EN | MAC_MCR_FORCE_LINK |
+ 	   MAC_MCR_RX_FIFO_CLR_DIS;

  /* Only update control register when needed! */
  if (mcr_new != mcr_cur)
+1
drivers/net/ethernet/mediatek/mtk_eth_soc.h
···
  #define MAC_MCR_FORCE_MODE	BIT(15)
  #define MAC_MCR_TX_EN		BIT(14)
  #define MAC_MCR_RX_EN		BIT(13)
+ #define MAC_MCR_RX_FIFO_CLR_DIS	BIT(12)
  #define MAC_MCR_BACKOFF_EN	BIT(9)
  #define MAC_MCR_BACKPR_EN	BIT(8)
  #define MAC_MCR_FORCE_RX_FC	BIT(5)
+1 -1
drivers/net/ethernet/microchip/lan966x/lan966x_police.c
···
  	return -EINVAL;
  }

- err = lan966x_police_del(port, port->tc.police_id);
+ err = lan966x_police_del(port, POL_IDX_PORT + port->chip_port);
  if (err) {
  	NL_SET_ERR_MSG_MOD(extack,
  			   "Failed to add policer to port");
+16 -16
drivers/net/ethernet/microchip/sparx5/sparx5_dcb.c
···
  	return 0;
  }

+ static int sparx5_dcb_ieee_delapp(struct net_device *dev, struct dcb_app *app)
+ {
+ 	int err;
+
+ 	if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP)
+ 		err = sparx5_dcb_ieee_dscp_setdel(dev, app, dcb_ieee_delapp);
+ 	else
+ 		err = dcb_ieee_delapp(dev, app);
+
+ 	if (err < 0)
+ 		return err;
+
+ 	return sparx5_dcb_app_update(dev);
+ }
+
  static int sparx5_dcb_ieee_setapp(struct net_device *dev, struct dcb_app *app)
  {
  	struct dcb_app app_itr;
···
  if (prio) {
  	app_itr = *app;
  	app_itr.priority = prio;
- 	dcb_ieee_delapp(dev, &app_itr);
+ 	sparx5_dcb_ieee_delapp(dev, &app_itr);
  }

  if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP)
···

  out:
  	return err;
- }
-
- static int sparx5_dcb_ieee_delapp(struct net_device *dev, struct dcb_app *app)
- {
- 	int err;
-
- 	if (app->selector == IEEE_8021QAZ_APP_SEL_DSCP)
- 		err = sparx5_dcb_ieee_dscp_setdel(dev, app, dcb_ieee_delapp);
- 	else
- 		err = dcb_ieee_delapp(dev, app);
-
- 	if (err < 0)
- 		return err;
-
- 	return sparx5_dcb_app_update(dev);
  }

  static int sparx5_dcb_setapptrust(struct net_device *dev, u8 *selectors,
+4 -3
drivers/net/ethernet/netronome/nfp/nfd3/dp.c
···

  /* Do not reorder - tso may adjust pkt cnt, vlan may override fields */
  nfp_nfd3_tx_tso(r_vec, txbuf, txd, skb, md_bytes);
- nfp_nfd3_tx_csum(dp, r_vec, txbuf, txd, skb);
+ if (ipsec)
+ 	nfp_nfd3_ipsec_tx(txd, skb);
+ else
+ 	nfp_nfd3_tx_csum(dp, r_vec, txbuf, txd, skb);
  if (skb_vlan_tag_present(skb) && dp->ctrl & NFP_NET_CFG_CTRL_TXVLAN) {
  	txd->flags |= NFD3_DESC_TX_VLAN;
  	txd->vlan = cpu_to_le16(skb_vlan_tag_get(skb));
  }

- if (ipsec)
- 	nfp_nfd3_ipsec_tx(txd, skb);
  /* Gather DMA */
  if (nr_frags > 0) {
  	__le64 second_half;
+23 -2
drivers/net/ethernet/netronome/nfp/nfd3/ipsec.c
···
  void nfp_nfd3_ipsec_tx(struct nfp_nfd3_tx_desc *txd, struct sk_buff *skb)
  {
  	struct xfrm_state *x = xfrm_input_state(skb);
+ 	struct xfrm_offload *xo = xfrm_offload(skb);
+ 	struct iphdr *iph = ip_hdr(skb);
+ 	int l4_proto;

  	if (x->xso.dev && (x->xso.dev->features & NETIF_F_HW_ESP_TX_CSUM)) {
- 		txd->flags |= NFD3_DESC_TX_CSUM | NFD3_DESC_TX_IP4_CSUM |
- 			      NFD3_DESC_TX_TCP_CSUM | NFD3_DESC_TX_UDP_CSUM;
+ 		txd->flags |= NFD3_DESC_TX_CSUM;
+
+ 		if (iph->version == 4)
+ 			txd->flags |= NFD3_DESC_TX_IP4_CSUM;
+
+ 		if (x->props.mode == XFRM_MODE_TRANSPORT)
+ 			l4_proto = xo->proto;
+ 		else if (x->props.mode == XFRM_MODE_TUNNEL)
+ 			l4_proto = xo->inner_ipproto;
+ 		else
+ 			return;
+
+ 		switch (l4_proto) {
+ 		case IPPROTO_UDP:
+ 			txd->flags |= NFD3_DESC_TX_UDP_CSUM;
+ 			return;
+ 		case IPPROTO_TCP:
+ 			txd->flags |= NFD3_DESC_TX_TCP_CSUM;
+ 			return;
+ 		}
  	}
  }
+4 -2
drivers/net/ethernet/netronome/nfp/nfdk/dp.c
···
  if (!skb_is_gso(skb)) {
  	real_len = skb->len;
  	/* Metadata desc */
- 	metadata = nfp_nfdk_tx_csum(dp, r_vec, 1, skb, metadata);
+ 	if (!ipsec)
+ 		metadata = nfp_nfdk_tx_csum(dp, r_vec, 1, skb, metadata);
  	txd->raw = cpu_to_le64(metadata);
  	txd++;
  } else {
···
  	(txd + 1)->raw = nfp_nfdk_tx_tso(r_vec, txbuf, skb);
  	real_len = txbuf->real_len;
  	/* Metadata desc */
- 	metadata = nfp_nfdk_tx_csum(dp, r_vec, txbuf->pkt_cnt, skb, metadata);
+ 	if (!ipsec)
+ 		metadata = nfp_nfdk_tx_csum(dp, r_vec, txbuf->pkt_cnt, skb, metadata);
  	txd->raw = cpu_to_le64(metadata);
  	txd += 2;
  	txbuf++;
+6 -2
drivers/net/ethernet/netronome/nfp/nfdk/ipsec.c
···
  u64 nfp_nfdk_ipsec_tx(u64 flags, struct sk_buff *skb)
  {
  	struct xfrm_state *x = xfrm_input_state(skb);
+ 	struct iphdr *iph = ip_hdr(skb);

- 	if (x->xso.dev && (x->xso.dev->features & NETIF_F_HW_ESP_TX_CSUM))
- 		flags |= NFDK_DESC_TX_L3_CSUM | NFDK_DESC_TX_L4_CSUM;
+ 	if (x->xso.dev && (x->xso.dev->features & NETIF_F_HW_ESP_TX_CSUM)) {
+ 		if (iph->version == 4)
+ 			flags |= NFDK_DESC_TX_L3_CSUM;
+ 		flags |= NFDK_DESC_TX_L4_CSUM;
+ 	}

  	return flags;
  }
+4
drivers/net/ethernet/netronome/nfp/nfp_net_common.c
···
  #include <net/tls.h>
  #include <net/vxlan.h>
  #include <net/xdp_sock_drv.h>
+ #include <net/xfrm.h>

  #include "nfpcore/nfp_dev.h"
  #include "nfpcore/nfp_nsp.h"
···
  	if (unlikely(hdrlen > NFP_NET_LSO_MAX_HDR_SZ - 8))
  		features &= ~NETIF_F_GSO_MASK;
  }

+ if (xfrm_offload(skb))
+ 	return features;
+
  /* VXLAN/GRE check */
  switch (vlan_get_protocol(skb)) {
+1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···

  phylink_ethtool_get_wol(priv->phylink, &wol);
  device_set_wakeup_capable(priv->device, !!wol.supported);
+ device_set_wakeup_enable(priv->device, !!wol.wolopts);
  }

  return ret;
+2
drivers/net/ieee802154/ca8210.c
···
   * packet
   */
  mac_len = ieee802154_hdr_peek_addrs(skb, &header);
+ if (mac_len < 0)
+ 	return mac_len;

  secspec.security_level = header.sec.level;
  secspec.key_id_mode = header.sec.key_id_mode;
+32
drivers/net/phy/microchip.c
···
  	return genphy_config_aneg(phydev);
  }

+ static void lan88xx_link_change_notify(struct phy_device *phydev)
+ {
+ 	int temp;
+
+ 	/* At forced 100 F/H mode, chip may fail to set mode correctly
+ 	 * when cable is switched between long(~50+m) and short one.
+ 	 * As workaround, set to 10 before setting to 100
+ 	 * at forced 100 F/H mode.
+ 	 */
+ 	if (!phydev->autoneg && phydev->speed == 100) {
+ 		/* disable phy interrupt */
+ 		temp = phy_read(phydev, LAN88XX_INT_MASK);
+ 		temp &= ~LAN88XX_INT_MASK_MDINTPIN_EN_;
+ 		phy_write(phydev, LAN88XX_INT_MASK, temp);
+
+ 		temp = phy_read(phydev, MII_BMCR);
+ 		temp &= ~(BMCR_SPEED100 | BMCR_SPEED1000);
+ 		phy_write(phydev, MII_BMCR, temp); /* set to 10 first */
+ 		temp |= BMCR_SPEED100;
+ 		phy_write(phydev, MII_BMCR, temp); /* set to 100 later */
+
+ 		/* clear pending interrupt generated while workaround */
+ 		temp = phy_read(phydev, LAN88XX_INT_STS);
+
+ 		/* enable phy interrupt back */
+ 		temp = phy_read(phydev, LAN88XX_INT_MASK);
+ 		temp |= LAN88XX_INT_MASK_MDINTPIN_EN_;
+ 		phy_write(phydev, LAN88XX_INT_MASK, temp);
+ 	}
+ }
+
  static struct phy_driver microchip_phy_driver[] = {
  {
  	.phy_id = 0x0007c132,
···
  	.config_init = lan88xx_config_init,
  	.config_aneg = lan88xx_config_aneg,
+ 	.link_change_notify = lan88xx_link_change_notify,

  	.config_intr = lan88xx_phy_config_intr,
  	.handle_interrupt = lan88xx_handle_interrupt,
+2 -8
drivers/net/phy/phy_device.c
···
    if (phydrv->flags & PHY_IS_INTERNAL)
        phydev->is_internal = true;
 
-   mutex_lock(&phydev->lock);
-
    /* Deassert the reset signal */
    phy_device_reset(phydev, 0);
 
···
     */
    err = genphy_c45_read_eee_adv(phydev, phydev->advertising_eee);
    if (err)
-       return err;
+       goto out;
 
    /* There is no "enabled" flag. If PHY is advertising, assume it is
     * kind of enabled.
···
    phydev->state = PHY_READY;
 
 out:
-   /* Assert the reset signal */
+   /* Re-assert the reset signal on error */
    if (err)
        phy_device_reset(phydev, 1);
-
-   mutex_unlock(&phydev->lock);
 
    return err;
 }
···
 
    cancel_delayed_work_sync(&phydev->state_queue);
 
-   mutex_lock(&phydev->lock);
    phydev->state = PHY_DOWN;
-   mutex_unlock(&phydev->lock);
 
    sfp_bus_del_upstream(phydev->sfp_bus);
    phydev->sfp_bus = NULL;
+3 -11
drivers/net/phy/smsc.c
···
 };
 
 struct smsc_phy_priv {
-   u16 intmask;
    bool energy_enable;
 };
 
···
 
 static int smsc_phy_config_intr(struct phy_device *phydev)
 {
-   struct smsc_phy_priv *priv = phydev->priv;
    int rc;
 
    if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
···
        if (rc)
            return rc;
 
-       priv->intmask = MII_LAN83C185_ISF_INT4 | MII_LAN83C185_ISF_INT6;
-       if (priv->energy_enable)
-           priv->intmask |= MII_LAN83C185_ISF_INT7;
-
-       rc = phy_write(phydev, MII_LAN83C185_IM, priv->intmask);
+       rc = phy_write(phydev, MII_LAN83C185_IM,
+                      MII_LAN83C185_ISF_INT_PHYLIB_EVENTS);
    } else {
-       priv->intmask = 0;
-
        rc = phy_write(phydev, MII_LAN83C185_IM, 0);
        if (rc)
            return rc;
···
 
 static irqreturn_t smsc_phy_handle_interrupt(struct phy_device *phydev)
 {
-   struct smsc_phy_priv *priv = phydev->priv;
    int irq_status;
 
    irq_status = phy_read(phydev, MII_LAN83C185_ISF);
···
        return IRQ_NONE;
    }
 
-   if (!(irq_status & priv->intmask))
+   if (!(irq_status & MII_LAN83C185_ISF_INT_PHYLIB_EVENTS))
        return IRQ_NONE;
 
    phy_trigger_machine(phydev);
+5
drivers/net/usb/cdc_mbim.c
···
      .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
    },
 
+   /* Telit FE990 */
+   { USB_DEVICE_AND_INTERFACE_INFO(0x1bc7, 0x1081, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
+     .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
+   },
+
    /* default entry */
    { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
      .driver_info = (unsigned long)&cdc_mbim_info_zlp,
+1 -26
drivers/net/usb/lan78xx.c
···
 static void lan78xx_link_status_change(struct net_device *net)
 {
    struct phy_device *phydev = net->phydev;
-   int temp;
 
-   /* At forced 100 F/H mode, chip may fail to set mode correctly
-    * when cable is switched between long(~50+m) and short one.
-    * As workaround, set to 10 before setting to 100
-    * at forced 100 F/H mode.
-    */
-   if (!phydev->autoneg && (phydev->speed == 100)) {
-       /* disable phy interrupt */
-       temp = phy_read(phydev, LAN88XX_INT_MASK);
-       temp &= ~LAN88XX_INT_MASK_MDINTPIN_EN_;
-       phy_write(phydev, LAN88XX_INT_MASK, temp);
-
-       temp = phy_read(phydev, MII_BMCR);
-       temp &= ~(BMCR_SPEED100 | BMCR_SPEED1000);
-       phy_write(phydev, MII_BMCR, temp); /* set to 10 first */
-       temp |= BMCR_SPEED100;
-       phy_write(phydev, MII_BMCR, temp); /* set to 100 later */
-
-       /* clear pending interrupt generated while workaround */
-       temp = phy_read(phydev, LAN88XX_INT_STS);
-
-       /* enable phy interrupt back */
-       temp = phy_read(phydev, LAN88XX_INT_MASK);
-       temp |= LAN88XX_INT_MASK_MDINTPIN_EN_;
-       phy_write(phydev, LAN88XX_INT_MASK, temp);
-   }
+   phy_print_status(phydev);
 }
 
 static int irq_map(struct irq_domain *d, unsigned int irq,
+1
drivers/net/usb/qmi_wwan.c
···
    {QMI_QUIRK_SET_DTR(0x1bc7, 0x1057, 2)}, /* Telit FN980 */
    {QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)}, /* Telit LN920 */
    {QMI_QUIRK_SET_DTR(0x1bc7, 0x1070, 2)}, /* Telit FN990 */
+   {QMI_QUIRK_SET_DTR(0x1bc7, 0x1080, 2)}, /* Telit FE990 */
    {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)},    /* Telit ME910 */
    {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)},    /* Telit ME910 dual modem */
    {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)},    /* Telit LE920 */
+4
drivers/nfc/fdp/i2c.c
···
                       len, sizeof(**fw_vsc_cfg),
                       GFP_KERNEL);
 
+       if (!*fw_vsc_cfg)
+           goto alloc_err;
+
        r = device_property_read_u8_array(dev, FDP_DP_FW_VSC_CFG_NAME,
                          *fw_vsc_cfg, len);
 
···
        *fw_vsc_cfg = NULL;
    }
 
+alloc_err:
    dev_dbg(dev, "Clock type: %d, clock frequency: %d, VSC: %s",
        *clock_type, *clock_freq, *fw_vsc_cfg != NULL ? "yes" : "no");
 }
+7
include/net/netfilter/nf_tproxy.h
···
    return false;
 }
 
+static inline void nf_tproxy_twsk_deschedule_put(struct inet_timewait_sock *tw)
+{
+   local_bh_disable();
+   inet_twsk_deschedule_put(tw);
+   local_bh_enable();
+}
+
 /* assign a socket to the skb -- consumes sk */
 static inline void nf_tproxy_assign_sock(struct sk_buff *skb, struct sock *sk)
 {
+1 -1
include/uapi/linux/fou.h
···
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause */
 /* Do not edit directly, auto-generated from: */
 /* Documentation/netlink/specs/fou.yaml */
 /* YNL-GEN uapi header */
+1 -1
include/uapi/linux/netdev.h
···
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause */
 /* Do not edit directly, auto-generated from: */
 /* Documentation/netlink/specs/netdev.yaml */
 /* YNL-GEN uapi header */
+1
kernel/bpf/btf.c
···
    struct btf *btf = env->btf;
    u16 i;
 
+   env->resolve_mode = RESOLVE_TBD;
    for_each_vsi_from(i, v->next_member, v->t, vsi) {
        u32 var_type_id = vsi->type, type_id, type_size = 0;
        const struct btf_type *var_type = btf_type_by_id(env->btf,
+13 -6
net/bpf/test_run.c
···
 struct xdp_page_head {
    struct xdp_buff orig_ctx;
    struct xdp_buff ctx;
-   struct xdp_frame frm;
-   u8 data[];
+   union {
+       /* ::data_hard_start starts here */
+       DECLARE_FLEX_ARRAY(struct xdp_frame, frame);
+       DECLARE_FLEX_ARRAY(u8, data);
+   };
 };
 
 struct xdp_test_data {
···
    u32 frame_cnt;
 };
 
+/* tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c:%MAX_PKT_SIZE
+ * must be updated accordingly if this gets changed, otherwise BPF selftests
+ * will fail.
+ */
 #define TEST_XDP_FRAME_SIZE (PAGE_SIZE - sizeof(struct xdp_page_head))
 #define TEST_XDP_MAX_BATCH 256
 
···
    headroom -= meta_len;
 
    new_ctx = &head->ctx;
-   frm = &head->frm;
-   data = &head->data;
+   frm = head->frame;
+   data = head->data;
    memcpy(data + headroom, orig_ctx->data_meta, frm_len);
 
    xdp_init_buff(new_ctx, TEST_XDP_FRAME_SIZE, &xdp->rxq);
···
    head->ctx.data = head->orig_ctx.data;
    head->ctx.data_meta = head->orig_ctx.data_meta;
    head->ctx.data_end = head->orig_ctx.data_end;
-   xdp_update_frame_from_buff(&head->ctx, &head->frm);
+   xdp_update_frame_from_buff(&head->ctx, head->frame);
 }
 
 static int xdp_recv_frames(struct xdp_frame **frames, int nframes,
···
    head = phys_to_virt(page_to_phys(page));
    reset_ctx(head);
    ctx = &head->ctx;
-   frm = &head->frm;
+   frm = head->frame;
    xdp->frame_cnt++;
 
    act = bpf_prog_run_xdp(prog, ctx);
+3
net/caif/caif_usb.c
···
    struct usb_device *usbdev;
    int res;
 
+   if (what == NETDEV_UNREGISTER && dev->reg_state >= NETREG_UNREGISTERED)
+       return 0;
+
    /* Check whether we have a NCM device, and find its VID/PID. */
    if (!(dev->dev.parent && dev->dev.parent->driver &&
          strcmp(dev->dev.parent->driver->name, "cdc_ncm") == 0))
+1 -1
net/core/netdev-genl-gen.c
···
-// SPDX-License-Identifier: BSD-3-Clause
+// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /* Do not edit directly, auto-generated from: */
 /* Documentation/netlink/specs/netdev.yaml */
 /* YNL-GEN kernel source */
+1 -1
net/core/netdev-genl-gen.h
···
-/* SPDX-License-Identifier: BSD-3-Clause */
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
 /* Do not edit directly, auto-generated from: */
 /* Documentation/netlink/specs/netdev.yaml */
 /* YNL-GEN kernel header */
+23 -8
net/core/skbuff.c
···
 #ifdef HAVE_SKB_SMALL_HEAD_CACHE
    if (obj_size <= SKB_SMALL_HEAD_CACHE_SIZE &&
        !(flags & KMALLOC_NOT_NORMAL_BITS)) {
-
-       /* skb_small_head_cache has non power of two size,
-        * likely forcing SLUB to use order-3 pages.
-        * We deliberately attempt a NOMEMALLOC allocation only.
-        */
        obj = kmem_cache_alloc_node(skb_small_head_cache,
                        flags | __GFP_NOMEMALLOC | __GFP_NOWARN,
                        node);
-       if (obj) {
-           *size = SKB_SMALL_HEAD_CACHE_SIZE;
+       *size = SKB_SMALL_HEAD_CACHE_SIZE;
+       if (obj || !(gfp_pfmemalloc_allowed(flags)))
            goto out;
-       }
+       /* Try again but now we are using pfmemalloc reserves */
+       ret_pfmemalloc = true;
+       obj = kmem_cache_alloc_node(skb_small_head_cache, flags, node);
+       goto out;
    }
 #endif
    *size = obj_size = kmalloc_size_roundup(obj_size);
···
 }
 EXPORT_SYMBOL(skb_realloc_headroom);
 
+/* Note: We plan to rework this in linux-6.4 */
 int __skb_unclone_keeptruesize(struct sk_buff *skb, gfp_t pri)
 {
    unsigned int saved_end_offset, saved_truesize;
···
 
    if (likely(skb_end_offset(skb) == saved_end_offset))
        return 0;
+
+#ifdef HAVE_SKB_SMALL_HEAD_CACHE
+   /* We can not change skb->end if the original or new value
+    * is SKB_SMALL_HEAD_HEADROOM, as it might break skb_kfree_head().
+    */
+   if (saved_end_offset == SKB_SMALL_HEAD_HEADROOM ||
+       skb_end_offset(skb) == SKB_SMALL_HEAD_HEADROOM) {
+       /* We think this path should not be taken.
+        * Add a temporary trace to warn us just in case.
+        */
+       pr_err_once("__skb_unclone_keeptruesize() skb_end_offset() %u -> %u\n",
+               saved_end_offset, skb_end_offset(skb));
+       WARN_ON_ONCE(1);
+       return 0;
+   }
+#endif
 
    shinfo = skb_shinfo(skb);
 
+2 -1
net/core/sock.c
···
 static void sk_leave_memory_pressure(struct sock *sk)
 {
    if (sk->sk_prot->leave_memory_pressure) {
-       sk->sk_prot->leave_memory_pressure(sk);
+       INDIRECT_CALL_INET_1(sk->sk_prot->leave_memory_pressure,
+                    tcp_leave_memory_pressure, sk);
    } else {
        unsigned long *memory_pressure = sk->sk_prot->memory_pressure;
 
+1 -1
net/ieee802154/nl802154.c
···
        return -EOPNOTSUPP;
    }
 
-   if (!nla_get_u8(info->attrs[NL802154_ATTR_SCAN_TYPE])) {
+   if (!info->attrs[NL802154_ATTR_SCAN_TYPE]) {
        NL_SET_ERR_MSG(info->extack, "Malformed request, missing scan type");
        return -EINVAL;
    }
+1 -1
net/ipv4/fou_nl.c
···
-// SPDX-License-Identifier: BSD-3-Clause
+// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /* Do not edit directly, auto-generated from: */
 /* Documentation/netlink/specs/fou.yaml */
 /* YNL-GEN kernel source */
+1 -1
net/ipv4/fou_nl.h
···
-/* SPDX-License-Identifier: BSD-3-Clause */
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
 /* Do not edit directly, auto-generated from: */
 /* Documentation/netlink/specs/fou.yaml */
 /* YNL-GEN kernel header */
+1 -1
net/ipv4/netfilter/nf_tproxy_ipv4.c
···
                        hp->source, lport ? lport : hp->dest,
                        skb->dev, NF_TPROXY_LOOKUP_LISTENER);
        if (sk2) {
-           inet_twsk_deschedule_put(inet_twsk(sk));
+           nf_tproxy_twsk_deschedule_put(inet_twsk(sk));
            sk = sk2;
        }
    }
+6
net/ipv4/tcp_bpf.c
···
    if (unlikely(flags & MSG_ERRQUEUE))
        return inet_recv_error(sk, msg, len, addr_len);
 
+   if (!len)
+       return 0;
+
    psock = sk_psock_get(sk);
    if (unlikely(!psock))
        return tcp_recvmsg(sk, msg, len, flags, addr_len);
···
 
    if (unlikely(flags & MSG_ERRQUEUE))
        return inet_recv_error(sk, msg, len, addr_len);
+
+   if (!len)
+       return 0;
 
    psock = sk_psock_get(sk);
    if (unlikely(!psock))
+3
net/ipv4/udp_bpf.c
···
    if (unlikely(flags & MSG_ERRQUEUE))
        return inet_recv_error(sk, msg, len, addr_len);
 
+   if (!len)
+       return 0;
+
    psock = sk_psock_get(sk);
    if (unlikely(!psock))
        return sk_udp_recvmsg(sk, msg, len, flags, addr_len);
+1
net/ipv6/ila/ila_xlat.c
···
 
    rcu_read_lock();
 
+   ret = -ESRCH;
    ila = ila_lookup_by_params(&xp, ilan);
    if (ila) {
        ret = ila_dump_info(ila,
+1 -1
net/ipv6/netfilter/nf_tproxy_ipv6.c
···
                       lport ? lport : hp->dest,
                       skb->dev, NF_TPROXY_LOOKUP_LISTENER);
        if (sk2) {
-           inet_twsk_deschedule_put(inet_twsk(sk));
+           nf_tproxy_twsk_deschedule_put(inet_twsk(sk));
            sk = sk2;
        }
    }
+2 -2
net/netfilter/nf_conntrack_core.c
···
 #define GC_SCAN_MAX_DURATION    msecs_to_jiffies(10)
 #define GC_SCAN_EXPIRED_MAX     (64000u / HZ)
 
-#define MIN_CHAINLEN   8u
-#define MAX_CHAINLEN   (32u - MIN_CHAINLEN)
+#define MIN_CHAINLEN   50u
+#define MAX_CHAINLEN   (80u - MIN_CHAINLEN)
 
 static struct conntrack_gc_work conntrack_gc_work;
 
+7 -7
net/netfilter/nf_conntrack_netlink.c
···
 }
 
 #ifdef CONFIG_NF_CONNTRACK_MARK
-static int ctnetlink_dump_mark(struct sk_buff *skb, const struct nf_conn *ct)
+static int ctnetlink_dump_mark(struct sk_buff *skb, const struct nf_conn *ct,
+                  bool dump)
 {
    u32 mark = READ_ONCE(ct->mark);
 
-   if (!mark)
+   if (!mark && !dump)
        return 0;
 
    if (nla_put_be32(skb, CTA_MARK, htonl(mark)))
···
    return -1;
 }
 #else
-#define ctnetlink_dump_mark(a, b) (0)
+#define ctnetlink_dump_mark(a, b, c) (0)
 #endif
 
 #ifdef CONFIG_NF_CONNTRACK_SECMARK
···
 static int ctnetlink_dump_info(struct sk_buff *skb, struct nf_conn *ct)
 {
    if (ctnetlink_dump_status(skb, ct) < 0 ||
-       ctnetlink_dump_mark(skb, ct) < 0 ||
+       ctnetlink_dump_mark(skb, ct, true) < 0 ||
        ctnetlink_dump_secctx(skb, ct) < 0 ||
        ctnetlink_dump_id(skb, ct) < 0 ||
        ctnetlink_dump_use(skb, ct) < 0 ||
···
    }
 
 #ifdef CONFIG_NF_CONNTRACK_MARK
-   if (events & (1 << IPCT_MARK) &&
-       ctnetlink_dump_mark(skb, ct) < 0)
+   if (ctnetlink_dump_mark(skb, ct, events & (1 << IPCT_MARK)))
        goto nla_put_failure;
 #endif
    nlmsg_end(skb, nlh);
···
        goto nla_put_failure;
 
 #ifdef CONFIG_NF_CONNTRACK_MARK
-   if (ctnetlink_dump_mark(skb, ct) < 0)
+   if (ctnetlink_dump_mark(skb, ct, true) < 0)
        goto nla_put_failure;
 #endif
    if (ctnetlink_dump_labels(skb, ct) < 0)
+4
net/netfilter/nft_last.c
···
 static int nft_last_clone(struct nft_expr *dst, const struct nft_expr *src)
 {
    struct nft_last_priv *priv_dst = nft_expr_priv(dst);
+   struct nft_last_priv *priv_src = nft_expr_priv(src);
 
    priv_dst->last = kzalloc(sizeof(*priv_dst->last), GFP_ATOMIC);
    if (!priv_dst->last)
        return -ENOMEM;
+
+   priv_dst->last->set = priv_src->last->set;
+   priv_dst->last->jiffies = priv_src->last->jiffies;
 
    return 0;
 }
+5 -1
net/netfilter/nft_quota.c
···
 static int nft_quota_clone(struct nft_expr *dst, const struct nft_expr *src)
 {
    struct nft_quota *priv_dst = nft_expr_priv(dst);
+   struct nft_quota *priv_src = nft_expr_priv(src);
+
+   priv_dst->quota = priv_src->quota;
+   priv_dst->flags = priv_src->flags;
 
    priv_dst->consumed = kmalloc(sizeof(*priv_dst->consumed), GFP_ATOMIC);
    if (!priv_dst->consumed)
        return -ENOMEM;
 
-   atomic64_set(priv_dst->consumed, 0);
+   *priv_dst->consumed = *priv_src->consumed;
 
    return 0;
 }
+1 -1
net/nfc/netlink.c
···
    return rc;
 
 error:
-   kfree(cb_context);
    device_unlock(&dev->dev);
+   kfree(cb_context);
    return rc;
 }
 
+3
net/sched/act_connmark.c
···
        nparms->zone = parm->zone;
 
        ret = 0;
+   } else {
+       err = ret;
+       goto out_free;
    }
 
    err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack);
+6 -4
net/sched/cls_flower.c
···
        fnew->flags = nla_get_u32(tb[TCA_FLOWER_FLAGS]);
 
        if (!tc_flags_valid(fnew->flags)) {
+           kfree(fnew);
            err = -EINVAL;
-           goto errout;
+           goto errout_tb;
        }
    }
 
···
        }
        spin_unlock(&tp->lock);
 
-       if (err)
-           goto errout;
+       if (err) {
+           kfree(fnew);
+           goto errout_tb;
+       }
    }
    fnew->handle = handle;
 
···
    fl_mask_put(head, fnew->mask);
 errout_idr:
    idr_remove(&head->handle_idr, fnew->handle);
-errout:
    __fl_put(fnew);
 errout_tb:
    kfree(tb);
+8 -5
net/smc/af_smc.c
···
 {
    struct sock *sk = sock->sk;
    struct smc_sock *smc;
-   int rc = -EPIPE;
+   int rc;
 
    smc = smc_sk(sk);
    lock_sock(sk);
-   if ((sk->sk_state != SMC_ACTIVE) &&
-       (sk->sk_state != SMC_APPCLOSEWAIT1) &&
-       (sk->sk_state != SMC_INIT))
-       goto out;
 
+   /* SMC does not support connect with fastopen */
    if (msg->msg_flags & MSG_FASTOPEN) {
+       /* not connected yet, fallback */
        if (sk->sk_state == SMC_INIT && !smc->connect_nonblock) {
            rc = smc_switch_to_fallback(smc, SMC_CLC_DECL_OPTUNSUPP);
            if (rc)
···
            rc = -EINVAL;
            goto out;
        }
+   } else if ((sk->sk_state != SMC_ACTIVE) &&
+          (sk->sk_state != SMC_APPCLOSEWAIT1) &&
+          (sk->sk_state != SMC_INIT)) {
+       rc = -EPIPE;
+       goto out;
    }
 
    if (smc->use_fallback) {
+4 -7
net/socket.c
···
  *
  * Returns the &file bound with @sock, implicitly storing it
  * in sock->file. If dname is %NULL, sets to "".
- * On failure the return is a ERR pointer (see linux/err.h).
+ *
+ * On failure @sock is released, and an ERR pointer is returned.
+ *
  * This function uses GFP_KERNEL internally.
  */
 
···
 struct file *__sys_socket_file(int family, int type, int protocol)
 {
    struct socket *sock;
-   struct file *file;
    int flags;
 
    sock = __sys_socket_create(family, type, protocol);
···
    if (SOCK_NONBLOCK != O_NONBLOCK && (flags & SOCK_NONBLOCK))
        flags = (flags & ~SOCK_NONBLOCK) | O_NONBLOCK;
 
-   file = sock_alloc_file(sock, flags, NULL);
-   if (IS_ERR(file))
-       sock_release(sock);
-
-   return file;
+   return sock_alloc_file(sock, flags, NULL);
 }
 
 int __sys_socket(int family, int type, int protocol)
+2
net/tls/tls_device.c
···
            zc_pfrag.offset = iter_offset.offset;
            zc_pfrag.size = copy;
            tls_append_frag(record, &zc_pfrag, copy);
+
+           iter_offset.offset += copy;
        } else if (copy) {
            copy = min_t(size_t, copy, pfrag->size - pfrag->offset);
 
+5 -18
net/tls/tls_main.c
···
        rc = -EINVAL;
        goto out;
    }
-   lock_sock(sk);
    memcpy(crypto_info_aes_gcm_128->iv,
           cctx->iv + TLS_CIPHER_AES_GCM_128_SALT_SIZE,
           TLS_CIPHER_AES_GCM_128_IV_SIZE);
    memcpy(crypto_info_aes_gcm_128->rec_seq, cctx->rec_seq,
           TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);
-   release_sock(sk);
    if (copy_to_user(optval,
             crypto_info_aes_gcm_128,
             sizeof(*crypto_info_aes_gcm_128)))
···
        rc = -EINVAL;
        goto out;
    }
-   lock_sock(sk);
    memcpy(crypto_info_aes_gcm_256->iv,
           cctx->iv + TLS_CIPHER_AES_GCM_256_SALT_SIZE,
           TLS_CIPHER_AES_GCM_256_IV_SIZE);
    memcpy(crypto_info_aes_gcm_256->rec_seq, cctx->rec_seq,
           TLS_CIPHER_AES_GCM_256_REC_SEQ_SIZE);
-   release_sock(sk);
    if (copy_to_user(optval,
             crypto_info_aes_gcm_256,
             sizeof(*crypto_info_aes_gcm_256)))
···
        rc = -EINVAL;
        goto out;
    }
-   lock_sock(sk);
    memcpy(aes_ccm_128->iv,
           cctx->iv + TLS_CIPHER_AES_CCM_128_SALT_SIZE,
           TLS_CIPHER_AES_CCM_128_IV_SIZE);
    memcpy(aes_ccm_128->rec_seq, cctx->rec_seq,
           TLS_CIPHER_AES_CCM_128_REC_SEQ_SIZE);
-   release_sock(sk);
    if (copy_to_user(optval, aes_ccm_128, sizeof(*aes_ccm_128)))
        rc = -EFAULT;
    break;
···
        rc = -EINVAL;
        goto out;
    }
-   lock_sock(sk);
    memcpy(chacha20_poly1305->iv,
           cctx->iv + TLS_CIPHER_CHACHA20_POLY1305_SALT_SIZE,
           TLS_CIPHER_CHACHA20_POLY1305_IV_SIZE);
    memcpy(chacha20_poly1305->rec_seq, cctx->rec_seq,
           TLS_CIPHER_CHACHA20_POLY1305_REC_SEQ_SIZE);
-   release_sock(sk);
    if (copy_to_user(optval, chacha20_poly1305,
            sizeof(*chacha20_poly1305)))
        rc = -EFAULT;
···
        rc = -EINVAL;
        goto out;
    }
-   lock_sock(sk);
    memcpy(sm4_gcm_info->iv,
           cctx->iv + TLS_CIPHER_SM4_GCM_SALT_SIZE,
           TLS_CIPHER_SM4_GCM_IV_SIZE);
    memcpy(sm4_gcm_info->rec_seq, cctx->rec_seq,
           TLS_CIPHER_SM4_GCM_REC_SEQ_SIZE);
-   release_sock(sk);
    if (copy_to_user(optval, sm4_gcm_info, sizeof(*sm4_gcm_info)))
        rc = -EFAULT;
    break;
···
        rc = -EINVAL;
        goto out;
    }
-   lock_sock(sk);
    memcpy(sm4_ccm_info->iv,
           cctx->iv + TLS_CIPHER_SM4_CCM_SALT_SIZE,
           TLS_CIPHER_SM4_CCM_IV_SIZE);
    memcpy(sm4_ccm_info->rec_seq, cctx->rec_seq,
           TLS_CIPHER_SM4_CCM_REC_SEQ_SIZE);
-   release_sock(sk);
    if (copy_to_user(optval, sm4_ccm_info, sizeof(*sm4_ccm_info)))
        rc = -EFAULT;
    break;
···
        rc = -EINVAL;
        goto out;
    }
-   lock_sock(sk);
    memcpy(crypto_info_aria_gcm_128->iv,
           cctx->iv + TLS_CIPHER_ARIA_GCM_128_SALT_SIZE,
           TLS_CIPHER_ARIA_GCM_128_IV_SIZE);
    memcpy(crypto_info_aria_gcm_128->rec_seq, cctx->rec_seq,
           TLS_CIPHER_ARIA_GCM_128_REC_SEQ_SIZE);
-   release_sock(sk);
    if (copy_to_user(optval,
             crypto_info_aria_gcm_128,
             sizeof(*crypto_info_aria_gcm_128)))
···
        rc = -EINVAL;
        goto out;
    }
-   lock_sock(sk);
    memcpy(crypto_info_aria_gcm_256->iv,
           cctx->iv + TLS_CIPHER_ARIA_GCM_256_SALT_SIZE,
           TLS_CIPHER_ARIA_GCM_256_IV_SIZE);
    memcpy(crypto_info_aria_gcm_256->rec_seq, cctx->rec_seq,
           TLS_CIPHER_ARIA_GCM_256_REC_SEQ_SIZE);
-   release_sock(sk);
    if (copy_to_user(optval,
             crypto_info_aria_gcm_256,
             sizeof(*crypto_info_aria_gcm_256)))
···
    if (len < sizeof(value))
        return -EINVAL;
 
-   lock_sock(sk);
    value = -EINVAL;
    if (ctx->rx_conf == TLS_SW || ctx->rx_conf == TLS_HW)
        value = ctx->rx_no_pad;
-   release_sock(sk);
    if (value < 0)
        return value;
 
···
                char __user *optval, int __user *optlen)
 {
    int rc = 0;
+
+   lock_sock(sk);
 
    switch (optname) {
    case TLS_TX:
···
        rc = -ENOPROTOOPT;
        break;
    }
+
+   release_sock(sk);
+
    return rc;
 }
 
+20 -8
net/tls/tls_sw.c
···
               MSG_CMSG_COMPAT))
        return -EOPNOTSUPP;
 
-   mutex_lock(&tls_ctx->tx_lock);
+   ret = mutex_lock_interruptible(&tls_ctx->tx_lock);
+   if (ret)
+       return ret;
    lock_sock(sk);
 
    if (unlikely(msg->msg_controllen)) {
···
              MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY))
        return -EOPNOTSUPP;
 
-   mutex_lock(&tls_ctx->tx_lock);
+   ret = mutex_lock_interruptible(&tls_ctx->tx_lock);
+   if (ret)
+       return ret;
    lock_sock(sk);
    ret = tls_sw_do_sendpage(sk, page, offset, size, flags);
    release_sock(sk);
···
        else
            err = process_rx_list(ctx, msg, &control, 0,
                          async_copy_bytes, is_peek);
-       decrypted = max(err, 0);
+       decrypted += max(err, 0);
    }
 
    copied += decrypted;
···
 
    if (!test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask))
        return;
-   mutex_lock(&tls_ctx->tx_lock);
-   lock_sock(sk);
-   tls_tx_records(sk, -1);
-   release_sock(sk);
-   mutex_unlock(&tls_ctx->tx_lock);
+
+   if (mutex_trylock(&tls_ctx->tx_lock)) {
+       lock_sock(sk);
+       tls_tx_records(sk, -1);
+       release_sock(sk);
+       mutex_unlock(&tls_ctx->tx_lock);
+   } else if (!test_and_set_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask)) {
+       /* Someone is holding the tx_lock, they will likely run Tx
+        * and cancel the work on their way out of the lock section.
+        * Schedule a long delay just in case.
+        */
+       schedule_delayed_work(&ctx->tx_work.work, msecs_to_jiffies(10));
+   }
 }
 
 static bool tls_is_tx_ready(struct tls_sw_context_tx *ctx)
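An illustrative user-space model (plain Python threading, hypothetical names, not kernel code) of the trylock-or-reschedule pattern the tx_work fix above adopts: if the tx_lock is contended, re-mark the work as scheduled and defer instead of sleeping on the mutex from inside the work callback.

```python
import threading


def tx_work_handler(tx_lock, state, run_tx, reschedule):
    # Mirrors test_and_clear_bit(BIT_TX_SCHEDULED, ...): bail out
    # if the work was not actually scheduled.
    if not state.pop('scheduled', False):
        return 'skipped'
    # mutex_trylock() analogue: never block inside the work handler.
    if tx_lock.acquire(blocking=False):
        try:
            run_tx()
        finally:
            tx_lock.release()
        return 'ran'
    # The lock holder will likely flush Tx on its way out of the lock
    # section; re-set the scheduled bit and defer as a safety net.
    state['scheduled'] = True
    reschedule()
    return 'deferred'
```

With the lock free the handler runs Tx; if another thread holds the lock it returns 'deferred' after marking itself scheduled and rescheduling.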
+8 -2
net/unix/af_unix.c
···
 #define UNIX_SKB_FRAGS_SZ (PAGE_SIZE << get_order(32768))
 
 #if IS_ENABLED(CONFIG_AF_UNIX_OOB)
-static int queue_oob(struct socket *sock, struct msghdr *msg, struct sock *other)
+static int queue_oob(struct socket *sock, struct msghdr *msg, struct sock *other,
+            struct scm_cookie *scm, bool fds_sent)
 {
    struct unix_sock *ousk = unix_sk(other);
    struct sk_buff *skb;
···
    if (!skb)
        return err;
 
+   err = unix_scm_to_skb(scm, skb, !fds_sent);
+   if (err < 0) {
+       kfree_skb(skb);
+       return err;
+   }
    skb_put(skb, 1);
    err = skb_copy_datagram_from_iter(skb, 0, &msg->msg_iter, 1);
 
···
 
 #if IS_ENABLED(CONFIG_AF_UNIX_OOB)
    if (msg->msg_flags & MSG_OOB) {
-       err = queue_oob(sock, msg, other);
+       err = queue_oob(sock, msg, other, &scm, fds_sent);
        if (err)
            goto out_err;
        sent++;
+3
net/unix/unix_bpf.c
···
    struct sk_psock *psock;
    int copied;
 
+   if (!len)
+       return 0;
+
    psock = sk_psock_get(sk);
    if (unlikely(!psock))
        return __unix_recvmsg(sk, msg, len, flags);
+1 -1
tools/include/uapi/linux/netdev.h
···
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause */
 /* Do not edit directly, auto-generated from: */
 /* Documentation/netlink/specs/netdev.yaml */
 /* YNL-GEN uapi header */
+1 -1
tools/net/ynl/cli.py
···
 #!/usr/bin/env python3
-# SPDX-License-Identifier: BSD-3-Clause
+# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 
 import argparse
 import json
+5 -4
tools/net/ynl/lib/__init__.py
···
-# SPDX-License-Identifier: BSD-3-Clause
+# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 
-from .nlspec import SpecAttr, SpecAttrSet, SpecFamily, SpecOperation
+from .nlspec import SpecAttr, SpecAttrSet, SpecEnumEntry, SpecEnumSet, \
+    SpecFamily, SpecOperation
 from .ynl import YnlFamily
 
-__all__ = ["SpecAttr", "SpecAttrSet", "SpecFamily", "SpecOperation",
-           "YnlFamily"]
+__all__ = ["SpecAttr", "SpecAttrSet", "SpecEnumEntry", "SpecEnumSet",
+           "SpecFamily", "SpecOperation", "YnlFamily"]
+117 -11
tools/net/ynl/lib/nlspec.py
··· 1 - # SPDX-License-Identifier: BSD-3-Clause 1 + # SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 3 3 import collections 4 4 import importlib ··· 57 57 pass 58 58 59 59 60 + class SpecEnumEntry(SpecElement): 61 + """ Entry within an enum declared in the Netlink spec. 62 + 63 + Attributes: 64 + doc documentation string 65 + enum_set back reference to the enum 66 + value numerical value of this enum (use accessors in most situations!) 67 + 68 + Methods: 69 + raw_value raw value, i.e. the id in the enum, unlike user value which is a mask for flags 70 + user_value user value, same as raw value for enums, for flags it's the mask 71 + """ 72 + def __init__(self, enum_set, yaml, prev, value_start): 73 + if isinstance(yaml, str): 74 + yaml = {'name': yaml} 75 + super().__init__(enum_set.family, yaml) 76 + 77 + self.doc = yaml.get('doc', '') 78 + self.enum_set = enum_set 79 + 80 + if 'value' in yaml: 81 + self.value = yaml['value'] 82 + elif prev: 83 + self.value = prev.value + 1 84 + else: 85 + self.value = value_start 86 + 87 + def has_doc(self): 88 + return bool(self.doc) 89 + 90 + def raw_value(self): 91 + return self.value 92 + 93 + def user_value(self): 94 + if self.enum_set['type'] == 'flags': 95 + return 1 << self.value 96 + else: 97 + return self.value 98 + 99 + 100 + class SpecEnumSet(SpecElement): 101 + """ Enum type 102 + 103 + Represents an enumeration (list of numerical constants) 104 + as declared in the "definitions" section of the spec. 
+
+    Attributes:
+        type            enum or flags
+        entries         entries by name
+        entries_by_val  entries by value
+    Methods:
+        get_mask      for flags compute the mask of all defined values
+    """
+    def __init__(self, family, yaml):
+        super().__init__(family, yaml)
+
+        self.type = yaml['type']
+
+        prev_entry = None
+        value_start = self.yaml.get('value-start', 0)
+        self.entries = dict()
+        self.entries_by_val = dict()
+        for entry in self.yaml['entries']:
+            e = self.new_entry(entry, prev_entry, value_start)
+            self.entries[e.name] = e
+            self.entries_by_val[e.raw_value()] = e
+            prev_entry = e
+
+    def new_entry(self, entry, prev_entry, value_start):
+        return SpecEnumEntry(self, entry, prev_entry, value_start)
+
+    def has_doc(self):
+        if 'doc' in self.yaml:
+            return True
+        for entry in self.entries.values():
+            if entry.has_doc():
+                return True
+        return False
+
+    def get_mask(self):
+        mask = 0
+        idx = self.yaml.get('value-start', 0)
+        for _ in self.entries.values():
+            mask |= 1 << idx
+            idx += 1
+        return mask
+
+
 class SpecAttr(SpecElement):
     """ Single Netlink atttribute type
···
         self.attrs = collections.OrderedDict()
         self.attrs_by_val = collections.OrderedDict()
 
-        val = 0
-        for elem in self.yaml['attributes']:
-            if 'value' in elem:
-                val = elem['value']
+        if self.subset_of is None:
+            val = 1
+            for elem in self.yaml['attributes']:
+                if 'value' in elem:
+                    val = elem['value']
 
-            attr = self.new_attr(elem, val)
-            self.attrs[attr.name] = attr
-            self.attrs_by_val[attr.value] = attr
-            val += 1
+                attr = self.new_attr(elem, val)
+                self.attrs[attr.name] = attr
+                self.attrs_by_val[attr.value] = attr
+                val += 1
+        else:
+            real_set = family.attr_sets[self.subset_of]
+            for elem in self.yaml['attributes']:
+                attr = real_set[elem['name']]
+                self.attrs[attr.name] = attr
+                self.attrs_by_val[attr.value] = attr
 
     def new_attr(self, elem, value):
         return SpecAttr(self.family, self, elem, value)
···
        msgs       dict of all messages (index by name)
        msgs_by_value  dict of all messages (indexed by name)
        ops        dict of all valid requests / responses
+       consts     dict of all constants/enums
    """
    def __init__(self, spec_path, schema_path=None):
        with open(spec_path, "r") as stream:
···
         self.req_by_value = collections.OrderedDict()
         self.rsp_by_value = collections.OrderedDict()
         self.ops = collections.OrderedDict()
+        self.consts = collections.OrderedDict()
 
         last_exception = None
         while len(self._resolution_list) > 0:
···
         if len(resolved) == 0:
             raise last_exception
 
+    def new_enum(self, elem):
+        return SpecEnumSet(self, elem)
+
     def new_attr_set(self, elem):
         return SpecAttrSet(self, elem)
···
         self._resolution_list.append(elem)
 
     def _dictify_ops_unified(self):
-        val = 0
+        val = 1
         for elem in self.yaml['operations']['list']:
             if 'value' in elem:
                 val = elem['value']
···
             self.msgs[op.name] = op
 
     def _dictify_ops_directional(self):
-        req_val = rsp_val = 0
+        req_val = rsp_val = 1
         for elem in self.yaml['operations']['list']:
             if 'notify' in elem:
                 if 'value' in elem:
···
 
     def resolve(self):
         self.resolve_up(super())
+
+        for elem in self.yaml['definitions']:
+            if elem['type'] == 'enum' or elem['type'] == 'flags':
+                self.consts[elem['name']] = self.new_enum(elem)
+            else:
+                self.consts[elem['name']] = elem
 
         for elem in self.yaml['attribute-sets']:
             attr_set = self.new_attr_set(elem)
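The new `SpecEnumSet.get_mask()` helper ORs together one bit per defined entry, starting at `value-start`. A minimal standalone sketch of that logic (the function and entry names here are illustrative, not part of nlspec.py):

```python
# Sketch of the flags-mask computation: each entry of a 'flags' enum
# occupies one bit, beginning at value-start.
def get_mask(entries, value_start=0):
    """OR together one bit per defined entry."""
    mask = 0
    for idx in range(value_start, value_start + len(entries)):
        mask |= 1 << idx
    return mask

# Three flags starting at bit 0 -> 0b111 == 7
print(get_mask(["tx", "rx", "err"]))                 # 7
# Same three flags starting at bit 1 -> 0b1110 == 14
print(get_mask(["tx", "rx", "err"], value_start=1))  # 14
```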
+3 -8
tools/net/ynl/lib/ynl.py
···
-# SPDX-License-Identifier: BSD-3-Clause
+# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 
 import functools
 import os
···
         self.sock.setsockopt(Netlink.SOL_NETLINK, Netlink.NETLINK_CAP_ACK, 1)
         self.sock.setsockopt(Netlink.SOL_NETLINK, Netlink.NETLINK_EXT_ACK, 1)
 
-        self._types = dict()
-
-        for elem in self.yaml.get('definitions', []):
-            self._types[elem['name']] = elem
-
         self.async_msg_ids = set()
         self.async_msg_queue = []
···
 
     def _decode_enum(self, rsp, attr_spec):
         raw = rsp[attr_spec['name']]
-        enum = self._types[attr_spec['enum']]
+        enum = self.consts[attr_spec['enum']]
         i = attr_spec.get('value-start', 0)
         if 'enum-as-flags' in attr_spec and attr_spec['enum-as-flags']:
             value = set()
             while raw:
                 if raw & 1:
-                    value.add(enum['entries'][i])
+                    value.add(enum.entries_by_val[i].name)
                 raw >>= 1
                 i += 1
         else:
+29 -90
tools/net/ynl/ynl-gen-c.py
···
 #!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 
 import argparse
 import collections
 import os
 import yaml
 
-from lib import SpecFamily, SpecAttrSet, SpecAttr, SpecOperation
+from lib import SpecFamily, SpecAttrSet, SpecAttr, SpecOperation, SpecEnumSet, SpecEnumEntry
 
 
 def c_upper(name):
···
         self.inherited = [c_lower(x) for x in sorted(self._inherited)]
 
 
-class EnumEntry:
+class EnumEntry(SpecEnumEntry):
     def __init__(self, enum_set, yaml, prev, value_start):
-        if isinstance(yaml, str):
-            self.name = yaml
-            yaml = {}
-            self.doc = ''
-        else:
-            self.name = yaml['name']
-            self.doc = yaml.get('doc', '')
+        super().__init__(enum_set, yaml, prev, value_start)
 
-        self.yaml = yaml
-        self.enum_set = enum_set
-        self.c_name = c_upper(enum_set.value_pfx + self.name)
-
-        if 'value' in yaml:
-            self.value = yaml['value']
-            if prev:
-                self.value_change = (self.value != prev.value + 1)
-        elif prev:
-            self.value_change = False
-            self.value = prev.value + 1
+        if prev:
+            self.value_change = (self.value != prev.value + 1)
         else:
-            self.value = value_start
             self.value_change = (self.value != 0)
-
         self.value_change = self.value_change or self.enum_set['type'] == 'flags'
 
-    def __getitem__(self, key):
-        return self.yaml[key]
+        # Added by resolve:
+        self.c_name = None
+        delattr(self, "c_name")
 
-    def __contains__(self, key):
-        return key in self.yaml
+    def resolve(self):
+        self.resolve_up(super())
 
-    def has_doc(self):
-        return bool(self.doc)
-
-    # raw value, i.e. the id in the enum, unlike user value which is a mask for flags
-    def raw_value(self):
-        return self.value
-
-    # user value, same as raw value for enums, for flags it's the mask
-    def user_value(self):
-        if self.enum_set['type'] == 'flags':
-            return 1 << self.value
-        else:
-            return self.value
+        self.c_name = c_upper(self.enum_set.value_pfx + self.name)
 
 
-class EnumSet:
+class EnumSet(SpecEnumSet):
     def __init__(self, family, yaml):
-        self.yaml = yaml
-        self.family = family
-
         self.render_name = c_lower(family.name + '-' + yaml['name'])
         self.enum_name = 'enum ' + self.render_name
 
         self.value_pfx = yaml.get('name-prefix', f"{family.name}-{yaml['name']}-")
 
-        self.type = yaml['type']
+        super().__init__(family, yaml)
 
-        prev_entry = None
-        value_start = self.yaml.get('value-start', 0)
-        self.entries = {}
-        self.entry_list = []
-        for entry in self.yaml['entries']:
-            e = EnumEntry(self, entry, prev_entry, value_start)
-            self.entries[e.name] = e
-            self.entry_list.append(e)
-            prev_entry = e
-
-    def __getitem__(self, key):
-        return self.yaml[key]
-
-    def __contains__(self, key):
-        return key in self.yaml
-
-    def has_doc(self):
-        if 'doc' in self.yaml:
-            return True
-        for entry in self.entry_list:
-            if entry.has_doc():
-                return True
-        return False
-
-    def get_mask(self):
-        mask = 0
-        idx = self.yaml.get('value-start', 0)
-        for _ in self.entry_list:
-            mask |= 1 << idx
-            idx += 1
-        return mask
+    def new_entry(self, entry, prev_entry, value_start):
+        return EnumEntry(self, entry, prev_entry, value_start)
 
 
 class AttrSet(SpecAttrSet):
···
 
         self.mcgrps = self.yaml.get('mcast-groups', {'list': []})
 
-        self.consts = dict()
-
         self.hooks = dict()
         for when in ['pre', 'post']:
             self.hooks[when] = dict()
···
         if self.kernel_policy == 'global':
             self._load_global_policy()
 
+    def new_enum(self, elem):
+        return EnumSet(self, elem)
+
     def new_attr_set(self, elem):
         return AttrSet(self, elem)
···
         }
 
     def _dictify(self):
-        for elem in self.yaml['definitions']:
-            if elem['type'] == 'enum' or elem['type'] == 'flags':
-                self.consts[elem['name']] = EnumSet(self, elem)
-            else:
-                self.consts[elem['name']] = elem
-
         ntf = []
         for msg in self.msgs.values():
             if 'notify' in msg:
···
         if 'doc' in enum:
             doc = ' - ' + enum['doc']
         cw.write_doc_line(enum.enum_name + doc)
-        for entry in enum.entry_list:
+        for entry in enum.entries.values():
             if entry.has_doc():
                 doc = '@' + entry.c_name + ': ' + entry['doc']
                 cw.write_doc_line(doc)
···
 
     uapi_enum_start(family, cw, const, 'name')
     name_pfx = const.get('name-prefix', f"{family.name}-{const['name']}-")
-    for entry in enum.entry_list:
+    for entry in enum.entries.values():
         suffix = ','
         if entry.value_change:
             suffix = f" = {entry.user_value()}" + suffix
···
         max_value = f"({cnt_name} - 1)"
 
     uapi_enum_start(family, cw, family['operations'], 'enum-name')
+    val = 0
     for op in family.msgs.values():
         if separate_ntf and ('notify' in op or 'event' in op):
             continue
 
         suffix = ','
-        if 'value' in op:
-            suffix = f" = {op['value']},"
+        if op.value != val:
+            suffix = f" = {op.value},"
+            val = op.value
         cw.p(op.enum_name + suffix)
+        val += 1
     cw.nl()
     cw.p(cnt_name + ('' if max_by_define else ','))
     if not max_by_define:
···
 
     _, spec_kernel = find_kernel_root(args.spec)
     if args.mode == 'uapi':
-        cw.p('/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */')
+        cw.p('/* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause */')
     else:
         if args.header:
-            cw.p('/* SPDX-License-Identifier: BSD-3-Clause */')
+            cw.p('/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */')
         else:
-            cw.p('// SPDX-License-Identifier: BSD-3-Clause')
+            cw.p('// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause')
     cw.p("/* Do not edit directly, auto-generated from: */")
     cw.p(f"/*\t{spec_kernel} */")
     cw.p(f"/* YNL-GEN {args.mode} {'header' if args.header else 'source'} */")
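The generator change above tracks an implicit counter while rendering the uapi operations enum, and only emits an explicit `= value` initializer when an operation's value breaks previous-plus-one numbering. A standalone sketch of that rendering logic (the function name and op tuples are illustrative):

```python
# Sketch of the enum-entry rendering: emit '= value' only when an entry
# diverges from the implicit previous-plus-one counter (starting at 0).
def render_enum_entries(ops):
    """ops is a list of (name, value) pairs in spec order."""
    lines = []
    val = 0
    for name, value in ops:
        suffix = ','
        if value != val:
            suffix = f" = {value},"
            val = value
        lines.append(name + suffix)
        val += 1
    return lines

print(render_enum_entries([("OP_A", 1), ("OP_B", 2), ("OP_C", 10)]))
# ['OP_A = 1,', 'OP_B,', 'OP_C = 10,']
```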
+1 -1
tools/net/ynl/ynl-regen.sh
···
 #!/bin/bash
-# SPDX-License-Identifier: BSD-3-Clause
+# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 
 TOOL=$(dirname $(realpath $0))/ynl-gen-c.py
 
+28
tools/testing/selftests/bpf/prog_tests/btf.c
···
 		.btf_load_err = true,
 		.err_str = "Invalid elem",
 	},
+	{
+		.descr = "var after datasec, ptr followed by modifier",
+		.raw_types = {
+			/* .bss section */				/* [1] */
+			BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 2),
+				     sizeof(void*)+4),
+			BTF_VAR_SECINFO_ENC(4, 0, sizeof(void*)),
+			BTF_VAR_SECINFO_ENC(6, sizeof(void*), 4),
+			/* int */					/* [2] */
+			BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4),
+			/* int* */					/* [3] */
+			BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 2),
+			BTF_VAR_ENC(NAME_TBD, 3, 0),			/* [4] */
+			/* const int */					/* [5] */
+			BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_CONST, 0, 0), 2),
+			BTF_VAR_ENC(NAME_TBD, 5, 0),			/* [6] */
+			BTF_END_RAW,
+		},
+		.str_sec = "\0a\0b\0c\0",
+		.str_sec_size = sizeof("\0a\0b\0c\0"),
+		.map_type = BPF_MAP_TYPE_ARRAY,
+		.map_name = ".bss",
+		.key_size = sizeof(int),
+		.value_size = sizeof(void*)+4,
+		.key_type_id = 0,
+		.value_type_id = 1,
+		.max_entries = 1,
+	},
 	/* Test member exceeds the size of struct.
 	 *
 	 * struct A {
+4 -3
tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c
···
 }
 
 /* The maximum permissible size is: PAGE_SIZE - sizeof(struct xdp_page_head) -
- * sizeof(struct skb_shared_info) - XDP_PACKET_HEADROOM = 3368 bytes
+ * SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) - XDP_PACKET_HEADROOM =
+ * 3408 bytes for 64-byte cacheline and 3216 for 256-byte one.
  */
 #if defined(__s390x__)
-#define MAX_PKT_SIZE 3176
+#define MAX_PKT_SIZE 3216
 #else
-#define MAX_PKT_SIZE 3368
+#define MAX_PKT_SIZE 3408
 #endif
 static void test_max_pkt_size(int fd)
 {
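The updated limits can be re-derived from the formula in the comment. A hedged recomputation, assuming `sizeof(struct xdp_page_head) == 112` and `sizeof(struct skb_shared_info) == 320` (values inferred from the numbers in the comment, not taken from the kernel headers):

```python
# SKB_DATA_ALIGN rounds the shared-info size up to the cacheline size,
# which is why the limit differs between 64- and 256-byte cachelines.
def align(x, a):
    """Round x up to a multiple of a (a is a power of two)."""
    return (x + a - 1) & ~(a - 1)

PAGE_SIZE = 4096
XDP_PACKET_HEADROOM = 256
SIZEOF_XDP_PAGE_HEAD = 112      # assumed, inferred from the comment
SIZEOF_SKB_SHARED_INFO = 320    # assumed, inferred from the comment

for cacheline in (64, 256):
    max_pkt = (PAGE_SIZE - SIZEOF_XDP_PAGE_HEAD
               - align(SIZEOF_SKB_SHARED_INFO, cacheline)
               - XDP_PACKET_HEADROOM)
    print(cacheline, max_pkt)
# 64 3408
# 256 3216
```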
+2
tools/testing/selftests/netfilter/nft_nat.sh
···
 	echo SERVER-$family | ip netns exec "$ns1" timeout 5 socat -u STDIN TCP-LISTEN:2000 &
 	sc_s=$!
 
+	sleep 1
+
 	result=$(ip netns exec "$ns0" timeout 1 socat TCP:$daddr:2000 STDOUT)
 
 	if [ "$result" = "SERVER-inet" ];then