
Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue

Tony Nguyen says:

====================
net: intel: start The Great Code Dedup + Page Pool for iavf

Alexander Lobakin says:

Here's a two-shot: introduce the {,Intel} Ethernet common libraries (libeth
and libie) and switch iavf to Page Pool. Details are in the commit messages;
here's a summary:

It's no secret that there's a ton of code duplicated between two or more
Intel Ethernet modules. Before introducing new changes, which would need to
be copied over yet again, start decoupling the already existing duplicate
functionality into a new module, which will be shared between several
Intel Ethernet drivers. The first name that came to my mind was
"libie" -- "Intel Ethernet common library". It also sounds like
"lovelie" (-> one word, not "lib I E" pls) and can be expanded as
"lib Internet Explorer" :P
The "generic", pure-software part is placed separately, so that it can
easily be reused by any driver from any vendor without linking to the
Intel pre-200G guts. In a few words, it covers things any modern driver
does the same way, but that nobody has moved a level up (yet).
This series is only the beginning. From now on, adding every new feature
or doing any good driver refactoring will remove many more lines than it
adds for quite some time. There's a basic roadmap with some deduplications
already planned, not to mention that touching any line now prompts the
question: "can I share this?". The final destination is very ambitious: have
only one unified driver for at least i40e, ice, iavf, and idpf, with a
struct ops for each generation. That's never gonna happen, right? But you
can still at least try.
The Page Pool conversion for iavf lands within the same series, as the two
are closely tied. libie will support the Page Pool model only, so a driver
can't use much of the lib until it's converted. iavf is only the first
example; the rest will be converted on a per-driver basis. That is
when it gets really interesting. Stay tech.

* '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue:
MAINTAINERS: add entry for libeth and libie
iavf: switch to Page Pool
iavf: pack iavf_ring more efficiently
libeth: add Rx buffer management
page_pool: add DMA-sync-for-CPU inline helper
page_pool: constify some read-only function arguments
slab: introduce kvmalloc_array_node() and kvcalloc_node()
iavf: drop page splitting and recycling
iavf: kill "legacy-rx" for good
net: intel: introduce {, Intel} Ethernet common library
====================

Link: https://lore.kernel.org/r/20240424203559.3420468-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+835 -1954
+20
MAINTAINERS
···
 F:	include/linux/ata.h
 F:	include/linux/libata.h
 
+LIBETH COMMON ETHERNET LIBRARY
+M:	Alexander Lobakin <aleksander.lobakin@intel.com>
+L:	netdev@vger.kernel.org
+L:	intel-wired-lan@lists.osuosl.org (moderated for non-subscribers)
+S:	Supported
+T:	git https://github.com/alobakin/linux.git
+F:	drivers/net/ethernet/intel/libeth/
+F:	include/net/libeth/
+K:	libeth
+
+LIBIE COMMON INTEL ETHERNET LIBRARY
+M:	Alexander Lobakin <aleksander.lobakin@intel.com>
+L:	intel-wired-lan@lists.osuosl.org (moderated for non-subscribers)
+L:	netdev@vger.kernel.org
+S:	Supported
+T:	git https://github.com/alobakin/linux.git
+F:	drivers/net/ethernet/intel/libie/
+F:	include/linux/net/intel/libie/
+K:	libie
+
 LIBNVDIMM BTT: BLOCK TRANSLATION TABLE
 M:	Vishal Verma <vishal.l.verma@intel.com>
 M:	Dan Williams <dan.j.williams@intel.com>
+7
drivers/net/ethernet/intel/Kconfig
···
 
 if NET_VENDOR_INTEL
 
+source "drivers/net/ethernet/intel/libeth/Kconfig"
+source "drivers/net/ethernet/intel/libie/Kconfig"
+
 config E100
 	tristate "Intel(R) PRO/100+ support"
 	depends on PCI
···
 	depends on PTP_1588_CLOCK_OPTIONAL
 	depends on PCI
 	select AUXILIARY_BUS
+	select LIBIE
 	select NET_DEVLINK
 	help
 	  This driver supports Intel(R) Ethernet Controller XL710 Family of
···
 # so that CONFIG_IAVF symbol will always mirror the state of CONFIG_I40EVF
 config IAVF
 	tristate
+	select LIBIE
+
 config I40EVF
 	tristate "Intel(R) Ethernet Adaptive Virtual Function support"
 	select IAVF
···
 	depends on GNSS || GNSS = n
 	select AUXILIARY_BUS
 	select DIMLIB
+	select LIBIE
 	select NET_DEVLINK
 	select PLDMFW
 	select DPLL
+3
drivers/net/ethernet/intel/Makefile
···
 # Makefile for the Intel network device drivers.
 #
 
+obj-$(CONFIG_LIBETH) += libeth/
+obj-$(CONFIG_LIBIE) += libie/
+
 obj-$(CONFIG_E100) += e100.o
 obj-$(CONFIG_E1000) += e1000/
 obj-$(CONFIG_E1000E) += e1000e/
-253
drivers/net/ethernet/intel/i40e/i40e_common.c
···
 	return i40e_aq_get_set_rss_key(hw, vsi_id, key, true);
 }
 
-/* The i40e_ptype_lookup table is used to convert from the 8-bit ptype in the
- * hardware to a bit-field that can be used by SW to more easily determine the
- * packet type.
- *
- * Macros are used to shorten the table lines and make this table human
- * readable.
- *
- * We store the PTYPE in the top byte of the bit field - this is just so that
- * we can check that the table doesn't have a row missing, as the index into
- * the table should be the PTYPE.
- *
- * Typical work flow:
- *
- * IF NOT i40e_ptype_lookup[ptype].known
- * THEN
- *	Packet is unknown
- * ELSE IF i40e_ptype_lookup[ptype].outer_ip == I40E_RX_PTYPE_OUTER_IP
- *	Use the rest of the fields to look at the tunnels, inner protocols, etc
- * ELSE
- *	Use the enum i40e_rx_l2_ptype to decode the packet type
- * ENDIF
- */
-
-/* macro to make the table lines short, use explicit indexing with [PTYPE] */
-#define I40E_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
-	[PTYPE] = { \
-		1, \
-		I40E_RX_PTYPE_OUTER_##OUTER_IP, \
-		I40E_RX_PTYPE_OUTER_##OUTER_IP_VER, \
-		I40E_RX_PTYPE_##OUTER_FRAG, \
-		I40E_RX_PTYPE_TUNNEL_##T, \
-		I40E_RX_PTYPE_TUNNEL_END_##TE, \
-		I40E_RX_PTYPE_##TEF, \
-		I40E_RX_PTYPE_INNER_PROT_##I, \
-		I40E_RX_PTYPE_PAYLOAD_LAYER_##PL }
-
-#define I40E_PTT_UNUSED_ENTRY(PTYPE) [PTYPE] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 }
-
-/* shorter macros makes the table fit but are terse */
-#define I40E_RX_PTYPE_NOF		I40E_RX_PTYPE_NOT_FRAG
-#define I40E_RX_PTYPE_FRG		I40E_RX_PTYPE_FRAG
-#define I40E_RX_PTYPE_INNER_PROT_TS	I40E_RX_PTYPE_INNER_PROT_TIMESYNC
-
-/* Lookup table mapping in the 8-bit HW PTYPE to the bit field for decoding */
-struct i40e_rx_ptype_decoded i40e_ptype_lookup[BIT(8)] = {
-	/* L2 Packet types */
-	I40E_PTT_UNUSED_ENTRY(0),
-	I40E_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
-	I40E_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, TS, PAY2),
-	I40E_PTT(3, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
-	I40E_PTT_UNUSED_ENTRY(4),
-	I40E_PTT_UNUSED_ENTRY(5),
-	I40E_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
-	I40E_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
-	I40E_PTT_UNUSED_ENTRY(8),
-	I40E_PTT_UNUSED_ENTRY(9),
-	I40E_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
-	I40E_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
-	I40E_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	I40E_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	I40E_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	I40E_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	I40E_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	I40E_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	I40E_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	I40E_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	I40E_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	I40E_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-
-	/* Non Tunneled IPv4 */
-	I40E_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
-	I40E_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
-	I40E_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(25),
-	I40E_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4),
-	I40E_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
-	I40E_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
-
-	/* IPv4 --> IPv4 */
-	I40E_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
-	I40E_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
-	I40E_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(32),
-	I40E_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4),
-	I40E_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
-	I40E_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
-
-	/* IPv4 --> IPv6 */
-	I40E_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
-	I40E_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
-	I40E_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(39),
-	I40E_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4),
-	I40E_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
-	I40E_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
-
-	/* IPv4 --> GRE/NAT */
-	I40E_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
-
-	/* IPv4 --> GRE/NAT --> IPv4 */
-	I40E_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
-	I40E_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
-	I40E_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(47),
-	I40E_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4),
-	I40E_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
-	I40E_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
-
-	/* IPv4 --> GRE/NAT --> IPv6 */
-	I40E_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
-	I40E_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
-	I40E_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(54),
-	I40E_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4),
-	I40E_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
-	I40E_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
-
-	/* IPv4 --> GRE/NAT --> MAC */
-	I40E_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
-
-	/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
-	I40E_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
-	I40E_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
-	I40E_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(62),
-	I40E_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4),
-	I40E_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
-	I40E_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
-
-	/* IPv4 --> GRE/NAT -> MAC --> IPv6 */
-	I40E_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
-	I40E_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
-	I40E_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(69),
-	I40E_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4),
-	I40E_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
-	I40E_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
-
-	/* IPv4 --> GRE/NAT --> MAC/VLAN */
-	I40E_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
-
-	/* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
-	I40E_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
-	I40E_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
-	I40E_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(77),
-	I40E_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4),
-	I40E_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
-	I40E_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
-
-	/* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
-	I40E_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
-	I40E_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
-	I40E_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(84),
-	I40E_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4),
-	I40E_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
-	I40E_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
-
-	/* Non Tunneled IPv6 */
-	I40E_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
-	I40E_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
-	I40E_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(91),
-	I40E_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4),
-	I40E_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
-	I40E_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
-
-	/* IPv6 --> IPv4 */
-	I40E_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
-	I40E_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
-	I40E_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(98),
-	I40E_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4),
-	I40E_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
-	I40E_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
-
-	/* IPv6 --> IPv6 */
-	I40E_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
-	I40E_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
-	I40E_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(105),
-	I40E_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4),
-	I40E_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
-	I40E_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
-
-	/* IPv6 --> GRE/NAT */
-	I40E_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
-
-	/* IPv6 --> GRE/NAT -> IPv4 */
-	I40E_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
-	I40E_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
-	I40E_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(113),
-	I40E_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4),
-	I40E_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
-	I40E_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
-
-	/* IPv6 --> GRE/NAT -> IPv6 */
-	I40E_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
-	I40E_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
-	I40E_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(120),
-	I40E_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4),
-	I40E_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
-	I40E_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
-
-	/* IPv6 --> GRE/NAT -> MAC */
-	I40E_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
-
-	/* IPv6 --> GRE/NAT -> MAC -> IPv4 */
-	I40E_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
-	I40E_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
-	I40E_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(128),
-	I40E_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4),
-	I40E_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
-	I40E_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
-
-	/* IPv6 --> GRE/NAT -> MAC -> IPv6 */
-	I40E_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
-	I40E_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
-	I40E_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(135),
-	I40E_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4),
-	I40E_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
-	I40E_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
-
-	/* IPv6 --> GRE/NAT -> MAC/VLAN */
-	I40E_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
-
-	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
-	I40E_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
-	I40E_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
-	I40E_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(143),
-	I40E_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4),
-	I40E_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
-	I40E_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
-
-	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
-	I40E_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
-	I40E_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
-	I40E_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4),
-	I40E_PTT_UNUSED_ENTRY(150),
-	I40E_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4),
-	I40E_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
-	I40E_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
-
-	/* unused entries */
-	[154 ... 255] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 }
-};
-
 /**
  * i40e_init_shared_code - Initialize the shared code
  * @hw: pointer to hardware structure
+1
drivers/net/ethernet/intel/i40e/i40e_main.c
···
 
 MODULE_AUTHOR("Intel Corporation, <e1000-devel@lists.sourceforge.net>");
 MODULE_DESCRIPTION("Intel(R) Ethernet Connection XL710 Network Driver");
+MODULE_IMPORT_NS(LIBIE);
 MODULE_LICENSE("GPL v2");
 
 static struct workqueue_struct *i40e_wq;
-7
drivers/net/ethernet/intel/i40e/i40e_prototype.h
···
 
 int i40e_set_mac_type(struct i40e_hw *hw);
 
-extern struct i40e_rx_ptype_decoded i40e_ptype_lookup[];
-
-static inline struct i40e_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
-{
-	return i40e_ptype_lookup[ptype];
-}
-
 /**
  * i40e_virtchnl_link_speed - Convert AdminQ link_speed to virtchnl definition
  * @link_speed: the speed to convert
+17 -55
drivers/net/ethernet/intel/i40e/i40e_txrx.c
···
 /* Copyright(c) 2013 - 2018 Intel Corporation. */
 
 #include <linux/bpf_trace.h>
+#include <linux/net/intel/libie/rx.h>
 #include <linux/prefetch.h>
 #include <linux/sctp.h>
 #include <net/mpls.h>
···
 			     struct sk_buff *skb,
 			     union i40e_rx_desc *rx_desc)
 {
-	struct i40e_rx_ptype_decoded decoded;
+	struct libeth_rx_pt decoded;
 	u32 rx_error, rx_status;
 	bool ipv4, ipv6;
 	u8 ptype;
 	u64 qword;
 
-	qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
-	ptype = FIELD_GET(I40E_RXD_QW1_PTYPE_MASK, qword);
-	rx_error = FIELD_GET(I40E_RXD_QW1_ERROR_MASK, qword);
-	rx_status = FIELD_GET(I40E_RXD_QW1_STATUS_MASK, qword);
-	decoded = decode_rx_desc_ptype(ptype);
-
 	skb->ip_summed = CHECKSUM_NONE;
 
-	skb_checksum_none_assert(skb);
+	qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+	ptype = FIELD_GET(I40E_RXD_QW1_PTYPE_MASK, qword);
 
-	/* Rx csum enabled and ip headers found? */
-	if (!(vsi->netdev->features & NETIF_F_RXCSUM))
+	decoded = libie_rx_pt_parse(ptype);
+	if (!libeth_rx_pt_has_checksum(vsi->netdev, decoded))
 		return;
+
+	rx_error = FIELD_GET(I40E_RXD_QW1_ERROR_MASK, qword);
+	rx_status = FIELD_GET(I40E_RXD_QW1_STATUS_MASK, qword);
 
 	/* did the hardware decode the packet and checksum? */
 	if (!(rx_status & BIT(I40E_RX_DESC_STATUS_L3L4P_SHIFT)))
 		return;
 
-	/* both known and outer_ip must be set for the below code to work */
-	if (!(decoded.known && decoded.outer_ip))
-		return;
-
-	ipv4 = (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP) &&
-	       (decoded.outer_ip_ver == I40E_RX_PTYPE_OUTER_IPV4);
-	ipv6 = (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP) &&
-	       (decoded.outer_ip_ver == I40E_RX_PTYPE_OUTER_IPV6);
+	ipv4 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV4;
+	ipv6 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV6;
 
 	if (ipv4 &&
 	    (rx_error & (BIT(I40E_RX_DESC_ERROR_IPE_SHIFT) |
···
 	 * we need to bump the checksum level by 1 to reflect the fact that
 	 * we are indicating we validated the inner checksum.
 	 */
-	if (decoded.tunnel_type >= I40E_RX_PTYPE_TUNNEL_IP_GRENAT)
+	if (decoded.tunnel_type >= LIBETH_RX_PT_TUNNEL_IP_GRENAT)
 		skb->csum_level = 1;
 
-	/* Only report checksum unnecessary for TCP, UDP, or SCTP */
-	switch (decoded.inner_prot) {
-	case I40E_RX_PTYPE_INNER_PROT_TCP:
-	case I40E_RX_PTYPE_INNER_PROT_UDP:
-	case I40E_RX_PTYPE_INNER_PROT_SCTP:
-		skb->ip_summed = CHECKSUM_UNNECESSARY;
-		fallthrough;
-	default:
-		break;
-	}
-
+	skb->ip_summed = CHECKSUM_UNNECESSARY;
 	return;
 
 checksum_fail:
 	vsi->back->hw_csum_rx_error++;
-}
-
-/**
- * i40e_ptype_to_htype - get a hash type
- * @ptype: the ptype value from the descriptor
- *
- * Returns a hash type to be used by skb_set_hash
- **/
-static inline int i40e_ptype_to_htype(u8 ptype)
-{
-	struct i40e_rx_ptype_decoded decoded = decode_rx_desc_ptype(ptype);
-
-	if (!decoded.known)
-		return PKT_HASH_TYPE_NONE;
-
-	if (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP &&
-	    decoded.payload_layer == I40E_RX_PTYPE_PAYLOAD_LAYER_PAY4)
-		return PKT_HASH_TYPE_L4;
-	else if (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP &&
-		 decoded.payload_layer == I40E_RX_PTYPE_PAYLOAD_LAYER_PAY3)
-		return PKT_HASH_TYPE_L3;
-	else
-		return PKT_HASH_TYPE_L2;
 }
 
 /**
···
 			     struct sk_buff *skb,
 			     u8 rx_ptype)
 {
+	struct libeth_rx_pt decoded;
 	u32 hash;
 	const __le64 rss_mask =
 		cpu_to_le64((u64)I40E_RX_DESC_FLTSTAT_RSS_HASH <<
 			    I40E_RX_DESC_STATUS_FLTSTAT_SHIFT);
 
-	if (!(ring->netdev->features & NETIF_F_RXHASH))
+	decoded = libie_rx_pt_parse(rx_ptype);
+	if (!libeth_rx_pt_has_hash(ring->netdev, decoded))
 		return;
 
 	if ((rx_desc->wb.qword1.status_error_len & rss_mask) == rss_mask) {
 		hash = le32_to_cpu(rx_desc->wb.qword0.hi_dword.rss);
-		skb_set_hash(skb, hash, i40e_ptype_to_htype(rx_ptype));
+		libeth_rx_pt_set_hash(skb, hash, decoded);
 	}
 }
-88
drivers/net/ethernet/intel/i40e/i40e_type.h
···
 #define I40E_RXD_QW1_PTYPE_SHIFT	30
 #define I40E_RXD_QW1_PTYPE_MASK		(0xFFULL << I40E_RXD_QW1_PTYPE_SHIFT)
 
-/* Packet type non-ip values */
-enum i40e_rx_l2_ptype {
-	I40E_RX_PTYPE_L2_RESERVED = 0,
-	I40E_RX_PTYPE_L2_MAC_PAY2 = 1,
-	I40E_RX_PTYPE_L2_TIMESYNC_PAY2 = 2,
-	I40E_RX_PTYPE_L2_FIP_PAY2 = 3,
-	I40E_RX_PTYPE_L2_OUI_PAY2 = 4,
-	I40E_RX_PTYPE_L2_MACCNTRL_PAY2 = 5,
-	I40E_RX_PTYPE_L2_LLDP_PAY2 = 6,
-	I40E_RX_PTYPE_L2_ECP_PAY2 = 7,
-	I40E_RX_PTYPE_L2_EVB_PAY2 = 8,
-	I40E_RX_PTYPE_L2_QCN_PAY2 = 9,
-	I40E_RX_PTYPE_L2_EAPOL_PAY2 = 10,
-	I40E_RX_PTYPE_L2_ARP = 11,
-	I40E_RX_PTYPE_L2_FCOE_PAY3 = 12,
-	I40E_RX_PTYPE_L2_FCOE_FCDATA_PAY3 = 13,
-	I40E_RX_PTYPE_L2_FCOE_FCRDY_PAY3 = 14,
-	I40E_RX_PTYPE_L2_FCOE_FCRSP_PAY3 = 15,
-	I40E_RX_PTYPE_L2_FCOE_FCOTHER_PA = 16,
-	I40E_RX_PTYPE_L2_FCOE_VFT_PAY3 = 17,
-	I40E_RX_PTYPE_L2_FCOE_VFT_FCDATA = 18,
-	I40E_RX_PTYPE_L2_FCOE_VFT_FCRDY = 19,
-	I40E_RX_PTYPE_L2_FCOE_VFT_FCRSP = 20,
-	I40E_RX_PTYPE_L2_FCOE_VFT_FCOTHER = 21,
-	I40E_RX_PTYPE_GRENAT4_MAC_PAY3 = 58,
-	I40E_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4 = 87,
-	I40E_RX_PTYPE_GRENAT6_MAC_PAY3 = 124,
-	I40E_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4 = 153
-};
-
-struct i40e_rx_ptype_decoded {
-	u32 known:1;
-	u32 outer_ip:1;
-	u32 outer_ip_ver:1;
-	u32 outer_frag:1;
-	u32 tunnel_type:3;
-	u32 tunnel_end_prot:2;
-	u32 tunnel_end_frag:1;
-	u32 inner_prot:4;
-	u32 payload_layer:3;
-};
-
-enum i40e_rx_ptype_outer_ip {
-	I40E_RX_PTYPE_OUTER_L2 = 0,
-	I40E_RX_PTYPE_OUTER_IP = 1
-};
-
-enum i40e_rx_ptype_outer_ip_ver {
-	I40E_RX_PTYPE_OUTER_NONE = 0,
-	I40E_RX_PTYPE_OUTER_IPV4 = 0,
-	I40E_RX_PTYPE_OUTER_IPV6 = 1
-};
-
-enum i40e_rx_ptype_outer_fragmented {
-	I40E_RX_PTYPE_NOT_FRAG = 0,
-	I40E_RX_PTYPE_FRAG = 1
-};
-
-enum i40e_rx_ptype_tunnel_type {
-	I40E_RX_PTYPE_TUNNEL_NONE = 0,
-	I40E_RX_PTYPE_TUNNEL_IP_IP = 1,
-	I40E_RX_PTYPE_TUNNEL_IP_GRENAT = 2,
-	I40E_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3,
-	I40E_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4,
-};
-
-enum i40e_rx_ptype_tunnel_end_prot {
-	I40E_RX_PTYPE_TUNNEL_END_NONE = 0,
-	I40E_RX_PTYPE_TUNNEL_END_IPV4 = 1,
-	I40E_RX_PTYPE_TUNNEL_END_IPV6 = 2,
-};
-
-enum i40e_rx_ptype_inner_prot {
-	I40E_RX_PTYPE_INNER_PROT_NONE = 0,
-	I40E_RX_PTYPE_INNER_PROT_UDP = 1,
-	I40E_RX_PTYPE_INNER_PROT_TCP = 2,
-	I40E_RX_PTYPE_INNER_PROT_SCTP = 3,
-	I40E_RX_PTYPE_INNER_PROT_ICMP = 4,
-	I40E_RX_PTYPE_INNER_PROT_TIMESYNC = 5
-};
-
-enum i40e_rx_ptype_payload_layer {
-	I40E_RX_PTYPE_PAYLOAD_LAYER_NONE = 0,
-	I40E_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1,
-	I40E_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2,
-	I40E_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3,
-};
-
 #define I40E_RXD_QW1_LENGTH_PBUF_SHIFT	38
 #define I40E_RXD_QW1_LENGTH_PBUF_MASK	(0x3FFFULL << \
					 I40E_RXD_QW1_LENGTH_PBUF_SHIFT)
+1 -1
drivers/net/ethernet/intel/iavf/iavf.h
···
 #define IAVF_FLAG_RESET_PENDING		BIT(4)
 #define IAVF_FLAG_RESET_NEEDED		BIT(5)
 #define IAVF_FLAG_WB_ON_ITR_CAPABLE	BIT(6)
-#define IAVF_FLAG_LEGACY_RX		BIT(15)
+/* BIT(15) is free, was IAVF_FLAG_LEGACY_RX */
 #define IAVF_FLAG_REINIT_ITR_NEEDED	BIT(16)
 #define IAVF_FLAG_QUEUES_DISABLED	BIT(17)
 #define IAVF_FLAG_SETUP_NETDEV_FEATURES	BIT(18)
-253
drivers/net/ethernet/intel/iavf/iavf_common.c
···
 	return iavf_aq_get_set_rss_key(hw, vsi_id, key, true);
 }
 
-/* The iavf_ptype_lookup table is used to convert from the 8-bit ptype in the
- * hardware to a bit-field that can be used by SW to more easily determine the
- * packet type.
- *
- * Macros are used to shorten the table lines and make this table human
- * readable.
- *
- * We store the PTYPE in the top byte of the bit field - this is just so that
- * we can check that the table doesn't have a row missing, as the index into
- * the table should be the PTYPE.
- *
- * Typical work flow:
- *
- * IF NOT iavf_ptype_lookup[ptype].known
- * THEN
- *	Packet is unknown
- * ELSE IF iavf_ptype_lookup[ptype].outer_ip == IAVF_RX_PTYPE_OUTER_IP
- *	Use the rest of the fields to look at the tunnels, inner protocols, etc
- * ELSE
- *	Use the enum iavf_rx_l2_ptype to decode the packet type
- * ENDIF
- */
-
-/* macro to make the table lines short, use explicit indexing with [PTYPE] */
-#define IAVF_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
-	[PTYPE] = { \
-		1, \
-		IAVF_RX_PTYPE_OUTER_##OUTER_IP, \
-		IAVF_RX_PTYPE_OUTER_##OUTER_IP_VER, \
-		IAVF_RX_PTYPE_##OUTER_FRAG, \
-		IAVF_RX_PTYPE_TUNNEL_##T, \
-		IAVF_RX_PTYPE_TUNNEL_END_##TE, \
-		IAVF_RX_PTYPE_##TEF, \
-		IAVF_RX_PTYPE_INNER_PROT_##I, \
-		IAVF_RX_PTYPE_PAYLOAD_LAYER_##PL }
-
-#define IAVF_PTT_UNUSED_ENTRY(PTYPE) [PTYPE] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 }
-
-/* shorter macros makes the table fit but are terse */
-#define IAVF_RX_PTYPE_NOF		IAVF_RX_PTYPE_NOT_FRAG
-#define IAVF_RX_PTYPE_FRG		IAVF_RX_PTYPE_FRAG
-#define IAVF_RX_PTYPE_INNER_PROT_TS	IAVF_RX_PTYPE_INNER_PROT_TIMESYNC
-
-/* Lookup table mapping the 8-bit HW PTYPE to the bit field for decoding */
-struct iavf_rx_ptype_decoded iavf_ptype_lookup[BIT(8)] = {
-	/* L2 Packet types */
-	IAVF_PTT_UNUSED_ENTRY(0),
-	IAVF_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
-	IAVF_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, TS, PAY2),
-	IAVF_PTT(3, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
-	IAVF_PTT_UNUSED_ENTRY(4),
-	IAVF_PTT_UNUSED_ENTRY(5),
-	IAVF_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
-	IAVF_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
-	IAVF_PTT_UNUSED_ENTRY(8),
-	IAVF_PTT_UNUSED_ENTRY(9),
-	IAVF_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
-	IAVF_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
-	IAVF_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	IAVF_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	IAVF_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	IAVF_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	IAVF_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	IAVF_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	IAVF_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	IAVF_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	IAVF_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-	IAVF_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-
-	/* Non Tunneled IPv4 */
-	IAVF_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
-	IAVF_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
-	IAVF_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4),
-	IAVF_PTT_UNUSED_ENTRY(25),
-	IAVF_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4),
-	IAVF_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
-	IAVF_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
-
-	/* IPv4 --> IPv4 */
-	IAVF_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
-	IAVF_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
-	IAVF_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4),
-	IAVF_PTT_UNUSED_ENTRY(32),
-	IAVF_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4),
-	IAVF_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
-	IAVF_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
-
-	/* IPv4 --> IPv6 */
-	IAVF_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
-	IAVF_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
-	IAVF_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4),
-	IAVF_PTT_UNUSED_ENTRY(39),
-	IAVF_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4),
-	IAVF_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
-	IAVF_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
-
-	/* IPv4 --> GRE/NAT */
-	IAVF_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
-
-	/* IPv4 --> GRE/NAT --> IPv4 */
-	IAVF_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
-	IAVF_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
-	IAVF_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4),
-	IAVF_PTT_UNUSED_ENTRY(47),
-	IAVF_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4),
-	IAVF_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
-	IAVF_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
-
-	/* IPv4 --> GRE/NAT --> IPv6 */
-	IAVF_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
-	IAVF_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
-	IAVF_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4),
-	IAVF_PTT_UNUSED_ENTRY(54),
-	IAVF_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4),
-	IAVF_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
-	IAVF_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
-
-	/* IPv4 --> GRE/NAT --> MAC */
-	IAVF_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
-
-	/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
-	IAVF_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
-	IAVF_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
-	IAVF_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4),
-	IAVF_PTT_UNUSED_ENTRY(62),
-	IAVF_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4),
-	IAVF_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
-	IAVF_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
-
-	/* IPv4 --> GRE/NAT -> MAC --> IPv6 */
-	IAVF_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
-	IAVF_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
-	IAVF_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4),
-	IAVF_PTT_UNUSED_ENTRY(69),
-	IAVF_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4),
-	IAVF_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
-	IAVF_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
-
-	/* IPv4 --> GRE/NAT --> MAC/VLAN */
-	IAVF_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
-
-	/* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
-	IAVF_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
-	IAVF_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
-	IAVF_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4),
-	IAVF_PTT_UNUSED_ENTRY(77),
-	IAVF_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4),
-	IAVF_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
-	IAVF_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
-
-	/* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
-	IAVF_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
-	IAVF_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
-	IAVF_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4),
-	IAVF_PTT_UNUSED_ENTRY(84),
-	IAVF_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4),
-	IAVF_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
-	IAVF_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
-
-	/* Non Tunneled IPv6 */
-	IAVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
-	IAVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
-	IAVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4),
-	IAVF_PTT_UNUSED_ENTRY(91),
-	IAVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4),
-	IAVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
-	IAVF_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
-
-	/* IPv6 --> IPv4 */
-	IAVF_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
-	IAVF_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
-	IAVF_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4),
-	IAVF_PTT_UNUSED_ENTRY(98),
-	IAVF_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4),
-	IAVF_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
-	IAVF_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
-
-	/* IPv6 --> IPv6 */
-	IAVF_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
-	IAVF_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
-	IAVF_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4),
-	IAVF_PTT_UNUSED_ENTRY(105),
-	IAVF_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4),
-	IAVF_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
-	IAVF_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
-
-	/* IPv6 --> GRE/NAT */
-	IAVF_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
-
-	/* IPv6 --> GRE/NAT -> IPv4 */
-	IAVF_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
-	IAVF_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
-	IAVF_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4),
-	IAVF_PTT_UNUSED_ENTRY(113),
-	IAVF_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4),
-	IAVF_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
-	IAVF_PTT(116, IP,
IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), 632 - 633 - /* IPv6 --> GRE/NAT -> IPv6 */ 634 - IAVF_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), 635 - IAVF_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), 636 - IAVF_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), 637 - IAVF_PTT_UNUSED_ENTRY(120), 638 - IAVF_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), 639 - IAVF_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), 640 - IAVF_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), 641 - 642 - /* IPv6 --> GRE/NAT -> MAC */ 643 - IAVF_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), 644 - 645 - /* IPv6 --> GRE/NAT -> MAC -> IPv4 */ 646 - IAVF_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), 647 - IAVF_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), 648 - IAVF_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), 649 - IAVF_PTT_UNUSED_ENTRY(128), 650 - IAVF_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), 651 - IAVF_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), 652 - IAVF_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), 653 - 654 - /* IPv6 --> GRE/NAT -> MAC -> IPv6 */ 655 - IAVF_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), 656 - IAVF_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), 657 - IAVF_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), 658 - IAVF_PTT_UNUSED_ENTRY(135), 659 - IAVF_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), 660 - IAVF_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), 661 - IAVF_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), 662 - 663 - /* IPv6 --> GRE/NAT -> MAC/VLAN */ 664 - IAVF_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), 665 - 666 - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */ 667 - IAVF_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), 668 - IAVF_PTT(141, IP, IPV6, 
NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), 669 - IAVF_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), 670 - IAVF_PTT_UNUSED_ENTRY(143), 671 - IAVF_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), 672 - IAVF_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), 673 - IAVF_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), 674 - 675 - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */ 676 - IAVF_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), 677 - IAVF_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), 678 - IAVF_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), 679 - IAVF_PTT_UNUSED_ENTRY(150), 680 - IAVF_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), 681 - IAVF_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), 682 - IAVF_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), 683 - 684 - /* unused entries */ 685 - [154 ... 255] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 } 686 - }; 687 - 688 435 /** 689 436 * iavf_aq_send_msg_to_pf 690 437 * @hw: pointer to the hardware structure
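The removed table above depends on the GCC/Clang range designated initializer extension (`[154 ... 255] = …`) to zero-fill the unused tail, and on O(1) array lookup by packet type. A minimal userspace sketch of the same lookup-table pattern; all names and field values here are illustrative, not the driver's:

```c
#include <stdint.h>

/* Decoded packet-type fields, loosely modeled on the removed
 * struct iavf_rx_ptype_decoded (illustrative, not the real layout).
 */
struct pt_decoded {
	uint8_t known;		/* non-zero if the ptype is defined */
	uint8_t outer_ip;	/* 4 or 6 for IPv4/IPv6, 0 otherwise */
	uint8_t payload_layer;	/* PAY3/PAY4 analogue */
};

/* Entries 0 and 1 are defined; everything else decodes as "unknown"
 * via a range designated initializer, exactly like the driver's
 * [154 ... 255] trailer.
 */
static const struct pt_decoded pt_lookup[256] = {
	[0] = { .known = 1, .outer_ip = 4, .payload_layer = 3 },
	[1] = { .known = 1, .outer_ip = 6, .payload_layer = 4 },
	[2 ... 255] = { 0 },
};

/* O(1) decode, as the removed decode_rx_desc_ptype() helper did */
static struct pt_decoded decode_pt(uint8_t ptype)
{
	return pt_lookup[ptype];
}
```

The point of the dedup is that every Intel driver carried its own copy of a table like this; libie now exposes one shared parser (`libie_rx_pt_parse()` in the hunks below is the real entry point).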
-140
drivers/net/ethernet/intel/iavf/iavf_ethtool.c
···
 
 #define IAVF_QUEUE_STATS_LEN	ARRAY_SIZE(iavf_gstrings_queue_stats)
 
-/* For now we have one and only one private flag and it is only defined
- * when we have support for the SKIP_CPU_SYNC DMA attribute.  Instead
- * of leaving all this code sitting around empty we will strip it unless
- * our one private flag is actually available.
- */
-struct iavf_priv_flags {
-	char flag_string[ETH_GSTRING_LEN];
-	u32 flag;
-	bool read_only;
-};
-
-#define IAVF_PRIV_FLAG(_name, _flag, _read_only) { \
-	.flag_string = _name, \
-	.flag = _flag, \
-	.read_only = _read_only, \
-}
-
-static const struct iavf_priv_flags iavf_gstrings_priv_flags[] = {
-	IAVF_PRIV_FLAG("legacy-rx", IAVF_FLAG_LEGACY_RX, 0),
-};
-
-#define IAVF_PRIV_FLAGS_STR_LEN ARRAY_SIZE(iavf_gstrings_priv_flags)
-
 /**
  * iavf_get_link_ksettings - Get Link Speed and Duplex settings
  * @netdev: network interface device structure
···
 		return IAVF_STATS_LEN +
 			(IAVF_QUEUE_STATS_LEN * 2 *
 			 netdev->real_num_tx_queues);
-	else if (sset == ETH_SS_PRIV_FLAGS)
-		return IAVF_PRIV_FLAGS_STR_LEN;
 	else
 		return -EINVAL;
 }
···
 }
 
 /**
- * iavf_get_priv_flag_strings - Get private flag strings
- * @netdev: network interface device structure
- * @data: buffer for string data
- *
- * Builds the private flags string table
- **/
-static void iavf_get_priv_flag_strings(struct net_device *netdev, u8 *data)
-{
-	unsigned int i;
-
-	for (i = 0; i < IAVF_PRIV_FLAGS_STR_LEN; i++)
-		ethtool_puts(&data, iavf_gstrings_priv_flags[i].flag_string);
-}
-
-/**
  * iavf_get_stat_strings - Get stat strings
  * @netdev: network interface device structure
  * @data: buffer for string data
···
 	case ETH_SS_STATS:
 		iavf_get_stat_strings(netdev, data);
 		break;
-	case ETH_SS_PRIV_FLAGS:
-		iavf_get_priv_flag_strings(netdev, data);
-		break;
 	default:
 		break;
 	}
-}
-
-/**
- * iavf_get_priv_flags - report device private flags
- * @netdev: network interface device structure
- *
- * The get string set count and the string set should be matched for each
- * flag returned.  Add new strings for each flag to the iavf_gstrings_priv_flags
- * array.
- *
- * Returns a u32 bitmap of flags.
- **/
-static u32 iavf_get_priv_flags(struct net_device *netdev)
-{
-	struct iavf_adapter *adapter = netdev_priv(netdev);
-	u32 i, ret_flags = 0;
-
-	for (i = 0; i < IAVF_PRIV_FLAGS_STR_LEN; i++) {
-		const struct iavf_priv_flags *priv_flags;
-
-		priv_flags = &iavf_gstrings_priv_flags[i];
-
-		if (priv_flags->flag & adapter->flags)
-			ret_flags |= BIT(i);
-	}
-
-	return ret_flags;
-}
-
-/**
- * iavf_set_priv_flags - set private flags
- * @netdev: network interface device structure
- * @flags: bit flags to be set
- **/
-static int iavf_set_priv_flags(struct net_device *netdev, u32 flags)
-{
-	struct iavf_adapter *adapter = netdev_priv(netdev);
-	u32 orig_flags, new_flags, changed_flags;
-	int ret = 0;
-	u32 i;
-
-	orig_flags = READ_ONCE(adapter->flags);
-	new_flags = orig_flags;
-
-	for (i = 0; i < IAVF_PRIV_FLAGS_STR_LEN; i++) {
-		const struct iavf_priv_flags *priv_flags;
-
-		priv_flags = &iavf_gstrings_priv_flags[i];
-
-		if (flags & BIT(i))
-			new_flags |= priv_flags->flag;
-		else
-			new_flags &= ~(priv_flags->flag);
-
-		if (priv_flags->read_only &&
-		    ((orig_flags ^ new_flags) & ~BIT(i)))
-			return -EOPNOTSUPP;
-	}
-
-	/* Before we finalize any flag changes, any checks which we need to
-	 * perform to determine if the new flags will be supported should go
-	 * here...
-	 */
-
-	/* Compare and exchange the new flags into place. If we failed, that
-	 * is if cmpxchg returns anything but the old value, this means
-	 * something else must have modified the flags variable since we
-	 * copied it. We'll just punt with an error and log something in the
-	 * message buffer.
-	 */
-	if (cmpxchg(&adapter->flags, orig_flags, new_flags) != orig_flags) {
-		dev_warn(&adapter->pdev->dev,
-			 "Unable to update adapter->flags as it was modified by another thread...\n");
-		return -EAGAIN;
-	}
-
-	changed_flags = orig_flags ^ new_flags;
-
-	/* Process any additional changes needed as a result of flag changes.
-	 * The changed_flags value reflects the list of bits that were changed
-	 * in the code above.
-	 */
-
-	/* issue a reset to force legacy-rx change to take effect */
-	if (changed_flags & IAVF_FLAG_LEGACY_RX) {
-		if (netif_running(netdev)) {
-			iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);
-			ret = iavf_wait_for_reset(adapter);
-			if (ret)
-				netdev_warn(netdev, "Changing private flags timeout or interrupted waiting for reset");
-		}
-	}
-
-	return ret;
 }
 
 /**
···
 	strscpy(drvinfo->driver, iavf_driver_name, 32);
 	strscpy(drvinfo->fw_version, "N/A", 4);
 	strscpy(drvinfo->bus_info, pci_name(adapter->pdev), 32);
-	drvinfo->n_priv_flags = IAVF_PRIV_FLAGS_STR_LEN;
 }
 
 /**
···
 	.get_strings = iavf_get_strings,
 	.get_ethtool_stats = iavf_get_ethtool_stats,
 	.get_sset_count = iavf_get_sset_count,
-	.get_priv_flags = iavf_get_priv_flags,
-	.set_priv_flags = iavf_set_priv_flags,
 	.get_msglevel = iavf_get_msglevel,
 	.set_msglevel = iavf_set_msglevel,
 	.get_coalesce = iavf_get_coalesce,
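The deleted iavf_set_priv_flags() above used a read/modify/cmpxchg pattern: snapshot the flags with READ_ONCE(), compute the new value, then commit only if no other thread changed them in between, returning -EAGAIN otherwise. A hedged userspace equivalent using C11 atomics instead of the kernel's cmpxchg() (the function name and flag value are invented for illustration):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define FLAG_LEGACY_RX	(1u << 15)	/* stand-in for IAVF_FLAG_LEGACY_RX */

/* Set or clear one flag bit lock-free.  Returns false, as the driver
 * returned -EAGAIN, if another thread modified the flags between our
 * read and our commit.
 */
static bool update_flag(_Atomic uint32_t *flags, uint32_t flag, bool set)
{
	uint32_t orig = atomic_load(flags);
	uint32_t new = set ? (orig | flag) : (orig & ~flag);

	/* commits only if *flags still equals orig, like cmpxchg() */
	return atomic_compare_exchange_strong(flags, &orig, new);
}
```

With the legacy-rx private flag gone (Page Pool is now the only Rx model), there is nothing left for this machinery to toggle, which is why the whole block can be dropped.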
+6 -34
drivers/net/ethernet/intel/iavf/iavf_main.c
···
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright(c) 2013 - 2018 Intel Corporation. */
 
+#include <linux/net/intel/libie/rx.h>
+
 #include "iavf.h"
 #include "iavf_prototype.h"
 /* All iavf tracepoints are defined by the include below, which must
···
 MODULE_ALIAS("i40evf");
 MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
 MODULE_DESCRIPTION("Intel(R) Ethernet Adaptive Virtual Function Network Driver");
+MODULE_IMPORT_NS(LIBETH);
+MODULE_IMPORT_NS(LIBIE);
 MODULE_LICENSE("GPL v2");
 
 static const struct net_device_ops iavf_netdev_ops;
···
 **/
 static void iavf_configure_rx(struct iavf_adapter *adapter)
 {
-	unsigned int rx_buf_len = IAVF_RXBUFFER_2048;
 	struct iavf_hw *hw = &adapter->hw;
-	int i;
 
-	/* Legacy Rx will always default to a 2048 buffer size. */
-#if (PAGE_SIZE < 8192)
-	if (!(adapter->flags & IAVF_FLAG_LEGACY_RX)) {
-		struct net_device *netdev = adapter->netdev;
-
-		/* For jumbo frames on systems with 4K pages we have to use
-		 * an order 1 page, so we might as well increase the size
-		 * of our Rx buffer to make better use of the available space
-		 */
-		rx_buf_len = IAVF_RXBUFFER_3072;
-
-		/* We use a 1536 buffer size for configurations with
-		 * standard Ethernet mtu.  On x86 this gives us enough room
-		 * for shared info and 192 bytes of padding.
-		 */
-		if (!IAVF_2K_TOO_SMALL_WITH_PADDING &&
-		    (netdev->mtu <= ETH_DATA_LEN))
-			rx_buf_len = IAVF_RXBUFFER_1536 - NET_IP_ALIGN;
-	}
-#endif
-
-	for (i = 0; i < adapter->num_active_queues; i++) {
+	for (u32 i = 0; i < adapter->num_active_queues; i++)
 		adapter->rx_rings[i].tail = hw->hw_addr + IAVF_QRX_TAIL1(i);
-		adapter->rx_rings[i].rx_buf_len = rx_buf_len;
-
-		if (adapter->flags & IAVF_FLAG_LEGACY_RX)
-			clear_ring_build_skb_enabled(&adapter->rx_rings[i]);
-		else
-			set_ring_build_skb_enabled(&adapter->rx_rings[i]);
-	}
 }
 
 /**
···
 		rx_ring = &adapter->rx_rings[i];
 		rx_ring->queue_index = i;
 		rx_ring->netdev = adapter->netdev;
-		rx_ring->dev = &adapter->pdev->dev;
 		rx_ring->count = adapter->rx_desc_count;
 		rx_ring->itr_setting = IAVF_ITR_RX_DEF;
 	}
···
 	iavf_set_ethtool_ops(netdev);
 	netdev->watchdog_timeo = 5 * HZ;
 
-	/* MTU range: 68 - 9710 */
 	netdev->min_mtu = ETH_MIN_MTU;
-	netdev->max_mtu = IAVF_MAX_RXBUFFER - IAVF_PACKET_HDR_PAD;
+	netdev->max_mtu = LIBIE_MAX_MTU;
 
 	if (!is_valid_ether_addr(adapter->hw.mac.addr)) {
 		dev_info(&pdev->dev, "Invalid MAC address %pM, using random\n",
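The hunk above drops iavf's hand-rolled Rx buffer sizing, now delegated to libeth: 2048 bytes when legacy-rx was on, otherwise 3072 (jumbo frames on order-1 pages for 4K-page systems) or 1536 minus the IP alignment for a standard MTU. A sketch of the removed decision for the PAGE_SIZE < 8192 case; the constants mirror the deleted code, the helper name is invented, and the IAVF_2K_TOO_SMALL_WITH_PADDING check is deliberately omitted for brevity:

```c
#include <stdbool.h>

#define RXBUF_1536	1536	/* IAVF_RXBUFFER_1536 analogue */
#define RXBUF_2048	2048	/* IAVF_RXBUFFER_2048 analogue */
#define RXBUF_3072	3072	/* IAVF_RXBUFFER_3072 analogue */
#define ETH_DATA_LEN	1500
#define NET_IP_ALIGN	2

/* Mirror of the removed sizing logic: legacy-rx always used 2048;
 * otherwise 3072 for jumbo-capable rings, unless the MTU is standard,
 * where 1536 - NET_IP_ALIGN leaves room for skb_shared_info + padding.
 */
static int rx_buf_len(bool legacy_rx, int mtu)
{
	if (legacy_rx)
		return RXBUF_2048;
	if (mtu <= ETH_DATA_LEN)
		return RXBUF_1536 - NET_IP_ALIGN;
	return RXBUF_3072;
}
```

After the conversion, libeth_rx_fq_create() picks the buffer length from the queue fill parameters instead, so none of these per-driver branches survive.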
-7
drivers/net/ethernet/intel/iavf/iavf_prototype.h
···
 enum iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw, u16 seid,
 				     struct iavf_aqc_get_set_rss_key_data *key);
 
-extern struct iavf_rx_ptype_decoded iavf_ptype_lookup[];
-
-static inline struct iavf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
-{
-	return iavf_ptype_lookup[ptype];
-}
-
 void iavf_vf_parse_hw_config(struct iavf_hw *hw,
 			     struct virtchnl_vf_resource *msg);
 enum iavf_status iavf_aq_send_msg_to_pf(struct iavf_hw *hw,
+96 -453
drivers/net/ethernet/intel/iavf/iavf_txrx.c
···
 /* Copyright(c) 2013 - 2018 Intel Corporation. */
 
 #include <linux/bitfield.h>
+#include <linux/net/intel/libie/rx.h>
 #include <linux/prefetch.h>
 
 #include "iavf.h"
···
 	 * pending work.
 	 */
 	packets = tx_ring->stats.packets & INT_MAX;
-	if (tx_ring->tx_stats.prev_pkt_ctr == packets) {
+	if (tx_ring->prev_pkt_ctr == packets) {
 		iavf_force_wb(vsi, tx_ring->q_vector);
 		continue;
 	}
···
 	 * to iavf_get_tx_pending()
 	 */
 	smp_rmb();
-	tx_ring->tx_stats.prev_pkt_ctr =
+	tx_ring->prev_pkt_ctr =
 		iavf_get_tx_pending(tx_ring, true) ? packets : -1;
 	}
 }
···
 	    ((j / WB_STRIDE) == 0) && (j > 0) &&
 	    !test_bit(__IAVF_VSI_DOWN, vsi->state) &&
 	    (IAVF_DESC_UNUSED(tx_ring) != tx_ring->count))
-		tx_ring->arm_wb = true;
+		tx_ring->flags |= IAVF_TXR_FLAGS_ARM_WB;
 }
 
 /* notify netdev of completed buffers */
···
 
 	tx_ring->next_to_use = 0;
 	tx_ring->next_to_clean = 0;
-	tx_ring->tx_stats.prev_pkt_ctr = -1;
+	tx_ring->prev_pkt_ctr = -1;
 	return 0;
 
 err:
···
 **/
 static void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
 {
-	unsigned long bi_size;
-	u16 i;
-
 	/* ring already cleared, nothing to do */
-	if (!rx_ring->rx_bi)
+	if (!rx_ring->rx_fqes)
 		return;
 
 	if (rx_ring->skb) {
···
 		rx_ring->skb = NULL;
 	}
 
-	/* Free all the Rx ring sk_buffs */
-	for (i = 0; i < rx_ring->count; i++) {
-		struct iavf_rx_buffer *rx_bi = &rx_ring->rx_bi[i];
+	/* Free all the Rx ring buffers */
+	for (u32 i = rx_ring->next_to_clean; i != rx_ring->next_to_use; ) {
+		const struct libeth_fqe *rx_fqes = &rx_ring->rx_fqes[i];
 
-		if (!rx_bi->page)
-			continue;
+		page_pool_put_full_page(rx_ring->pp, rx_fqes->page, false);
 
-		/* Invalidate cache lines that may have been written to by
-		 * device so that we avoid corrupting memory.
-		 */
-		dma_sync_single_range_for_cpu(rx_ring->dev,
-					      rx_bi->dma,
-					      rx_bi->page_offset,
-					      rx_ring->rx_buf_len,
-					      DMA_FROM_DEVICE);
-
-		/* free resources associated with mapping */
-		dma_unmap_page_attrs(rx_ring->dev, rx_bi->dma,
-				     iavf_rx_pg_size(rx_ring),
-				     DMA_FROM_DEVICE,
-				     IAVF_RX_DMA_ATTR);
-
-		__page_frag_cache_drain(rx_bi->page, rx_bi->pagecnt_bias);
-
-		rx_bi->page = NULL;
-		rx_bi->page_offset = 0;
+		if (unlikely(++i == rx_ring->count))
+			i = 0;
 	}
 
-	bi_size = sizeof(struct iavf_rx_buffer) * rx_ring->count;
-	memset(rx_ring->rx_bi, 0, bi_size);
-
-	/* Zero out the descriptor ring */
-	memset(rx_ring->desc, 0, rx_ring->size);
-
-	rx_ring->next_to_alloc = 0;
 	rx_ring->next_to_clean = 0;
 	rx_ring->next_to_use = 0;
 }
···
 **/
 void iavf_free_rx_resources(struct iavf_ring *rx_ring)
 {
+	struct libeth_fq fq = {
+		.fqes	= rx_ring->rx_fqes,
+		.pp	= rx_ring->pp,
+	};
+
 	iavf_clean_rx_ring(rx_ring);
-	kfree(rx_ring->rx_bi);
-	rx_ring->rx_bi = NULL;
 
 	if (rx_ring->desc) {
-		dma_free_coherent(rx_ring->dev, rx_ring->size,
+		dma_free_coherent(rx_ring->pp->p.dev, rx_ring->size,
 				  rx_ring->desc, rx_ring->dma);
 		rx_ring->desc = NULL;
 	}
+
+	libeth_rx_fq_destroy(&fq);
+	rx_ring->rx_fqes = NULL;
+	rx_ring->pp = NULL;
 }
···
 **/
 int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
 {
-	struct device *dev = rx_ring->dev;
-	int bi_size;
+	struct libeth_fq fq = {
+		.count	= rx_ring->count,
+		.buf_len = LIBIE_MAX_RX_BUF_LEN,
+		.nid	= NUMA_NO_NODE,
+	};
+	int ret;
 
-	/* warn if we are about to overwrite the pointer */
-	WARN_ON(rx_ring->rx_bi);
-	bi_size = sizeof(struct iavf_rx_buffer) * rx_ring->count;
-	rx_ring->rx_bi = kzalloc(bi_size, GFP_KERNEL);
-	if (!rx_ring->rx_bi)
-		goto err;
+	ret = libeth_rx_fq_create(&fq, &rx_ring->q_vector->napi);
+	if (ret)
+		return ret;
+
+	rx_ring->pp = fq.pp;
+	rx_ring->rx_fqes = fq.fqes;
+	rx_ring->truesize = fq.truesize;
+	rx_ring->rx_buf_len = fq.buf_len;
 
 	u64_stats_init(&rx_ring->syncp);
 
 	/* Round up to nearest 4K */
 	rx_ring->size = rx_ring->count * sizeof(union iavf_32byte_rx_desc);
 	rx_ring->size = ALIGN(rx_ring->size, 4096);
-	rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size,
+	rx_ring->desc = dma_alloc_coherent(fq.pp->p.dev, rx_ring->size,
 					   &rx_ring->dma, GFP_KERNEL);
 
 	if (!rx_ring->desc) {
-		dev_info(dev, "Unable to allocate memory for the Rx descriptor ring, size=%d\n",
+		dev_info(fq.pp->p.dev, "Unable to allocate memory for the Rx descriptor ring, size=%d\n",
 			 rx_ring->size);
 		goto err;
 	}
 
-	rx_ring->next_to_alloc = 0;
 	rx_ring->next_to_clean = 0;
 	rx_ring->next_to_use = 0;
 
 	return 0;
+
 err:
-	kfree(rx_ring->rx_bi);
-	rx_ring->rx_bi = NULL;
+	libeth_rx_fq_destroy(&fq);
+	rx_ring->rx_fqes = NULL;
+	rx_ring->pp = NULL;
+
 	return -ENOMEM;
 }
···
 {
 	rx_ring->next_to_use = val;
 
-	/* update next to alloc since we have filled the ring */
-	rx_ring->next_to_alloc = val;
-
 	/* Force memory writes to complete before letting h/w
 	 * know there are new descriptors to fetch.  (Only
 	 * applicable for weak-ordered memory model archs,
···
 	 */
 	wmb();
 	writel(val, rx_ring->tail);
-}
-
-/**
- * iavf_rx_offset - Return expected offset into page to access data
- * @rx_ring: Ring we are requesting offset of
- *
- * Returns the offset value for ring into the data buffer.
- */
-static unsigned int iavf_rx_offset(struct iavf_ring *rx_ring)
-{
-	return ring_uses_build_skb(rx_ring) ? IAVF_SKB_PAD : 0;
-}
-
-/**
- * iavf_alloc_mapped_page - recycle or make a new page
- * @rx_ring: ring to use
- * @bi: rx_buffer struct to modify
- *
- * Returns true if the page was successfully allocated or
- * reused.
- **/
-static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring,
-				   struct iavf_rx_buffer *bi)
-{
-	struct page *page = bi->page;
-	dma_addr_t dma;
-
-	/* since we are recycling buffers we should seldom need to alloc */
-	if (likely(page)) {
-		rx_ring->rx_stats.page_reuse_count++;
-		return true;
-	}
-
-	/* alloc new page for storage */
-	page = dev_alloc_pages(iavf_rx_pg_order(rx_ring));
-	if (unlikely(!page)) {
-		rx_ring->rx_stats.alloc_page_failed++;
-		return false;
-	}
-
-	/* map page for use */
-	dma = dma_map_page_attrs(rx_ring->dev, page, 0,
-				 iavf_rx_pg_size(rx_ring),
-				 DMA_FROM_DEVICE,
-				 IAVF_RX_DMA_ATTR);
-
-	/* if mapping failed free memory back to system since
-	 * there isn't much point in holding memory we can't use
-	 */
-	if (dma_mapping_error(rx_ring->dev, dma)) {
-		__free_pages(page, iavf_rx_pg_order(rx_ring));
-		rx_ring->rx_stats.alloc_page_failed++;
-		return false;
-	}
-
-	bi->dma = dma;
-	bi->page = page;
-	bi->page_offset = iavf_rx_offset(rx_ring);
-
-	/* initialize pagecnt_bias to 1 representing we fully own page */
-	bi->pagecnt_bias = 1;
-
-	return true;
 }
 
 /**
···
 **/
 bool iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u16 cleaned_count)
 {
+	const struct libeth_fq_fp fq = {
+		.pp		= rx_ring->pp,
+		.fqes		= rx_ring->rx_fqes,
+		.truesize	= rx_ring->truesize,
+		.count		= rx_ring->count,
+	};
 	u16 ntu = rx_ring->next_to_use;
 	union iavf_rx_desc *rx_desc;
-	struct iavf_rx_buffer *bi;
 
 	/* do nothing if no valid netdev defined */
 	if (!rx_ring->netdev || !cleaned_count)
 		return false;
 
 	rx_desc = IAVF_RX_DESC(rx_ring, ntu);
-	bi = &rx_ring->rx_bi[ntu];
 
 	do {
-		if (!iavf_alloc_mapped_page(rx_ring, bi))
-			goto no_buffers;
+		dma_addr_t addr;
 
-		/* sync the buffer for use by the device */
-		dma_sync_single_range_for_device(rx_ring->dev, bi->dma,
-						 bi->page_offset,
-						 rx_ring->rx_buf_len,
-						 DMA_FROM_DEVICE);
+		addr = libeth_rx_alloc(&fq, ntu);
+		if (addr == DMA_MAPPING_ERROR)
+			goto no_buffers;
 
 		/* Refresh the desc even if buffer_addrs didn't change
 		 * because each write-back erases this info.
 		 */
-		rx_desc->read.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset);
+		rx_desc->read.pkt_addr = cpu_to_le64(addr);
 
 		rx_desc++;
-		bi++;
 		ntu++;
 		if (unlikely(ntu == rx_ring->count)) {
 			rx_desc = IAVF_RX_DESC(rx_ring, 0);
-			bi = rx_ring->rx_bi;
 			ntu = 0;
 		}
 
···
 	if (rx_ring->next_to_use != ntu)
 		iavf_release_rx_desc(rx_ring, ntu);
 
+	rx_ring->rx_stats.alloc_page_failed++;
+
 	/* make sure to come back via polling to try again after
 	 * allocation failure
 	 */
···
 				    struct sk_buff *skb,
 				    union iavf_rx_desc *rx_desc)
 {
-	struct iavf_rx_ptype_decoded decoded;
+	struct libeth_rx_pt decoded;
 	u32 rx_error, rx_status;
 	bool ipv4, ipv6;
 	u8 ptype;
 	u64 qword;
 
-	qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
-	ptype = FIELD_GET(IAVF_RXD_QW1_PTYPE_MASK, qword);
-	rx_error = FIELD_GET(IAVF_RXD_QW1_ERROR_MASK, qword);
-	rx_status = FIELD_GET(IAVF_RXD_QW1_STATUS_MASK, qword);
-	decoded = decode_rx_desc_ptype(ptype);
-
 	skb->ip_summed = CHECKSUM_NONE;
 
-	skb_checksum_none_assert(skb);
+	qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+	ptype = FIELD_GET(IAVF_RXD_QW1_PTYPE_MASK, qword);
 
-	/* Rx csum enabled and ip headers found? */
-	if (!(vsi->netdev->features & NETIF_F_RXCSUM))
+	decoded = libie_rx_pt_parse(ptype);
+	if (!libeth_rx_pt_has_checksum(vsi->netdev, decoded))
 		return;
+
+	rx_error = FIELD_GET(IAVF_RXD_QW1_ERROR_MASK, qword);
+	rx_status = FIELD_GET(IAVF_RXD_QW1_STATUS_MASK, qword);
 
 	/* did the hardware decode the packet and checksum? */
 	if (!(rx_status & BIT(IAVF_RX_DESC_STATUS_L3L4P_SHIFT)))
 		return;
 
-	/* both known and outer_ip must be set for the below code to work */
-	if (!(decoded.known && decoded.outer_ip))
-		return;
-
-	ipv4 = (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP) &&
-	       (decoded.outer_ip_ver == IAVF_RX_PTYPE_OUTER_IPV4);
-	ipv6 = (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP) &&
-	       (decoded.outer_ip_ver == IAVF_RX_PTYPE_OUTER_IPV6);
+	ipv4 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV4;
+	ipv6 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV6;
 
 	if (ipv4 &&
 	    (rx_error & (BIT(IAVF_RX_DESC_ERROR_IPE_SHIFT) |
···
 	if (rx_error & BIT(IAVF_RX_DESC_ERROR_PPRS_SHIFT))
 		return;
 
-	/* Only report checksum unnecessary for TCP, UDP, or SCTP */
-	switch (decoded.inner_prot) {
-	case IAVF_RX_PTYPE_INNER_PROT_TCP:
-	case IAVF_RX_PTYPE_INNER_PROT_UDP:
-	case IAVF_RX_PTYPE_INNER_PROT_SCTP:
-		skb->ip_summed = CHECKSUM_UNNECESSARY;
-		fallthrough;
-	default:
-		break;
-	}
-
+	skb->ip_summed = CHECKSUM_UNNECESSARY;
 	return;
 
 checksum_fail:
 	vsi->back->hw_csum_rx_error++;
-}
-
-/**
- * iavf_ptype_to_htype - get a hash type
- * @ptype: the ptype value from the descriptor
- *
- * Returns a hash type to be used by skb_set_hash
- **/
-static int iavf_ptype_to_htype(u8 ptype)
-{
-	struct iavf_rx_ptype_decoded decoded = decode_rx_desc_ptype(ptype);
-
-	if (!decoded.known)
-		return PKT_HASH_TYPE_NONE;
-
-	if (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP &&
-	    decoded.payload_layer == IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY4)
-		return PKT_HASH_TYPE_L4;
-	else if (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP &&
-		 decoded.payload_layer == IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY3)
-		return PKT_HASH_TYPE_L3;
-	else
-		return PKT_HASH_TYPE_L2;
 }
 
 /**
···
 			struct sk_buff *skb,
 			u8 rx_ptype)
 {
+	struct libeth_rx_pt decoded;
 	u32 hash;
 	const __le64 rss_mask =
 		cpu_to_le64((u64)IAVF_RX_DESC_FLTSTAT_RSS_HASH <<
 			    IAVF_RX_DESC_STATUS_FLTSTAT_SHIFT);
 
-	if (!(ring->netdev->features & NETIF_F_RXHASH))
+	decoded = libie_rx_pt_parse(rx_ptype);
+	if (!libeth_rx_pt_has_hash(ring->netdev, decoded))
 		return;
 
 	if ((rx_desc->wb.qword1.status_error_len & rss_mask) == rss_mask) {
 		hash = le32_to_cpu(rx_desc->wb.qword0.hi_dword.rss);
-		skb_set_hash(skb, hash, iavf_ptype_to_htype(rx_ptype));
+		libeth_rx_pt_set_hash(skb, hash, decoded);
 	}
 }
···
 }
 
 /**
- * iavf_reuse_rx_page - page flip buffer and store it back on the ring
- * @rx_ring: rx descriptor ring to store buffers on
- * @old_buff: donor buffer to have page reused
- *
- * Synchronizes page for reuse by the adapter
- **/
-static void iavf_reuse_rx_page(struct iavf_ring *rx_ring,
-			       struct iavf_rx_buffer *old_buff)
-{
-	struct iavf_rx_buffer *new_buff;
-	u16 nta = rx_ring->next_to_alloc;
-
-	new_buff = &rx_ring->rx_bi[nta];
-
-	/* update, and store next to alloc */
-	nta++;
-	rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
-
-	/* transfer page from old buffer to new buffer */
-	new_buff->dma = old_buff->dma;
-	new_buff->page = old_buff->page;
-	new_buff->page_offset = old_buff->page_offset;
-	new_buff->pagecnt_bias = old_buff->pagecnt_bias;
-}
-
-/**
- * iavf_can_reuse_rx_page - Determine if this page can be reused by
- * the adapter for another receive
- *
- * @rx_buffer: buffer containing the page
- *
- * If page is reusable, rx_buffer->page_offset is adjusted to point to
- * an unused region in the page.
- *
- * For small pages, @truesize will be a constant value, half the size
- * of the memory at page.  We'll attempt to alternate between high and
- * low halves of the page, with one half ready for use by the hardware
- * and the other half being consumed by the stack.  We use the page
- * ref count to determine whether the stack has finished consuming the
- * portion of this page that was passed up with a previous packet.  If
- * the page ref count is >1, we'll assume the "other" half page is
- * still busy, and this page cannot be reused.
- *
- * For larger pages, @truesize will be the actual space used by the
- * received packet (adjusted upward to an even multiple of the cache
- * line size).  This will advance through the page by the amount
- * actually consumed by the received packets while there is still
- * space for a buffer.  Each region of larger pages will be used at
- * most once, after which the page will not be reused.
- *
- * In either case, if the page is reusable its refcount is increased.
- **/
-static bool iavf_can_reuse_rx_page(struct iavf_rx_buffer *rx_buffer)
-{
-	unsigned int pagecnt_bias = rx_buffer->pagecnt_bias;
-	struct page *page = rx_buffer->page;
-
-	/* Is any reuse possible? */
-	if (!dev_page_is_reusable(page))
-		return false;
-
-#if (PAGE_SIZE < 8192)
-	/* if we are only owner of page we can reuse it */
-	if (unlikely((page_count(page) - pagecnt_bias) > 1))
-		return false;
-#else
-#define IAVF_LAST_OFFSET \
-	(SKB_WITH_OVERHEAD(PAGE_SIZE) - IAVF_RXBUFFER_2048)
-	if (rx_buffer->page_offset > IAVF_LAST_OFFSET)
-		return false;
-#endif
-
-	/* If we have drained the page fragment pool we need to update
-	 * the pagecnt_bias and page count so that we fully restock the
-	 * number of references the driver holds.
-	 */
-	if (unlikely(!pagecnt_bias)) {
-		page_ref_add(page, USHRT_MAX);
-		rx_buffer->pagecnt_bias = USHRT_MAX;
-	}
-
-	return true;
-}
-
-/**
  * iavf_add_rx_frag - Add contents of Rx buffer to sk_buff
- * @rx_ring: rx descriptor ring to transact packets on
- * @rx_buffer: buffer containing page to add
  * @skb: sk_buff to place the data into
+ * @rx_buffer: buffer containing page to add
  * @size: packet length from rx_desc
  *
  * This function will add the data contained in rx_buffer->page to the skb.
···
 *
 * The function will then update the page offset.
 **/
-static void iavf_add_rx_frag(struct iavf_ring *rx_ring,
-			     struct iavf_rx_buffer *rx_buffer,
-			     struct sk_buff *skb,
+static void iavf_add_rx_frag(struct sk_buff *skb,
+			     const struct libeth_fqe *rx_buffer,
 			     unsigned int size)
 {
-#if (PAGE_SIZE < 8192)
-	unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2;
-#else
-	unsigned int truesize = SKB_DATA_ALIGN(size + iavf_rx_offset(rx_ring));
-#endif
-
-	if (!size)
-		return;
+	u32 hr = rx_buffer->page->pp->p.offset;
 
 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
-			rx_buffer->page_offset, size, truesize);
-
-	/* page is being used so we must update the page offset */
-#if (PAGE_SIZE < 8192)
-	rx_buffer->page_offset ^= truesize;
-#else
-	rx_buffer->page_offset += truesize;
-#endif
-}
-
-/**
- * iavf_get_rx_buffer - Fetch Rx buffer and synchronize data for use
- * @rx_ring: rx descriptor ring to transact packets on
- * @size: size of buffer to add to skb
- *
- * This function will pull an Rx buffer from the ring and synchronize it
1081 - */ 1082 - static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring, 1083 - const unsigned int size) 1084 - { 1085 - struct iavf_rx_buffer *rx_buffer; 1086 - 1087 - rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean]; 1088 - prefetchw(rx_buffer->page); 1089 - if (!size) 1090 - return rx_buffer; 1091 - 1092 - /* we are reusing so sync this buffer for CPU use */ 1093 - dma_sync_single_range_for_cpu(rx_ring->dev, 1094 - rx_buffer->dma, 1095 - rx_buffer->page_offset, 1096 - size, 1097 - DMA_FROM_DEVICE); 1098 - 1099 - /* We have pulled a buffer for use, so decrement pagecnt_bias */ 1100 - rx_buffer->pagecnt_bias--; 1101 - 1102 - return rx_buffer; 1103 - } 1104 - 1105 - /** 1106 - * iavf_construct_skb - Allocate skb and populate it 1107 - * @rx_ring: rx descriptor ring to transact packets on 1108 - * @rx_buffer: rx buffer to pull data from 1109 - * @size: size of buffer to add to skb 1110 - * 1111 - * This function allocates an skb. It then populates it with the page 1112 - * data from the current receive descriptor, taking care to set up the 1113 - * skb correctly. 
1114 - */ 1115 - static struct sk_buff *iavf_construct_skb(struct iavf_ring *rx_ring, 1116 - struct iavf_rx_buffer *rx_buffer, 1117 - unsigned int size) 1118 - { 1119 - void *va; 1120 - #if (PAGE_SIZE < 8192) 1121 - unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2; 1122 - #else 1123 - unsigned int truesize = SKB_DATA_ALIGN(size); 1124 - #endif 1125 - unsigned int headlen; 1126 - struct sk_buff *skb; 1127 - 1128 - if (!rx_buffer) 1129 - return NULL; 1130 - /* prefetch first cache line of first page */ 1131 - va = page_address(rx_buffer->page) + rx_buffer->page_offset; 1132 - net_prefetch(va); 1133 - 1134 - /* allocate a skb to store the frags */ 1135 - skb = napi_alloc_skb(&rx_ring->q_vector->napi, IAVF_RX_HDR_SIZE); 1136 - if (unlikely(!skb)) 1137 - return NULL; 1138 - 1139 - /* Determine available headroom for copy */ 1140 - headlen = size; 1141 - if (headlen > IAVF_RX_HDR_SIZE) 1142 - headlen = eth_get_headlen(skb->dev, va, IAVF_RX_HDR_SIZE); 1143 - 1144 - /* align pull length to size of long to optimize memcpy performance */ 1145 - memcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long))); 1146 - 1147 - /* update all of the pointers */ 1148 - size -= headlen; 1149 - if (size) { 1150 - skb_add_rx_frag(skb, 0, rx_buffer->page, 1151 - rx_buffer->page_offset + headlen, 1152 - size, truesize); 1153 - 1154 - /* buffer is used by skb, update page_offset */ 1155 - #if (PAGE_SIZE < 8192) 1156 - rx_buffer->page_offset ^= truesize; 1157 - #else 1158 - rx_buffer->page_offset += truesize; 1159 - #endif 1160 - } else { 1161 - /* buffer is unused, reset bias back to rx_buffer */ 1162 - rx_buffer->pagecnt_bias++; 1163 - } 1164 - 1165 - return skb; 1258 + rx_buffer->offset + hr, size, rx_buffer->truesize); 1166 1259 } 1167 1260 1168 1261 /** 1169 1262 * iavf_build_skb - Build skb around an existing buffer 1170 - * @rx_ring: Rx descriptor ring to transact packets on 1171 1263 * @rx_buffer: Rx buffer to pull data from 1172 1264 * @size: size of buffer to add to skb 
1173 1265 * 1174 1266 * This function builds an skb around an existing Rx buffer, taking care 1175 1267 * to set up the skb correctly and avoid any memcpy overhead. 1176 1268 */ 1177 - static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring, 1178 - struct iavf_rx_buffer *rx_buffer, 1269 + static struct sk_buff *iavf_build_skb(const struct libeth_fqe *rx_buffer, 1179 1270 unsigned int size) 1180 1271 { 1181 - void *va; 1182 - #if (PAGE_SIZE < 8192) 1183 - unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2; 1184 - #else 1185 - unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) + 1186 - SKB_DATA_ALIGN(IAVF_SKB_PAD + size); 1187 - #endif 1272 + u32 hr = rx_buffer->page->pp->p.offset; 1188 1273 struct sk_buff *skb; 1274 + void *va; 1189 1275 1190 - if (!rx_buffer || !size) 1191 - return NULL; 1192 1276 /* prefetch first cache line of first page */ 1193 - va = page_address(rx_buffer->page) + rx_buffer->page_offset; 1194 - net_prefetch(va); 1277 + va = page_address(rx_buffer->page) + rx_buffer->offset; 1278 + net_prefetch(va + hr); 1195 1279 1196 1280 /* build an skb around the page buffer */ 1197 - skb = napi_build_skb(va - IAVF_SKB_PAD, truesize); 1281 + skb = napi_build_skb(va, rx_buffer->truesize); 1198 1282 if (unlikely(!skb)) 1199 1283 return NULL; 1200 1284 1285 + skb_mark_for_recycle(skb); 1286 + 1201 1287 /* update pointers within the skb to store the data */ 1202 - skb_reserve(skb, IAVF_SKB_PAD); 1288 + skb_reserve(skb, hr); 1203 1289 __skb_put(skb, size); 1204 1290 1205 - /* buffer is used by skb, update page_offset */ 1206 - #if (PAGE_SIZE < 8192) 1207 - rx_buffer->page_offset ^= truesize; 1208 - #else 1209 - rx_buffer->page_offset += truesize; 1210 - #endif 1211 - 1212 1291 return skb; 1213 - } 1214 - 1215 - /** 1216 - * iavf_put_rx_buffer - Clean up used buffer and either recycle or free 1217 - * @rx_ring: rx descriptor ring to transact packets on 1218 - * @rx_buffer: rx buffer to pull data from 1219 - * 1220 - * This function 
will clean up the contents of the rx_buffer. It will 1221 - * either recycle the buffer or unmap it and free the associated resources. 1222 - */ 1223 - static void iavf_put_rx_buffer(struct iavf_ring *rx_ring, 1224 - struct iavf_rx_buffer *rx_buffer) 1225 - { 1226 - if (!rx_buffer) 1227 - return; 1228 - 1229 - if (iavf_can_reuse_rx_page(rx_buffer)) { 1230 - /* hand second half of page back to the ring */ 1231 - iavf_reuse_rx_page(rx_ring, rx_buffer); 1232 - rx_ring->rx_stats.page_reuse_count++; 1233 - } else { 1234 - /* we are not reusing the buffer so unmap it */ 1235 - dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma, 1236 - iavf_rx_pg_size(rx_ring), 1237 - DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR); 1238 - __page_frag_cache_drain(rx_buffer->page, 1239 - rx_buffer->pagecnt_bias); 1240 - } 1241 - 1242 - /* clear contents of buffer_info */ 1243 - rx_buffer->page = NULL; 1244 1292 } 1245 1293 1246 1294 /** ··· 1142 1498 bool failure = false; 1143 1499 1144 1500 while (likely(total_rx_packets < (unsigned int)budget)) { 1145 - struct iavf_rx_buffer *rx_buffer; 1501 + struct libeth_fqe *rx_buffer; 1146 1502 union iavf_rx_desc *rx_desc; 1147 1503 unsigned int size; 1148 1504 u16 vlan_tag = 0; ··· 1177 1533 size = FIELD_GET(IAVF_RXD_QW1_LENGTH_PBUF_MASK, qword); 1178 1534 1179 1535 iavf_trace(clean_rx_irq, rx_ring, rx_desc, skb); 1180 - rx_buffer = iavf_get_rx_buffer(rx_ring, size); 1536 + 1537 + rx_buffer = &rx_ring->rx_fqes[rx_ring->next_to_clean]; 1538 + if (!libeth_rx_sync_for_cpu(rx_buffer, size)) 1539 + goto skip_data; 1181 1540 1182 1541 /* retrieve a buffer from the ring */ 1183 1542 if (skb) 1184 - iavf_add_rx_frag(rx_ring, rx_buffer, skb, size); 1185 - else if (ring_uses_build_skb(rx_ring)) 1186 - skb = iavf_build_skb(rx_ring, rx_buffer, size); 1543 + iavf_add_rx_frag(skb, rx_buffer, size); 1187 1544 else 1188 - skb = iavf_construct_skb(rx_ring, rx_buffer, size); 1545 + skb = iavf_build_skb(rx_buffer, size); 1189 1546 1190 1547 /* exit if we failed to retrieve a 
buffer */ 1191 1548 if (!skb) { 1192 1549 rx_ring->rx_stats.alloc_buff_failed++; 1193 - if (rx_buffer && size) 1194 - rx_buffer->pagecnt_bias++; 1195 1550 break; 1196 1551 } 1197 1552 1198 - iavf_put_rx_buffer(rx_ring, rx_buffer); 1553 + skip_data: 1199 1554 cleaned_count++; 1200 1555 1201 - if (iavf_is_non_eop(rx_ring, rx_desc, skb)) 1556 + if (iavf_is_non_eop(rx_ring, rx_desc, skb) || unlikely(!skb)) 1202 1557 continue; 1203 1558 1204 1559 /* ERR_MASK will only have valid bits if EOP set, and ··· 1384 1741 clean_complete = false; 1385 1742 continue; 1386 1743 } 1387 - arm_wb |= ring->arm_wb; 1388 - ring->arm_wb = false; 1744 + arm_wb |= !!(ring->flags & IAVF_TXR_FLAGS_ARM_WB); 1745 + ring->flags &= ~IAVF_TXR_FLAGS_ARM_WB; 1389 1746 } 1390 1747 1391 1748 /* Handle case where we are called by netpoll with a budget of 0 */
+13 -133
drivers/net/ethernet/intel/iavf/iavf_txrx.h
··· 80 80 BIT_ULL(IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP) | \ 81 81 BIT_ULL(IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP)) 82 82 83 - /* Supported Rx Buffer Sizes (a multiple of 128) */ 84 - #define IAVF_RXBUFFER_256 256 85 - #define IAVF_RXBUFFER_1536 1536 /* 128B aligned standard Ethernet frame */ 86 - #define IAVF_RXBUFFER_2048 2048 87 - #define IAVF_RXBUFFER_3072 3072 /* Used for large frames w/ padding */ 88 - #define IAVF_MAX_RXBUFFER 9728 /* largest size for single descriptor */ 89 - 90 - /* NOTE: netdev_alloc_skb reserves up to 64 bytes, NET_IP_ALIGN means we 91 - * reserve 2 more, and skb_shared_info adds an additional 384 bytes more, 92 - * this adds up to 512 bytes of extra data meaning the smallest allocation 93 - * we could have is 1K. 94 - * i.e. RXBUFFER_256 --> 960 byte skb (size-1024 slab) 95 - * i.e. RXBUFFER_512 --> 1216 byte skb (size-2048 slab) 96 - */ 97 - #define IAVF_RX_HDR_SIZE IAVF_RXBUFFER_256 98 - #define IAVF_PACKET_HDR_PAD (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2)) 99 83 #define iavf_rx_desc iavf_32byte_rx_desc 100 - 101 - #define IAVF_RX_DMA_ATTR \ 102 - (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING) 103 - 104 - /* Attempt to maximize the headroom available for incoming frames. We 105 - * use a 2K buffer for receives and need 1536/1534 to store the data for 106 - * the frame. This leaves us with 512 bytes of room. From that we need 107 - * to deduct the space needed for the shared info and the padding needed 108 - * to IP align the frame. 109 - * 110 - * Note: For cache line sizes 256 or larger this value is going to end 111 - * up negative. In these cases we should fall back to the legacy 112 - * receive path. 
113 - */ 114 - #if (PAGE_SIZE < 8192) 115 - #define IAVF_2K_TOO_SMALL_WITH_PADDING \ 116 - ((NET_SKB_PAD + IAVF_RXBUFFER_1536) > SKB_WITH_OVERHEAD(IAVF_RXBUFFER_2048)) 117 - 118 - static inline int iavf_compute_pad(int rx_buf_len) 119 - { 120 - int page_size, pad_size; 121 - 122 - page_size = ALIGN(rx_buf_len, PAGE_SIZE / 2); 123 - pad_size = SKB_WITH_OVERHEAD(page_size) - rx_buf_len; 124 - 125 - return pad_size; 126 - } 127 - 128 - static inline int iavf_skb_pad(void) 129 - { 130 - int rx_buf_len; 131 - 132 - /* If a 2K buffer cannot handle a standard Ethernet frame then 133 - * optimize padding for a 3K buffer instead of a 1.5K buffer. 134 - * 135 - * For a 3K buffer we need to add enough padding to allow for 136 - * tailroom due to NET_IP_ALIGN possibly shifting us out of 137 - * cache-line alignment. 138 - */ 139 - if (IAVF_2K_TOO_SMALL_WITH_PADDING) 140 - rx_buf_len = IAVF_RXBUFFER_3072 + SKB_DATA_ALIGN(NET_IP_ALIGN); 141 - else 142 - rx_buf_len = IAVF_RXBUFFER_1536; 143 - 144 - /* if needed make room for NET_IP_ALIGN */ 145 - rx_buf_len -= NET_IP_ALIGN; 146 - 147 - return iavf_compute_pad(rx_buf_len); 148 - } 149 - 150 - #define IAVF_SKB_PAD iavf_skb_pad() 151 - #else 152 - #define IAVF_2K_TOO_SMALL_WITH_PADDING false 153 - #define IAVF_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN) 154 - #endif 155 84 156 85 /** 157 86 * iavf_test_staterr - tests bits in Rx descriptor status and error fields ··· 200 271 u32 tx_flags; 201 272 }; 202 273 203 - struct iavf_rx_buffer { 204 - dma_addr_t dma; 205 - struct page *page; 206 - #if (BITS_PER_LONG > 32) || (PAGE_SIZE >= 65536) 207 - __u32 page_offset; 208 - #else 209 - __u16 page_offset; 210 - #endif 211 - __u16 pagecnt_bias; 212 - }; 213 - 214 274 struct iavf_queue_stats { 215 275 u64 packets; 216 276 u64 bytes; ··· 211 293 u64 tx_done_old; 212 294 u64 tx_linearize; 213 295 u64 tx_force_wb; 214 - int prev_pkt_ctr; 215 296 u64 tx_lost_interrupt; 216 297 }; 217 298 ··· 218 301 u64 non_eop_descs; 219 302 u64 alloc_page_failed; 220 
303 u64 alloc_buff_failed; 221 - u64 page_reuse_count; 222 - u64 realloc_count; 223 - }; 224 - 225 - enum iavf_ring_state_t { 226 - __IAVF_TX_FDIR_INIT_DONE, 227 - __IAVF_TX_XPS_INIT_DONE, 228 - __IAVF_RING_STATE_NBITS /* must be last */ 229 304 }; 230 305 231 306 /* some useful defines for virtchannel interface, which ··· 235 326 struct iavf_ring { 236 327 struct iavf_ring *next; /* pointer to next ring in q_vector */ 237 328 void *desc; /* Descriptor ring memory */ 238 - struct device *dev; /* Used for DMA mapping */ 329 + union { 330 + struct page_pool *pp; /* Used on Rx for buffer management */ 331 + struct device *dev; /* Used on Tx for DMA mapping */ 332 + }; 239 333 struct net_device *netdev; /* netdev ring maps to */ 240 334 union { 335 + struct libeth_fqe *rx_fqes; 241 336 struct iavf_tx_buffer *tx_bi; 242 - struct iavf_rx_buffer *rx_bi; 243 337 }; 244 - DECLARE_BITMAP(state, __IAVF_RING_STATE_NBITS); 245 - u16 queue_index; /* Queue number of ring */ 246 - u8 dcb_tc; /* Traffic class of ring */ 247 338 u8 __iomem *tail; 339 + u32 truesize; 340 + 341 + u16 queue_index; /* Queue number of ring */ 248 342 249 343 /* high bit set means dynamic, use accessors routines to read/write. 250 344 * hardware only supports 2us resolution for the ITR registers. 
··· 257 345 u16 itr_setting; 258 346 259 347 u16 count; /* Number of descriptors */ 260 - u16 reg_idx; /* HW register index of the ring */ 261 - u16 rx_buf_len; 262 348 263 349 /* used in interrupt processing */ 264 350 u16 next_to_use; 265 351 u16 next_to_clean; 266 352 267 - u8 atr_sample_rate; 268 - u8 atr_count; 269 - 270 - bool ring_active; /* is ring online or not */ 271 - bool arm_wb; /* do something to arm write back */ 272 - u8 packet_stride; 273 - 274 353 u16 flags; 275 354 #define IAVF_TXR_FLAGS_WB_ON_ITR BIT(0) 276 - #define IAVF_RXR_FLAGS_BUILD_SKB_ENABLED BIT(1) 355 + #define IAVF_TXR_FLAGS_ARM_WB BIT(1) 356 + /* BIT(2) is free */ 277 357 #define IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(3) 278 358 #define IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(4) 279 359 #define IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2 BIT(5) ··· 278 374 struct iavf_rx_queue_stats rx_stats; 279 375 }; 280 376 377 + int prev_pkt_ctr; /* For Tx stall detection */ 281 378 unsigned int size; /* length of descriptor ring in bytes */ 282 379 dma_addr_t dma; /* physical address of ring */ 283 380 ··· 286 381 struct iavf_q_vector *q_vector; /* Backreference to associated vector */ 287 382 288 383 struct rcu_head rcu; /* to avoid race on free */ 289 - u16 next_to_alloc; 290 384 struct sk_buff *skb; /* When iavf_clean_rx_ring_irq() must 291 385 * return before it sees the EOP for 292 386 * the current packet, we save that skb ··· 294 390 * iavf_clean_rx_ring_irq() is called 295 391 * for this ring. 
296 392 */ 393 + 394 + u32 rx_buf_len; 297 395 } ____cacheline_internodealigned_in_smp; 298 - 299 - static inline bool ring_uses_build_skb(struct iavf_ring *ring) 300 - { 301 - return !!(ring->flags & IAVF_RXR_FLAGS_BUILD_SKB_ENABLED); 302 - } 303 - 304 - static inline void set_ring_build_skb_enabled(struct iavf_ring *ring) 305 - { 306 - ring->flags |= IAVF_RXR_FLAGS_BUILD_SKB_ENABLED; 307 - } 308 - 309 - static inline void clear_ring_build_skb_enabled(struct iavf_ring *ring) 310 - { 311 - ring->flags &= ~IAVF_RXR_FLAGS_BUILD_SKB_ENABLED; 312 - } 313 396 314 397 #define IAVF_ITR_ADAPTIVE_MIN_INC 0x0002 315 398 #define IAVF_ITR_ADAPTIVE_MIN_USECS 0x0002 ··· 318 427 /* iterator for handling rings in ring container */ 319 428 #define iavf_for_each_ring(pos, head) \ 320 429 for (pos = (head).ring; pos != NULL; pos = pos->next) 321 - 322 - static inline unsigned int iavf_rx_pg_order(struct iavf_ring *ring) 323 - { 324 - #if (PAGE_SIZE < 8192) 325 - if (ring->rx_buf_len > (PAGE_SIZE / 2)) 326 - return 1; 327 - #endif 328 - return 0; 329 - } 330 - 331 - #define iavf_rx_pg_size(_ring) (PAGE_SIZE << iavf_rx_pg_order(_ring)) 332 430 333 431 bool iavf_alloc_rx_buffers(struct iavf_ring *rxr, u16 cleaned_count); 334 432 netdev_tx_t iavf_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
-90
drivers/net/ethernet/intel/iavf/iavf_type.h
··· 10 10 #include "iavf_adminq.h" 11 11 #include "iavf_devids.h" 12 12 13 - #define IAVF_RXQ_CTX_DBUFF_SHIFT 7 14 - 15 13 /* IAVF_MASK is a macro used on 32 bit registers */ 16 14 #define IAVF_MASK(mask, shift) ((u32)(mask) << (shift)) 17 15 ··· 324 326 325 327 #define IAVF_RXD_QW1_PTYPE_SHIFT 30 326 328 #define IAVF_RXD_QW1_PTYPE_MASK (0xFFULL << IAVF_RXD_QW1_PTYPE_SHIFT) 327 - 328 - /* Packet type non-ip values */ 329 - enum iavf_rx_l2_ptype { 330 - IAVF_RX_PTYPE_L2_RESERVED = 0, 331 - IAVF_RX_PTYPE_L2_MAC_PAY2 = 1, 332 - IAVF_RX_PTYPE_L2_TIMESYNC_PAY2 = 2, 333 - IAVF_RX_PTYPE_L2_FIP_PAY2 = 3, 334 - IAVF_RX_PTYPE_L2_OUI_PAY2 = 4, 335 - IAVF_RX_PTYPE_L2_MACCNTRL_PAY2 = 5, 336 - IAVF_RX_PTYPE_L2_LLDP_PAY2 = 6, 337 - IAVF_RX_PTYPE_L2_ECP_PAY2 = 7, 338 - IAVF_RX_PTYPE_L2_EVB_PAY2 = 8, 339 - IAVF_RX_PTYPE_L2_QCN_PAY2 = 9, 340 - IAVF_RX_PTYPE_L2_EAPOL_PAY2 = 10, 341 - IAVF_RX_PTYPE_L2_ARP = 11, 342 - IAVF_RX_PTYPE_L2_FCOE_PAY3 = 12, 343 - IAVF_RX_PTYPE_L2_FCOE_FCDATA_PAY3 = 13, 344 - IAVF_RX_PTYPE_L2_FCOE_FCRDY_PAY3 = 14, 345 - IAVF_RX_PTYPE_L2_FCOE_FCRSP_PAY3 = 15, 346 - IAVF_RX_PTYPE_L2_FCOE_FCOTHER_PA = 16, 347 - IAVF_RX_PTYPE_L2_FCOE_VFT_PAY3 = 17, 348 - IAVF_RX_PTYPE_L2_FCOE_VFT_FCDATA = 18, 349 - IAVF_RX_PTYPE_L2_FCOE_VFT_FCRDY = 19, 350 - IAVF_RX_PTYPE_L2_FCOE_VFT_FCRSP = 20, 351 - IAVF_RX_PTYPE_L2_FCOE_VFT_FCOTHER = 21, 352 - IAVF_RX_PTYPE_GRENAT4_MAC_PAY3 = 58, 353 - IAVF_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4 = 87, 354 - IAVF_RX_PTYPE_GRENAT6_MAC_PAY3 = 124, 355 - IAVF_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4 = 153 356 - }; 357 - 358 - struct iavf_rx_ptype_decoded { 359 - u32 known:1; 360 - u32 outer_ip:1; 361 - u32 outer_ip_ver:1; 362 - u32 outer_frag:1; 363 - u32 tunnel_type:3; 364 - u32 tunnel_end_prot:2; 365 - u32 tunnel_end_frag:1; 366 - u32 inner_prot:4; 367 - u32 payload_layer:3; 368 - }; 369 - 370 - enum iavf_rx_ptype_outer_ip { 371 - IAVF_RX_PTYPE_OUTER_L2 = 0, 372 - IAVF_RX_PTYPE_OUTER_IP = 1 373 - }; 374 - 375 - enum iavf_rx_ptype_outer_ip_ver { 
376 - IAVF_RX_PTYPE_OUTER_NONE = 0, 377 - IAVF_RX_PTYPE_OUTER_IPV4 = 0, 378 - IAVF_RX_PTYPE_OUTER_IPV6 = 1 379 - }; 380 - 381 - enum iavf_rx_ptype_outer_fragmented { 382 - IAVF_RX_PTYPE_NOT_FRAG = 0, 383 - IAVF_RX_PTYPE_FRAG = 1 384 - }; 385 - 386 - enum iavf_rx_ptype_tunnel_type { 387 - IAVF_RX_PTYPE_TUNNEL_NONE = 0, 388 - IAVF_RX_PTYPE_TUNNEL_IP_IP = 1, 389 - IAVF_RX_PTYPE_TUNNEL_IP_GRENAT = 2, 390 - IAVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, 391 - IAVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, 392 - }; 393 - 394 - enum iavf_rx_ptype_tunnel_end_prot { 395 - IAVF_RX_PTYPE_TUNNEL_END_NONE = 0, 396 - IAVF_RX_PTYPE_TUNNEL_END_IPV4 = 1, 397 - IAVF_RX_PTYPE_TUNNEL_END_IPV6 = 2, 398 - }; 399 - 400 - enum iavf_rx_ptype_inner_prot { 401 - IAVF_RX_PTYPE_INNER_PROT_NONE = 0, 402 - IAVF_RX_PTYPE_INNER_PROT_UDP = 1, 403 - IAVF_RX_PTYPE_INNER_PROT_TCP = 2, 404 - IAVF_RX_PTYPE_INNER_PROT_SCTP = 3, 405 - IAVF_RX_PTYPE_INNER_PROT_ICMP = 4, 406 - IAVF_RX_PTYPE_INNER_PROT_TIMESYNC = 5 407 - }; 408 - 409 - enum iavf_rx_ptype_payload_layer { 410 - IAVF_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, 411 - IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, 412 - IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, 413 - IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, 414 - }; 415 329 416 330 #define IAVF_RXD_QW1_LENGTH_PBUF_SHIFT 38 417 331 #define IAVF_RXD_QW1_LENGTH_PBUF_MASK (0x3FFFULL << \
+6 -11
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* Copyright(c) 2013 - 2018 Intel Corporation. */ 3 3 4 + #include <linux/net/intel/libie/rx.h> 5 + 4 6 #include "iavf.h" 5 7 #include "iavf_prototype.h" 6 8 ··· 270 268 void iavf_configure_queues(struct iavf_adapter *adapter) 271 269 { 272 270 struct virtchnl_vsi_queue_config_info *vqci; 273 - int i, max_frame = adapter->vf_res->max_mtu; 274 271 int pairs = adapter->num_active_queues; 275 272 struct virtchnl_queue_pair_info *vqpi; 273 + u32 i, max_frame; 276 274 size_t len; 277 275 278 - if (max_frame > IAVF_MAX_RXBUFFER || !max_frame) 279 - max_frame = IAVF_MAX_RXBUFFER; 276 + max_frame = LIBIE_MAX_RX_FRM_LEN(adapter->rx_rings->pp->p.offset); 277 + max_frame = min_not_zero(adapter->vf_res->max_mtu, max_frame); 280 278 281 279 if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) { 282 280 /* bail because we already have a command pending */ ··· 289 287 vqci = kzalloc(len, GFP_KERNEL); 290 288 if (!vqci) 291 289 return; 292 - 293 - /* Limit maximum frame size when jumbo frames is not enabled */ 294 - if (!(adapter->flags & IAVF_FLAG_LEGACY_RX) && 295 - (adapter->netdev->mtu <= ETH_DATA_LEN)) 296 - max_frame = IAVF_RXBUFFER_1536 - NET_IP_ALIGN; 297 290 298 291 vqci->vsi_id = adapter->vsi_res->vsi_id; 299 292 vqci->num_queue_pairs = pairs; ··· 306 309 vqpi->rxq.ring_len = adapter->rx_rings[i].count; 307 310 vqpi->rxq.dma_ring_addr = adapter->rx_rings[i].dma; 308 311 vqpi->rxq.max_pkt_size = max_frame; 309 - vqpi->rxq.databuffer_size = 310 - ALIGN(adapter->rx_rings[i].rx_buf_len, 311 - BIT_ULL(IAVF_RXQ_CTX_DBUFF_SHIFT)); 312 + vqpi->rxq.databuffer_size = adapter->rx_rings[i].rx_buf_len; 312 313 if (CRC_OFFLOAD_ALLOWED(adapter)) 313 314 vqpi->rxq.crc_disable = !!(adapter->netdev->features & 314 315 NETIF_F_RXFCS);
-320
drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
··· 160 160 (0x1ULL << ICE_FXD_FLTR_WB_QW1_FAIL_PROF_S) 161 161 #define ICE_FXD_FLTR_WB_QW1_FAIL_PROF_YES 0x1ULL 162 162 163 - struct ice_rx_ptype_decoded { 164 - u32 known:1; 165 - u32 outer_ip:1; 166 - u32 outer_ip_ver:2; 167 - u32 outer_frag:1; 168 - u32 tunnel_type:3; 169 - u32 tunnel_end_prot:2; 170 - u32 tunnel_end_frag:1; 171 - u32 inner_prot:4; 172 - u32 payload_layer:3; 173 - }; 174 - 175 - enum ice_rx_ptype_outer_ip { 176 - ICE_RX_PTYPE_OUTER_L2 = 0, 177 - ICE_RX_PTYPE_OUTER_IP = 1, 178 - }; 179 - 180 - enum ice_rx_ptype_outer_ip_ver { 181 - ICE_RX_PTYPE_OUTER_NONE = 0, 182 - ICE_RX_PTYPE_OUTER_IPV4 = 1, 183 - ICE_RX_PTYPE_OUTER_IPV6 = 2, 184 - }; 185 - 186 - enum ice_rx_ptype_outer_fragmented { 187 - ICE_RX_PTYPE_NOT_FRAG = 0, 188 - ICE_RX_PTYPE_FRAG = 1, 189 - }; 190 - 191 - enum ice_rx_ptype_tunnel_type { 192 - ICE_RX_PTYPE_TUNNEL_NONE = 0, 193 - ICE_RX_PTYPE_TUNNEL_IP_IP = 1, 194 - ICE_RX_PTYPE_TUNNEL_IP_GRENAT = 2, 195 - ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, 196 - ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, 197 - }; 198 - 199 - enum ice_rx_ptype_tunnel_end_prot { 200 - ICE_RX_PTYPE_TUNNEL_END_NONE = 0, 201 - ICE_RX_PTYPE_TUNNEL_END_IPV4 = 1, 202 - ICE_RX_PTYPE_TUNNEL_END_IPV6 = 2, 203 - }; 204 - 205 - enum ice_rx_ptype_inner_prot { 206 - ICE_RX_PTYPE_INNER_PROT_NONE = 0, 207 - ICE_RX_PTYPE_INNER_PROT_UDP = 1, 208 - ICE_RX_PTYPE_INNER_PROT_TCP = 2, 209 - ICE_RX_PTYPE_INNER_PROT_SCTP = 3, 210 - ICE_RX_PTYPE_INNER_PROT_ICMP = 4, 211 - ICE_RX_PTYPE_INNER_PROT_TIMESYNC = 5, 212 - }; 213 - 214 - enum ice_rx_ptype_payload_layer { 215 - ICE_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, 216 - ICE_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, 217 - ICE_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, 218 - ICE_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, 219 - }; 220 - 221 163 /* Rx Flex Descriptor 222 164 * This descriptor is used instead of the legacy version descriptor when 223 165 * ice_rlan_ctx.adv_desc is set ··· 592 650 u8 pkt_shaper_prof_idx; 593 651 u8 int_q_state; /* width not needed - internal - DO 
NOT WRITE!!! */ 594 652 }; 595 - 596 - /* The ice_ptype_lkup table is used to convert from the 10-bit ptype in the 597 - * hardware to a bit-field that can be used by SW to more easily determine the 598 - * packet type. 599 - * 600 - * Macros are used to shorten the table lines and make this table human 601 - * readable. 602 - * 603 - * We store the PTYPE in the top byte of the bit field - this is just so that 604 - * we can check that the table doesn't have a row missing, as the index into 605 - * the table should be the PTYPE. 606 - * 607 - * Typical work flow: 608 - * 609 - * IF NOT ice_ptype_lkup[ptype].known 610 - * THEN 611 - * Packet is unknown 612 - * ELSE IF ice_ptype_lkup[ptype].outer_ip == ICE_RX_PTYPE_OUTER_IP 613 - * Use the rest of the fields to look at the tunnels, inner protocols, etc 614 - * ELSE 615 - * Use the enum ice_rx_l2_ptype to decode the packet type 616 - * ENDIF 617 - */ 618 - #define ICE_PTYPES \ 619 - /* L2 Packet types */ \ 620 - ICE_PTT_UNUSED_ENTRY(0), \ 621 - ICE_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), \ 622 - ICE_PTT_UNUSED_ENTRY(2), \ 623 - ICE_PTT_UNUSED_ENTRY(3), \ 624 - ICE_PTT_UNUSED_ENTRY(4), \ 625 - ICE_PTT_UNUSED_ENTRY(5), \ 626 - ICE_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), \ 627 - ICE_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), \ 628 - ICE_PTT_UNUSED_ENTRY(8), \ 629 - ICE_PTT_UNUSED_ENTRY(9), \ 630 - ICE_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), \ 631 - ICE_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), \ 632 - ICE_PTT_UNUSED_ENTRY(12), \ 633 - ICE_PTT_UNUSED_ENTRY(13), \ 634 - ICE_PTT_UNUSED_ENTRY(14), \ 635 - ICE_PTT_UNUSED_ENTRY(15), \ 636 - ICE_PTT_UNUSED_ENTRY(16), \ 637 - ICE_PTT_UNUSED_ENTRY(17), \ 638 - ICE_PTT_UNUSED_ENTRY(18), \ 639 - ICE_PTT_UNUSED_ENTRY(19), \ 640 - ICE_PTT_UNUSED_ENTRY(20), \ 641 - ICE_PTT_UNUSED_ENTRY(21), \ 642 - \ 643 - /* Non Tunneled IPv4 */ \ 644 - ICE_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3), \ 645 - ICE_PTT(23, IP, IPV4, NOF, 
NONE, NONE, NOF, NONE, PAY3), \ 646 - ICE_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4), \ 647 - ICE_PTT_UNUSED_ENTRY(25), \ 648 - ICE_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4), \ 649 - ICE_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4), \ 650 - ICE_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4), \ 651 - \ 652 - /* IPv4 --> IPv4 */ \ 653 - ICE_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3), \ 654 - ICE_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3), \ 655 - ICE_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4), \ 656 - ICE_PTT_UNUSED_ENTRY(32), \ 657 - ICE_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4), \ 658 - ICE_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), \ 659 - ICE_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), \ 660 - \ 661 - /* IPv4 --> IPv6 */ \ 662 - ICE_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3), \ 663 - ICE_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3), \ 664 - ICE_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4), \ 665 - ICE_PTT_UNUSED_ENTRY(39), \ 666 - ICE_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4), \ 667 - ICE_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), \ 668 - ICE_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), \ 669 - \ 670 - /* IPv4 --> GRE/NAT */ \ 671 - ICE_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), \ 672 - \ 673 - /* IPv4 --> GRE/NAT --> IPv4 */ \ 674 - ICE_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), \ 675 - ICE_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), \ 676 - ICE_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), \ 677 - ICE_PTT_UNUSED_ENTRY(47), \ 678 - ICE_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), \ 679 - ICE_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), \ 680 - ICE_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), \ 681 - \ 682 - /* IPv4 --> GRE/NAT --> IPv6 */ \ 683 - ICE_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), \ 684 - ICE_PTT(52, 
IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), \ 685 - ICE_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), \ 686 - ICE_PTT_UNUSED_ENTRY(54), \ 687 - ICE_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), \ 688 - ICE_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), \ 689 - ICE_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), \ 690 - \ 691 - /* IPv4 --> GRE/NAT --> MAC */ \ 692 - ICE_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), \ 693 - \ 694 - /* IPv4 --> GRE/NAT --> MAC --> IPv4 */ \ 695 - ICE_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), \ 696 - ICE_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), \ 697 - ICE_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), \ 698 - ICE_PTT_UNUSED_ENTRY(62), \ 699 - ICE_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), \ 700 - ICE_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), \ 701 - ICE_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), \ 702 - \ 703 - /* IPv4 --> GRE/NAT -> MAC --> IPv6 */ \ 704 - ICE_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), \ 705 - ICE_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), \ 706 - ICE_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), \ 707 - ICE_PTT_UNUSED_ENTRY(69), \ 708 - ICE_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), \ 709 - ICE_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), \ 710 - ICE_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), \ 711 - \ 712 - /* IPv4 --> GRE/NAT --> MAC/VLAN */ \ 713 - ICE_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), \ 714 - \ 715 - /* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */ \ 716 - ICE_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), \ 717 - ICE_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), \ 718 - ICE_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), \ 719 - 
-	ICE_PTT_UNUSED_ENTRY(77), \
-	ICE_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), \
-	ICE_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), \
-	ICE_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), \
-	\
-	/* IPv4 --> GRE/NAT --> MAC/VLAN --> IPv6 */ \
-	ICE_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), \
-	ICE_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), \
-	ICE_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), \
-	ICE_PTT_UNUSED_ENTRY(84), \
-	ICE_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), \
-	ICE_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), \
-	ICE_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), \
-	\
-	/* Non Tunneled IPv6 */ \
-	ICE_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3), \
-	ICE_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3), \
-	ICE_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4), \
-	ICE_PTT_UNUSED_ENTRY(91), \
-	ICE_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4), \
-	ICE_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4), \
-	ICE_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4), \
-	\
-	/* IPv6 --> IPv4 */ \
-	ICE_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3), \
-	ICE_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3), \
-	ICE_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4), \
-	ICE_PTT_UNUSED_ENTRY(98), \
-	ICE_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4), \
-	ICE_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), \
-	ICE_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), \
-	\
-	/* IPv6 --> IPv6 */ \
-	ICE_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3), \
-	ICE_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3), \
-	ICE_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4), \
-	ICE_PTT_UNUSED_ENTRY(105), \
-	ICE_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4), \
-	ICE_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), \
-	ICE_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), \
-	\
-	/* IPv6 --> GRE/NAT */ \
-	ICE_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), \
-	\
-	/* IPv6 --> GRE/NAT -> IPv4 */ \
-	ICE_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), \
-	ICE_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), \
-	ICE_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), \
-	ICE_PTT_UNUSED_ENTRY(113), \
-	ICE_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), \
-	ICE_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), \
-	ICE_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), \
-	\
-	/* IPv6 --> GRE/NAT -> IPv6 */ \
-	ICE_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), \
-	ICE_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), \
-	ICE_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), \
-	ICE_PTT_UNUSED_ENTRY(120), \
-	ICE_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), \
-	ICE_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), \
-	ICE_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), \
-	\
-	/* IPv6 --> GRE/NAT -> MAC */ \
-	ICE_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), \
-	\
-	/* IPv6 --> GRE/NAT -> MAC -> IPv4 */ \
-	ICE_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), \
-	ICE_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), \
-	ICE_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), \
-	ICE_PTT_UNUSED_ENTRY(128), \
-	ICE_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), \
-	ICE_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), \
-	ICE_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), \
-	\
-	/* IPv6 --> GRE/NAT -> MAC -> IPv6 */ \
-	ICE_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), \
-	ICE_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), \
-	ICE_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), \
-	ICE_PTT_UNUSED_ENTRY(135), \
-	ICE_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), \
-	ICE_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), \
-	ICE_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), \
-	\
-	/* IPv6 --> GRE/NAT -> MAC/VLAN */ \
-	ICE_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), \
-	\
-	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */ \
-	ICE_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), \
-	ICE_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), \
-	ICE_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), \
-	ICE_PTT_UNUSED_ENTRY(143), \
-	ICE_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), \
-	ICE_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), \
-	ICE_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), \
-	\
-	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */ \
-	ICE_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), \
-	ICE_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), \
-	ICE_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), \
-	ICE_PTT_UNUSED_ENTRY(150), \
-	ICE_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), \
-	ICE_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), \
-	ICE_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
-
-#define ICE_NUM_DEFINED_PTYPES	154
-
-/* macro to make the table lines short, use explicit indexing with [PTYPE] */
-#define ICE_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
-	[PTYPE] = { \
-		1, \
-		ICE_RX_PTYPE_OUTER_##OUTER_IP, \
-		ICE_RX_PTYPE_OUTER_##OUTER_IP_VER, \
-		ICE_RX_PTYPE_##OUTER_FRAG, \
-		ICE_RX_PTYPE_TUNNEL_##T, \
-		ICE_RX_PTYPE_TUNNEL_END_##TE, \
-		ICE_RX_PTYPE_##TEF, \
-		ICE_RX_PTYPE_INNER_PROT_##I, \
-		ICE_RX_PTYPE_PAYLOAD_LAYER_##PL }
-
-#define ICE_PTT_UNUSED_ENTRY(PTYPE) [PTYPE] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 }
-
-/* shorter macros makes the table fit but are terse */
-#define ICE_RX_PTYPE_NOF	ICE_RX_PTYPE_NOT_FRAG
-#define ICE_RX_PTYPE_FRG	ICE_RX_PTYPE_FRAG
-
-/* Lookup table mapping in the 10-bit HW PTYPE to the bit field for decoding */
-static const struct ice_rx_ptype_decoded ice_ptype_lkup[BIT(10)] = {
-	ICE_PTYPES
-
-	/* unused entries */
-	[ICE_NUM_DEFINED_PTYPES ... 1023] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 }
-};
-
-static inline struct ice_rx_ptype_decoded ice_decode_rx_desc_ptype(u16 ptype)
-{
-	return ice_ptype_lkup[ptype];
-}
-
 #endif /* _ICE_LAN_TX_RX_H_ */
drivers/net/ethernet/intel/ice/ice_main.c (+1)

···
 MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
 MODULE_DESCRIPTION(DRV_SUMMARY);
+MODULE_IMPORT_NS(LIBIE);
 MODULE_LICENSE("GPL v2");
 MODULE_FIRMWARE(ICE_DDP_PKG_FILE);
drivers/net/ethernet/intel/ice/ice_txrx_lib.c (+16 -95)

···
 /* Copyright (c) 2019, Intel Corporation. */

 #include <linux/filter.h>
+#include <linux/net/intel/libie/rx.h>

 #include "ice_txrx_lib.h"
 #include "ice_eswitch.h"
···
 }

 /**
- * ice_ptype_to_htype - get a hash type
- * @ptype: the ptype value from the descriptor
- *
- * Returns appropriate hash type (such as PKT_HASH_TYPE_L2/L3/L4) to be used by
- * skb_set_hash based on PTYPE as parsed by HW Rx pipeline and is part of
- * Rx desc.
- */
-static enum pkt_hash_types ice_ptype_to_htype(u16 ptype)
-{
-	struct ice_rx_ptype_decoded decoded = ice_decode_rx_desc_ptype(ptype);
-
-	if (!decoded.known)
-		return PKT_HASH_TYPE_NONE;
-	if (decoded.payload_layer == ICE_RX_PTYPE_PAYLOAD_LAYER_PAY4)
-		return PKT_HASH_TYPE_L4;
-	if (decoded.payload_layer == ICE_RX_PTYPE_PAYLOAD_LAYER_PAY3)
-		return PKT_HASH_TYPE_L3;
-	if (decoded.outer_ip == ICE_RX_PTYPE_OUTER_L2)
-		return PKT_HASH_TYPE_L2;
-
-	return PKT_HASH_TYPE_NONE;
-}
-
-/**
  * ice_get_rx_hash - get RX hash value from descriptor
  * @rx_desc: specific descriptor
···
 		const union ice_32b_rx_flex_desc *rx_desc,
 		struct sk_buff *skb, u16 rx_ptype)
 {
+	struct libeth_rx_pt decoded;
 	u32 hash;

-	if (!(rx_ring->netdev->features & NETIF_F_RXHASH))
+	decoded = libie_rx_pt_parse(rx_ptype);
+	if (!libeth_rx_pt_has_hash(rx_ring->netdev, decoded))
 		return;

 	hash = ice_get_rx_hash(rx_desc);
 	if (likely(hash))
-		skb_set_hash(skb, hash, ice_ptype_to_htype(rx_ptype));
+		libeth_rx_pt_set_hash(skb, hash, decoded);
 }

 /**
···
 ice_rx_csum(struct ice_rx_ring *ring, struct sk_buff *skb,
 	    union ice_32b_rx_flex_desc *rx_desc, u16 ptype)
 {
-	struct ice_rx_ptype_decoded decoded;
+	struct libeth_rx_pt decoded;
 	u16 rx_status0, rx_status1;
 	bool ipv4, ipv6;

-	rx_status0 = le16_to_cpu(rx_desc->wb.status_error0);
-	rx_status1 = le16_to_cpu(rx_desc->wb.status_error1);
-
-	decoded = ice_decode_rx_desc_ptype(ptype);
-
 	/* Start with CHECKSUM_NONE and by default csum_level = 0 */
 	skb->ip_summed = CHECKSUM_NONE;
-	skb_checksum_none_assert(skb);

-	/* check if Rx checksum is enabled */
-	if (!(ring->netdev->features & NETIF_F_RXCSUM))
+	decoded = libie_rx_pt_parse(ptype);
+	if (!libeth_rx_pt_has_checksum(ring->netdev, decoded))
 		return;
+
+	rx_status0 = le16_to_cpu(rx_desc->wb.status_error0);
+	rx_status1 = le16_to_cpu(rx_desc->wb.status_error1);

 	/* check if HW has decoded the packet and checksum */
 	if (!(rx_status0 & BIT(ICE_RX_FLEX_DESC_STATUS0_L3L4P_S)))
 		return;

-	if (!(decoded.known && decoded.outer_ip))
-		return;
-
-	ipv4 = (decoded.outer_ip == ICE_RX_PTYPE_OUTER_IP) &&
-	       (decoded.outer_ip_ver == ICE_RX_PTYPE_OUTER_IPV4);
-	ipv6 = (decoded.outer_ip == ICE_RX_PTYPE_OUTER_IP) &&
-	       (decoded.outer_ip_ver == ICE_RX_PTYPE_OUTER_IPV6);
+	ipv4 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV4;
+	ipv6 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV6;

 	if (ipv4 && (rx_status0 & (BIT(ICE_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))) {
 		ring->vsi->back->hw_rx_eipe_error++;
···
 	 * we need to bump the checksum level by 1 to reflect the fact that
 	 * we are indicating we validated the inner checksum.
 	 */
-	if (decoded.tunnel_type >= ICE_RX_PTYPE_TUNNEL_IP_GRENAT)
+	if (decoded.tunnel_type >= LIBETH_RX_PT_TUNNEL_IP_GRENAT)
 		skb->csum_level = 1;

-	/* Only report checksum unnecessary for TCP, UDP, or SCTP */
-	switch (decoded.inner_prot) {
-	case ICE_RX_PTYPE_INNER_PROT_TCP:
-	case ICE_RX_PTYPE_INNER_PROT_UDP:
-	case ICE_RX_PTYPE_INNER_PROT_SCTP:
-		skb->ip_summed = CHECKSUM_UNNECESSARY;
-		break;
-	default:
-		break;
-	}
+	skb->ip_summed = CHECKSUM_UNNECESSARY;
 	return;

 checksum_fail:
···
 	return 0;
 }

-/* Define a ptype index -> XDP hash type lookup table.
- * It uses the same ptype definitions as ice_decode_rx_desc_ptype[],
- * avoiding possible copy-paste errors.
- */
-#undef ICE_PTT
-#undef ICE_PTT_UNUSED_ENTRY
-
-#define ICE_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
-	[PTYPE] = XDP_RSS_L3_##OUTER_IP_VER | XDP_RSS_L4_##I | XDP_RSS_TYPE_##PL
-
-#define ICE_PTT_UNUSED_ENTRY(PTYPE) [PTYPE] = 0
-
-/* A few supplementary definitions for when XDP hash types do not coincide
- * with what can be generated from ptype definitions
- * by means of preprocessor concatenation.
- */
-#define XDP_RSS_L3_NONE		XDP_RSS_TYPE_NONE
-#define XDP_RSS_L4_NONE		XDP_RSS_TYPE_NONE
-#define XDP_RSS_TYPE_PAY2	XDP_RSS_TYPE_L2
-#define XDP_RSS_TYPE_PAY3	XDP_RSS_TYPE_NONE
-#define XDP_RSS_TYPE_PAY4	XDP_RSS_L4
-
-static const enum xdp_rss_hash_type
-ice_ptype_to_xdp_hash[ICE_NUM_DEFINED_PTYPES] = {
-	ICE_PTYPES
-};
-
-#undef XDP_RSS_L3_NONE
-#undef XDP_RSS_L4_NONE
-#undef XDP_RSS_TYPE_PAY2
-#undef XDP_RSS_TYPE_PAY3
-#undef XDP_RSS_TYPE_PAY4
-
-#undef ICE_PTT
-#undef ICE_PTT_UNUSED_ENTRY
-
 /**
  * ice_xdp_rx_hash_type - Get XDP-specific hash type from the RX descriptor
  * @eop_desc: End of Packet descriptor
···
 static enum xdp_rss_hash_type
 ice_xdp_rx_hash_type(const union ice_32b_rx_flex_desc *eop_desc)
 {
-	u16 ptype = ice_get_ptype(eop_desc);
-
-	if (unlikely(ptype >= ICE_NUM_DEFINED_PTYPES))
-		return 0;
-
-	return ice_ptype_to_xdp_hash[ptype];
+	return libie_rx_pt_parse(ice_get_ptype(eop_desc)).hash_type;
 }

 /**
drivers/net/ethernet/intel/libeth/Kconfig (+9)

+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright (C) 2024 Intel Corporation
+
+config LIBETH
+	tristate
+	select PAGE_POOL
+	help
+	  libeth is a common library containing routines shared between several
+	  drivers, but not yet promoted to the generic kernel API.
drivers/net/ethernet/intel/libeth/Makefile (+6)

+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright (C) 2024 Intel Corporation
+
+obj-$(CONFIG_LIBETH)	+= libeth.o
+
+libeth-objs		+= rx.o
drivers/net/ethernet/intel/libeth/rx.c (+150)

+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2024 Intel Corporation */
+
+#include <net/libeth/rx.h>
+
+/* Rx buffer management */
+
+/**
+ * libeth_rx_hw_len - get the actual buffer size to be passed to HW
+ * @pp: &page_pool_params of the netdev to calculate the size for
+ * @max_len: maximum buffer size for a single descriptor
+ *
+ * Return: HW-writeable length per one buffer to pass it to the HW accounting:
+ * MTU the @dev has, HW required alignment, minimum and maximum allowed values,
+ * and system's page size.
+ */
+static u32 libeth_rx_hw_len(const struct page_pool_params *pp, u32 max_len)
+{
+	u32 len;
+
+	len = READ_ONCE(pp->netdev->mtu) + LIBETH_RX_LL_LEN;
+	len = ALIGN(len, LIBETH_RX_BUF_STRIDE);
+	len = min3(len, ALIGN_DOWN(max_len ? : U32_MAX, LIBETH_RX_BUF_STRIDE),
+		   pp->max_len);
+
+	return len;
+}
+
+/**
+ * libeth_rx_fq_create - create a PP with the default libeth settings
+ * @fq: buffer queue struct to fill
+ * @napi: &napi_struct covering this PP (no usage outside its poll loops)
+ *
+ * Return: %0 on success, -%errno on failure.
+ */
+int libeth_rx_fq_create(struct libeth_fq *fq, struct napi_struct *napi)
+{
+	struct page_pool_params pp = {
+		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
+		.order		= LIBETH_RX_PAGE_ORDER,
+		.pool_size	= fq->count,
+		.nid		= fq->nid,
+		.dev		= napi->dev->dev.parent,
+		.netdev		= napi->dev,
+		.napi		= napi,
+		.dma_dir	= DMA_FROM_DEVICE,
+		.offset		= LIBETH_SKB_HEADROOM,
+	};
+	struct libeth_fqe *fqes;
+	struct page_pool *pool;
+
+	/* HW-writeable / syncable length per one page */
+	pp.max_len = LIBETH_RX_PAGE_LEN(pp.offset);
+
+	/* HW-writeable length per buffer */
+	fq->buf_len = libeth_rx_hw_len(&pp, fq->buf_len);
+	/* Buffer size to allocate */
+	fq->truesize = roundup_pow_of_two(SKB_HEAD_ALIGN(pp.offset +
+							 fq->buf_len));
+
+	pool = page_pool_create(&pp);
+	if (IS_ERR(pool))
+		return PTR_ERR(pool);
+
+	fqes = kvcalloc_node(fq->count, sizeof(*fqes), GFP_KERNEL, fq->nid);
+	if (!fqes)
+		goto err_buf;
+
+	fq->fqes = fqes;
+	fq->pp = pool;
+
+	return 0;
+
+err_buf:
+	page_pool_destroy(pool);
+
+	return -ENOMEM;
+}
+EXPORT_SYMBOL_NS_GPL(libeth_rx_fq_create, LIBETH);
+
+/**
+ * libeth_rx_fq_destroy - destroy a &page_pool created by libeth
+ * @fq: buffer queue to process
+ */
+void libeth_rx_fq_destroy(struct libeth_fq *fq)
+{
+	kvfree(fq->fqes);
+	page_pool_destroy(fq->pp);
+}
+EXPORT_SYMBOL_NS_GPL(libeth_rx_fq_destroy, LIBETH);
+
+/**
+ * libeth_rx_recycle_slow - recycle a libeth page from the NAPI context
+ * @page: page to recycle
+ *
+ * To be used on exceptions or rare cases not requiring fast inline recycling.
+ */
+void libeth_rx_recycle_slow(struct page *page)
+{
+	page_pool_recycle_direct(page->pp, page);
+}
+EXPORT_SYMBOL_NS_GPL(libeth_rx_recycle_slow, LIBETH);
+
+/* Converting abstract packet type numbers into a software structure with
+ * the packet parameters to do O(1) lookup on Rx.
+ */
+
+static const u16 libeth_rx_pt_xdp_oip[] = {
+	[LIBETH_RX_PT_OUTER_L2]		= XDP_RSS_TYPE_NONE,
+	[LIBETH_RX_PT_OUTER_IPV4]	= XDP_RSS_L3_IPV4,
+	[LIBETH_RX_PT_OUTER_IPV6]	= XDP_RSS_L3_IPV6,
+};
+
+static const u16 libeth_rx_pt_xdp_iprot[] = {
+	[LIBETH_RX_PT_INNER_NONE]	= XDP_RSS_TYPE_NONE,
+	[LIBETH_RX_PT_INNER_UDP]	= XDP_RSS_L4_UDP,
+	[LIBETH_RX_PT_INNER_TCP]	= XDP_RSS_L4_TCP,
+	[LIBETH_RX_PT_INNER_SCTP]	= XDP_RSS_L4_SCTP,
+	[LIBETH_RX_PT_INNER_ICMP]	= XDP_RSS_L4_ICMP,
+	[LIBETH_RX_PT_INNER_TIMESYNC]	= XDP_RSS_TYPE_NONE,
+};
+
+static const u16 libeth_rx_pt_xdp_pl[] = {
+	[LIBETH_RX_PT_PAYLOAD_NONE]	= XDP_RSS_TYPE_NONE,
+	[LIBETH_RX_PT_PAYLOAD_L2]	= XDP_RSS_TYPE_NONE,
+	[LIBETH_RX_PT_PAYLOAD_L3]	= XDP_RSS_TYPE_NONE,
+	[LIBETH_RX_PT_PAYLOAD_L4]	= XDP_RSS_L4,
+};
+
+/**
+ * libeth_rx_pt_gen_hash_type - generate an XDP RSS hash type for a PT
+ * @pt: PT structure to evaluate
+ *
+ * Generates ``hash_type`` field with XDP RSS type values from the parsed
+ * packet parameters if they're obtained dynamically at runtime.
+ */
+void libeth_rx_pt_gen_hash_type(struct libeth_rx_pt *pt)
+{
+	pt->hash_type = 0;
+	pt->hash_type |= libeth_rx_pt_xdp_oip[pt->outer_ip];
+	pt->hash_type |= libeth_rx_pt_xdp_iprot[pt->inner_prot];
+	pt->hash_type |= libeth_rx_pt_xdp_pl[pt->payload_layer];
+}
+EXPORT_SYMBOL_NS_GPL(libeth_rx_pt_gen_hash_type, LIBETH);
+
+/* Module */
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION("Common Ethernet library");
+MODULE_LICENSE("GPL");
drivers/net/ethernet/intel/libie/Kconfig (+10)

+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright (C) 2024 Intel Corporation
+
+config LIBIE
+	tristate
+	select LIBETH
+	help
+	  libie (Intel Ethernet library) is a common library built on top of
+	  libeth and containing vendor-specific routines shared between several
+	  Intel Ethernet drivers.
drivers/net/ethernet/intel/libie/Makefile (+6)

+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright (C) 2024 Intel Corporation
+
+obj-$(CONFIG_LIBIE)	+= libie.o
+
+libie-objs		+= rx.o
drivers/net/ethernet/intel/libie/rx.c (+124)

+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2024 Intel Corporation */
+
+#include <linux/net/intel/libie/rx.h>
+
+/* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed
+ * bitfield struct.
+ */
+
+/* A few supplementary definitions for when XDP hash types do not coincide
+ * with what can be generated from ptype definitions by means of preprocessor
+ * concatenation.
+ */
+#define XDP_RSS_L3_L2		XDP_RSS_TYPE_NONE
+#define XDP_RSS_L4_NONE		XDP_RSS_TYPE_NONE
+#define XDP_RSS_L4_TIMESYNC	XDP_RSS_TYPE_NONE
+#define XDP_RSS_TYPE_L3		XDP_RSS_TYPE_NONE
+#define XDP_RSS_TYPE_L4		XDP_RSS_L4
+
+#define LIBIE_RX_PT(oip, ofrag, tun, tp, tefr, iprot, pl) { \
+	.outer_ip	= LIBETH_RX_PT_OUTER_##oip, \
+	.outer_frag	= LIBETH_RX_PT_##ofrag, \
+	.tunnel_type	= LIBETH_RX_PT_TUNNEL_IP_##tun, \
+	.tunnel_end_prot = LIBETH_RX_PT_TUNNEL_END_##tp, \
+	.tunnel_end_frag = LIBETH_RX_PT_##tefr, \
+	.inner_prot	= LIBETH_RX_PT_INNER_##iprot, \
+	.payload_layer	= LIBETH_RX_PT_PAYLOAD_##pl, \
+	.hash_type	= XDP_RSS_L3_##oip | \
+			  XDP_RSS_L4_##iprot | \
+			  XDP_RSS_TYPE_##pl, \
+}
+
+#define LIBIE_RX_PT_UNUSED	{ }
+
+#define __LIBIE_RX_PT_L2(iprot, pl) \
+	LIBIE_RX_PT(L2, NOT_FRAG, NONE, NONE, NOT_FRAG, iprot, pl)
+#define LIBIE_RX_PT_L2		__LIBIE_RX_PT_L2(NONE, L2)
+#define LIBIE_RX_PT_TS		__LIBIE_RX_PT_L2(TIMESYNC, L2)
+#define LIBIE_RX_PT_L3		__LIBIE_RX_PT_L2(NONE, L3)
+
+#define LIBIE_RX_PT_IP_FRAG(oip) \
+	LIBIE_RX_PT(IPV##oip, FRAG, NONE, NONE, NOT_FRAG, NONE, L3)
+#define LIBIE_RX_PT_IP_L3(oip, tun, teprot, tefr) \
+	LIBIE_RX_PT(IPV##oip, NOT_FRAG, tun, teprot, tefr, NONE, L3)
+#define LIBIE_RX_PT_IP_L4(oip, tun, teprot, iprot) \
+	LIBIE_RX_PT(IPV##oip, NOT_FRAG, tun, teprot, NOT_FRAG, iprot, L4)
+
+#define LIBIE_RX_PT_IP_NOF(oip, tun, ver) \
+	LIBIE_RX_PT_IP_L3(oip, tun, ver, NOT_FRAG), \
+	LIBIE_RX_PT_IP_L4(oip, tun, ver, UDP), \
+	LIBIE_RX_PT_UNUSED, \
+	LIBIE_RX_PT_IP_L4(oip, tun, ver, TCP), \
+	LIBIE_RX_PT_IP_L4(oip, tun, ver, SCTP), \
+	LIBIE_RX_PT_IP_L4(oip, tun, ver, ICMP)
+
+/* IPv oip --> tun --> IPv ver */
+#define LIBIE_RX_PT_IP_TUN_VER(oip, tun, ver) \
+	LIBIE_RX_PT_IP_L3(oip, tun, ver, FRAG), \
+	LIBIE_RX_PT_IP_NOF(oip, tun, ver)
+
+/* Non Tunneled IPv oip */
+#define LIBIE_RX_PT_IP_RAW(oip) \
+	LIBIE_RX_PT_IP_FRAG(oip), \
+	LIBIE_RX_PT_IP_NOF(oip, NONE, NONE)
+
+/* IPv oip --> tun --> { IPv4, IPv6 } */
+#define LIBIE_RX_PT_IP_TUN(oip, tun) \
+	LIBIE_RX_PT_IP_TUN_VER(oip, tun, IPV4), \
+	LIBIE_RX_PT_IP_TUN_VER(oip, tun, IPV6)
+
+/* IPv oip --> GRE/NAT tun --> { x, IPv4, IPv6 } */
+#define LIBIE_RX_PT_IP_GRE(oip, tun) \
+	LIBIE_RX_PT_IP_L3(oip, tun, NONE, NOT_FRAG), \
+	LIBIE_RX_PT_IP_TUN(oip, tun)
+
+/* Non Tunneled IPv oip
+ * IPv oip --> { IPv4, IPv6 }
+ * IPv oip --> GRE/NAT --> { x, IPv4, IPv6 }
+ * IPv oip --> GRE/NAT --> MAC --> { x, IPv4, IPv6 }
+ * IPv oip --> GRE/NAT --> MAC/VLAN --> { x, IPv4, IPv6 }
+ */
+#define LIBIE_RX_PT_IP(oip) \
+	LIBIE_RX_PT_IP_RAW(oip), \
+	LIBIE_RX_PT_IP_TUN(oip, IP), \
+	LIBIE_RX_PT_IP_GRE(oip, GRENAT), \
+	LIBIE_RX_PT_IP_GRE(oip, GRENAT_MAC), \
+	LIBIE_RX_PT_IP_GRE(oip, GRENAT_MAC_VLAN)
+
+/* Lookup table mapping for O(1) parsing */
+const struct libeth_rx_pt libie_rx_pt_lut[LIBIE_RX_PT_NUM] = {
+	/* L2 packet types */
+	LIBIE_RX_PT_UNUSED,
+	LIBIE_RX_PT_L2,
+	LIBIE_RX_PT_TS,
+	LIBIE_RX_PT_L2,
+	LIBIE_RX_PT_UNUSED,
+	LIBIE_RX_PT_UNUSED,
+	LIBIE_RX_PT_L2,
+	LIBIE_RX_PT_L2,
+	LIBIE_RX_PT_UNUSED,
+	LIBIE_RX_PT_UNUSED,
+	LIBIE_RX_PT_L2,
+	LIBIE_RX_PT_UNUSED,
+
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+
+	LIBIE_RX_PT_IP(4),
+	LIBIE_RX_PT_IP(6),
+};
+EXPORT_SYMBOL_NS_GPL(libie_rx_pt_lut, LIBIE);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION("Intel(R) Ethernet common library");
+MODULE_IMPORT_NS(LIBETH);
+MODULE_LICENSE("GPL");
include/linux/net/intel/libie/rx.h (+50)

+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2024 Intel Corporation */
+
+#ifndef __LIBIE_RX_H
+#define __LIBIE_RX_H
+
+#include <net/libeth/rx.h>
+
+/* Rx buffer management */
+
+/* The largest size for a single descriptor as per HW */
+#define LIBIE_MAX_RX_BUF_LEN	9728U
+/* "True" HW-writeable space: minimum from SW and HW values */
+#define LIBIE_RX_BUF_LEN(hr)	min_t(u32, LIBETH_RX_PAGE_LEN(hr), \
+				      LIBIE_MAX_RX_BUF_LEN)
+
+/* The maximum frame size as per HW (S/G) */
+#define __LIBIE_MAX_RX_FRM_LEN	16382U
+/* ATST, HW can chain up to 5 Rx descriptors */
+#define LIBIE_MAX_RX_FRM_LEN(hr) \
+	min_t(u32, __LIBIE_MAX_RX_FRM_LEN, LIBIE_RX_BUF_LEN(hr) * 5)
+/* Maximum frame size minus LL overhead */
+#define LIBIE_MAX_MTU \
+	(LIBIE_MAX_RX_FRM_LEN(LIBETH_MAX_HEADROOM) - LIBETH_RX_LL_LEN)
+
+/* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed
+ * bitfield struct.
+ */
+
+#define LIBIE_RX_PT_NUM		154
+
+extern const struct libeth_rx_pt libie_rx_pt_lut[LIBIE_RX_PT_NUM];
+
+/**
+ * libie_rx_pt_parse - convert HW packet type to software bitfield structure
+ * @pt: 10-bit hardware packet type value from the descriptor
+ *
+ * ``libie_rx_pt_lut`` must be accessed only using this wrapper.
+ *
+ * Return: parsed bitfield struct corresponding to the provided ptype.
+ */
+static inline struct libeth_rx_pt libie_rx_pt_parse(u32 pt)
+{
+	if (unlikely(pt >= LIBIE_RX_PT_NUM))
+		pt = 0;
+
+	return libie_rx_pt_lut[pt];
+}
+
+#endif /* __LIBIE_RX_H */
include/linux/slab.h (+15 -2)

···
 	return kvmalloc(size, flags | __GFP_ZERO);
 }

-static inline __alloc_size(1, 2) void *kvmalloc_array(size_t n, size_t size, gfp_t flags)
+static inline __alloc_size(1, 2) void *
+kvmalloc_array_node(size_t n, size_t size, gfp_t flags, int node)
 {
 	size_t bytes;

 	if (unlikely(check_mul_overflow(n, size, &bytes)))
 		return NULL;

-	return kvmalloc(bytes, flags);
+	return kvmalloc_node(bytes, flags, node);
+}
+
+static inline __alloc_size(1, 2) void *
+kvmalloc_array(size_t n, size_t size, gfp_t flags)
+{
+	return kvmalloc_array_node(n, size, flags, NUMA_NO_NODE);
+}
+
+static inline __alloc_size(1, 2) void *
+kvcalloc_node(size_t n, size_t size, gfp_t flags, int node)
+{
+	return kvmalloc_array_node(n, size, flags | __GFP_ZERO, node);
 }

 static inline __alloc_size(1, 2) void *kvcalloc(size_t n, size_t size, gfp_t flags)
include/net/libeth/rx.h (+242)

+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2024 Intel Corporation */
+
+#ifndef __LIBETH_RX_H
+#define __LIBETH_RX_H
+
+#include <linux/if_vlan.h>
+
+#include <net/page_pool/helpers.h>
+#include <net/xdp.h>
+
+/* Rx buffer management */
+
+/* Space reserved in front of each frame */
+#define LIBETH_SKB_HEADROOM	(NET_SKB_PAD + NET_IP_ALIGN)
+/* Maximum headroom for worst-case calculations */
+#define LIBETH_MAX_HEADROOM	LIBETH_SKB_HEADROOM
+/* Link layer / L2 overhead: Ethernet, 2 VLAN tags (C + S), FCS */
+#define LIBETH_RX_LL_LEN	(ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN)
+
+/* Always use order-0 pages */
+#define LIBETH_RX_PAGE_ORDER	0
+/* Pick a sane buffer stride and align to a cacheline boundary */
+#define LIBETH_RX_BUF_STRIDE	SKB_DATA_ALIGN(128)
+/* HW-writeable space in one buffer: truesize - headroom/tailroom, aligned */
+#define LIBETH_RX_PAGE_LEN(hr) \
+	ALIGN_DOWN(SKB_MAX_ORDER(hr, LIBETH_RX_PAGE_ORDER), \
+		   LIBETH_RX_BUF_STRIDE)
+
+/**
+ * struct libeth_fqe - structure representing an Rx buffer (fill queue element)
+ * @page: page holding the buffer
+ * @offset: offset from the page start (to the headroom)
+ * @truesize: total space occupied by the buffer (w/ headroom and tailroom)
+ *
+ * Depending on the MTU, API switches between one-page-per-frame and shared
+ * page model (to conserve memory on bigger-page platforms). In case of the
+ * former, @offset is always 0 and @truesize is always ``PAGE_SIZE``.
+ */
+struct libeth_fqe {
+	struct page	*page;
+	u32		offset;
+	u32		truesize;
+} __aligned_largest;
+
+/**
+ * struct libeth_fq - structure representing a buffer (fill) queue
+ * @fp: hotpath part of the structure
+ * @pp: &page_pool for buffer management
+ * @fqes: array of Rx buffers
+ * @truesize: size to allocate per buffer, w/overhead
+ * @count: number of descriptors/buffers the queue has
+ * @buf_len: HW-writeable length per each buffer
+ * @nid: ID of the closest NUMA node with memory
+ */
+struct libeth_fq {
+	struct_group_tagged(libeth_fq_fp, fp,
+		struct page_pool	*pp;
+		struct libeth_fqe	*fqes;
+
+		u32			truesize;
+		u32			count;
+	);
+
+	/* Cold fields */
+	u32		buf_len;
+	int		nid;
+};
+
+int libeth_rx_fq_create(struct libeth_fq *fq, struct napi_struct *napi);
+void libeth_rx_fq_destroy(struct libeth_fq *fq);
+
+/**
+ * libeth_rx_alloc - allocate a new Rx buffer
+ * @fq: fill queue to allocate for
+ * @i: index of the buffer within the queue
+ *
+ * Return: DMA address to be passed to HW for Rx on successful allocation,
+ * ``DMA_MAPPING_ERROR`` otherwise.
+ */
+static inline dma_addr_t libeth_rx_alloc(const struct libeth_fq_fp *fq, u32 i)
+{
+	struct libeth_fqe *buf = &fq->fqes[i];
+
+	buf->truesize = fq->truesize;
+	buf->page = page_pool_dev_alloc(fq->pp, &buf->offset, &buf->truesize);
+	if (unlikely(!buf->page))
+		return DMA_MAPPING_ERROR;
+
+	return page_pool_get_dma_addr(buf->page) + buf->offset +
+	       fq->pp->p.offset;
+}
+
+void libeth_rx_recycle_slow(struct page *page);
+
+/**
+ * libeth_rx_sync_for_cpu - synchronize or recycle buffer post DMA
+ * @fqe: buffer to process
+ * @len: frame length from the descriptor
+ *
+ * Process the buffer after it's written by HW. The regular path is to
+ * synchronize DMA for CPU, but in case of no data it will be immediately
+ * recycled back to its PP.
+ *
+ * Return: true when there's data to process, false otherwise.
+ */
+static inline bool libeth_rx_sync_for_cpu(const struct libeth_fqe *fqe,
+					  u32 len)
+{
+	struct page *page = fqe->page;
+
+	/* Very rare, but possible case. The most common reason:
+	 * the last fragment contained FCS only, which was then
+	 * stripped by the HW.
+	 */
+	if (unlikely(!len)) {
+		libeth_rx_recycle_slow(page);
+		return false;
+	}
+
+	page_pool_dma_sync_for_cpu(page->pp, page, fqe->offset, len);
+
+	return true;
+}
+
+/* Converting abstract packet type numbers into a software structure with
+ * the packet parameters to do O(1) lookup on Rx.
+ */
+
+enum {
+	LIBETH_RX_PT_OUTER_L2			= 0U,
+	LIBETH_RX_PT_OUTER_IPV4,
+	LIBETH_RX_PT_OUTER_IPV6,
+};
+
+enum {
+	LIBETH_RX_PT_NOT_FRAG			= 0U,
+	LIBETH_RX_PT_FRAG,
+};
+
+enum {
+	LIBETH_RX_PT_TUNNEL_IP_NONE		= 0U,
+	LIBETH_RX_PT_TUNNEL_IP_IP,
+	LIBETH_RX_PT_TUNNEL_IP_GRENAT,
+	LIBETH_RX_PT_TUNNEL_IP_GRENAT_MAC,
+	LIBETH_RX_PT_TUNNEL_IP_GRENAT_MAC_VLAN,
+};
+
+enum {
+	LIBETH_RX_PT_TUNNEL_END_NONE		= 0U,
+	LIBETH_RX_PT_TUNNEL_END_IPV4,
+	LIBETH_RX_PT_TUNNEL_END_IPV6,
+};
+
+enum {
+	LIBETH_RX_PT_INNER_NONE			= 0U,
+	LIBETH_RX_PT_INNER_UDP,
+	LIBETH_RX_PT_INNER_TCP,
+	LIBETH_RX_PT_INNER_SCTP,
+	LIBETH_RX_PT_INNER_ICMP,
+	LIBETH_RX_PT_INNER_TIMESYNC,
+};
+
+#define LIBETH_RX_PT_PAYLOAD_NONE		PKT_HASH_TYPE_NONE
+#define LIBETH_RX_PT_PAYLOAD_L2			PKT_HASH_TYPE_L2
+#define LIBETH_RX_PT_PAYLOAD_L3			PKT_HASH_TYPE_L3
+#define LIBETH_RX_PT_PAYLOAD_L4			PKT_HASH_TYPE_L4
+
+struct libeth_rx_pt {
+	u32			outer_ip:2;
+	u32			outer_frag:1;
+	u32			tunnel_type:3;
+	u32			tunnel_end_prot:2;
+	u32			tunnel_end_frag:1;
+	u32			inner_prot:3;
+	enum pkt_hash_types	payload_layer:2;
+
+	u32			pad:2;
+	enum xdp_rss_hash_type	hash_type:16;
+};
+
+void libeth_rx_pt_gen_hash_type(struct libeth_rx_pt *pt);
+
+/**
+ * libeth_rx_pt_get_ip_ver - get IP version from a packet type structure
+ * @pt: packet type params
+ *
+ * Wrapper to compile out the IPv6 code from the drivers when not supported
+ * by the kernel.
+ *
+ * Return: @pt.outer_ip or stub for IPv6 when not compiled-in.
+ */
+static inline u32 libeth_rx_pt_get_ip_ver(struct libeth_rx_pt pt)
+{
+#if !IS_ENABLED(CONFIG_IPV6)
+	switch (pt.outer_ip) {
+	case LIBETH_RX_PT_OUTER_IPV4:
+		return LIBETH_RX_PT_OUTER_IPV4;
+	default:
+		return LIBETH_RX_PT_OUTER_L2;
+	}
+#else
+	return pt.outer_ip;
+#endif
+}
+
+/* libeth_has_*() can be used to quickly check whether the HW metadata is
+ * available to avoid further expensive processing such as descriptor reads.
+ * They already check for the corresponding netdev feature to be enabled,
+ * thus can be used as drop-in replacements.
+ */
+
+static inline bool libeth_rx_pt_has_checksum(const struct net_device *dev,
+					     struct libeth_rx_pt pt)
+{
+	/* Non-zero _INNER* is only possible when _OUTER_IPV* is set,
+	 * it is enough to check only for the L4 type.
+	 */
+	return likely(pt.inner_prot > LIBETH_RX_PT_INNER_NONE &&
+		      (dev->features & NETIF_F_RXCSUM));
+}
+
+static inline bool libeth_rx_pt_has_hash(const struct net_device *dev,
+					 struct libeth_rx_pt pt)
+{
+	return likely(pt.payload_layer > LIBETH_RX_PT_PAYLOAD_NONE &&
+		      (dev->features & NETIF_F_RXHASH));
+}
+
+/**
+ * libeth_rx_pt_set_hash - fill in skb hash value basing on the PT
+ * @skb: skb to fill the hash in
+ * @hash: 32-bit hash value from the descriptor
+ * @pt: packet type
+ */
+static inline void libeth_rx_pt_set_hash(struct sk_buff *skb, u32 hash,
+					 struct libeth_rx_pt pt)
+{
+	skb_set_hash(skb, hash, pt.payload_layer);
+}
+
+#endif /* __LIBETH_RX_H */
include/net/page_pool/helpers.h (+29 -5)
···
 #ifndef _NET_PAGE_POOL_HELPERS_H
 #define _NET_PAGE_POOL_HELPERS_H

+#include <linux/dma-mapping.h>
+
 #include <net/page_pool/types.h>

 #ifdef CONFIG_PAGE_POOL_STATS
 /* Deprecated driver-facing API, use netlink instead */
 int page_pool_ethtool_stats_get_count(void);
 u8 *page_pool_ethtool_stats_get_strings(u8 *data);
-u64 *page_pool_ethtool_stats_get(u64 *data, void *stats);
+u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats);

 bool page_pool_get_stats(const struct page_pool *pool,
			  struct page_pool_stats *stats);
···
	return data;
 }

-static inline u64 *page_pool_ethtool_stats_get(u64 *data, void *stats)
+static inline u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats)
 {
	return data;
 }
···
 * Get the stored dma direction. A driver might decide to store this locally
 * and avoid the extra cache line from page_pool to determine the direction.
 */
-static
-inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
+static inline enum dma_data_direction
+page_pool_get_dma_dir(const struct page_pool *pool)
 {
	return pool->p.dma_dir;
 }
···
 * Fetch the DMA address of the page. The page pool to which the page belongs
 * must had been created with PP_FLAG_DMA_MAP.
 */
-static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+static inline dma_addr_t page_pool_get_dma_addr(const struct page *page)
 {
	dma_addr_t ret = page->dma_addr;
···
	page->dma_addr = addr;
	return false;
 }
+
+/**
+ * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
+ * @pool: &page_pool the @page belongs to
+ * @page: page to sync
+ * @offset: offset from page start to "hard" start if using PP frags
+ * @dma_sync_size: size of the data written to the page
+ *
+ * Can be used as a shorthand to sync Rx pages before accessing them in the
+ * driver. Caller must ensure the pool was created with ``PP_FLAG_DMA_MAP``.
+ * Note that this version performs DMA sync unconditionally, even if the
+ * associated PP doesn't perform sync-for-device.
+ */
+static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
+					      const struct page *page,
+					      u32 offset, u32 dma_sync_size)
+{
+	dma_sync_single_range_for_cpu(pool->p.dev,
+				      page_pool_get_dma_addr(page),
+				      offset + pool->p.offset, dma_sync_size,
+				      page_pool_get_dma_dir(pool));
+}

 static inline bool page_pool_put(struct page_pool *pool)
include/net/page_pool/types.h (+2 -2)
···
 #ifdef CONFIG_PAGE_POOL
 void page_pool_destroy(struct page_pool *pool);
 void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
-			   struct xdp_mem_info *mem);
+			   const struct xdp_mem_info *mem);
 void page_pool_put_page_bulk(struct page_pool *pool, void **data,
			     int count);
 #else
···
 static inline void page_pool_use_xdp_mem(struct page_pool *pool,
					  void (*disconnect)(void *),
-					 struct xdp_mem_info *mem)
+					 const struct xdp_mem_info *mem)
 {
 }
net/core/page_pool.c (+5 -5)
···
 }
 EXPORT_SYMBOL(page_pool_ethtool_stats_get_count);

-u64 *page_pool_ethtool_stats_get(u64 *data, void *stats)
+u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats)
 {
-	struct page_pool_stats *pool_stats = stats;
+	const struct page_pool_stats *pool_stats = stats;

	*data++ = pool_stats->alloc_stats.fast;
	*data++ = pool_stats->alloc_stats.slow;
···
	return page;
 }

-static void page_pool_dma_sync_for_device(struct page_pool *pool,
-					  struct page *page,
+static void page_pool_dma_sync_for_device(const struct page_pool *pool,
+					  const struct page *page,
					  unsigned int dma_sync_size)
 {
	dma_addr_t dma_addr = page_pool_get_dma_addr(page);
···
 }

 void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
-			   struct xdp_mem_info *mem)
+			   const struct xdp_mem_info *mem)
 {
	refcount_inc(&pool->user_cnt);
	pool->disconnect = disconnect;