Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-6.11-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from can, bluetooth and wireless.

No known regressions at this point. Another calm week, but chances are
that has more to do with vacation season than the quality of our work.

Current release - new code bugs:

- smc: prevent NULL pointer dereference in txopt_get

- eth: ti: am65-cpsw: a number of XDP-related fixes

Previous releases - regressions:

- Revert "Bluetooth: MGMT/SMP: Fix address type when using SMP over
BREDR/LE", it breaks existing user space

- Bluetooth: qca: if memdump doesn't work, re-enable IBS to avoid
later problems with suspend

- can: mcp251x: fix deadlock if an interrupt occurs during
mcp251x_open

- eth: r8152: fix the firmware communication error due to use of bulk
write

- ptp: ocp: fix serial port information export

- eth: igb: fix not clearing TimeSync interrupts for 82580

- Revert "wifi: ath11k: support hibernation", fix suspend on Lenovo

Previous releases - always broken:

- eth: intel: fix crashes and bugs when reconfiguration and resets
happen in parallel

- wifi: ath11k: fix NULL dereference in ath11k_mac_get_eirp_power()

Misc:

- docs: netdev: document guidance on cleanup.h"

* tag 'net-6.11-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (61 commits)
ila: call nf_unregister_net_hooks() sooner
tools/net/ynl: fix cli.py --subscribe feature
MAINTAINERS: fix ptp ocp driver maintainers address
selftests: net: enable bind tests
net: dsa: vsc73xx: fix possible subblocks range of CAPT block
sched: sch_cake: fix bulk flow accounting logic for host fairness
docs: netdev: document guidance on cleanup.h
net: xilinx: axienet: Fix race in axienet_stop
net: bridge: br_fdb_external_learn_add(): always set EXT_LEARN
r8152: fix the firmware doesn't work
fou: Fix null-ptr-deref in GRO.
bareudp: Fix device stats updates.
net: mana: Fix error handling in mana_create_txq/rxq's NAPI cleanup
bpf, net: Fix a potential race in do_sock_getsockopt()
net: dqs: Do not use extern for unused dql_group
sch/netem: fix use after free in netem_dequeue
usbnet: modern method to get random MAC
MAINTAINERS: wifi: cw1200: add net-cw1200.h
ice: do not bring the VSI up, if it was down before the XDP setup
ice: remove ICE_CFG_BUSY locking from AF_XDP code
...

+867 -681
+19 -14
Documentation/ABI/testing/sysfs-timecard
···
 		the estimated point where the FPGA latches the PHC time. This
 		value may be changed by writing an unsigned integer.
 
-What:		/sys/class/timecard/ocpN/ttyGNSS
-What:		/sys/class/timecard/ocpN/ttyGNSS2
-Date:		September 2021
-Contact:	Jonathan Lemon <jonathan.lemon@gmail.com>
-Description:	These optional attributes link to the TTY serial ports
-		associated with the GNSS devices.
+What:		/sys/class/timecard/ocpN/tty
+Date:		August 2024
+Contact:	Vadim Fedorenko <vadim.fedorenko@linux.dev>
+Description:	(RO) Directory containing the sysfs nodes for TTY attributes
 
-What:		/sys/class/timecard/ocpN/ttyMAC
-Date:		September 2021
+What:		/sys/class/timecard/ocpN/tty/ttyGNSS
+What:		/sys/class/timecard/ocpN/tty/ttyGNSS2
+Date:		August 2024
 Contact:	Jonathan Lemon <jonathan.lemon@gmail.com>
-Description:	This optional attribute links to the TTY serial port
-		associated with the Miniature Atomic Clock.
+Description:	(RO) These optional attributes contain names of the TTY serial
+		ports associated with the GNSS devices.
 
-What:		/sys/class/timecard/ocpN/ttyNMEA
-Date:		September 2021
+What:		/sys/class/timecard/ocpN/tty/ttyMAC
+Date:		August 2024
 Contact:	Jonathan Lemon <jonathan.lemon@gmail.com>
-Description:	This optional attribute links to the TTY serial port
-		which outputs the PHC time in NMEA ZDA format.
+Description:	(RO) This optional attribute contains name of the TTY serial
+		port associated with the Miniature Atomic Clock.
+
+What:		/sys/class/timecard/ocpN/tty/ttyNMEA
+Date:		August 2024
+Contact:	Jonathan Lemon <jonathan.lemon@gmail.com>
+Description:	(RO) This optional attribute contains name of the TTY serial
+		port which outputs the PHC time in NMEA ZDA format.
 
 What:		/sys/class/timecard/ocpN/utc_tai_offset
 Date:		September 2021
+16
Documentation/process/maintainer-netdev.rst
···
 your code follow the most recent guidelines, so that eventually all code
 in the domain of netdev is in the preferred format.
 
+Using device-managed and cleanup.h constructs
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Netdev remains skeptical about promises of all "auto-cleanup" APIs,
+including even ``devm_`` helpers, historically. They are not the preferred
+style of implementation, merely an acceptable one.
+
+Use of ``guard()`` is discouraged within any function longer than 20 lines,
+``scoped_guard()`` is considered more readable. Using normal lock/unlock is
+still (weakly) preferred.
+
+Low level cleanup constructs (such as ``__free()``) can be used when building
+APIs and helpers, especially scoped iterators. However, direct use of
+``__free()`` within networking core and drivers is discouraged.
+Similar guidance applies to declaring variables mid-function.
+
 Resending after review
 ~~~~~~~~~~~~~~~~~~~~~~
 
+4 -1
MAINTAINERS
···
 CW1200 WLAN driver
 S:	Orphan
 F:	drivers/net/wireless/st/cw1200/
+F:	include/linux/platform_data/net-cw1200.h
 
 CX18 VIDEO4LINUX DRIVER
 M:	Andy Walls <awalls@md.metrocast.net>
···
 F:	include/uapi/linux/if_*
 F:	include/uapi/linux/netdev*
 F:	tools/testing/selftests/drivers/net/
+X:	Documentation/devicetree/bindings/net/bluetooth/
+X:	Documentation/devicetree/bindings/net/wireless/
 X:	drivers/net/wireless/
 
 NETWORKING DRIVERS (WIRELESS)
···
 
 OPENCOMPUTE PTP CLOCK DRIVER
 M:	Jonathan Lemon <jonathan.lemon@gmail.com>
-M:	Vadim Fedorenko <vadfed@linux.dev>
+M:	Vadim Fedorenko <vadim.fedorenko@linux.dev>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/ptp/ptp_ocp.c
+1
drivers/bluetooth/hci_qca.c
···
 		qca->memdump_state = QCA_MEMDUMP_COLLECTED;
 		cancel_delayed_work(&qca->ctrl_memdump_timeout);
 		clear_bit(QCA_MEMDUMP_COLLECTION, &qca->flags);
+		clear_bit(QCA_IBS_DISABLED, &qca->flags);
 		mutex_unlock(&qca->hci_memdump_lock);
 		return;
 	}
+11 -11
drivers/net/bareudp.c
···
 
 	if (skb_copy_bits(skb, BAREUDP_BASE_HLEN, &ipversion,
 			  sizeof(ipversion))) {
-		bareudp->dev->stats.rx_dropped++;
+		DEV_STATS_INC(bareudp->dev, rx_dropped);
 		goto drop;
 	}
 	ipversion >>= 4;
···
 		} else if (ipversion == 6 && bareudp->multi_proto_mode) {
 			proto = htons(ETH_P_IPV6);
 		} else {
-			bareudp->dev->stats.rx_dropped++;
+			DEV_STATS_INC(bareudp->dev, rx_dropped);
 			goto drop;
 		}
 	} else if (bareudp->ethertype == htons(ETH_P_MPLS_UC)) {
···
 			    ipv4_is_multicast(tunnel_hdr->daddr)) {
 				proto = htons(ETH_P_MPLS_MC);
 			} else {
-				bareudp->dev->stats.rx_dropped++;
+				DEV_STATS_INC(bareudp->dev, rx_dropped);
 				goto drop;
 			}
 		} else {
···
 			    (addr_type & IPV6_ADDR_MULTICAST)) {
 				proto = htons(ETH_P_MPLS_MC);
 			} else {
-				bareudp->dev->stats.rx_dropped++;
+				DEV_STATS_INC(bareudp->dev, rx_dropped);
 				goto drop;
 			}
 		}
···
 					 proto,
 					 !net_eq(bareudp->net,
 						 dev_net(bareudp->dev)))) {
-		bareudp->dev->stats.rx_dropped++;
+		DEV_STATS_INC(bareudp->dev, rx_dropped);
 		goto drop;
 	}
 
···
 
 	tun_dst = udp_tun_rx_dst(skb, family, key, 0, 0);
 	if (!tun_dst) {
-		bareudp->dev->stats.rx_dropped++;
+		DEV_STATS_INC(bareudp->dev, rx_dropped);
 		goto drop;
 	}
 	skb_dst_set(skb, &tun_dst->dst);
···
 					  &((struct ipv6hdr *)oiph)->saddr);
 		}
 		if (err > 1) {
-			++bareudp->dev->stats.rx_frame_errors;
-			++bareudp->dev->stats.rx_errors;
+			DEV_STATS_INC(bareudp->dev, rx_frame_errors);
+			DEV_STATS_INC(bareudp->dev, rx_errors);
 			goto drop;
 		}
 	}
···
 	dev_kfree_skb(skb);
 
 	if (err == -ELOOP)
-		dev->stats.collisions++;
+		DEV_STATS_INC(dev, collisions);
 	else if (err == -ENETUNREACH)
-		dev->stats.tx_carrier_errors++;
+		DEV_STATS_INC(dev, tx_carrier_errors);
 
-	dev->stats.tx_errors++;
+	DEV_STATS_INC(dev, tx_errors);
 	return NETDEV_TX_OK;
 }
 
+8 -10
drivers/net/can/kvaser_pciefd.c
···
 	const struct kvaser_pciefd_irq_mask *irq_mask = pcie->driver_data->irq_mask;
 	u32 pci_irq = ioread32(KVASER_PCIEFD_PCI_IRQ_ADDR(pcie));
 	u32 srb_irq = 0;
+	u32 srb_release = 0;
 	int i;
 
 	if (!(pci_irq & irq_mask->all))
···
 			kvaser_pciefd_transmit_irq(pcie->can[i]);
 	}
 
-	if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD0) {
-		/* Reset DMA buffer 0, may trigger new interrupt */
-		iowrite32(KVASER_PCIEFD_SRB_CMD_RDB0,
-			  KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CMD_REG);
-	}
+	if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD0)
+		srb_release |= KVASER_PCIEFD_SRB_CMD_RDB0;
 
-	if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD1) {
-		/* Reset DMA buffer 1, may trigger new interrupt */
-		iowrite32(KVASER_PCIEFD_SRB_CMD_RDB1,
-			  KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CMD_REG);
-	}
+	if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD1)
+		srb_release |= KVASER_PCIEFD_SRB_CMD_RDB1;
+
+	if (srb_release)
+		iowrite32(srb_release, KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CMD_REG);
 
 	return IRQ_HANDLED;
 }
+70 -46
drivers/net/can/m_can/m_can.c
···
 {
 	m_can_coalescing_disable(cdev);
 	m_can_write(cdev, M_CAN_ILE, 0x0);
-	cdev->active_interrupts = 0x0;
 
 	if (!cdev->net->irq) {
 		dev_dbg(cdev->dev, "Stop hrtimer\n");
-		hrtimer_cancel(&cdev->hrtimer);
+		hrtimer_try_to_cancel(&cdev->hrtimer);
 	}
 }
···
 	return work_done;
 }
 
-static int m_can_rx_peripheral(struct net_device *dev, u32 irqstatus)
-{
-	struct m_can_classdev *cdev = netdev_priv(dev);
-	int work_done;
-
-	work_done = m_can_rx_handler(dev, NAPI_POLL_WEIGHT, irqstatus);
-
-	/* Don't re-enable interrupts if the driver had a fatal error
-	 * (e.g., FIFO read failure).
-	 */
-	if (work_done < 0)
-		m_can_disable_all_interrupts(cdev);
-
-	return work_done;
-}
-
 static int m_can_poll(struct napi_struct *napi, int quota)
 {
 	struct net_device *dev = napi->dev;
···
 		      HRTIMER_MODE_REL);
 }
 
-static irqreturn_t m_can_isr(int irq, void *dev_id)
+/* This interrupt handler is called either from the interrupt thread or a
+ * hrtimer. This has implications like cancelling a timer won't be possible
+ * blocking.
+ */
+static int m_can_interrupt_handler(struct m_can_classdev *cdev)
 {
-	struct net_device *dev = (struct net_device *)dev_id;
-	struct m_can_classdev *cdev = netdev_priv(dev);
+	struct net_device *dev = cdev->net;
 	u32 ir;
+	int ret;
 
-	if (pm_runtime_suspended(cdev->dev)) {
-		m_can_coalescing_disable(cdev);
+	if (pm_runtime_suspended(cdev->dev))
 		return IRQ_NONE;
-	}
 
 	ir = m_can_read(cdev, M_CAN_IR);
 	m_can_coalescing_update(cdev, ir);
···
 			m_can_disable_all_interrupts(cdev);
 			napi_schedule(&cdev->napi);
 		} else {
-			int pkts;
-
-			pkts = m_can_rx_peripheral(dev, ir);
-			if (pkts < 0)
-				goto out_fail;
+			ret = m_can_rx_handler(dev, NAPI_POLL_WEIGHT, ir);
+			if (ret < 0)
+				return ret;
 		}
 	}
···
 	} else {
 		if (ir & (IR_TEFN | IR_TEFW)) {
 			/* New TX FIFO Element arrived */
-			if (m_can_echo_tx_event(dev) != 0)
-				goto out_fail;
+			ret = m_can_echo_tx_event(dev);
+			if (ret != 0)
+				return ret;
 		}
 	}
···
 		can_rx_offload_threaded_irq_finish(&cdev->offload);
 
 	return IRQ_HANDLED;
+}
 
-out_fail:
-	m_can_disable_all_interrupts(cdev);
-	return IRQ_HANDLED;
+static irqreturn_t m_can_isr(int irq, void *dev_id)
+{
+	struct net_device *dev = (struct net_device *)dev_id;
+	struct m_can_classdev *cdev = netdev_priv(dev);
+	int ret;
+
+	ret = m_can_interrupt_handler(cdev);
+	if (ret < 0) {
+		m_can_disable_all_interrupts(cdev);
+		return IRQ_HANDLED;
+	}
+
+	return ret;
 }
 
 static enum hrtimer_restart m_can_coalescing_timer(struct hrtimer *timer)
 {
 	struct m_can_classdev *cdev = container_of(timer, struct m_can_classdev, hrtimer);
+
+	if (cdev->can.state == CAN_STATE_BUS_OFF ||
+	    cdev->can.state == CAN_STATE_STOPPED)
+		return HRTIMER_NORESTART;
 
 	irq_wake_thread(cdev->net->irq, cdev->net);
 
···
 		else
 			interrupts &= ~(IR_ERR_LEC_31X);
 	}
+	cdev->active_interrupts = 0;
 	m_can_interrupt_enable(cdev, interrupts);
 
 	/* route all interrupts to INT0 */
···
 {
 	struct m_can_classdev *cdev = container_of(timer, struct
 						   m_can_classdev, hrtimer);
+	int ret;
 
-	m_can_isr(0, cdev->net);
+	if (cdev->can.state == CAN_STATE_BUS_OFF ||
+	    cdev->can.state == CAN_STATE_STOPPED)
+		return HRTIMER_NORESTART;
+
+	ret = m_can_interrupt_handler(cdev);
+
+	/* On error or if napi is scheduled to read, stop the timer */
+	if (ret < 0 || napi_is_scheduled(&cdev->napi))
+		return HRTIMER_NORESTART;
 
 	hrtimer_forward_now(timer, ms_to_ktime(HRTIMER_POLL_INTERVAL_MS));
 
···
 	/* start the m_can controller */
 	err = m_can_start(dev);
 	if (err)
-		goto exit_irq_fail;
+		goto exit_start_fail;
 
 	if (!cdev->is_peripheral)
 		napi_enable(&cdev->napi);
···
 
 	return 0;
 
+exit_start_fail:
+	if (cdev->is_peripheral || dev->irq)
+		free_irq(dev->irq, dev);
 exit_irq_fail:
 	if (cdev->is_peripheral)
 		destroy_workqueue(cdev->tx_wq);
···
 	return 0;
 }
 
-static const struct ethtool_ops m_can_ethtool_ops = {
+static const struct ethtool_ops m_can_ethtool_ops_coalescing = {
 	.supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS_IRQ |
 		ETHTOOL_COALESCE_RX_MAX_FRAMES_IRQ |
 		ETHTOOL_COALESCE_TX_USECS_IRQ |
···
 	.get_coalesce = m_can_get_coalesce,
 	.set_coalesce = m_can_set_coalesce,
 };
 
-static const struct ethtool_ops m_can_ethtool_ops_polling = {
+static const struct ethtool_ops m_can_ethtool_ops = {
 	.get_ts_info = ethtool_op_get_ts_info,
 };
 
-static int register_m_can_dev(struct net_device *dev)
+static int register_m_can_dev(struct m_can_classdev *cdev)
 {
+	struct net_device *dev = cdev->net;
+
 	dev->flags |= IFF_ECHO; /* we support local echo */
 	dev->netdev_ops = &m_can_netdev_ops;
-	if (dev->irq)
-		dev->ethtool_ops = &m_can_ethtool_ops;
+	if (dev->irq && cdev->is_peripheral)
+		dev->ethtool_ops = &m_can_ethtool_ops_coalescing;
 	else
-		dev->ethtool_ops = &m_can_ethtool_ops_polling;
+		dev->ethtool_ops = &m_can_ethtool_ops;
 
 	return register_candev(dev);
 }
···
 	if (ret)
 		goto rx_offload_del;
 
-	ret = register_m_can_dev(cdev->net);
+	ret = register_m_can_dev(cdev);
 	if (ret) {
 		dev_err(cdev->dev, "registering %s failed (err=%d)\n",
 			cdev->net->name, ret);
···
 		netif_device_detach(ndev);
 
 		/* leave the chip running with rx interrupt enabled if it is
-		 * used as a wake-up source.
+		 * used as a wake-up source. Coalescing needs to be reset then,
+		 * the timer is cancelled here, interrupts are done in resume.
 		 */
-		if (cdev->pm_wake_source)
+		if (cdev->pm_wake_source) {
+			hrtimer_cancel(&cdev->hrtimer);
 			m_can_write(cdev, M_CAN_IE, IR_RF0N);
-		else
+		} else {
 			m_can_stop(ndev);
+		}
 
 		m_can_clk_stop(cdev);
 	}
···
 		return ret;
 
 	if (cdev->pm_wake_source) {
+		/* Restore active interrupts but disable coalescing as
+		 * we may have missed important waterlevel interrupts
+		 * between suspend and resume. Timers are already
+		 * stopped in suspend. Here we enable all interrupts
+		 * again.
+		 */
+		cdev->active_interrupts |= IR_RF0N | IR_TEFN;
 		m_can_write(cdev, M_CAN_IE, cdev->active_interrupts);
 	} else {
 		ret = m_can_start(ndev);
+1 -1
drivers/net/can/spi/mcp251x.c
···
 	int ret;
 
 	/* Force wakeup interrupt to wake device, but don't execute IST */
-	disable_irq(spi->irq);
+	disable_irq_nosync(spi->irq);
 	mcp251x_write_2regs(spi, CANINTE, CANINTE_WAKIE, CANINTF_WAKIF);
 
 	/* Wait for oscillator startup timer after wake up */
+10 -1
drivers/net/can/spi/mcp251xfd/mcp251xfd-ram.c
···
 	if (ring) {
 		u8 num_rx_coalesce = 0, num_tx_coalesce = 0;
 
-		num_rx = can_ram_rounddown_pow_of_two(config, &config->rx, 0, ring->rx_pending);
+		/* If the ring parameters have been configured in
+		 * CAN-CC mode, but we are in CAN-FD mode now,
+		 * they might be too big. Use the default CAN-FD values
+		 * in this case.
+		 */
+		num_rx = ring->rx_pending;
+		if (num_rx > layout->max_rx)
+			num_rx = layout->default_rx;
+
+		num_rx = can_ram_rounddown_pow_of_two(config, &config->rx, 0, num_rx);
 
 		/* The ethtool doc says:
 		 * To disable coalescing, set usecs = 0 and max_frames = 1.
+28 -6
drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
···
 	const struct mcp251xfd_rx_ring *rx_ring;
 	u16 base = 0, ram_used;
 	u8 fifo_nr = 1;
-	int i;
+	int err = 0, i;
 
 	netdev_reset_queue(priv->ndev);
 
···
 		netdev_err(priv->ndev,
 			   "Error during ring configuration, using more RAM (%u bytes) than available (%u bytes).\n",
 			   ram_used, MCP251XFD_RAM_SIZE);
-		return -ENOMEM;
+		err = -ENOMEM;
 	}
 
-	return 0;
+	if (priv->tx_obj_num_coalesce_irq &&
+	    priv->tx_obj_num_coalesce_irq * 2 != priv->tx->obj_num) {
+		netdev_err(priv->ndev,
+			   "Error during ring configuration, number of TEF coalescing buffers (%u) must be half of TEF buffers (%u).\n",
+			   priv->tx_obj_num_coalesce_irq, priv->tx->obj_num);
+		err = -EINVAL;
+	}
+
+	return err;
 }
 
 void mcp251xfd_ring_free(struct mcp251xfd_priv *priv)
···
 
 	/* switching from CAN-2.0 to CAN-FD mode or vice versa */
 	if (fd_mode != test_bit(MCP251XFD_FLAGS_FD_MODE, priv->flags)) {
+		const struct ethtool_ringparam ring = {
+			.rx_pending = priv->rx_obj_num,
+			.tx_pending = priv->tx->obj_num,
+		};
+		const struct ethtool_coalesce ec = {
+			.rx_coalesce_usecs_irq = priv->rx_coalesce_usecs_irq,
+			.rx_max_coalesced_frames_irq = priv->rx_obj_num_coalesce_irq,
+			.tx_coalesce_usecs_irq = priv->tx_coalesce_usecs_irq,
+			.tx_max_coalesced_frames_irq = priv->tx_obj_num_coalesce_irq,
+		};
 		struct can_ram_layout layout;
 
-		can_ram_get_layout(&layout, &mcp251xfd_ram_config, NULL, NULL, fd_mode);
-		priv->rx_obj_num = layout.default_rx;
-		tx_ring->obj_num = layout.default_tx;
+		can_ram_get_layout(&layout, &mcp251xfd_ram_config, &ring, &ec, fd_mode);
+
+		priv->rx_obj_num = layout.cur_rx;
+		priv->rx_obj_num_coalesce_irq = layout.rx_coalesce;
+
+		tx_ring->obj_num = layout.cur_tx;
+		priv->tx_obj_num_coalesce_irq = layout.tx_coalesce;
 	}
 
 	if (fd_mode) {
+8 -2
drivers/net/dsa/vitesse-vsc73xx-core.c
···
 #define VSC73XX_BLOCK_ANALYZER	0x2 /* Only subblock 0 */
 #define VSC73XX_BLOCK_MII	0x3 /* Subblocks 0 and 1 */
 #define VSC73XX_BLOCK_MEMINIT	0x3 /* Only subblock 2 */
-#define VSC73XX_BLOCK_CAPTURE	0x4 /* Only subblock 2 */
+#define VSC73XX_BLOCK_CAPTURE	0x4 /* Subblocks 0-4, 6, 7 */
 #define VSC73XX_BLOCK_ARBITER	0x5 /* Only subblock 0 */
 #define VSC73XX_BLOCK_SYSTEM	0x7 /* Only subblock 0 */
···
 		break;
 
 	case VSC73XX_BLOCK_MII:
-	case VSC73XX_BLOCK_CAPTURE:
 	case VSC73XX_BLOCK_ARBITER:
 		switch (subblock) {
 		case 0 ... 1:
+			return 1;
+		}
+		break;
+	case VSC73XX_BLOCK_CAPTURE:
+		switch (subblock) {
+		case 0 ... 4:
+		case 6 ... 7:
 			return 1;
 		}
 		break;
+2
drivers/net/ethernet/intel/ice/ice.h
···
 	ICE_VSI_UMAC_FLTR_CHANGED,
 	ICE_VSI_MMAC_FLTR_CHANGED,
 	ICE_VSI_PROMISC_CHANGED,
+	ICE_VSI_REBUILD_PENDING,
 	ICE_VSI_STATE_NBITS		/* must be last */
 };
···
 	struct ice_tx_ring **xdp_rings;	 /* XDP ring array */
 	u16 num_xdp_txq;		 /* Used XDP queues */
 	u8 xdp_mapping_mode;		 /* ICE_MAP_MODE_[CONTIG|SCATTER] */
+	struct mutex xdp_state_lock;
 
 	struct net_device **target_netdevs;
 
+3 -8
drivers/net/ethernet/intel/ice/ice_base.c
···
 	}
 	q_vector = vsi->q_vectors[v_idx];
 
-	ice_for_each_tx_ring(tx_ring, q_vector->tx) {
-		ice_queue_set_napi(vsi, tx_ring->q_index, NETDEV_QUEUE_TYPE_TX,
-				   NULL);
+	ice_for_each_tx_ring(tx_ring, vsi->q_vectors[v_idx]->tx)
 		tx_ring->q_vector = NULL;
-	}
-	ice_for_each_rx_ring(rx_ring, q_vector->rx) {
-		ice_queue_set_napi(vsi, rx_ring->q_index, NETDEV_QUEUE_TYPE_RX,
-				   NULL);
+
+	ice_for_each_rx_ring(rx_ring, vsi->q_vectors[v_idx]->rx)
 		rx_ring->q_vector = NULL;
-	}
 
 	/* only VSI with an associated netdev is set up with NAPI */
 	if (vsi->netdev)
+71 -130
drivers/net/ethernet/intel/ice/ice_lib.c
··· 447 447 448 448 ice_vsi_free_stats(vsi); 449 449 ice_vsi_free_arrays(vsi); 450 + mutex_destroy(&vsi->xdp_state_lock); 450 451 mutex_unlock(&pf->sw_mutex); 451 452 devm_kfree(dev, vsi); 452 453 } ··· 626 625 /* prepare pf->next_vsi for next use */ 627 626 pf->next_vsi = ice_get_free_slot(pf->vsi, pf->num_alloc_vsi, 628 627 pf->next_vsi); 628 + 629 + mutex_init(&vsi->xdp_state_lock); 629 630 630 631 unlock_pf: 631 632 mutex_unlock(&pf->sw_mutex); ··· 2289 2286 2290 2287 ice_vsi_map_rings_to_vectors(vsi); 2291 2288 2292 - /* Associate q_vector rings to napi */ 2293 - ice_vsi_set_napi_queues(vsi); 2294 - 2295 2289 vsi->stat_offsets_loaded = false; 2296 2290 2297 2291 /* ICE_VSI_CTRL does not need RSS so skip RSS processing */ ··· 2426 2426 dev_err(ice_pf_to_dev(pf), "Failed to remove RDMA scheduler config for VSI %u, err %d\n", 2427 2427 vsi->vsi_num, err); 2428 2428 2429 - if (ice_is_xdp_ena_vsi(vsi)) 2429 + if (vsi->xdp_rings) 2430 2430 /* return value check can be skipped here, it always returns 2431 2431 * 0 if reset is in progress 2432 2432 */ ··· 2528 2528 for (q = 0; q < q_vector->num_ring_tx; q++) { 2529 2529 ice_write_itr(&q_vector->tx, 0); 2530 2530 wr32(hw, QINT_TQCTL(vsi->txq_map[txq]), 0); 2531 - if (ice_is_xdp_ena_vsi(vsi)) { 2531 + if (vsi->xdp_rings) { 2532 2532 u32 xdp_txq = txq + vsi->num_xdp_txq; 2533 2533 2534 2534 wr32(hw, QINT_TQCTL(vsi->txq_map[xdp_txq]), 0); ··· 2628 2628 if (!test_and_set_bit(ICE_VSI_DOWN, vsi->state)) 2629 2629 ice_down(vsi); 2630 2630 2631 + ice_vsi_clear_napi_queues(vsi); 2631 2632 ice_vsi_free_irq(vsi); 2632 2633 ice_vsi_free_tx_rings(vsi); 2633 2634 ice_vsi_free_rx_rings(vsi); ··· 2672 2671 */ 2673 2672 void ice_dis_vsi(struct ice_vsi *vsi, bool locked) 2674 2673 { 2675 - if (test_bit(ICE_VSI_DOWN, vsi->state)) 2676 - return; 2674 + bool already_down = test_bit(ICE_VSI_DOWN, vsi->state); 2677 2675 2678 2676 set_bit(ICE_VSI_NEEDS_RESTART, vsi->state); 2679 2677 ··· 2680 2680 if (netif_running(vsi->netdev)) { 2681 2681 
if (!locked) 2682 2682 rtnl_lock(); 2683 - 2684 - ice_vsi_close(vsi); 2683 + already_down = test_bit(ICE_VSI_DOWN, vsi->state); 2684 + if (!already_down) 2685 + ice_vsi_close(vsi); 2685 2686 2686 2687 if (!locked) 2687 2688 rtnl_unlock(); 2688 - } else { 2689 + } else if (!already_down) { 2689 2690 ice_vsi_close(vsi); 2690 2691 } 2691 - } else if (vsi->type == ICE_VSI_CTRL) { 2692 + } else if (vsi->type == ICE_VSI_CTRL && !already_down) { 2692 2693 ice_vsi_close(vsi); 2693 2694 } 2694 2695 } 2695 2696 2696 2697 /** 2697 - * __ice_queue_set_napi - Set the napi instance for the queue 2698 - * @dev: device to which NAPI and queue belong 2699 - * @queue_index: Index of queue 2700 - * @type: queue type as RX or TX 2701 - * @napi: NAPI context 2702 - * @locked: is the rtnl_lock already held 2703 - * 2704 - * Set the napi instance for the queue. Caller indicates the lock status. 2705 - */ 2706 - static void 2707 - __ice_queue_set_napi(struct net_device *dev, unsigned int queue_index, 2708 - enum netdev_queue_type type, struct napi_struct *napi, 2709 - bool locked) 2710 - { 2711 - if (!locked) 2712 - rtnl_lock(); 2713 - netif_queue_set_napi(dev, queue_index, type, napi); 2714 - if (!locked) 2715 - rtnl_unlock(); 2716 - } 2717 - 2718 - /** 2719 - * ice_queue_set_napi - Set the napi instance for the queue 2720 - * @vsi: VSI being configured 2721 - * @queue_index: Index of queue 2722 - * @type: queue type as RX or TX 2723 - * @napi: NAPI context 2724 - * 2725 - * Set the napi instance for the queue. The rtnl lock state is derived from the 2726 - * execution path. 
2727 - */ 2728 - void 2729 - ice_queue_set_napi(struct ice_vsi *vsi, unsigned int queue_index, 2730 - enum netdev_queue_type type, struct napi_struct *napi) 2731 - { 2732 - struct ice_pf *pf = vsi->back; 2733 - 2734 - if (!vsi->netdev) 2735 - return; 2736 - 2737 - if (current_work() == &pf->serv_task || 2738 - test_bit(ICE_PREPARED_FOR_RESET, pf->state) || 2739 - test_bit(ICE_DOWN, pf->state) || 2740 - test_bit(ICE_SUSPENDED, pf->state)) 2741 - __ice_queue_set_napi(vsi->netdev, queue_index, type, napi, 2742 - false); 2743 - else 2744 - __ice_queue_set_napi(vsi->netdev, queue_index, type, napi, 2745 - true); 2746 - } 2747 - 2748 - /** 2749 - * __ice_q_vector_set_napi_queues - Map queue[s] associated with the napi 2750 - * @q_vector: q_vector pointer 2751 - * @locked: is the rtnl_lock already held 2752 - * 2753 - * Associate the q_vector napi with all the queue[s] on the vector. 2754 - * Caller indicates the lock status. 2755 - */ 2756 - void __ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector, bool locked) 2757 - { 2758 - struct ice_rx_ring *rx_ring; 2759 - struct ice_tx_ring *tx_ring; 2760 - 2761 - ice_for_each_rx_ring(rx_ring, q_vector->rx) 2762 - __ice_queue_set_napi(q_vector->vsi->netdev, rx_ring->q_index, 2763 - NETDEV_QUEUE_TYPE_RX, &q_vector->napi, 2764 - locked); 2765 - 2766 - ice_for_each_tx_ring(tx_ring, q_vector->tx) 2767 - __ice_queue_set_napi(q_vector->vsi->netdev, tx_ring->q_index, 2768 - NETDEV_QUEUE_TYPE_TX, &q_vector->napi, 2769 - locked); 2770 - /* Also set the interrupt number for the NAPI */ 2771 - netif_napi_set_irq(&q_vector->napi, q_vector->irq.virq); 2772 - } 2773 - 2774 - /** 2775 - * ice_q_vector_set_napi_queues - Map queue[s] associated with the napi 2776 - * @q_vector: q_vector pointer 2777 - * 2778 - * Associate the q_vector napi with all the queue[s] on the vector 2779 - */ 2780 - void ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector) 2781 - { 2782 - struct ice_rx_ring *rx_ring; 2783 - struct ice_tx_ring *tx_ring; 
2784 - 2785 - ice_for_each_rx_ring(rx_ring, q_vector->rx) 2786 - ice_queue_set_napi(q_vector->vsi, rx_ring->q_index, 2787 - NETDEV_QUEUE_TYPE_RX, &q_vector->napi); 2788 - 2789 - ice_for_each_tx_ring(tx_ring, q_vector->tx) 2790 - ice_queue_set_napi(q_vector->vsi, tx_ring->q_index, 2791 - NETDEV_QUEUE_TYPE_TX, &q_vector->napi); 2792 - /* Also set the interrupt number for the NAPI */ 2793 - netif_napi_set_irq(&q_vector->napi, q_vector->irq.virq); 2794 - } 2795 - 2796 - /** 2797 - * ice_vsi_set_napi_queues 2698 + * ice_vsi_set_napi_queues - associate netdev queues with napi 2798 2699 * @vsi: VSI pointer 2799 2700 * 2800 - * Associate queue[s] with napi for all vectors 2701 + * Associate queue[s] with napi for all vectors. 2702 + * The caller must hold rtnl_lock. 2801 2703 */ 2802 2704 void ice_vsi_set_napi_queues(struct ice_vsi *vsi) 2803 2705 { 2804 - int i; 2706 + struct net_device *netdev = vsi->netdev; 2707 + int q_idx, v_idx; 2805 2708 2806 - if (!vsi->netdev) 2709 + if (!netdev) 2807 2710 return; 2808 2711 2809 - ice_for_each_q_vector(vsi, i) 2810 - ice_q_vector_set_napi_queues(vsi->q_vectors[i]); 2712 + ice_for_each_rxq(vsi, q_idx) 2713 + netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_RX, 2714 + &vsi->rx_rings[q_idx]->q_vector->napi); 2715 + 2716 + ice_for_each_txq(vsi, q_idx) 2717 + netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_TX, 2718 + &vsi->tx_rings[q_idx]->q_vector->napi); 2719 + /* Also set the interrupt number for the NAPI */ 2720 + ice_for_each_q_vector(vsi, v_idx) { 2721 + struct ice_q_vector *q_vector = vsi->q_vectors[v_idx]; 2722 + 2723 + netif_napi_set_irq(&q_vector->napi, q_vector->irq.virq); 2724 + } 2725 + } 2726 + 2727 + /** 2728 + * ice_vsi_clear_napi_queues - dissociate netdev queues from napi 2729 + * @vsi: VSI pointer 2730 + * 2731 + * Clear the association between all VSI queues queue[s] and napi. 2732 + * The caller must hold rtnl_lock. 
2733 + */ 2734 + void ice_vsi_clear_napi_queues(struct ice_vsi *vsi) 2735 + { 2736 + struct net_device *netdev = vsi->netdev; 2737 + int q_idx; 2738 + 2739 + if (!netdev) 2740 + return; 2741 + 2742 + ice_for_each_txq(vsi, q_idx) 2743 + netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_TX, NULL); 2744 + 2745 + ice_for_each_rxq(vsi, q_idx) 2746 + netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_RX, NULL); 2811 2747 } 2812 2748 2813 2749 /** ··· 2975 3039 if (WARN_ON(vsi->type == ICE_VSI_VF && !vsi->vf)) 2976 3040 return -EINVAL; 2977 3041 3042 + mutex_lock(&vsi->xdp_state_lock); 3043 + 2978 3044 ret = ice_vsi_realloc_stat_arrays(vsi); 2979 3045 if (ret) 2980 - goto err_vsi_cfg; 3046 + goto unlock; 2981 3047 2982 3048 ice_vsi_decfg(vsi); 2983 3049 ret = ice_vsi_cfg_def(vsi); 2984 3050 if (ret) 2985 - goto err_vsi_cfg; 3051 + goto unlock; 2986 3052 2987 3053 coalesce = kcalloc(vsi->num_q_vectors, 2988 3054 sizeof(struct ice_coalesce_stored), GFP_KERNEL); 2989 - if (!coalesce) 2990 - return -ENOMEM; 3055 + if (!coalesce) { 3056 + ret = -ENOMEM; 3057 + goto decfg; 3058 + } 2991 3059 2992 3060 prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce); 2993 3061 ··· 2999 3059 if (ret) { 3000 3060 if (vsi_flags & ICE_VSI_FLAG_INIT) { 3001 3061 ret = -EIO; 3002 - goto err_vsi_cfg_tc_lan; 3062 + goto free_coalesce; 3003 3063 } 3004 3064 3005 - kfree(coalesce); 3006 - return ice_schedule_reset(pf, ICE_RESET_PFR); 3065 + ret = ice_schedule_reset(pf, ICE_RESET_PFR); 3066 + goto free_coalesce; 3007 3067 } 3008 3068 3009 3069 ice_vsi_rebuild_set_coalesce(vsi, coalesce, prev_num_q_vectors); 3010 - kfree(coalesce); 3070 + clear_bit(ICE_VSI_REBUILD_PENDING, vsi->state); 3011 3071 3012 - return 0; 3013 - 3014 - err_vsi_cfg_tc_lan: 3015 - ice_vsi_decfg(vsi); 3072 + free_coalesce: 3016 3073 kfree(coalesce); 3017 - err_vsi_cfg: 3074 + decfg: 3075 + if (ret) 3076 + ice_vsi_decfg(vsi); 3077 + unlock: 3078 + mutex_unlock(&vsi->xdp_state_lock); 3018 3079 return ret; 3019 3080 
} 3020 3081
+2 -8
drivers/net/ethernet/intel/ice/ice_lib.h
···
 struct ice_vsi *
 ice_vsi_setup(struct ice_pf *pf, struct ice_vsi_cfg_params *params);
 
-void
-ice_queue_set_napi(struct ice_vsi *vsi, unsigned int queue_index,
-		   enum netdev_queue_type type, struct napi_struct *napi);
-
-void __ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector, bool locked);
-
-void ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector);
-
 void ice_vsi_set_napi_queues(struct ice_vsi *vsi);
+
+void ice_vsi_clear_napi_queues(struct ice_vsi *vsi);
 
 int ice_vsi_release(struct ice_vsi *vsi);
+41 -13
drivers/net/ethernet/intel/ice/ice_main.c
···
 			memset(&vsi->mqprio_qopt, 0, sizeof(vsi->mqprio_qopt));
 		}
 	}
+
+	if (vsi->netdev)
+		netif_device_detach(vsi->netdev);
 skip:
 
 	/* clear SW filtering DB */
 	ice_clear_hw_tbls(hw);
 	/* disable the VSIs and their queues that are not already DOWN */
+	set_bit(ICE_VSI_REBUILD_PENDING, ice_get_main_vsi(pf)->state);
 	ice_pf_dis_all_vsi(pf, false);
 
 	if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags))
···
 			 struct netlink_ext_ack *extack)
 {
 	unsigned int frame_size = vsi->netdev->mtu + ICE_ETH_PKT_HDR_PAD;
-	bool if_running = netif_running(vsi->netdev);
 	int ret = 0, xdp_ring_err = 0;
+	bool if_running;
 
 	if (prog && !prog->aux->xdp_has_frags) {
 		if (frame_size > ice_max_xdp_frame_size(vsi)) {
···
 	}
 
 	/* hot swap progs and avoid toggling link */
-	if (ice_is_xdp_ena_vsi(vsi) == !!prog) {
+	if (ice_is_xdp_ena_vsi(vsi) == !!prog ||
+	    test_bit(ICE_VSI_REBUILD_PENDING, vsi->state)) {
 		ice_vsi_assign_bpf_prog(vsi, prog);
 		return 0;
 	}
 
+	if_running = netif_running(vsi->netdev) &&
+		     !test_and_set_bit(ICE_VSI_DOWN, vsi->state);
+
 	/* need to stop netdev while setting up the program for Rx rings */
-	if (if_running && !test_and_set_bit(ICE_VSI_DOWN, vsi->state)) {
+	if (if_running) {
 		ret = ice_down(vsi);
 		if (ret) {
 			NL_SET_ERR_MSG_MOD(extack, "Preparing device for XDP attach failed");
···
 {
 	struct ice_netdev_priv *np = netdev_priv(dev);
 	struct ice_vsi *vsi = np->vsi;
+	int ret;
 
 	if (vsi->type != ICE_VSI_PF) {
 		NL_SET_ERR_MSG_MOD(xdp->extack, "XDP can be loaded only on PF VSI");
 		return -EINVAL;
 	}
 
+	mutex_lock(&vsi->xdp_state_lock);
+
 	switch (xdp->command) {
 	case XDP_SETUP_PROG:
-		return ice_xdp_setup_prog(vsi, xdp->prog, xdp->extack);
+		ret = ice_xdp_setup_prog(vsi, xdp->prog, xdp->extack);
+		break;
 	case XDP_SETUP_XSK_POOL:
-		return ice_xsk_pool_setup(vsi, xdp->xsk.pool,
-					  xdp->xsk.queue_id);
+		ret = ice_xsk_pool_setup(vsi, xdp->xsk.pool, xdp->xsk.queue_id);
+		break;
 	default:
-		return -EINVAL;
+		ret = -EINVAL;
 	}
+
+	mutex_unlock(&vsi->xdp_state_lock);
+	return ret;
 }
 
 /**
···
 	if (!vsi->netdev)
 		return;
 
-	ice_for_each_q_vector(vsi, v_idx) {
+	ice_for_each_q_vector(vsi, v_idx)
 		netif_napi_add(vsi->netdev, &vsi->q_vectors[v_idx]->napi,
 			       ice_napi_poll);
-		__ice_q_vector_set_napi_queues(vsi->q_vectors[v_idx], false);
-	}
 }
 
 /**
···
 		if (ret)
 			goto err_reinit;
 		ice_vsi_map_rings_to_vectors(pf->vsi[v]);
+		rtnl_lock();
 		ice_vsi_set_napi_queues(pf->vsi[v]);
+		rtnl_unlock();
 	}
 
 	ret = ice_req_irq_msix_misc(pf);
···
 
 err_reinit:
 	while (v--)
-		if (pf->vsi[v])
+		if (pf->vsi[v]) {
+			rtnl_lock();
+			ice_vsi_clear_napi_queues(pf->vsi[v]);
+			rtnl_unlock();
 			ice_vsi_free_q_vectors(pf->vsi[v]);
+		}
 
 	return ret;
 }
···
 	ice_for_each_vsi(pf, v) {
 		if (!pf->vsi[v])
 			continue;
+		rtnl_lock();
+		ice_vsi_clear_napi_queues(pf->vsi[v]);
+		rtnl_unlock();
 		ice_vsi_free_q_vectors(pf->vsi[v]);
 	}
 	ice_clear_interrupt_scheme(pf);
···
 	if (tx_err)
 		netdev_err(vsi->netdev, "Failed stop Tx rings, VSI %d error %d\n",
 			   vsi->vsi_num, tx_err);
-	if (!tx_err && ice_is_xdp_ena_vsi(vsi)) {
+	if (!tx_err && vsi->xdp_rings) {
 		tx_err = ice_vsi_stop_xdp_tx_rings(vsi);
 		if (tx_err)
 			netdev_err(vsi->netdev, "Failed stop XDP rings, VSI %d error %d\n",
···
 	ice_for_each_txq(vsi, i)
 		ice_clean_tx_ring(vsi->tx_rings[i]);
 
-	if (ice_is_xdp_ena_vsi(vsi))
+	if (vsi->xdp_rings)
 		ice_for_each_xdp_txq(vsi, i)
 			ice_clean_tx_ring(vsi->xdp_rings[i]);
 
···
 		err = netif_set_real_num_rx_queues(vsi->netdev, vsi->num_rxq);
 		if (err)
 			goto err_set_qs;
+
+		ice_vsi_set_napi_queues(vsi);
 	}
 
 	err = ice_up_complete(vsi);
···
  */
 static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
 {
+	struct ice_vsi *vsi = ice_get_main_vsi(pf);
 	struct device *dev = ice_pf_to_dev(pf);
 	struct ice_hw *hw = &pf->hw;
 	bool dvm;
···
 
 		ice_rebuild_arfs(pf);
 	}
+
+	if (vsi && vsi->netdev)
+		netif_device_attach(vsi->netdev);
 
 	ice_update_pf_netdev_link(pf);
 
+5 -13
drivers/net/ethernet/intel/ice/ice_xsk.c
···
 		       sizeof(vsi_stat->rx_ring_stats[q_idx]->rx_stats));
 	memset(&vsi_stat->tx_ring_stats[q_idx]->stats, 0,
 	       sizeof(vsi_stat->tx_ring_stats[q_idx]->stats));
-	if (ice_is_xdp_ena_vsi(vsi))
+	if (vsi->xdp_rings)
 		memset(&vsi->xdp_rings[q_idx]->ring_stats->stats, 0,
 		       sizeof(vsi->xdp_rings[q_idx]->ring_stats->stats));
 }
···
 static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx)
 {
 	ice_clean_tx_ring(vsi->tx_rings[q_idx]);
-	if (ice_is_xdp_ena_vsi(vsi))
+	if (vsi->xdp_rings)
 		ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
 	ice_clean_rx_ring(vsi->rx_rings[q_idx]);
 }
···
 	struct ice_q_vector *q_vector;
 	struct ice_tx_ring *tx_ring;
 	struct ice_rx_ring *rx_ring;
-	int timeout = 50;
 	int fail = 0;
 	int err;
 
···
 	tx_ring = vsi->tx_rings[q_idx];
 	rx_ring = vsi->rx_rings[q_idx];
 	q_vector = rx_ring->q_vector;
-
-	while (test_and_set_bit(ICE_CFG_BUSY, vsi->state)) {
-		timeout--;
-		if (!timeout)
-			return -EBUSY;
-		usleep_range(1000, 2000);
-	}
 
 	synchronize_net();
 	netif_carrier_off(vsi->netdev);
···
 	err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
 	if (!fail)
 		fail = err;
-	if (ice_is_xdp_ena_vsi(vsi)) {
+	if (vsi->xdp_rings) {
 		struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx];
 
 		memset(&txq_meta, 0, sizeof(txq_meta));
···
 		netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
 		netif_carrier_on(vsi->netdev);
 	}
-	clear_bit(ICE_CFG_BUSY, vsi->state);
 
 	return fail;
 }
···
 		goto failure;
 	}
 
-	if_running = netif_running(vsi->netdev) && ice_is_xdp_ena_vsi(vsi);
+	if_running = !test_bit(ICE_VSI_DOWN, vsi->state) &&
+		     ice_is_xdp_ena_vsi(vsi);
 
 	if (if_running) {
 		struct ice_rx_ring *rx_ring = vsi->rx_rings[qid];
+10
drivers/net/ethernet/intel/igb/igb_main.c
···
 
 static void igb_tsync_interrupt(struct igb_adapter *adapter)
 {
+	const u32 mask = (TSINTR_SYS_WRAP | E1000_TSICR_TXTS |
+			  TSINTR_TT0 | TSINTR_TT1 |
+			  TSINTR_AUTT0 | TSINTR_AUTT1);
 	struct e1000_hw *hw = &adapter->hw;
 	u32 tsicr = rd32(E1000_TSICR);
 	struct ptp_clock_event event;
+
+	if (hw->mac.type == e1000_82580) {
+		/* 82580 has a hardware bug that requires an explicit
+		 * write to clear the TimeSync interrupt cause.
+		 */
+		wr32(E1000_TSICR, tsicr & mask);
+	}
 
 	if (tsicr & TSINTR_SYS_WRAP) {
 		event.type = PTP_CLOCK_PPS;
+1
drivers/net/ethernet/intel/igc/igc_main.c
···
 	rtnl_lock();
 	if (netif_running(netdev)) {
 		if (igc_open(netdev)) {
+			rtnl_unlock();
 			netdev_err(netdev, "igc_open failed after reset\n");
 			return;
 		}
+2 -12
drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
···
 	vcap_enable_lookups(&test_vctrl, &test_netdev, 0, 0,
 			    rule->cookie, false);
 
-	vcap_free_rule(rule);
-
-	/* Check that the rule has been freed: tricky to access since this
-	 * memory should not be accessible anymore
-	 */
-	KUNIT_EXPECT_PTR_NE(test, NULL, rule);
-	ret = list_empty(&rule->keyfields);
-	KUNIT_EXPECT_EQ(test, true, ret);
-	ret = list_empty(&rule->actionfields);
-	KUNIT_EXPECT_EQ(test, true, ret);
-
-	vcap_del_rule(&test_vctrl, &test_netdev, id);
+	ret = vcap_del_rule(&test_vctrl, &test_netdev, id);
+	KUNIT_EXPECT_EQ(test, 0, ret);
 }
 
 static void vcap_api_set_rule_counter_test(struct kunit *test)
+13 -9
drivers/net/ethernet/microsoft/mana/mana_en.c
···
 
 	for (i = 0; i < apc->num_queues; i++) {
 		napi = &apc->tx_qp[i].tx_cq.napi;
-		napi_synchronize(napi);
-		napi_disable(napi);
-		netif_napi_del(napi);
-
+		if (apc->tx_qp[i].txq.napi_initialized) {
+			napi_synchronize(napi);
+			napi_disable(napi);
+			netif_napi_del(napi);
+			apc->tx_qp[i].txq.napi_initialized = false;
+		}
 		mana_destroy_wq_obj(apc, GDMA_SQ, apc->tx_qp[i].tx_object);
 
 		mana_deinit_cq(apc, &apc->tx_qp[i].tx_cq);
···
 		txq->ndev = net;
 		txq->net_txq = netdev_get_tx_queue(net, i);
 		txq->vp_offset = apc->tx_vp_offset;
+		txq->napi_initialized = false;
 		skb_queue_head_init(&txq->pending_skbs);
 
 		memset(&spec, 0, sizeof(spec));
···
 
 		netif_napi_add_tx(net, &cq->napi, mana_poll);
 		napi_enable(&cq->napi);
+		txq->napi_initialized = true;
 
 		mana_gd_ring_cq(cq->gdma_cq, SET_ARM_BIT);
 	}
···
 }
 
 static void mana_destroy_rxq(struct mana_port_context *apc,
-			     struct mana_rxq *rxq, bool validate_state)
+			     struct mana_rxq *rxq, bool napi_initialized)
 
 {
 	struct gdma_context *gc = apc->ac->gdma_dev->gdma_context;
···
 
 	napi = &rxq->rx_cq.napi;
 
-	if (validate_state)
+	if (napi_initialized) {
 		napi_synchronize(napi);
 
-	napi_disable(napi);
+		napi_disable(napi);
 
+		netif_napi_del(napi);
+	}
 	xdp_rxq_info_unreg(&rxq->xdp_rxq);
-
-	netif_napi_del(napi);
 
 	mana_destroy_wq_obj(apc, GDMA_RQ, rxq->rxobj);
 
+49 -33
drivers/net/ethernet/ti/am65-cpsw-nuss.c
···
 #define AM65_CPSW_CPPI_TX_PKT_TYPE 0x7
 
 /* XDP */
-#define AM65_CPSW_XDP_CONSUMED 2
-#define AM65_CPSW_XDP_REDIRECT 1
+#define AM65_CPSW_XDP_CONSUMED BIT(1)
+#define AM65_CPSW_XDP_REDIRECT BIT(0)
 #define AM65_CPSW_XDP_PASS 0
 
 /* Include headroom compatible with both skb and xdpf */
-#define AM65_CPSW_HEADROOM (max(NET_SKB_PAD, XDP_PACKET_HEADROOM) + NET_IP_ALIGN)
+#define AM65_CPSW_HEADROOM_NA (max(NET_SKB_PAD, XDP_PACKET_HEADROOM) + NET_IP_ALIGN)
+#define AM65_CPSW_HEADROOM ALIGN(AM65_CPSW_HEADROOM_NA, sizeof(long))
 
 static void am65_cpsw_port_set_sl_mac(struct am65_cpsw_port *slave,
 				      const u8 *dev_addr)
···
 	host_desc = k3_cppi_desc_pool_alloc(tx_chn->desc_pool);
 	if (unlikely(!host_desc)) {
 		ndev->stats.tx_dropped++;
-		return -ENOMEM;
+		return AM65_CPSW_XDP_CONSUMED;	/* drop */
 	}
 
 	am65_cpsw_nuss_set_buf_type(tx_chn, host_desc, buf_type);
···
 				 pkt_len, DMA_TO_DEVICE);
 	if (unlikely(dma_mapping_error(tx_chn->dma_dev, dma_buf))) {
 		ndev->stats.tx_dropped++;
-		ret = -ENOMEM;
+		ret = AM65_CPSW_XDP_CONSUMED;	/* drop */
 		goto pool_free;
 	}
 
···
 		/* Inform BQL */
 		netdev_tx_completed_queue(netif_txq, 1, pkt_len);
 		ndev->stats.tx_errors++;
+		ret = AM65_CPSW_XDP_CONSUMED;	/* drop */
 		goto dma_unmap;
 	}
 
···
 			       int desc_idx, int cpu, int *len)
 {
 	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
+	struct am65_cpsw_ndev_priv *ndev_priv;
 	struct net_device *ndev = port->ndev;
+	struct am65_cpsw_ndev_stats *stats;
 	int ret = AM65_CPSW_XDP_CONSUMED;
 	struct am65_cpsw_tx_chn *tx_chn;
 	struct netdev_queue *netif_txq;
···
 	struct bpf_prog *prog;
 	struct page *page;
 	u32 act;
+	int err;
 
 	prog = READ_ONCE(port->xdp_prog);
 	if (!prog)
···
 	act = bpf_prog_run_xdp(prog, xdp);
 	/* XDP prog might have changed packet data and boundaries */
 	*len = xdp->data_end - xdp->data;
+
+	ndev_priv = netdev_priv(ndev);
+	stats = this_cpu_ptr(ndev_priv->stats);
 
 	switch (act) {
 	case XDP_PASS:
···
 
 		xdpf = xdp_convert_buff_to_frame(xdp);
 		if (unlikely(!xdpf))
-			break;
+			goto drop;
 
 		__netif_tx_lock(netif_txq, cpu);
-		ret = am65_cpsw_xdp_tx_frame(ndev, tx_chn, xdpf,
+		err = am65_cpsw_xdp_tx_frame(ndev, tx_chn, xdpf,
 					     AM65_CPSW_TX_BUF_TYPE_XDP_TX);
 		__netif_tx_unlock(netif_txq);
-		if (ret)
-			break;
+		if (err)
+			goto drop;
 
-		ndev->stats.rx_bytes += *len;
-		ndev->stats.rx_packets++;
+		u64_stats_update_begin(&stats->syncp);
+		stats->rx_bytes += *len;
+		stats->rx_packets++;
+		u64_stats_update_end(&stats->syncp);
 		ret = AM65_CPSW_XDP_CONSUMED;
 		goto out;
 	case XDP_REDIRECT:
 		if (unlikely(xdp_do_redirect(ndev, xdp, prog)))
-			break;
+			goto drop;
 
-		ndev->stats.rx_bytes += *len;
-		ndev->stats.rx_packets++;
+		u64_stats_update_begin(&stats->syncp);
+		stats->rx_bytes += *len;
+		stats->rx_packets++;
+		u64_stats_update_end(&stats->syncp);
 		ret = AM65_CPSW_XDP_REDIRECT;
 		goto out;
 	default:
 		bpf_warn_invalid_xdp_action(ndev, prog, act);
 		fallthrough;
 	case XDP_ABORTED:
+drop:
 		trace_xdp_exception(ndev, prog, act);
 		fallthrough;
 	case XDP_DROP:
···
 
 	page = virt_to_head_page(xdp->data);
 	am65_cpsw_put_page(rx_chn, page, true, desc_idx);
-
 out:
 	return ret;
 }
···
 }
 
 static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common,
-				     u32 flow_idx, int cpu)
+				     u32 flow_idx, int cpu, int *xdp_state)
 {
 	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	u32 buf_dma_len, pkt_len, port_id = 0, csum_info;
···
 	void **swdata;
 	u32 *psdata;
 
+	*xdp_state = AM65_CPSW_XDP_PASS;
 	ret = k3_udma_glue_pop_rx_chn(rx_chn->rx_chn, flow_idx, &desc_dma);
 	if (ret) {
 		if (ret != -ENODATA)
···
 	}
 
 	if (port->xdp_prog) {
-		xdp_init_buff(&xdp, AM65_CPSW_MAX_PACKET_SIZE, &port->xdp_rxq);
-
-		xdp_prepare_buff(&xdp, page_addr, skb_headroom(skb),
+		xdp_init_buff(&xdp, PAGE_SIZE, &port->xdp_rxq);
+		xdp_prepare_buff(&xdp, page_addr, AM65_CPSW_HEADROOM,
 				 pkt_len, false);
-
-		ret = am65_cpsw_run_xdp(common, port, &xdp, desc_idx,
-					cpu, &pkt_len);
-		if (ret != AM65_CPSW_XDP_PASS)
-			return ret;
+		*xdp_state = am65_cpsw_run_xdp(common, port, &xdp, desc_idx,
+					       cpu, &pkt_len);
+		if (*xdp_state != AM65_CPSW_XDP_PASS)
+			goto allocate;
 
 		/* Compute additional headroom to be reserved */
 		headroom = (xdp.data - xdp.data_hard_start) - skb_headroom(skb);
···
 	stats->rx_bytes += pkt_len;
 	u64_stats_update_end(&stats->syncp);
 
+allocate:
 	new_page = page_pool_dev_alloc_pages(rx_chn->page_pool);
-	if (unlikely(!new_page))
+	if (unlikely(!new_page)) {
+		dev_err(dev, "page alloc failed\n");
 		return -ENOMEM;
+	}
+
 	rx_chn->pages[desc_idx] = new_page;
 
 	if (netif_dormant(ndev)) {
···
 	struct am65_cpsw_common *common = am65_cpsw_napi_to_common(napi_rx);
 	int flow = AM65_CPSW_MAX_RX_FLOWS;
 	int cpu = smp_processor_id();
-	bool xdp_redirect = false;
+	int xdp_state_or = 0;
 	int cur_budget, ret;
+	int xdp_state;
 	int num_rx = 0;
 
 	/* process every flow */
···
 		cur_budget = budget - num_rx;
 
 		while (cur_budget--) {
-			ret = am65_cpsw_nuss_rx_packets(common, flow, cpu);
-			if (ret) {
-				if (ret == AM65_CPSW_XDP_REDIRECT)
-					xdp_redirect = true;
+			ret = am65_cpsw_nuss_rx_packets(common, flow, cpu,
+							&xdp_state);
+			xdp_state_or |= xdp_state;
+			if (ret)
 				break;
-			}
 			num_rx++;
 		}
 
···
 			break;
 	}
 
-	if (xdp_redirect)
+	if (xdp_state_or & AM65_CPSW_XDP_REDIRECT)
 		xdp_do_flush();
 
 	dev_dbg(common->dev, "%s num_rx:%d %d\n", __func__, num_rx, budget);
···
 static int am65_cpsw_ndo_xdp_xmit(struct net_device *ndev, int n,
 				  struct xdp_frame **frames, u32 flags)
 {
+	struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
 	struct am65_cpsw_tx_chn *tx_chn;
 	struct netdev_queue *netif_txq;
 	int cpu = smp_processor_id();
 	int i, nxmit = 0;
 
-	tx_chn = &am65_ndev_to_common(ndev)->tx_chns[cpu % AM65_CPSW_MAX_TX_QUEUES];
+	tx_chn = &common->tx_chns[cpu % common->tx_ch_num];
 	netif_txq = netdev_get_tx_queue(ndev, tx_chn->id);
 
 	__netif_tx_lock(netif_txq, cpu);
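The am65-cpsw hunk above turns the XDP verdicts into OR-able bit flags so one NAPI poll can remember that *any* packet was redirected and issue a single xdp_do_flush() at the end. A minimal userspace sketch of that accumulation pattern (the defines mirror the driver's new values; the helper is illustrative, not driver code):

```c
#include <assert.h>

/* Values mirror the driver's new defines: distinct bits, so verdicts
 * from many packets can be OR-ed together without losing information.
 */
#define AM65_CPSW_XDP_PASS	0
#define AM65_CPSW_XDP_REDIRECT	(1 << 0)	/* BIT(0) */
#define AM65_CPSW_XDP_CONSUMED	(1 << 1)	/* BIT(1) */

/* Accumulate per-packet verdicts; nonzero means a flush is needed.
 * With the old plain values (1 and 2) a later CONSUMED could not be
 * combined with an earlier REDIRECT in a single state variable.
 */
static int needs_xdp_flush(const int *verdicts, int n)
{
	int xdp_state_or = 0;
	int i;

	for (i = 0; i < n; i++)
		xdp_state_or |= verdicts[i];

	return xdp_state_or & AM65_CPSW_XDP_REDIRECT;
}
```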
+3
drivers/net/ethernet/xilinx/xilinx_axienet.h
···
  * @tx_bytes:	TX byte count for statistics
  * @tx_stat_sync: Synchronization object for TX stats
  * @dma_err_task: Work structure to process Axi DMA errors
+ * @stopping:	Set when @dma_err_task shouldn't do anything because we are
+ *		about to stop the device.
  * @tx_irq:	Axidma TX IRQ number
  * @rx_irq:	Axidma RX IRQ number
  * @eth_irq:	Ethernet core IRQ number
···
 	struct u64_stats_sync tx_stat_sync;
 
 	struct work_struct dma_err_task;
+	bool stopping;
 
 	int tx_irq;
 	int rx_irq;
+8
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
···
 	struct axienet_local *lp = netdev_priv(ndev);
 
 	/* Enable worker thread for Axi DMA error handling */
+	lp->stopping = false;
 	INIT_WORK(&lp->dma_err_task, axienet_dma_err_handler);
 
 	napi_enable(&lp->napi_rx);
···
 	dev_dbg(&ndev->dev, "axienet_close()\n");
 
 	if (!lp->use_dmaengine) {
+		WRITE_ONCE(lp->stopping, true);
+		flush_work(&lp->dma_err_task);
+
 		napi_disable(&lp->napi_tx);
 		napi_disable(&lp->napi_rx);
 	}
···
 	struct axienet_local *lp = container_of(work, struct axienet_local,
 						dma_err_task);
 	struct net_device *ndev = lp->ndev;
+
+	/* Don't bother if we are going to stop anyway */
+	if (READ_ONCE(lp->stopping))
+		return;
 
 	napi_disable(&lp->napi_tx);
 	napi_disable(&lp->napi_rx);
+5
drivers/net/mctp/Kconfig
···
 	  Say y here if you need to connect to MCTP endpoints over serial. To
 	  compile as a module, use m; the module will be called mctp-serial.
 
+config MCTP_SERIAL_TEST
+	bool "MCTP serial tests" if !KUNIT_ALL_TESTS
+	depends on MCTP_SERIAL=y && KUNIT=y
+	default KUNIT_ALL_TESTS
+
 config MCTP_TRANSPORT_I2C
 	tristate "MCTP SMBus/I2C transport"
 	# i2c-mux is optional, but we must build as a module if i2c-mux is a module
+111 -2
drivers/net/mctp/mctp-serial.c
···
  * will be those non-escaped bytes, and does not include the escaped
  * byte.
  */
-	for (i = 1; i + dev->txpos + 1 < dev->txlen; i++) {
-		if (needs_escape(dev->txbuf[dev->txpos + i + 1]))
+	for (i = 1; i + dev->txpos < dev->txlen; i++) {
+		if (needs_escape(dev->txbuf[dev->txpos + i]))
 			break;
 	}
···
 MODULE_LICENSE("GPL v2");
 MODULE_AUTHOR("Jeremy Kerr <jk@codeconstruct.com.au>");
 MODULE_DESCRIPTION("MCTP Serial transport");
+
+#if IS_ENABLED(CONFIG_MCTP_SERIAL_TEST)
+#include <kunit/test.h>
+
+#define MAX_CHUNKS 6
+struct test_chunk_tx {
+	u8 input_len;
+	u8 input[MCTP_SERIAL_MTU];
+	u8 chunks[MAX_CHUNKS];
+};
+
+static void test_next_chunk_len(struct kunit *test)
+{
+	struct mctp_serial devx;
+	struct mctp_serial *dev = &devx;
+	int next;
+
+	const struct test_chunk_tx *params = test->param_value;
+
+	memset(dev, 0x0, sizeof(*dev));
+	memcpy(dev->txbuf, params->input, params->input_len);
+	dev->txlen = params->input_len;
+
+	for (size_t i = 0; i < MAX_CHUNKS; i++) {
+		next = next_chunk_len(dev);
+		dev->txpos += next;
+		KUNIT_EXPECT_EQ(test, next, params->chunks[i]);
+
+		if (next == 0) {
+			KUNIT_EXPECT_EQ(test, dev->txpos, dev->txlen);
+			return;
+		}
+	}
+
+	KUNIT_FAIL_AND_ABORT(test, "Ran out of chunks");
+}
+
+static struct test_chunk_tx chunk_tx_tests[] = {
+	{
+		.input_len = 5,
+		.input = { 0x00, 0x11, 0x22, 0x7e, 0x80 },
+		.chunks = { 3, 1, 1, 0},
+	},
+	{
+		.input_len = 5,
+		.input = { 0x00, 0x11, 0x22, 0x7e, 0x7d },
+		.chunks = { 3, 1, 1, 0},
+	},
+	{
+		.input_len = 3,
+		.input = { 0x7e, 0x11, 0x22, },
+		.chunks = { 1, 2, 0},
+	},
+	{
+		.input_len = 3,
+		.input = { 0x7e, 0x7e, 0x7d, },
+		.chunks = { 1, 1, 1, 0},
+	},
+	{
+		.input_len = 4,
+		.input = { 0x7e, 0x7e, 0x00, 0x7d, },
+		.chunks = { 1, 1, 1, 1, 0},
+	},
+	{
+		.input_len = 6,
+		.input = { 0x7e, 0x7e, 0x00, 0x7d, 0x10, 0x10},
+		.chunks = { 1, 1, 1, 1, 2, 0},
+	},
+	{
+		.input_len = 1,
+		.input = { 0x7e },
+		.chunks = { 1, 0 },
+	},
+	{
+		.input_len = 1,
+		.input = { 0x80 },
+		.chunks = { 1, 0 },
+	},
+	{
+		.input_len = 3,
+		.input = { 0x80, 0x80, 0x00 },
+		.chunks = { 3, 0 },
+	},
+	{
+		.input_len = 7,
+		.input = { 0x01, 0x00, 0x08, 0xc8, 0x00, 0x80, 0x02 },
+		.chunks = { 7, 0 },
+	},
+	{
+		.input_len = 7,
+		.input = { 0x01, 0x00, 0x08, 0xc8, 0x7e, 0x80, 0x02 },
+		.chunks = { 4, 1, 2, 0 },
+	},
+};
+
+KUNIT_ARRAY_PARAM(chunk_tx, chunk_tx_tests, NULL);
+
+static struct kunit_case mctp_serial_test_cases[] = {
+	KUNIT_CASE_PARAM(test_next_chunk_len, chunk_tx_gen_params),
+};
+
+static struct kunit_suite mctp_serial_test_suite = {
+	.name = "mctp_serial",
+	.test_cases = mctp_serial_test_cases,
+};
+
+kunit_test_suite(mctp_serial_test_suite);
+
+#endif /* CONFIG_MCTP_SERIAL_TEST */
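The one-line fix above corrects an off-by-one in the TX chunking loop: the old bound (`i + dev->txpos + 1 < dev->txlen`) skipped the last byte of the buffer, so a trailing byte that needed escaping could be sent unescaped. A userspace sketch of the fixed chunking logic (not the kernel function itself; the pre-loop handling of the first byte is assumed from the surrounding code), checked against vectors taken from the new KUnit table:

```c
#include <stddef.h>
#include <stdint.h>

/* 0x7d (ESC) and 0x7e (FLAG) must be escaped on the serial line */
static int needs_escape(uint8_t c)
{
	return c == 0x7d || c == 0x7e;
}

/* Length of the next TX chunk in txbuf[txpos..txlen): either a single
 * byte that needs escaping, or the longest run of non-escaped bytes.
 * Returns 0 when the buffer is exhausted. The loop bound mirrors the
 * corrected kernel loop (i + txpos < txlen).
 */
static int next_chunk_len(const uint8_t *txbuf, size_t txpos, size_t txlen)
{
	size_t i;

	if (txpos == txlen)
		return 0;

	/* an escaped byte is always transmitted as its own chunk */
	if (needs_escape(txbuf[txpos]))
		return 1;

	for (i = 1; i + txpos < txlen; i++) {
		if (needs_escape(txbuf[txpos + i]))
			break;
	}

	return (int)i;
}
```

Walking `{0x00, 0x11, 0x22, 0x7e, 0x80}` with this helper yields chunks 3, 1, 1, 0 — matching the first entry of `chunk_tx_tests`; with the old `+ 1` bound the final 0x80 would have been folded into the wrong chunk.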
+2
drivers/net/phy/phy_device.c
···
 		err = of_phy_led(phydev, led);
 		if (err) {
 			of_node_put(led);
+			of_node_put(leds);
 			phy_leds_unregister(phydev);
 			return err;
 		}
 	}
 
+	of_node_put(leds);
 	return 0;
 }
 
+13 -4
drivers/net/usb/r8152.c
···
 	data = (u8 *)mac;
 	data += __le16_to_cpu(mac->fw_offset);
 
-	generic_ocp_write(tp, __le16_to_cpu(mac->fw_reg), 0xff, length, data,
-			  type);
+	if (generic_ocp_write(tp, __le16_to_cpu(mac->fw_reg), 0xff, length,
+			      data, type) < 0) {
+		dev_err(&tp->intf->dev, "Write %s fw fail\n",
+			type ? "PLA" : "USB");
+		return;
+	}
 
 	ocp_write_word(tp, type, __le16_to_cpu(mac->bp_ba_addr),
 		       __le16_to_cpu(mac->bp_ba_value));
 
-	generic_ocp_write(tp, __le16_to_cpu(mac->bp_start), BYTE_EN_DWORD,
-			  __le16_to_cpu(mac->bp_num) << 1, mac->bp, type);
+	if (generic_ocp_write(tp, __le16_to_cpu(mac->bp_start), BYTE_EN_DWORD,
+			      ALIGN(__le16_to_cpu(mac->bp_num) << 1, 4),
+			      mac->bp, type) < 0) {
+		dev_err(&tp->intf->dev, "Write %s bp fail\n",
+			type ? "PLA" : "USB");
+		return;
+	}
 
 	bp_en_addr = __le16_to_cpu(mac->bp_en_addr);
 	if (bp_en_addr)
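Besides checking the return value of `generic_ocp_write()`, the r8152 hunk rounds the breakpoint payload length up with `ALIGN(bp_num << 1, 4)`: `BYTE_EN_DWORD` transfers are dword-wide, so an odd number of 16-bit breakpoint words would otherwise produce a length the bulk write cannot honor. A userspace sketch of that rounding (the `ALIGN` macro below matches the kernel's power-of-two form; `bp_write_len` is a hypothetical helper for illustration):

```c
#include <stdint.h>

/* Kernel-style round-up to a power-of-two boundary */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((uint32_t)(a) - 1))

/* bp_num counts 16-bit breakpoint words, so the byte length is
 * bp_num << 1; dword-wide (BYTE_EN_DWORD) writes need that rounded
 * up to a multiple of 4 bytes.
 */
static uint32_t bp_write_len(uint16_t bp_num)
{
	return ALIGN((uint32_t)bp_num << 1, 4);
}
```

For example three breakpoint words give 6 bytes of payload but an 8-byte dword-aligned write; without the rounding the device saw a 6-byte length on a dword-granular transfer.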
+3 -8
drivers/net/usb/usbnet.c
···
 
 /*-------------------------------------------------------------------------*/
 
-// randomly generated ethernet address
-static u8 node_id [ETH_ALEN];
-
 /* use ethtool to change the level for any given device */
 static int msg_level = -1;
 module_param (msg_level, int, 0);
···
 
 	dev->net = net;
 	strscpy(net->name, "usb%d", sizeof(net->name));
-	eth_hw_addr_set(net, node_id);
 
 	/* rx and tx sides can use different message sizes;
 	 * bind() should set rx_urb_size in that case.
···
 		goto out4;
 	}
 
-	/* let userspace know we have a random address */
-	if (ether_addr_equal(net->dev_addr, node_id))
-		net->addr_assign_type = NET_ADDR_RANDOM;
+	/* this flags the device for user space */
+	if (!is_valid_ether_addr(net->dev_addr))
+		eth_hw_addr_random(net);
 
 	if ((dev->driver_info->flags & FLAG_WLAN) != 0)
 		SET_NETDEV_DEVTYPE(net, &wlan_type);
···
 	BUILD_BUG_ON(
 		sizeof_field(struct sk_buff, cb) < sizeof(struct skb_data));
 
-	eth_random_addr(node_id);
 	return 0;
 }
 module_init(usbnet_init);
+2 -2
drivers/net/wireless/ath/ath11k/ahb.c
···
 	return ret;
 }
 
-static void ath11k_ahb_power_down(struct ath11k_base *ab, bool is_suspend)
+static void ath11k_ahb_power_down(struct ath11k_base *ab)
 {
 	struct ath11k_ahb *ab_ahb = ath11k_ahb_priv(ab);
 
···
 	struct ath11k_base *ab = platform_get_drvdata(pdev);
 
 	if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) {
-		ath11k_ahb_power_down(ab, false);
+		ath11k_ahb_power_down(ab);
 		ath11k_debugfs_soc_destroy(ab);
 		ath11k_qmi_deinit_service(ab);
 		goto qmi_fail;
+33 -86
drivers/net/wireless/ath/ath11k/core.c
···
 		return ret;
 	}
 
+	ret = ath11k_wow_enable(ab);
+	if (ret) {
+		ath11k_warn(ab, "failed to enable wow during suspend: %d\n", ret);
+		return ret;
+	}
+
 	ret = ath11k_dp_rx_pktlog_stop(ab, false);
 	if (ret) {
 		ath11k_warn(ab, "failed to stop dp rx pktlog during suspend: %d\n",
···
 	ath11k_ce_stop_shadow_timers(ab);
 	ath11k_dp_stop_shadow_timers(ab);
 
-	/* PM framework skips suspend_late/resume_early callbacks
-	 * if other devices report errors in their suspend callbacks.
-	 * However ath11k_core_resume() would still be called because
-	 * here we return success thus kernel put us on dpm_suspended_list.
-	 * Since we won't go through a power down/up cycle, there is
-	 * no chance to call complete(&ab->restart_completed) in
-	 * ath11k_core_restart(), making ath11k_core_resume() timeout.
-	 * So call it here to avoid this issue. This also works in case
-	 * no error happens thus suspend_late/resume_early get called,
-	 * because it will be reinitialized in ath11k_core_resume_early().
-	 */
-	complete(&ab->restart_completed);
+	ath11k_hif_irq_disable(ab);
+	ath11k_hif_ce_irq_disable(ab);
+
+	ret = ath11k_hif_suspend(ab);
+	if (ret) {
+		ath11k_warn(ab, "failed to suspend hif: %d\n", ret);
+		return ret;
+	}
 
 	return 0;
 }
 EXPORT_SYMBOL(ath11k_core_suspend);
-
-int ath11k_core_suspend_late(struct ath11k_base *ab)
-{
-	struct ath11k_pdev *pdev;
-	struct ath11k *ar;
-
-	if (!ab->hw_params.supports_suspend)
-		return -EOPNOTSUPP;
-
-	/* so far single_pdev_only chips have supports_suspend as true
-	 * and only the first pdev is valid.
-	 */
-	pdev = ath11k_core_get_single_pdev(ab);
-	ar = pdev->ar;
-	if (!ar || ar->state != ATH11K_STATE_OFF)
-		return 0;
-
-	ath11k_hif_irq_disable(ab);
-	ath11k_hif_ce_irq_disable(ab);
-
-	ath11k_hif_power_down(ab, true);
-
-	return 0;
-}
-EXPORT_SYMBOL(ath11k_core_suspend_late);
-
-int ath11k_core_resume_early(struct ath11k_base *ab)
-{
-	int ret;
-	struct ath11k_pdev *pdev;
-	struct ath11k *ar;
-
-	if (!ab->hw_params.supports_suspend)
-		return -EOPNOTSUPP;
-
-	/* so far single_pdev_only chips have supports_suspend as true
-	 * and only the first pdev is valid.
-	 */
-	pdev = ath11k_core_get_single_pdev(ab);
-	ar = pdev->ar;
-	if (!ar || ar->state != ATH11K_STATE_OFF)
-		return 0;
-
-	reinit_completion(&ab->restart_completed);
-	ret = ath11k_hif_power_up(ab);
-	if (ret)
-		ath11k_warn(ab, "failed to power up hif during resume: %d\n", ret);
-
-	return ret;
-}
-EXPORT_SYMBOL(ath11k_core_resume_early);
 
 int ath11k_core_resume(struct ath11k_base *ab)
 {
 	int ret;
 	struct ath11k_pdev *pdev;
 	struct ath11k *ar;
-	long time_left;
 
 	if (!ab->hw_params.supports_suspend)
 		return -EOPNOTSUPP;
 
-	/* so far single_pdev_only chips have supports_suspend as true
+	/* so far signle_pdev_only chips have supports_suspend as true
 	 * and only the first pdev is valid.
 	 */
 	pdev = ath11k_core_get_single_pdev(ab);
···
 	if (!ar || ar->state != ATH11K_STATE_OFF)
 		return 0;
 
-	time_left = wait_for_completion_timeout(&ab->restart_completed,
-						ATH11K_RESET_TIMEOUT_HZ);
-	if (time_left == 0) {
-		ath11k_warn(ab, "timeout while waiting for restart complete");
-		return -ETIMEDOUT;
+	ret = ath11k_hif_resume(ab);
+	if (ret) {
+		ath11k_warn(ab, "failed to resume hif during resume: %d\n", ret);
+		return ret;
 	}
 
-	if (ab->hw_params.current_cc_support &&
-	    ar->alpha2[0] != 0 && ar->alpha2[1] != 0) {
-		ret = ath11k_reg_set_cc(ar);
-		if (ret) {
-			ath11k_warn(ab, "failed to set country code during resume: %d\n",
-				    ret);
-			return ret;
-		}
-	}
+	ath11k_hif_ce_irq_enable(ab);
+	ath11k_hif_irq_enable(ab);
 
 	ret = ath11k_dp_rx_pktlog_start(ab);
-	if (ret)
+	if (ret) {
 		ath11k_warn(ab, "failed to start rx pktlog during resume: %d\n",
 			    ret);
+		return ret;
+	}
 
-	return ret;
+	ret = ath11k_wow_wakeup(ab);
+	if (ret) {
+		ath11k_warn(ab, "failed to wakeup wow during resume: %d\n", ret);
+		return ret;
+	}
+
+	return 0;
 }
 EXPORT_SYMBOL(ath11k_core_resume);
···
 
 	if (!ab->is_reset)
 		ath11k_core_post_reconfigure_recovery(ab);
-
-	complete(&ab->restart_completed);
 }
 
 static void ath11k_core_reset(struct work_struct *work)
···
 	ath11k_hif_irq_disable(ab);
 	ath11k_hif_ce_irq_disable(ab);
 
-	ath11k_hif_power_down(ab, false);
+	ath11k_hif_power_down(ab);
 	ath11k_hif_power_up(ab);
 
 	ath11k_dbg(ab, ATH11K_DBG_BOOT, "reset started\n");
···
 
 	mutex_unlock(&ab->core_lock);
 
-	ath11k_hif_power_down(ab, false);
+	ath11k_hif_power_down(ab);
 	ath11k_mac_destroy(ab);
 	ath11k_core_soc_destroy(ab);
 	ath11k_fw_destroy(ab);
···
 	timer_setup(&ab->rx_replenish_retry, ath11k_ce_rx_replenish_retry, 0);
 	init_completion(&ab->htc_suspend);
 	init_completion(&ab->wow.wakeup_completed);
-	init_completion(&ab->restart_completed);
 
 	ab->dev = dev;
 	ab->hif.bus = bus;
drivers/net/wireless/ath/ath11k/core.h | -4
···
 	DECLARE_BITMAP(fw_features, ATH11K_FW_FEATURE_COUNT);
 } fw;
 
-	struct completion restart_completed;
-
 #ifdef CONFIG_NL80211_TESTMODE
 	struct {
 		u32 data_pos;
···
 int ath11k_core_check_dt(struct ath11k_base *ath11k);
 int ath11k_core_check_smbios(struct ath11k_base *ab);
 void ath11k_core_halt(struct ath11k *ar);
-int ath11k_core_resume_early(struct ath11k_base *ab);
 int ath11k_core_resume(struct ath11k_base *ab);
 int ath11k_core_suspend(struct ath11k_base *ab);
-int ath11k_core_suspend_late(struct ath11k_base *ab);
 void ath11k_core_pre_reconfigure_recovery(struct ath11k_base *ab);
 bool ath11k_core_coldboot_cal_support(struct ath11k_base *ab);
drivers/net/wireless/ath/ath11k/hif.h | +3 -9
···
 	int (*start)(struct ath11k_base *ab);
 	void (*stop)(struct ath11k_base *ab);
 	int (*power_up)(struct ath11k_base *ab);
-	void (*power_down)(struct ath11k_base *ab, bool is_suspend);
+	void (*power_down)(struct ath11k_base *ab);
 	int (*suspend)(struct ath11k_base *ab);
 	int (*resume)(struct ath11k_base *ab);
 	int (*map_service_to_pipe)(struct ath11k_base *ab, u16 service_id,
···
 
 static inline int ath11k_hif_power_up(struct ath11k_base *ab)
 {
-	if (!ab->hif.ops->power_up)
-		return -EOPNOTSUPP;
-
 	return ab->hif.ops->power_up(ab);
 }
 
-static inline void ath11k_hif_power_down(struct ath11k_base *ab, bool is_suspend)
+static inline void ath11k_hif_power_down(struct ath11k_base *ab)
 {
-	if (!ab->hif.ops->power_down)
-		return;
-
-	ab->hif.ops->power_down(ab, is_suspend);
+	ab->hif.ops->power_down(ab);
 }
 
 static inline int ath11k_hif_suspend(struct ath11k_base *ab)
drivers/net/wireless/ath/ath11k/mac.c | +1
···
 	}
 
 	if (psd) {
+		arvif->reg_tpc_info.is_psd_power = true;
 		arvif->reg_tpc_info.num_pwr_levels = psd->count;
 
 		for (i = 0; i < arvif->reg_tpc_info.num_pwr_levels; i++) {
drivers/net/wireless/ath/ath11k/mhi.c | +2 -10
···
 	return 0;
 }
 
-void ath11k_mhi_stop(struct ath11k_pci *ab_pci, bool is_suspend)
+void ath11k_mhi_stop(struct ath11k_pci *ab_pci)
 {
-	/* During suspend we need to use mhi_power_down_keep_dev()
-	 * workaround, otherwise ath11k_core_resume() will timeout
-	 * during resume.
-	 */
-	if (is_suspend)
-		mhi_power_down_keep_dev(ab_pci->mhi_ctrl, true);
-	else
-		mhi_power_down(ab_pci->mhi_ctrl, true);
-
+	mhi_power_down(ab_pci->mhi_ctrl, true);
 	mhi_unprepare_after_power_down(ab_pci->mhi_ctrl);
 }
drivers/net/wireless/ath/ath11k/mhi.h | +2 -1
···
 #define MHICTRL_RESET_MASK	0x2
 
 int ath11k_mhi_start(struct ath11k_pci *ar_pci);
-void ath11k_mhi_stop(struct ath11k_pci *ar_pci, bool is_suspend);
+void ath11k_mhi_stop(struct ath11k_pci *ar_pci);
 int ath11k_mhi_register(struct ath11k_pci *ar_pci);
 void ath11k_mhi_unregister(struct ath11k_pci *ar_pci);
 void ath11k_mhi_set_mhictrl_reset(struct ath11k_base *ab);
···
 
 int ath11k_mhi_suspend(struct ath11k_pci *ar_pci);
 int ath11k_mhi_resume(struct ath11k_pci *ar_pci);
+
 #endif
drivers/net/wireless/ath/ath11k/pci.c | +7 -37
···
 	return 0;
 }
 
-static void ath11k_pci_power_down(struct ath11k_base *ab, bool is_suspend)
+static void ath11k_pci_power_down(struct ath11k_base *ab)
 {
 	struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
 
···
 
 	ath11k_pci_msi_disable(ab_pci);
 
-	ath11k_mhi_stop(ab_pci, is_suspend);
+	ath11k_mhi_stop(ab_pci);
 	clear_bit(ATH11K_FLAG_DEVICE_INIT_DONE, &ab->dev_flags);
 	ath11k_pci_sw_reset(ab_pci->ab, false);
 }
···
 	ath11k_pci_set_irq_affinity_hint(ab_pci, NULL);
 
 	if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) {
-		ath11k_pci_power_down(ab, false);
+		ath11k_pci_power_down(ab);
 		ath11k_debugfs_soc_destroy(ab);
 		ath11k_qmi_deinit_service(ab);
 		goto qmi_fail;
···
 	struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
 
 	ath11k_pci_set_irq_affinity_hint(ab_pci, NULL);
-	ath11k_pci_power_down(ab, false);
+	ath11k_pci_power_down(ab);
 }
 
 static __maybe_unused int ath11k_pci_pm_suspend(struct device *dev)
···
 	return ret;
 }
 
-static __maybe_unused int ath11k_pci_pm_suspend_late(struct device *dev)
-{
-	struct ath11k_base *ab = dev_get_drvdata(dev);
-	int ret;
-
-	ret = ath11k_core_suspend_late(ab);
-	if (ret)
-		ath11k_warn(ab, "failed to late suspend core: %d\n", ret);
-
-	/* Similar to ath11k_pci_pm_suspend(), we return success here
-	 * even error happens, to allow system suspend/hibernation survive.
-	 */
-	return 0;
-}
-
-static __maybe_unused int ath11k_pci_pm_resume_early(struct device *dev)
-{
-	struct ath11k_base *ab = dev_get_drvdata(dev);
-	int ret;
-
-	ret = ath11k_core_resume_early(ab);
-	if (ret)
-		ath11k_warn(ab, "failed to early resume core: %d\n", ret);
-
-	return ret;
-}
-
-static const struct dev_pm_ops __maybe_unused ath11k_pci_pm_ops = {
-	SET_SYSTEM_SLEEP_PM_OPS(ath11k_pci_pm_suspend,
-				ath11k_pci_pm_resume)
-	SET_LATE_SYSTEM_SLEEP_PM_OPS(ath11k_pci_pm_suspend_late,
-				     ath11k_pci_pm_resume_early)
-};
+static SIMPLE_DEV_PM_OPS(ath11k_pci_pm_ops,
+			 ath11k_pci_pm_suspend,
+			 ath11k_pci_pm_resume);
 
 static struct pci_driver ath11k_pci_driver = {
 	.name = "ath11k_pci",
drivers/net/wireless/ath/ath11k/qmi.c | +1 -1
···
 	}
 
 	/* reset the firmware */
-	ath11k_hif_power_down(ab, false);
+	ath11k_hif_power_down(ab);
 	ath11k_hif_power_up(ab);
 	ath11k_dbg(ab, ATH11K_DBG_QMI, "exit wait for cold boot done\n");
 	return 0;
drivers/ptp/ptp_ocp.c | +101 -67
···
 #define OCP_SERIAL_LEN		6
 #define OCP_SMA_NUM		4
 
+enum {
+	PORT_GNSS,
+	PORT_GNSS2,
+	PORT_MAC, /* miniature atomic clock */
+	PORT_NMEA,
+
+	__PORT_COUNT,
+};
+
 struct ptp_ocp {
 	struct pci_dev *pdev;
 	struct device dev;
···
 	struct delayed_work sync_work;
 	int id;
 	int n_irqs;
-	struct ptp_ocp_serial_port gnss_port;
-	struct ptp_ocp_serial_port gnss2_port;
-	struct ptp_ocp_serial_port mac_port; /* miniature atomic clock */
-	struct ptp_ocp_serial_port nmea_port;
+	struct ptp_ocp_serial_port port[__PORT_COUNT];
 	bool fw_loader;
 	u8 fw_tag;
 	u16 fw_version;
···
 		},
 	},
 	{
-		OCP_SERIAL_RESOURCE(gnss_port),
+		OCP_SERIAL_RESOURCE(port[PORT_GNSS]),
 		.offset = 0x00160000 + 0x1000, .irq_vec = 3,
 		.extra = &(struct ptp_ocp_serial_port) {
 			.baud = 115200,
 		},
 	},
 	{
-		OCP_SERIAL_RESOURCE(gnss2_port),
+		OCP_SERIAL_RESOURCE(port[PORT_GNSS2]),
 		.offset = 0x00170000 + 0x1000, .irq_vec = 4,
 		.extra = &(struct ptp_ocp_serial_port) {
 			.baud = 115200,
 		},
 	},
 	{
-		OCP_SERIAL_RESOURCE(mac_port),
+		OCP_SERIAL_RESOURCE(port[PORT_MAC]),
 		.offset = 0x00180000 + 0x1000, .irq_vec = 5,
 		.extra = &(struct ptp_ocp_serial_port) {
 			.baud = 57600,
 		},
 	},
 	{
-		OCP_SERIAL_RESOURCE(nmea_port),
+		OCP_SERIAL_RESOURCE(port[PORT_NMEA]),
 		.offset = 0x00190000 + 0x1000, .irq_vec = 10,
 	},
 	{
···
 		.offset = 0x01000000, .size = 0x10000,
 	},
 	{
-		OCP_SERIAL_RESOURCE(gnss_port),
+		OCP_SERIAL_RESOURCE(port[PORT_GNSS]),
 		.offset = 0x00160000 + 0x1000, .irq_vec = 3,
 		.extra = &(struct ptp_ocp_serial_port) {
 			.baud = 115200,
···
 		},
 	},
 	{
-		OCP_SERIAL_RESOURCE(mac_port),
+		OCP_SERIAL_RESOURCE(port[PORT_MAC]),
 		.offset = 0x00190000, .irq_vec = 7,
 		.extra = &(struct ptp_ocp_serial_port) {
 			.baud = 9600,
···
 		.offset = 0x00220000, .size = 0x1000,
 	},
 	{
-		OCP_SERIAL_RESOURCE(gnss_port),
+		OCP_SERIAL_RESOURCE(port[PORT_GNSS]),
 		.offset = 0x00160000 + 0x1000, .irq_vec = 3,
 		.extra = &(struct ptp_ocp_serial_port) {
 			.baud = 9600,
 		},
 	},
 	{
-		OCP_SERIAL_RESOURCE(mac_port),
+		OCP_SERIAL_RESOURCE(port[PORT_MAC]),
 		.offset = 0x00180000 + 0x1000, .irq_vec = 5,
 		.extra = &(struct ptp_ocp_serial_port) {
 			.baud = 115200,
···
 	if (idx >= ARRAY_SIZE(gnss_name))
 		idx = ARRAY_SIZE(gnss_name) - 1;
 	return gnss_name[idx];
+}
+
+static const char *
+ptp_ocp_tty_port_name(int idx)
+{
+	static const char * const tty_name[] = {
+		"GNSS", "GNSS2", "MAC", "NMEA"
+	};
+	return tty_name[idx];
 }
 
 struct ptp_ocp_nvmem_match_info {
···
 static EXT_ATTR_RO(freq, frequency, 3);
 
 static ssize_t
+ptp_ocp_tty_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	struct dev_ext_attribute *ea = to_ext_attr(attr);
+	struct ptp_ocp *bp = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "ttyS%d", bp->port[(uintptr_t)ea->var].line);
+}
+
+static umode_t
+ptp_ocp_timecard_tty_is_visible(struct kobject *kobj, struct attribute *attr, int n)
+{
+	struct ptp_ocp *bp = dev_get_drvdata(kobj_to_dev(kobj));
+	struct ptp_ocp_serial_port *port;
+	struct device_attribute *dattr;
+	struct dev_ext_attribute *ea;
+
+	if (strncmp(attr->name, "tty", 3))
+		return attr->mode;
+
+	dattr = container_of(attr, struct device_attribute, attr);
+	ea = container_of(dattr, struct dev_ext_attribute, attr);
+	port = &bp->port[(uintptr_t)ea->var];
+	return port->line == -1 ? 0 : 0444;
+}
+
+#define EXT_TTY_ATTR_RO(_name, _val)			\
+	struct dev_ext_attribute dev_attr_tty##_name =	\
+		{ __ATTR(tty##_name, 0444, ptp_ocp_tty_show, NULL), (void *)_val }
+
+static EXT_TTY_ATTR_RO(GNSS, PORT_GNSS);
+static EXT_TTY_ATTR_RO(GNSS2, PORT_GNSS2);
+static EXT_TTY_ATTR_RO(MAC, PORT_MAC);
+static EXT_TTY_ATTR_RO(NMEA, PORT_NMEA);
+static struct attribute *ptp_ocp_timecard_tty_attrs[] = {
+	&dev_attr_ttyGNSS.attr.attr,
+	&dev_attr_ttyGNSS2.attr.attr,
+	&dev_attr_ttyMAC.attr.attr,
+	&dev_attr_ttyNMEA.attr.attr,
+	NULL,
+};
+
+static const struct attribute_group ptp_ocp_timecard_tty_group = {
+	.name = "tty",
+	.attrs = ptp_ocp_timecard_tty_attrs,
+	.is_visible = ptp_ocp_timecard_tty_is_visible,
+};
+
+static ssize_t
 serialnum_show(struct device *dev, struct device_attribute *attr, char *buf)
 {
 	struct ptp_ocp *bp = dev_get_drvdata(dev);
···
 
 static const struct ocp_attr_group fb_timecard_groups[] = {
 	{ .cap = OCP_CAP_BASIC,	    .group = &fb_timecard_group },
+	{ .cap = OCP_CAP_BASIC,	    .group = &ptp_ocp_timecard_tty_group },
 	{ .cap = OCP_CAP_SIGNAL,    .group = &fb_timecard_signal0_group },
 	{ .cap = OCP_CAP_SIGNAL,    .group = &fb_timecard_signal1_group },
 	{ .cap = OCP_CAP_SIGNAL,    .group = &fb_timecard_signal2_group },
···
 
 static const struct ocp_attr_group art_timecard_groups[] = {
 	{ .cap = OCP_CAP_BASIC,	    .group = &art_timecard_group },
+	{ .cap = OCP_CAP_BASIC,	    .group = &ptp_ocp_timecard_tty_group },
 	{ },
 };
···
 
 static const struct ocp_attr_group adva_timecard_groups[] = {
 	{ .cap = OCP_CAP_BASIC,	    .group = &adva_timecard_group },
+	{ .cap = OCP_CAP_BASIC,	    .group = &ptp_ocp_timecard_tty_group },
 	{ .cap = OCP_CAP_SIGNAL,    .group = &fb_timecard_signal0_group },
 	{ .cap = OCP_CAP_SIGNAL,    .group = &fb_timecard_signal1_group },
 	{ .cap = OCP_CAP_FREQ,	    .group = &fb_timecard_freq0_group },
···
 	bp = dev_get_drvdata(dev);
 
 	seq_printf(s, "%7s: /dev/ptp%d\n", "PTP", ptp_clock_index(bp->ptp));
-	if (bp->gnss_port.line != -1)
-		seq_printf(s, "%7s: /dev/ttyS%d\n", "GNSS1",
-			   bp->gnss_port.line);
-	if (bp->gnss2_port.line != -1)
-		seq_printf(s, "%7s: /dev/ttyS%d\n", "GNSS2",
-			   bp->gnss2_port.line);
-	if (bp->mac_port.line != -1)
-		seq_printf(s, "%7s: /dev/ttyS%d\n", "MAC", bp->mac_port.line);
-	if (bp->nmea_port.line != -1)
-		seq_printf(s, "%7s: /dev/ttyS%d\n", "NMEA", bp->nmea_port.line);
+	for (i = 0; i < __PORT_COUNT; i++) {
+		if (bp->port[i].line != -1)
+			seq_printf(s, "%7s: /dev/ttyS%d\n", ptp_ocp_tty_port_name(i),
+				   bp->port[i].line);
+	}
 
 	memset(sma_val, 0xff, sizeof(sma_val));
 	if (bp->sma_map1) {
···
 static int
 ptp_ocp_device_init(struct ptp_ocp *bp, struct pci_dev *pdev)
 {
-	int err;
+	int i, err;
 
 	mutex_lock(&ptp_ocp_lock);
 	err = idr_alloc(&ptp_ocp_idr, bp, 0, 0, GFP_KERNEL);
···
 
 	bp->ptp_info = ptp_ocp_clock_info;
 	spin_lock_init(&bp->lock);
-	bp->gnss_port.line = -1;
-	bp->gnss2_port.line = -1;
-	bp->mac_port.line = -1;
-	bp->nmea_port.line = -1;
+
+	for (i = 0; i < __PORT_COUNT; i++)
+		bp->port[i].line = -1;
+
 	bp->pdev = pdev;
 
 	device_initialize(&bp->dev);
···
 	struct pps_device *pps;
 	char buf[32];
 
-	if (bp->gnss_port.line != -1) {
-		sprintf(buf, "ttyS%d", bp->gnss_port.line);
-		ptp_ocp_link_child(bp, buf, "ttyGNSS");
-	}
-	if (bp->gnss2_port.line != -1) {
-		sprintf(buf, "ttyS%d", bp->gnss2_port.line);
-		ptp_ocp_link_child(bp, buf, "ttyGNSS2");
-	}
-	if (bp->mac_port.line != -1) {
-		sprintf(buf, "ttyS%d", bp->mac_port.line);
-		ptp_ocp_link_child(bp, buf, "ttyMAC");
-	}
-	if (bp->nmea_port.line != -1) {
-		sprintf(buf, "ttyS%d", bp->nmea_port.line);
-		ptp_ocp_link_child(bp, buf, "ttyNMEA");
-	}
 	sprintf(buf, "ptp%d", ptp_clock_index(bp->ptp));
 	ptp_ocp_link_child(bp, buf, "ptp");
 
···
 	};
 	struct device *dev = &bp->pdev->dev;
 	u32 reg;
+	int i;
 
 	ptp_ocp_phc_info(bp);
 
-	ptp_ocp_serial_info(dev, "GNSS", bp->gnss_port.line,
-			    bp->gnss_port.baud);
-	ptp_ocp_serial_info(dev, "GNSS2", bp->gnss2_port.line,
-			    bp->gnss2_port.baud);
-	ptp_ocp_serial_info(dev, "MAC", bp->mac_port.line, bp->mac_port.baud);
-	if (bp->nmea_out && bp->nmea_port.line != -1) {
-		bp->nmea_port.baud = -1;
+	for (i = 0; i < __PORT_COUNT; i++) {
+		if (i == PORT_NMEA && bp->nmea_out && bp->port[PORT_NMEA].line != -1) {
+			bp->port[PORT_NMEA].baud = -1;
 
-		reg = ioread32(&bp->nmea_out->uart_baud);
-		if (reg < ARRAY_SIZE(nmea_baud))
-			bp->nmea_port.baud = nmea_baud[reg];
-
-		ptp_ocp_serial_info(dev, "NMEA", bp->nmea_port.line,
-				    bp->nmea_port.baud);
+			reg = ioread32(&bp->nmea_out->uart_baud);
+			if (reg < ARRAY_SIZE(nmea_baud))
+				bp->port[PORT_NMEA].baud = nmea_baud[reg];
+		}
+		ptp_ocp_serial_info(dev, ptp_ocp_tty_port_name(i), bp->port[i].line,
+				    bp->port[i].baud);
 	}
 }
···
 {
 	struct device *dev = &bp->dev;
 
-	sysfs_remove_link(&dev->kobj, "ttyGNSS");
-	sysfs_remove_link(&dev->kobj, "ttyGNSS2");
-	sysfs_remove_link(&dev->kobj, "ttyMAC");
 	sysfs_remove_link(&dev->kobj, "ptp");
 	sysfs_remove_link(&dev->kobj, "pps");
 }
···
 	for (i = 0; i < 4; i++)
 		if (bp->signal_out[i])
 			ptp_ocp_unregister_ext(bp->signal_out[i]);
-	if (bp->gnss_port.line != -1)
-		serial8250_unregister_port(bp->gnss_port.line);
-	if (bp->gnss2_port.line != -1)
-		serial8250_unregister_port(bp->gnss2_port.line);
-	if (bp->mac_port.line != -1)
-		serial8250_unregister_port(bp->mac_port.line);
-	if (bp->nmea_port.line != -1)
-		serial8250_unregister_port(bp->nmea_port.line);
+	for (i = 0; i < __PORT_COUNT; i++)
+		if (bp->port[i].line != -1)
+			serial8250_unregister_port(bp->port[i].line);
 	platform_device_unregister(bp->spi_flash);
 	platform_device_unregister(bp->i2c_ctrl);
 	if (bp->i2c_clk)
include/linux/bpf-cgroup.h | -9
···
 	__ret;								\
 })
 
-#define BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen)			\
-({									\
-	int __ret = 0;							\
-	if (cgroup_bpf_enabled(CGROUP_GETSOCKOPT))			\
-		copy_from_sockptr(&__ret, optlen, sizeof(int));		\
-	__ret;								\
-})
-
 #define BPF_CGROUP_RUN_PROG_GETSOCKOPT(sock, level, optname, optval, optlen, \
 				       max_optlen, retval)		\
 ({									\
···
 #define BPF_CGROUP_RUN_PROG_SOCK_OPS(sock_ops) ({ 0; })
 #define BPF_CGROUP_RUN_PROG_DEVICE_CGROUP(atype, major, minor, access) ({ 0; })
 #define BPF_CGROUP_RUN_PROG_SYSCTL(head,table,write,buf,count,pos) ({ 0; })
-#define BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen) ({ 0; })
 #define BPF_CGROUP_RUN_PROG_GETSOCKOPT(sock, level, optname, optval, \
 				       optlen, max_optlen, retval) ({ retval; })
 #define BPF_CGROUP_RUN_PROG_GETSOCKOPT_KERN(sock, level, optname, optval, \
include/net/bluetooth/hci_core.h | -5
···
 struct smp_csrk {
 	bdaddr_t bdaddr;
 	u8 bdaddr_type;
-	u8 link_type;
 	u8 type;
 	u8 val[16];
 };
···
 	struct rcu_head rcu;
 	bdaddr_t bdaddr;
 	u8 bdaddr_type;
-	u8 link_type;
 	u8 authenticated;
 	u8 type;
 	u8 enc_size;
···
 	bdaddr_t rpa;
 	bdaddr_t bdaddr;
 	u8 addr_type;
-	u8 link_type;
 	u8 val[16];
 };
···
 	struct list_head list;
 	struct rcu_head rcu;
 	bdaddr_t bdaddr;
-	u8 bdaddr_type;
-	u8 link_type;
 	u8 type;
 	u8 val[HCI_LINK_KEY_SIZE];
 	u8 pin_len;
include/net/bluetooth/hci_sync.h | +4
···
 		       void *data, hci_cmd_sync_work_destroy_t destroy);
 int hci_cmd_sync_queue_once(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
 			    void *data, hci_cmd_sync_work_destroy_t destroy);
+int hci_cmd_sync_run(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
+		     void *data, hci_cmd_sync_work_destroy_t destroy);
+int hci_cmd_sync_run_once(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
+			  void *data, hci_cmd_sync_work_destroy_t destroy);
 struct hci_cmd_sync_work_entry *
 hci_cmd_sync_lookup_entry(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
 			  void *data, hci_cmd_sync_work_destroy_t destroy);
include/net/mana/mana.h | +2
···
 
 	atomic_t pending_sends;
 
+	bool napi_initialized;
+
 	struct mana_stats_tx stats;
 };
net/bluetooth/hci_conn.c | +5 -1
···
 		return 0;
 	}
 
-	return hci_cmd_sync_queue_once(hdev, abort_conn_sync, conn, NULL);
+	/* Run immediately if on cmd_sync_work since this may be called
+	 * as a result to MGMT_OP_DISCONNECT/MGMT_OP_UNPAIR which does
+	 * already queue its callback on cmd_sync_work.
+	 */
+	return hci_cmd_sync_run_once(hdev, abort_conn_sync, conn, NULL);
 }
net/bluetooth/hci_sync.c | +40 -2
···
 	skb_queue_tail(&req->cmd_q, skb);
 }
 
-static int hci_cmd_sync_run(struct hci_request *req)
+static int hci_req_sync_run(struct hci_request *req)
 {
 	struct hci_dev *hdev = req->hdev;
 	struct sk_buff *skb;
···
 
 	hdev->req_status = HCI_REQ_PEND;
 
-	err = hci_cmd_sync_run(&req);
+	err = hci_req_sync_run(&req);
 	if (err < 0)
 		return ERR_PTR(err);
 
···
 	return hci_cmd_sync_queue(hdev, func, data, destroy);
 }
 EXPORT_SYMBOL(hci_cmd_sync_queue_once);
+
+/* Run HCI command:
+ *
+ * - hdev must be running
+ * - if on cmd_sync_work then run immediately otherwise queue
+ */
+int hci_cmd_sync_run(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
+		     void *data, hci_cmd_sync_work_destroy_t destroy)
+{
+	/* Only queue command if hdev is running which means it had been opened
+	 * and is either on init phase or is already up.
+	 */
+	if (!test_bit(HCI_RUNNING, &hdev->flags))
+		return -ENETDOWN;
+
+	/* If on cmd_sync_work then run immediately otherwise queue */
+	if (current_work() == &hdev->cmd_sync_work)
+		return func(hdev, data);
+
+	return hci_cmd_sync_submit(hdev, func, data, destroy);
+}
+EXPORT_SYMBOL(hci_cmd_sync_run);
+
+/* Run HCI command entry once:
+ *
+ * - Lookup if an entry already exist and only if it doesn't creates a new entry
+ *   and run it.
+ * - if on cmd_sync_work then run immediately otherwise queue
+ */
+int hci_cmd_sync_run_once(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
+			  void *data, hci_cmd_sync_work_destroy_t destroy)
+{
+	if (hci_cmd_sync_lookup_entry(hdev, func, data, destroy))
+		return 0;
+
+	return hci_cmd_sync_run(hdev, func, data, destroy);
+}
+EXPORT_SYMBOL(hci_cmd_sync_run_once);
 
 /* Lookup HCI command entry:
  *
net/bluetooth/mgmt.c | +67 -77
···
 	bt_dev_dbg(hdev, "debug_keys %u key_count %u", cp->debug_keys,
 		   key_count);
 
-	for (i = 0; i < key_count; i++) {
-		struct mgmt_link_key_info *key = &cp->keys[i];
-
-		/* Considering SMP over BREDR/LE, there is no need to check addr_type */
-		if (key->type > 0x08)
-			return mgmt_cmd_status(sk, hdev->id,
-					       MGMT_OP_LOAD_LINK_KEYS,
-					       MGMT_STATUS_INVALID_PARAMS);
-	}
-
 	hci_dev_lock(hdev);
 
 	hci_link_keys_clear(hdev);
···
 				      key->val)) {
 			bt_dev_warn(hdev, "Skipping blocked link key for %pMR",
 				    &key->addr.bdaddr);
+			continue;
+		}
+
+		if (key->addr.type != BDADDR_BREDR) {
+			bt_dev_warn(hdev,
+				    "Invalid link address type %u for %pMR",
+				    key->addr.type, &key->addr.bdaddr);
+			continue;
+		}
+
+		if (key->type > 0x08) {
+			bt_dev_warn(hdev, "Invalid link key type %u for %pMR",
+				    key->type, &key->addr.bdaddr);
 			continue;
 		}
 
···
 	if (!conn)
 		return 0;
 
-	return hci_abort_conn_sync(hdev, conn, HCI_ERROR_REMOTE_USER_TERM);
+	/* Disregard any possible error since the likes of hci_abort_conn_sync
+	 * will clean up the connection no matter the error.
+	 */
+	hci_abort_conn(conn, HCI_ERROR_REMOTE_USER_TERM);
+
+	return 0;
 }
 
 static int unpair_device(struct sock *sk, struct hci_dev *hdev, void *data,
···
 	return err;
 }
 
+static void disconnect_complete(struct hci_dev *hdev, void *data, int err)
+{
+	struct mgmt_pending_cmd *cmd = data;
+
+	cmd->cmd_complete(cmd, mgmt_status(err));
+	mgmt_pending_free(cmd);
+}
+
+static int disconnect_sync(struct hci_dev *hdev, void *data)
+{
+	struct mgmt_pending_cmd *cmd = data;
+	struct mgmt_cp_disconnect *cp = cmd->param;
+	struct hci_conn *conn;
+
+	if (cp->addr.type == BDADDR_BREDR)
+		conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK,
+					       &cp->addr.bdaddr);
+	else
+		conn = hci_conn_hash_lookup_le(hdev, &cp->addr.bdaddr,
+					       le_addr_type(cp->addr.type));
+
+	if (!conn)
+		return -ENOTCONN;
+
+	/* Disregard any possible error since the likes of hci_abort_conn_sync
+	 * will clean up the connection no matter the error.
+	 */
+	hci_abort_conn(conn, HCI_ERROR_REMOTE_USER_TERM);
+
+	return 0;
+}
+
 static int disconnect(struct sock *sk, struct hci_dev *hdev, void *data,
 		      u16 len)
 {
 	struct mgmt_cp_disconnect *cp = data;
 	struct mgmt_rp_disconnect rp;
 	struct mgmt_pending_cmd *cmd;
-	struct hci_conn *conn;
 	int err;
 
 	bt_dev_dbg(hdev, "sock %p", sk);
···
 		goto failed;
 	}
 
-	if (pending_find(MGMT_OP_DISCONNECT, hdev)) {
-		err = mgmt_cmd_complete(sk, hdev->id, MGMT_OP_DISCONNECT,
-					MGMT_STATUS_BUSY, &rp, sizeof(rp));
-		goto failed;
-	}
-
-	if (cp->addr.type == BDADDR_BREDR)
-		conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK,
-					       &cp->addr.bdaddr);
-	else
-		conn = hci_conn_hash_lookup_le(hdev, &cp->addr.bdaddr,
-					       le_addr_type(cp->addr.type));
-
-	if (!conn || conn->state == BT_OPEN || conn->state == BT_CLOSED) {
-		err = mgmt_cmd_complete(sk, hdev->id, MGMT_OP_DISCONNECT,
-					MGMT_STATUS_NOT_CONNECTED, &rp,
-					sizeof(rp));
-		goto failed;
-	}
-
-	cmd = mgmt_pending_add(sk, MGMT_OP_DISCONNECT, hdev, data, len);
+	cmd = mgmt_pending_new(sk, MGMT_OP_DISCONNECT, hdev, data, len);
 	if (!cmd) {
 		err = -ENOMEM;
 		goto failed;
···
 
 	cmd->cmd_complete = generic_cmd_complete;
 
-	err = hci_disconnect(conn, HCI_ERROR_REMOTE_USER_TERM);
+	err = hci_cmd_sync_queue(hdev, disconnect_sync, cmd,
+				 disconnect_complete);
 	if (err < 0)
-		mgmt_pending_remove(cmd);
+		mgmt_pending_free(cmd);
 
 failed:
 	hci_dev_unlock(hdev);
···
 
 	for (i = 0; i < irk_count; i++) {
 		struct mgmt_irk_info *irk = &cp->irks[i];
-		u8 addr_type = le_addr_type(irk->addr.type);
 
 		if (hci_is_blocked_key(hdev,
 				       HCI_BLOCKED_KEY_TYPE_IRK,
···
 			continue;
 		}
 
-		/* When using SMP over BR/EDR, the addr type should be set to BREDR */
-		if (irk->addr.type == BDADDR_BREDR)
-			addr_type = BDADDR_BREDR;
-
 		hci_add_irk(hdev, &irk->addr.bdaddr,
-			    addr_type, irk->val,
+			    le_addr_type(irk->addr.type), irk->val,
 			    BDADDR_ANY);
 	}
 
···
 
 	bt_dev_dbg(hdev, "key_count %u", key_count);
 
-	for (i = 0; i < key_count; i++) {
-		struct mgmt_ltk_info *key = &cp->keys[i];
-
-		if (!ltk_is_valid(key))
-			return mgmt_cmd_status(sk, hdev->id,
-					       MGMT_OP_LOAD_LONG_TERM_KEYS,
-					       MGMT_STATUS_INVALID_PARAMS);
-	}
-
 	hci_dev_lock(hdev);
 
 	hci_smp_ltks_clear(hdev);
···
 	for (i = 0; i < key_count; i++) {
 		struct mgmt_ltk_info *key = &cp->keys[i];
 		u8 type, authenticated;
-		u8 addr_type = le_addr_type(key->addr.type);
 
 		if (hci_is_blocked_key(hdev,
 				       HCI_BLOCKED_KEY_TYPE_LTK,
 				       key->val)) {
 			bt_dev_warn(hdev, "Skipping blocked LTK for %pMR",
+				    &key->addr.bdaddr);
+			continue;
+		}
+
+		if (!ltk_is_valid(key)) {
+			bt_dev_warn(hdev, "Invalid LTK for %pMR",
 				    &key->addr.bdaddr);
 			continue;
 		}
···
 			continue;
 		}
 
-		/* When using SMP over BR/EDR, the addr type should be set to BREDR */
-		if (key->addr.type == BDADDR_BREDR)
-			addr_type = BDADDR_BREDR;
-
 		hci_add_ltk(hdev, &key->addr.bdaddr,
-			    addr_type, type, authenticated,
+			    le_addr_type(key->addr.type), type, authenticated,
 			    key->val, key->enc_size, key->ediv, key->rand);
 	}
 
···
 
 	ev.store_hint = persistent;
 	bacpy(&ev.key.addr.bdaddr, &key->bdaddr);
-	ev.key.addr.type = link_to_bdaddr(key->link_type, key->bdaddr_type);
+	ev.key.addr.type = BDADDR_BREDR;
 	ev.key.type = key->type;
 	memcpy(ev.key.val, key->val, HCI_LINK_KEY_SIZE);
 	ev.key.pin_len = key->pin_len;
···
 	ev.store_hint = persistent;
 
 	bacpy(&ev.key.addr.bdaddr, &key->bdaddr);
-	ev.key.addr.type = link_to_bdaddr(key->link_type, key->bdaddr_type);
+	ev.key.addr.type = link_to_bdaddr(LE_LINK, key->bdaddr_type);
 	ev.key.type = mgmt_ltk_type(key);
 	ev.key.enc_size = key->enc_size;
 	ev.key.ediv = key->ediv;
···
 
 	bacpy(&ev.rpa, &irk->rpa);
 	bacpy(&ev.irk.addr.bdaddr, &irk->bdaddr);
-	ev.irk.addr.type = link_to_bdaddr(irk->link_type, irk->addr_type);
+	ev.irk.addr.type = link_to_bdaddr(LE_LINK, irk->addr_type);
 	memcpy(ev.irk.val, irk->val, sizeof(irk->val));
 
 	mgmt_event(MGMT_EV_NEW_IRK, hdev, &ev, sizeof(ev), NULL);
···
 	ev.store_hint = persistent;
 
 	bacpy(&ev.key.addr.bdaddr, &csrk->bdaddr);
-	ev.key.addr.type = link_to_bdaddr(csrk->link_type, csrk->bdaddr_type);
+	ev.key.addr.type = link_to_bdaddr(LE_LINK, csrk->bdaddr_type);
 	ev.key.type = csrk->type;
 	memcpy(ev.key.val, csrk->val, sizeof(csrk->val));
 
···
 	mgmt_event_skb(skb, NULL);
 }
 
-static void disconnect_rsp(struct mgmt_pending_cmd *cmd, void *data)
-{
-	struct sock **sk = data;
-
-	cmd->cmd_complete(cmd, 0);
-
-	*sk = cmd->sk;
-	sock_hold(*sk);
-
-	mgmt_pending_remove(cmd);
-}
-
 static void unpair_device_rsp(struct mgmt_pending_cmd *cmd, void *data)
 {
 	struct hci_dev *hdev = data;
···
 	if (link_type != ACL_LINK && link_type != LE_LINK)
 		return;
 
-	mgmt_pending_foreach(MGMT_OP_DISCONNECT, hdev, disconnect_rsp, &sk);
-
 	bacpy(&ev.addr.bdaddr, bdaddr);
 	ev.addr.type = link_to_bdaddr(link_type, addr_type);
 	ev.reason = reason;
···
 
 	if (sk)
 		sock_put(sk);
-
-	mgmt_pending_foreach(MGMT_OP_UNPAIR_DEVICE, hdev, unpair_device_rsp,
-			     hdev);
 }
 
 void mgmt_disconnect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr,
net/bluetooth/smp.c | -7
···
 	}
 
 	if (smp->remote_irk) {
-		smp->remote_irk->link_type = hcon->type;
 		mgmt_new_irk(hdev, smp->remote_irk, persistent);
 
 		/* Now that user space can be considered to know the
···
 	}
 
 	if (smp->csrk) {
-		smp->csrk->link_type = hcon->type;
 		smp->csrk->bdaddr_type = hcon->dst_type;
 		bacpy(&smp->csrk->bdaddr, &hcon->dst);
 		mgmt_new_csrk(hdev, smp->csrk, persistent);
 	}
 
 	if (smp->responder_csrk) {
-		smp->responder_csrk->link_type = hcon->type;
 		smp->responder_csrk->bdaddr_type = hcon->dst_type;
 		bacpy(&smp->responder_csrk->bdaddr, &hcon->dst);
 		mgmt_new_csrk(hdev, smp->responder_csrk, persistent);
 	}
 
 	if (smp->ltk) {
-		smp->ltk->link_type = hcon->type;
 		smp->ltk->bdaddr_type = hcon->dst_type;
 		bacpy(&smp->ltk->bdaddr, &hcon->dst);
 		mgmt_new_ltk(hdev, smp->ltk, persistent);
 	}
 
 	if (smp->responder_ltk) {
-		smp->responder_ltk->link_type = hcon->type;
 		smp->responder_ltk->bdaddr_type = hcon->dst_type;
 		bacpy(&smp->responder_ltk->bdaddr, &hcon->dst);
 		mgmt_new_ltk(hdev, smp->responder_ltk, persistent);
···
 	key = hci_add_link_key(hdev, smp->conn->hcon, &hcon->dst,
 			       smp->link_key, type, 0, &persistent);
 	if (key) {
-		key->link_type = hcon->type;
-		key->bdaddr_type = hcon->dst_type;
 		mgmt_new_link_key(hdev, key, persistent);
 
 		/* Don't keep debug keys around if the relevant
+2 -4
net/bridge/br_fdb.c
···
  	modified = true;
  }

- if (test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) {
+ if (test_and_set_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) {
  	/* Refresh entry */
  	fdb->used = jiffies;
- } else if (!test_bit(BR_FDB_ADDED_BY_USER, &fdb->flags)) {
- 	/* Take over SW learned entry */
- 	set_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags);
+ } else {
  	modified = true;
  }
+4
net/can/bcm.c
···
  /* remove device reference, if this is our bound device */
  if (bo->bound && bo->ifindex == dev->ifindex) {
+ #if IS_ENABLED(CONFIG_PROC_FS)
+ 	if (sock_net(sk)->can.bcmproc_dir && bo->bcm_proc_read)
+ 		remove_proc_entry(bo->procname, sock_net(sk)->can.bcmproc_dir);
+ #endif
  	bo->bound = 0;
  	bo->ifindex = 0;
  	notify_enodev = 1;
+1 -1
net/core/net-sysfs.c
···
  };
  #else
  /* Fake declaration, all the code using it should be dead */
- extern const struct attribute_group dql_group;
+ static const struct attribute_group dql_group = {};
  #endif /* CONFIG_BQL */

  #ifdef CONFIG_XPS
+24 -5
net/ipv4/fou_core.c
···
  static inline struct fou *fou_from_sock(struct sock *sk)
  {
- 	return sk->sk_user_data;
+ 	return rcu_dereference_sk_user_data(sk);
  }

  static int fou_recv_pull(struct sk_buff *skb, struct fou *fou, size_t len)
···
  		  struct sk_buff *skb)
  {
  	const struct net_offload __rcu **offloads;
- 	u8 proto = fou_from_sock(sk)->protocol;
+ 	struct fou *fou = fou_from_sock(sk);
  	const struct net_offload *ops;
  	struct sk_buff *pp = NULL;
+ 	u8 proto;
+
+ 	if (!fou)
+ 		goto out;
+
+ 	proto = fou->protocol;

  	/* We can clear the encap_mark for FOU as we are essentially doing
  	 * one of two possible things. We are either adding an L4 tunnel
···
  		int nhoff)
  {
  	const struct net_offload __rcu **offloads;
- 	u8 proto = fou_from_sock(sk)->protocol;
+ 	struct fou *fou = fou_from_sock(sk);
  	const struct net_offload *ops;
- 	int err = -ENOSYS;
+ 	u8 proto;
+ 	int err;
+
+ 	if (!fou) {
+ 		err = -ENOENT;
+ 		goto out;
+ 	}
+
+ 	proto = fou->protocol;

  	offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads;
  	ops = rcu_dereference(offloads[proto]);
- 	if (WARN_ON(!ops || !ops->callbacks.gro_complete))
+ 	if (WARN_ON(!ops || !ops->callbacks.gro_complete)) {
+ 		err = -ENOSYS;
  		goto out;
+ 	}

  	err = ops->callbacks.gro_complete(skb, nhoff);
···
  	struct fou *fou = fou_from_sock(sk);
  	struct gro_remcsum grc;
  	u8 proto;
+
+ 	if (!fou)
+ 		goto out;

  	skb_gro_remcsum_init(&grc);
+1 -1
net/ipv4/tcp_bpf.c
···
  err = sk_stream_error(sk, msg->msg_flags, err);
  release_sock(sk);
  sk_psock_put(sk, psock);
- return copied ? copied : err;
+ return copied > 0 ? copied : err;
  }

  enum {
+1
net/ipv6/ila/ila.h
···
  void ila_lwt_fini(void);

  int ila_xlat_init_net(struct net *net);
+ void ila_xlat_pre_exit_net(struct net *net);
  void ila_xlat_exit_net(struct net *net);

  int ila_xlat_nl_cmd_add_mapping(struct sk_buff *skb, struct genl_info *info);
+6
net/ipv6/ila/ila_main.c
···
  	return err;
  }

+ static __net_exit void ila_pre_exit_net(struct net *net)
+ {
+ 	ila_xlat_pre_exit_net(net);
+ }
+
  static __net_exit void ila_exit_net(struct net *net)
  {
  	ila_xlat_exit_net(net);
···
  static struct pernet_operations ila_net_ops = {
  	.init = ila_init_net,
+ 	.pre_exit = ila_pre_exit_net,
  	.exit = ila_exit_net,
  	.id = &ila_net_id,
  	.size = sizeof(struct ila_net),
+9 -4
net/ipv6/ila/ila_xlat.c
···
  	return 0;
  }

+ void ila_xlat_pre_exit_net(struct net *net)
+ {
+ 	struct ila_net *ilan = net_generic(net, ila_net_id);
+
+ 	if (ilan->xlat.hooks_registered)
+ 		nf_unregister_net_hooks(net, ila_nf_hook_ops,
+ 					ARRAY_SIZE(ila_nf_hook_ops));
+ }
+
  void ila_xlat_exit_net(struct net *net)
  {
  	struct ila_net *ilan = net_generic(net, ila_net_id);
···
  	rhashtable_free_and_destroy(&ilan->xlat.rhash_table, ila_free_cb, NULL);

  	free_bucket_spinlocks(ilan->xlat.locks);
-
- 	if (ilan->xlat.hooks_registered)
- 		nf_unregister_net_hooks(net, ila_nf_hook_ops,
- 					ARRAY_SIZE(ila_nf_hook_ops))
  }

  static int ila_xlat_addr(struct sk_buff *skb, bool sir2ila)
+7 -4
net/sched/sch_cake.c
···
  	 * queue, accept the collision, update the host tags.
  	 */
  	q->way_collisions++;
- 	if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
- 		q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
- 		q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
- 	}
  	allocate_src = cake_dsrc(flow_mode);
  	allocate_dst = cake_ddst(flow_mode);
+
+ 	if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
+ 		if (allocate_src)
+ 			q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
+ 		if (allocate_dst)
+ 			q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
+ 	}
  found:
  	/* reserve queue for future packets in same flow */
  	reduced_hash = outer_hash + k;
+4 -5
net/sched/sch_netem.c
···
  err = qdisc_enqueue(skb, q->qdisc, &to_free);
  kfree_skb_list(to_free);
- if (err != NET_XMIT_SUCCESS &&
-     net_xmit_drop_count(err)) {
- 	qdisc_qstats_drop(sch);
- 	qdisc_tree_reduce_backlog(sch, 1,
- 				  pkt_len);
+ if (err != NET_XMIT_SUCCESS) {
+ 	if (net_xmit_drop_count(err))
+ 		qdisc_qstats_drop(sch);
+ 	qdisc_tree_reduce_backlog(sch, 1, pkt_len);
  }
  goto tfifo_dequeue;
  }
+3
net/smc/smc.h
···
  struct smc_sock {	/* smc sock container */
  	struct sock sk;
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	struct ipv6_pinfo *pinet6;
+ #endif
  	struct socket *clcsock;	/* internal tcp socket */
  	void (*clcsk_state_change)(struct sock *sk);
  				/* original stat_change fct. */
+7 -1
net/smc/smc_inet.c
···
  };

  #if IS_ENABLED(CONFIG_IPV6)
+ struct smc6_sock {
+ 	struct smc_sock smc;
+ 	struct ipv6_pinfo inet6;
+ };
+
  static struct proto smc_inet6_prot = {
  	.name = "INET6_SMC",
  	.owner = THIS_MODULE,
···
  	.hash = smc_hash_sk,
  	.unhash = smc_unhash_sk,
  	.release_cb = smc_release_cb,
- 	.obj_size = sizeof(struct smc_sock),
+ 	.obj_size = sizeof(struct smc6_sock),
  	.h.smc_hash = &smc_v6_hashinfo,
  	.slab_flags = SLAB_TYPESAFE_BY_RCU,
+ 	.ipv6_pinfo_offset = offsetof(struct smc6_sock, inet6),
  };

  static const struct proto_ops smc_inet6_stream_ops = {
+2 -2
net/socket.c
···
  int do_sock_getsockopt(struct socket *sock, bool compat, int level,
  		       int optname, sockptr_t optval, sockptr_t optlen)
  {
- 	int max_optlen __maybe_unused;
+ 	int max_optlen __maybe_unused = 0;
  	const struct proto_ops *ops;
  	int err;
···
  	return err;

  if (!compat)
- 	max_optlen = BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen);
+ 	copy_from_sockptr(&max_optlen, optlen, sizeof(int));

  ops = READ_ONCE(sock->ops);
  if (level == SOL_SOCKET) {
+4 -3
tools/net/ynl/lib/ynl.py
···
  def decode(self, ynl, nl_msg, op):
      msg = self._decode(nl_msg)
+     if op is None:
+         op = ynl.rsp_by_value[msg.cmd()]
      fixed_header_size = ynl._struct_size(op.fixed_header)
      msg.raw_attrs = NlAttrs(msg.raw, fixed_header_size)
      return msg
···
          print("Netlink done while checking for ntf!?")
          continue

-     op = self.rsp_by_value[nl_msg.cmd()]
-     decoded = self.nlproto.decode(self, nl_msg, op)
+     decoded = self.nlproto.decode(self, nl_msg, None)
      if decoded.cmd() not in self.async_msg_ids:
          print("Unexpected msg id done while checking for ntf", decoded)
          continue
···
      if nl_msg.extack:
          self._decode_extack(req_msg, op, nl_msg.extack)
      else:
-         op = self.rsp_by_value[nl_msg.cmd()]
+         op = None
      req_flags = []

      if nl_msg.error:
+2 -1
tools/testing/selftests/net/Makefile
···
  TEST_PROGS += sctp_vrf.sh
  TEST_GEN_FILES += sctp_hello
  TEST_GEN_FILES += ip_local_port_range
- TEST_GEN_FILES += bind_wildcard
+ TEST_GEN_PROGS += bind_wildcard
+ TEST_GEN_PROGS += bind_timewait
  TEST_PROGS += test_vxlan_mdb.sh
  TEST_PROGS += test_bridge_neigh_suppress.sh
  TEST_PROGS += test_vxlan_nolocalbypass.sh