
Merge tag 'net-6.6-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from Bluetooth, netfilter, BPF and WiFi.

I didn't collect precise data, but it feels like we've got a lot of 6.5
fixes here. The WiFi fixes are the most user-awaited.

Current release - regressions:

- Bluetooth: fix hci_link_tx_to RCU lock usage

Current release - new code bugs:

- bpf: mprog: fix maximum program check on mprog attachment

- eth: ti: icssg-prueth: fix signedness bug in prueth_init_tx_chns()

Previous releases - regressions:

- ipv6: tcp: add a missing nf_reset_ct() in 3WHS handling

- vringh: don't use vringh_kiov_advance() in vringh_iov_xfer(); it
doesn't handle zero length the way we expected

- wifi:
- cfg80211: fix cqm_config access race, fix crashes with brcmfmac
- iwlwifi: mvm: handle PS changes in vif_cfg_changed
- mac80211: fix mesh id corruption on 32 bit systems
- mt76: mt76x02: fix MT76x0 external LNA gain handling

- Bluetooth: fix handling of HCI_QUIRK_STRICT_DUPLICATE_FILTER

- l2tp: fix handling of transhdrlen in __ip{,6}_append_data()

- dsa: mv88e6xxx: avoid EEPROM timeout when EEPROM is absent

- eth: stmmac: fix the incorrect parameter after refactoring

Previous releases - always broken:

- net: replace calls to sock->ops->connect() with kernel_connect() and
prevent address rewrite in kernel_bind(); otherwise BPF hooks could
modify the arguments without the caller expecting it

- tcp: fix delayed ACKs when reads and writes align with MSS

- bpf:
- verifier: unconditionally reset backtrack_state masks on global
func exit
- s390: let arch_prepare_bpf_trampoline return program size, fix
struct_ops offsets
- sockmap: fix accounting of available bytes in presence of PEEKs
- sockmap: reject sk_msg egress redirects to non-TCP sockets

- ipv4/fib: send a netlink notification when deleting source address routes

- ethtool: plca: fix width of reads when parsing netlink commands

- netfilter: nft_payload: rebuild vlan header on h_proto access

- Bluetooth: hci_codec: fix leaking memory of local_codecs

- eth: intel: ice: always add legacy 32byte RXDID in supported_rxdids

- eth: stmmac:
- dwmac-stm32: fix resume on STM32 MCU
- remove buggy and unneeded stmmac_poll_controller, depend on NAPI

- ibmveth: always recompute TCP pseudo-header checksum, fix use of
the driver with Open vSwitch

- wifi:
- rtw88: rtw8723d: fix MAC address offset in EEPROM
- mt76: fix lock dependency problem for wed_lock
- mwifiex: sanity check data reported by the device
- iwlwifi: ensure ack flag is properly cleared
- iwlwifi: mvm: fix a memory corruption due to bad pointer arithm
- iwlwifi: mvm: fix incorrect usage of scan API

Misc:

- wifi: mac80211: work around Cisco AP 9115 VHT MPDU length"

* tag 'net-6.6-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (99 commits)
MAINTAINERS: update Matthieu's email address
mptcp: userspace pm allow creating id 0 subflow
mptcp: fix delegated action races
net: stmmac: remove unneeded stmmac_poll_controller
net: lan743x: also select PHYLIB
net: ethernet: mediatek: disable irq before schedule napi
net: mana: Fix oversized sge0 for GSO packets
net: mana: Fix the tso_bytes calculation
net: mana: Fix TX CQE error handling
netlink: annotate data-races around sk->sk_err
sctp: update hb timer immediately after users change hb_interval
sctp: update transport state when processing a dupcook packet
tcp: fix delayed ACKs for MSS boundary condition
tcp: fix quick-ack counting to count actual ACKs of new data
page_pool: fix documentation typos
tipc: fix a potential deadlock on &tx->lock
net: stmmac: dwmac-stm32: fix resume on STM32 MCU
ipv4: Set offload_failed flag in fibmatch results
netfilter: nf_tables: nft_set_rbtree: fix spurious insertion failure
netfilter: nf_tables: Deduplicate nft_register_obj audit logs
...

+1358 -600
+1
.mailmap
···
 Matthew Wilcox <willy@infradead.org> <willy@linux.intel.com>
 Matthew Wilcox <willy@infradead.org> <willy@parisc-linux.org>
 Matthias Fuchs <socketcan@esd.eu> <matthias.fuchs@esd.eu>
+Matthieu Baerts <matttbe@kernel.org> <matthieu.baerts@tessares.net>
 Matthieu CASTET <castet.matthieu@free.fr>
 Matti Vaittinen <mazziesaccount@gmail.com> <matti.vaittinen@fi.rohmeurope.com>
 Matt Ranostay <matt.ranostay@konsulko.com> <matt@ranostay.consulting>
+2 -10
MAINTAINERS
···
 ADM8211 WIRELESS DRIVER
 L: linux-wireless@vger.kernel.org
 S: Orphan
-W: https://wireless.wiki.kernel.org/
 F: drivers/net/wireless/admtek/adm8211.*

 ADP1653 FLASH CONTROLLER DRIVER
···
 F: drivers/iio/pressure/mprls0025pa.c

 HOST AP DRIVER
-M: Jouni Malinen <j@w1.fi>
 L: linux-wireless@vger.kernel.org
 S: Obsolete
-W: http://w1.fi/hostap-driver.html
 F: drivers/net/wireless/intersil/hostap/

 HP BIOSCFG DRIVER
···
 K: \bmdo_

 NETWORKING [MPTCP]
-M: Matthieu Baerts <matthieu.baerts@tessares.net>
+M: Matthieu Baerts <matttbe@kernel.org>
 M: Mat Martineau <martineau@kernel.org>
 L: netdev@vger.kernel.org
 L: mptcp@lists.linux.dev
···
 M: Jeff Johnson <quic_jjohnson@quicinc.com>
 L: ath12k@lists.infradead.org
 S: Supported
+W: https://wireless.wiki.kernel.org/en/users/Drivers/ath12k
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
 F: drivers/net/wireless/ath/ath12k/
···
 M: Ping-Ke Shih <pkshih@realtek.com>
 L: linux-wireless@vger.kernel.org
 S: Maintained
-W: https://wireless.wiki.kernel.org/
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.git
 F: drivers/net/wireless/realtek/rtlwifi/

 REALTEK WIRELESS DRIVER (rtw88)
···
 RTL8180 WIRELESS DRIVER
 L: linux-wireless@vger.kernel.org
 S: Orphan
-W: https://wireless.wiki.kernel.org/
 F: drivers/net/wireless/realtek/rtl818x/rtl8180/

 RTL8187 WIRELESS DRIVER
···
 M: Larry Finger <Larry.Finger@lwfinger.net>
 L: linux-wireless@vger.kernel.org
 S: Maintained
-W: https://wireless.wiki.kernel.org/
 F: drivers/net/wireless/realtek/rtl818x/rtl8187/

 RTL8XXXU WIRELESS DRIVER (rtl8xxxu)
 M: Jes Sorensen <Jes.Sorensen@gmail.com>
 L: linux-wireless@vger.kernel.org
 S: Maintained
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/jes/linux.git rtl8xxxu-devel
 F: drivers/net/wireless/realtek/rtl8xxxu/

 RTRS TRANSPORT DRIVERS
···
 S: Orphan
 W: https://wireless.wiki.kernel.org/en/users/Drivers/wl12xx
 W: https://wireless.wiki.kernel.org/en/users/Drivers/wl1251
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/luca/wl12xx.git
 F: drivers/net/wireless/ti/

 TIMEKEEPING, CLOCKSOURCE CORE, NTP, ALARMTIMER
+1 -1
arch/s390/net/bpf_jit_comp.c
···
         return -E2BIG;
     }

-    return ret;
+    return tjit.common.prg;
 }

 bool bpf_jit_supports_subprog_tailcalls(void)
+1
drivers/bluetooth/btusb.c
···

     if (id->driver_info & BTUSB_QCA_ROME) {
         data->setup_on_usb = btusb_setup_qca;
+        hdev->shutdown = btusb_shutdown_qca;
         hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
         hdev->cmd_timeout = btusb_qca_cmd_timeout;
         set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+3
drivers/dma/ti/k3-udma-glue.c
···
         tx_chn->virq = k3_ringacc_get_ring_irq_num(tx_chn->ringtxcq);
     }

+    if (!tx_chn->virq)
+        return -ENXIO;
+
     return tx_chn->virq;
 }
 EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_irq);
+4 -2
drivers/net/dsa/mv88e6xxx/chip.c
···
          * from the wrong location resulting in the switch booting
          * to wrong mode and inoperable.
          */
-        mv88e6xxx_g1_wait_eeprom_done(chip);
+        if (chip->info->ops->get_eeprom)
+            mv88e6xxx_g2_eeprom_wait(chip);

         gpiod_set_value_cansleep(gpiod, 1);
         usleep_range(10000, 20000);
         gpiod_set_value_cansleep(gpiod, 0);
         usleep_range(10000, 20000);

-        mv88e6xxx_g1_wait_eeprom_done(chip);
+        if (chip->info->ops->get_eeprom)
+            mv88e6xxx_g2_eeprom_wait(chip);
     }
 }
-31
drivers/net/dsa/mv88e6xxx/global1.c
···
     return mv88e6xxx_g1_wait_bit(chip, MV88E6XXX_G1_STS, bit, 1);
 }

-void mv88e6xxx_g1_wait_eeprom_done(struct mv88e6xxx_chip *chip)
-{
-    const unsigned long timeout = jiffies + 1 * HZ;
-    u16 val;
-    int err;
-
-    /* Wait up to 1 second for the switch to finish reading the
-     * EEPROM.
-     */
-    while (time_before(jiffies, timeout)) {
-        err = mv88e6xxx_g1_read(chip, MV88E6XXX_G1_STS, &val);
-        if (err) {
-            dev_err(chip->dev, "Error reading status");
-            return;
-        }
-
-        /* If the switch is still resetting, it may not
-         * respond on the bus, and so MDIO read returns
-         * 0xffff. Differentiate between that, and waiting for
-         * the EEPROM to be done by bit 0 being set.
-         */
-        if (val != 0xffff &&
-            val & BIT(MV88E6XXX_G1_STS_IRQ_EEPROM_DONE))
-            return;
-
-        usleep_range(1000, 2000);
-    }
-
-    dev_err(chip->dev, "Timeout waiting for EEPROM done");
-}
-
 /* Offset 0x01: Switch MAC Address Register Bytes 0 & 1
  * Offset 0x02: Switch MAC Address Register Bytes 2 & 3
  * Offset 0x03: Switch MAC Address Register Bytes 4 & 5
-1
drivers/net/dsa/mv88e6xxx/global1.h
···
 int mv88e6185_g1_reset(struct mv88e6xxx_chip *chip);
 int mv88e6352_g1_reset(struct mv88e6xxx_chip *chip);
 int mv88e6250_g1_reset(struct mv88e6xxx_chip *chip);
-void mv88e6xxx_g1_wait_eeprom_done(struct mv88e6xxx_chip *chip);

 int mv88e6185_g1_ppu_enable(struct mv88e6xxx_chip *chip);
 int mv88e6185_g1_ppu_disable(struct mv88e6xxx_chip *chip);
+1 -1
drivers/net/dsa/mv88e6xxx/global2.c
···
  * Offset 0x15: EEPROM Addr (for 8-bit data access)
  */

-static int mv88e6xxx_g2_eeprom_wait(struct mv88e6xxx_chip *chip)
+int mv88e6xxx_g2_eeprom_wait(struct mv88e6xxx_chip *chip)
 {
     int bit = __bf_shf(MV88E6XXX_G2_EEPROM_CMD_BUSY);
     int err;
+1
drivers/net/dsa/mv88e6xxx/global2.h
···

 int mv88e6xxx_g2_device_mapping_write(struct mv88e6xxx_chip *chip, int target,
                                       int port);
+int mv88e6xxx_g2_eeprom_wait(struct mv88e6xxx_chip *chip);

 extern const struct mv88e6xxx_irq_ops mv88e6097_watchdog_ops;
 extern const struct mv88e6xxx_irq_ops mv88e6250_watchdog_ops;
+12 -13
drivers/net/ethernet/ibm/ibmveth.c
···
      * the user space for finding a flow. During this process, OVS computes
      * checksum on the first packet when CHECKSUM_PARTIAL flag is set.
      *
-     * So, re-compute TCP pseudo header checksum when configured for
-     * trunk mode.
+     * So, re-compute TCP pseudo header checksum.
      */
+
     if (iph_proto == IPPROTO_TCP) {
         struct tcphdr *tcph = (struct tcphdr *)(skb->data + iphlen);
+
         if (tcph->check == 0x0000) {
             /* Recompute TCP pseudo header checksum */
-            if (adapter->is_active_trunk) {
-                tcphdrlen = skb->len - iphlen;
-                if (skb_proto == ETH_P_IP)
-                    tcph->check =
-                        ~csum_tcpudp_magic(iph->saddr,
-                        iph->daddr, tcphdrlen, iph_proto, 0);
-                else if (skb_proto == ETH_P_IPV6)
-                    tcph->check =
-                        ~csum_ipv6_magic(&iph6->saddr,
-                        &iph6->daddr, tcphdrlen, iph_proto, 0);
-            }
+            tcphdrlen = skb->len - iphlen;
+            if (skb_proto == ETH_P_IP)
+                tcph->check =
+                    ~csum_tcpudp_magic(iph->saddr,
+                    iph->daddr, tcphdrlen, iph_proto, 0);
+            else if (skb_proto == ETH_P_IPV6)
+                tcph->check =
+                    ~csum_ipv6_magic(&iph6->saddr,
+                    &iph6->daddr, tcphdrlen, iph_proto, 0);
             /* Setup SKB fields for checksum offload */
             skb_partial_csum_set(skb, iphlen,
                                  offsetof(struct tcphdr, check));
+7 -5
drivers/net/ethernet/intel/ice/ice_virtchnl.c
···
         goto err;
     }

-    /* Read flexiflag registers to determine whether the
-     * corresponding RXDID is configured and supported or not.
-     * Since Legacy 16byte descriptor format is not supported,
-     * start from Legacy 32byte descriptor.
+    /* RXDIDs supported by DDP package can be read from the register
+     * to get the supported RXDID bitmap. But the legacy 32byte RXDID
+     * is not listed in DDP package, add it in the bitmap manually.
+     * Legacy 16byte descriptor is not supported.
      */
-    for (i = ICE_RXDID_LEGACY_1; i < ICE_FLEX_DESC_RXDID_MAX_NUM; i++) {
+    rxdid->supported_rxdids |= BIT(ICE_RXDID_LEGACY_1);
+
+    for (i = ICE_RXDID_FLEX_NIC; i < ICE_FLEX_DESC_RXDID_MAX_NUM; i++) {
         regval = rd32(hw, GLFLXP_RXDID_FLAGS(i, 0));
         if ((regval >> GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S)
             & GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M)
+1 -1
drivers/net/ethernet/marvell/sky2.h
···
     struct sk_buff *skb;
     dma_addr_t data_addr;
     DEFINE_DMA_UNMAP_LEN(data_size);
-    dma_addr_t frag_addr[ETH_JUMBO_MTU >> PAGE_SHIFT];
+    dma_addr_t frag_addr[ETH_JUMBO_MTU >> PAGE_SHIFT ?: 1];
 };

 enum flow_control {
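The `?: 1` here is the GNU C conditional with an omitted middle operand: on configurations where `ETH_JUMBO_MTU >> PAGE_SHIFT` evaluates to 0 (for example 64 KiB pages), it keeps the array from ending up with zero length. A minimal standalone sketch of the idiom; the macro names and values below are assumed stand-ins, not the driver's real headers:

```c
/* Sketch of the "x ?: 1" array-size guard (GNU C, gcc/clang).
 * JUMBO_MTU and PAGE_SHIFT_64K are hypothetical stand-ins for the
 * kernel's ETH_JUMBO_MTU and PAGE_SHIFT. */
#include <stdio.h>

#define JUMBO_MTU      9000
#define PAGE_SHIFT_64K 16   /* with 64 KiB pages, 9000 >> 16 == 0 */

struct rx_ring_info {
    /* "a ?: b" yields a if a is nonzero, else b, so the array can
     * never be declared with zero length. */
    unsigned long frag_addr[JUMBO_MTU >> PAGE_SHIFT_64K ?: 1];
};

int main(void)
{
    struct rx_ring_info ri;

    /* The guard guarantees at least one element. */
    printf("frag_addr entries: %zu\n",
           sizeof(ri.frag_addr) / sizeof(ri.frag_addr[0]));
    return 0;
}
```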
+2 -2
drivers/net/ethernet/mediatek/mtk_eth_soc.c
···

     eth->rx_events++;
     if (likely(napi_schedule_prep(&eth->rx_napi))) {
-        __napi_schedule(&eth->rx_napi);
         mtk_rx_irq_disable(eth, eth->soc->txrx.rx_irq_done_mask);
+        __napi_schedule(&eth->rx_napi);
     }

     return IRQ_HANDLED;
···

     eth->tx_events++;
     if (likely(napi_schedule_prep(&eth->tx_napi))) {
-        __napi_schedule(&eth->tx_napi);
         mtk_tx_irq_disable(eth, MTK_TX_DONE_INT);
+        __napi_schedule(&eth->tx_napi);
     }

     return IRQ_HANDLED;
+1
drivers/net/ethernet/microchip/Kconfig
···
     tristate "LAN743x support"
     depends on PCI
     depends on PTP_1588_CLOCK_OPTIONAL
+    select PHYLIB
     select FIXED_PHY
     select CRC16
     select CRC32
+150 -69
drivers/net/ethernet/microsoft/mana/mana_en.c
···
     return 0;
 }

+static void mana_add_sge(struct mana_tx_package *tp, struct mana_skb_head *ash,
+                         int sg_i, dma_addr_t da, int sge_len, u32 gpa_mkey)
+{
+    ash->dma_handle[sg_i] = da;
+    ash->size[sg_i] = sge_len;
+
+    tp->wqe_req.sgl[sg_i].address = da;
+    tp->wqe_req.sgl[sg_i].mem_key = gpa_mkey;
+    tp->wqe_req.sgl[sg_i].size = sge_len;
+}
+
 static int mana_map_skb(struct sk_buff *skb, struct mana_port_context *apc,
-                        struct mana_tx_package *tp)
+                        struct mana_tx_package *tp, int gso_hs)
 {
     struct mana_skb_head *ash = (struct mana_skb_head *)skb->head;
+    int hsg = 1; /* num of SGEs of linear part */
     struct gdma_dev *gd = apc->ac->gdma_dev;
+    int skb_hlen = skb_headlen(skb);
+    int sge0_len, sge1_len = 0;
     struct gdma_context *gc;
     struct device *dev;
     skb_frag_t *frag;
     dma_addr_t da;
+    int sg_i;
     int i;

     gc = gd->gdma_context;
     dev = gc->dev;
-    da = dma_map_single(dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE);

+    if (gso_hs && gso_hs < skb_hlen) {
+        sge0_len = gso_hs;
+        sge1_len = skb_hlen - gso_hs;
+    } else {
+        sge0_len = skb_hlen;
+    }
+
+    da = dma_map_single(dev, skb->data, sge0_len, DMA_TO_DEVICE);
     if (dma_mapping_error(dev, da))
         return -ENOMEM;

-    ash->dma_handle[0] = da;
-    ash->size[0] = skb_headlen(skb);
+    mana_add_sge(tp, ash, 0, da, sge0_len, gd->gpa_mkey);

-    tp->wqe_req.sgl[0].address = ash->dma_handle[0];
-    tp->wqe_req.sgl[0].mem_key = gd->gpa_mkey;
-    tp->wqe_req.sgl[0].size = ash->size[0];
-
-    for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
-        frag = &skb_shinfo(skb)->frags[i];
-        da = skb_frag_dma_map(dev, frag, 0, skb_frag_size(frag),
-                              DMA_TO_DEVICE);
-
+    if (sge1_len) {
+        sg_i = 1;
+        da = dma_map_single(dev, skb->data + sge0_len, sge1_len,
+                            DMA_TO_DEVICE);
         if (dma_mapping_error(dev, da))
             goto frag_err;

-        ash->dma_handle[i + 1] = da;
-        ash->size[i + 1] = skb_frag_size(frag);
+        mana_add_sge(tp, ash, sg_i, da, sge1_len, gd->gpa_mkey);
+        hsg = 2;
+    }

-        tp->wqe_req.sgl[i + 1].address = ash->dma_handle[i + 1];
-        tp->wqe_req.sgl[i + 1].mem_key = gd->gpa_mkey;
-        tp->wqe_req.sgl[i + 1].size = ash->size[i + 1];
+    for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+        sg_i = hsg + i;
+
+        frag = &skb_shinfo(skb)->frags[i];
+        da = skb_frag_dma_map(dev, frag, 0, skb_frag_size(frag),
+                              DMA_TO_DEVICE);
+        if (dma_mapping_error(dev, da))
+            goto frag_err;
+
+        mana_add_sge(tp, ash, sg_i, da, skb_frag_size(frag),
+                     gd->gpa_mkey);
     }

     return 0;

 frag_err:
-    for (i = i - 1; i >= 0; i--)
-        dma_unmap_page(dev, ash->dma_handle[i + 1], ash->size[i + 1],
+    for (i = sg_i - 1; i >= hsg; i--)
+        dma_unmap_page(dev, ash->dma_handle[i], ash->size[i],
                        DMA_TO_DEVICE);

-    dma_unmap_single(dev, ash->dma_handle[0], ash->size[0], DMA_TO_DEVICE);
+    for (i = hsg - 1; i >= 0; i--)
+        dma_unmap_single(dev, ash->dma_handle[i], ash->size[i],
+                         DMA_TO_DEVICE);

     return -ENOMEM;
 }
+
+/* Handle the case when GSO SKB linear length is too large.
+ * MANA NIC requires GSO packets to put only the packet header to SGE0.
+ * So, we need 2 SGEs for the skb linear part which contains more than the
+ * header.
+ * Return a positive value for the number of SGEs, or a negative value
+ * for an error.
+ */
+static int mana_fix_skb_head(struct net_device *ndev, struct sk_buff *skb,
+                             int gso_hs)
+{
+    int num_sge = 1 + skb_shinfo(skb)->nr_frags;
+    int skb_hlen = skb_headlen(skb);
+
+    if (gso_hs < skb_hlen) {
+        num_sge++;
+    } else if (gso_hs > skb_hlen) {
+        if (net_ratelimit())
+            netdev_err(ndev,
+                       "TX nonlinear head: hs:%d, skb_hlen:%d\n",
+                       gso_hs, skb_hlen);
+
+        return -EINVAL;
+    }
+
+    return num_sge;
+}
+
+/* Get the GSO packet's header size */
+static int mana_get_gso_hs(struct sk_buff *skb)
+{
+    int gso_hs;
+
+    if (skb->encapsulation) {
+        gso_hs = skb_inner_tcp_all_headers(skb);
+    } else {
+        if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) {
+            gso_hs = skb_transport_offset(skb) +
+                     sizeof(struct udphdr);
+        } else {
+            gso_hs = skb_tcp_all_headers(skb);
+        }
+    }
+
+    return gso_hs;
+}

 netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 {
     enum mana_tx_pkt_format pkt_fmt = MANA_SHORT_PKT_FMT;
     struct mana_port_context *apc = netdev_priv(ndev);
+    int gso_hs = 0; /* zero for non-GSO pkts */
     u16 txq_idx = skb_get_queue_mapping(skb);
     struct gdma_dev *gd = apc->ac->gdma_dev;
     bool ipv4 = false, ipv6 = false;
···
     struct mana_txq *txq;
     struct mana_cq *cq;
     int err, len;
-    u16 ihs;

     if (unlikely(!apc->port_is_up))
         goto tx_drop;
···
     pkg.wqe_req.client_data_unit = 0;

     pkg.wqe_req.num_sge = 1 + skb_shinfo(skb)->nr_frags;
-    WARN_ON_ONCE(pkg.wqe_req.num_sge > MAX_TX_WQE_SGL_ENTRIES);
-
-    if (pkg.wqe_req.num_sge <= ARRAY_SIZE(pkg.sgl_array)) {
-        pkg.wqe_req.sgl = pkg.sgl_array;
-    } else {
-        pkg.sgl_ptr = kmalloc_array(pkg.wqe_req.num_sge,
-                                    sizeof(struct gdma_sge),
-                                    GFP_ATOMIC);
-        if (!pkg.sgl_ptr)
-            goto tx_drop_count;
-
-        pkg.wqe_req.sgl = pkg.sgl_ptr;
-    }

     if (skb->protocol == htons(ETH_P_IP))
         ipv4 = true;
···
         ipv6 = true;

     if (skb_is_gso(skb)) {
+        int num_sge;
+
+        gso_hs = mana_get_gso_hs(skb);
+
+        num_sge = mana_fix_skb_head(ndev, skb, gso_hs);
+        if (num_sge > 0)
+            pkg.wqe_req.num_sge = num_sge;
+        else
+            goto tx_drop_count;
+
+        u64_stats_update_begin(&tx_stats->syncp);
+        if (skb->encapsulation) {
+            tx_stats->tso_inner_packets++;
+            tx_stats->tso_inner_bytes += skb->len - gso_hs;
+        } else {
+            tx_stats->tso_packets++;
+            tx_stats->tso_bytes += skb->len - gso_hs;
+        }
+        u64_stats_update_end(&tx_stats->syncp);
+
         pkg.tx_oob.s_oob.is_outer_ipv4 = ipv4;
         pkg.tx_oob.s_oob.is_outer_ipv6 = ipv6;
···
                                &ipv6_hdr(skb)->daddr, 0,
                                IPPROTO_TCP, 0);
         }
-
-        if (skb->encapsulation) {
-            ihs = skb_inner_tcp_all_headers(skb);
-            u64_stats_update_begin(&tx_stats->syncp);
-            tx_stats->tso_inner_packets++;
-            tx_stats->tso_inner_bytes += skb->len - ihs;
-            u64_stats_update_end(&tx_stats->syncp);
-        } else {
-            if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) {
-                ihs = skb_transport_offset(skb) + sizeof(struct udphdr);
-            } else {
-                ihs = skb_tcp_all_headers(skb);
-                if (ipv6_has_hopopt_jumbo(skb))
-                    ihs -= sizeof(struct hop_jumbo_hdr);
-            }
-
-            u64_stats_update_begin(&tx_stats->syncp);
-            tx_stats->tso_packets++;
-            tx_stats->tso_bytes += skb->len - ihs;
-            u64_stats_update_end(&tx_stats->syncp);
-        }
-
     } else if (skb->ip_summed == CHECKSUM_PARTIAL) {
         csum_type = mana_checksum_info(skb);
···
         } else {
             /* Can't do offload of this type of checksum */
             if (skb_checksum_help(skb))
-                goto free_sgl_ptr;
+                goto tx_drop_count;
         }
     }

-    if (mana_map_skb(skb, apc, &pkg)) {
+    WARN_ON_ONCE(pkg.wqe_req.num_sge > MAX_TX_WQE_SGL_ENTRIES);
+
+    if (pkg.wqe_req.num_sge <= ARRAY_SIZE(pkg.sgl_array)) {
+        pkg.wqe_req.sgl = pkg.sgl_array;
+    } else {
+        pkg.sgl_ptr = kmalloc_array(pkg.wqe_req.num_sge,
+                                    sizeof(struct gdma_sge),
+                                    GFP_ATOMIC);
+        if (!pkg.sgl_ptr)
+            goto tx_drop_count;
+
+        pkg.wqe_req.sgl = pkg.sgl_ptr;
+    }
+
+    if (mana_map_skb(skb, apc, &pkg, gso_hs)) {
         u64_stats_update_begin(&tx_stats->syncp);
         tx_stats->mana_map_err++;
         u64_stats_update_end(&tx_stats->syncp);
···
     struct mana_skb_head *ash = (struct mana_skb_head *)skb->head;
     struct gdma_context *gc = apc->ac->gdma_dev->gdma_context;
     struct device *dev = gc->dev;
-    int i;
+    int hsg, i;

-    dma_unmap_single(dev, ash->dma_handle[0], ash->size[0], DMA_TO_DEVICE);
+    /* Number of SGEs of linear part */
+    hsg = (skb_is_gso(skb) && skb_headlen(skb) > ash->size[0]) ? 2 : 1;

-    for (i = 1; i < skb_shinfo(skb)->nr_frags + 1; i++)
+    for (i = 0; i < hsg; i++)
+        dma_unmap_single(dev, ash->dma_handle[i], ash->size[i],
+                         DMA_TO_DEVICE);
+
+    for (i = hsg; i < skb_shinfo(skb)->nr_frags + hsg; i++)
         dma_unmap_page(dev, ash->dma_handle[i], ash->size[i],
                        DMA_TO_DEVICE);
 }
···
     case CQE_TX_VPORT_IDX_OUT_OF_RANGE:
     case CQE_TX_VPORT_DISABLED:
     case CQE_TX_VLAN_TAGGING_VIOLATION:
-        WARN_ONCE(1, "TX: CQE error %d: ignored.\n",
-                  cqe_oob->cqe_hdr.cqe_type);
+        if (net_ratelimit())
+            netdev_err(ndev, "TX: CQE error %d\n",
+                       cqe_oob->cqe_hdr.cqe_type);
+
         apc->eth_stats.tx_cqe_err++;
         break;

     default:
-        /* If the CQE type is unexpected, log an error, assert,
-         * and go through the error path.
+        /* If the CQE type is unknown, log an error,
+         * and still free the SKB, update tail, etc.
          */
-        WARN_ONCE(1, "TX: Unexpected CQE type %d: HW BUG?\n",
-                  cqe_oob->cqe_hdr.cqe_type);
+        if (net_ratelimit())
+            netdev_err(ndev, "TX: unknown CQE type %d\n",
+                       cqe_oob->cqe_hdr.cqe_type);
+
         apc->eth_stats.tx_cqe_unknown_type++;
-        return;
+        break;
     }

     if (WARN_ON_ONCE(txq->gdma_txq_id != completions[i].wq_num))
+1 -1
drivers/net/ethernet/qlogic/qed/qed_ll2.h
···
     enum core_tx_dest tx_dest;
     u8 tx_stats_en;
     bool main_func_queue;
+    struct qed_ll2_cbs cbs;
     struct qed_ll2_rx_queue rx_queue;
     struct qed_ll2_tx_queue tx_queue;
-    struct qed_ll2_cbs cbs;
 };

 extern const struct qed_ll2_ops qed_ll2_ops_pass;
+12 -1
drivers/net/ethernet/renesas/rswitch.c
···
  * Copyright (C) 2022 Renesas Electronics Corporation
  */

+#include <linux/clk.h>
 #include <linux/dma-mapping.h>
 #include <linux/err.h>
 #include <linux/etherdevice.h>
···
 static void rswitch_etha_enable_mii(struct rswitch_etha *etha)
 {
     rswitch_modify(etha->addr, MPIC, MPIC_PSMCS_MASK | MPIC_PSMHT_MASK,
-                   MPIC_PSMCS(0x05) | MPIC_PSMHT(0x06));
+                   MPIC_PSMCS(etha->psmcs) | MPIC_PSMHT(0x06));
     rswitch_modify(etha->addr, MPSM, 0, MPSM_MFF_C45);
 }
···
     etha->index = index;
     etha->addr = priv->addr + RSWITCH_ETHA_OFFSET + index * RSWITCH_ETHA_SIZE;
     etha->coma_addr = priv->addr;
+
+    /* MPIC.PSMCS = (clk [MHz] / (MDC frequency [MHz] * 2)) - 1.
+     * Calculating PSMCS value as MDC frequency = 2.5MHz. So, multiply
+     * both the numerator and the denominator by 10.
+     */
+    etha->psmcs = clk_get_rate(priv->clk) / 100000 / (25 * 2) - 1;
 }

 static int rswitch_device_alloc(struct rswitch_private *priv, int index)
···
     if (!priv)
         return -ENOMEM;
     spin_lock_init(&priv->lock);
+
+    priv->clk = devm_clk_get(&pdev->dev, NULL);
+    if (IS_ERR(priv->clk))
+        return PTR_ERR(priv->clk);

     attr = soc_device_match(rswitch_soc_no_speed_change);
     if (attr)
+2
drivers/net/ethernet/renesas/rswitch.h
···
     bool external_phy;
     struct mii_bus *mii;
     phy_interface_t phy_interface;
+    u32 psmcs;
     u8 mac_addr[MAX_ADDR_LEN];
     int link;
     int speed;
···
     struct rswitch_mfwd mfwd;

     spinlock_t lock; /* lock interrupt registers' control */
+    struct clk *clk;

     bool etha_no_runtime_change;
     bool gwca_halt;
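For reference, the PSMCS comment in the rswitch.c hunk above works out as PSMCS = clk / (2 * MDC) - 1 with MDC fixed at 2.5 MHz, kept in integer math by scaling numerator and denominator by 10. A small standalone sketch of the arithmetic; the 320 MHz peripheral clock is an assumed value, not taken from any real board:

```c
/* Worked example of the MPIC.PSMCS computation. The clock rate below
 * is hypothetical; on real hardware it comes from clk_get_rate(). */
#include <stdio.h>

static unsigned long psmcs_from_clk(unsigned long clk_hz)
{
    /* clk_hz / 100000 gives the clock in units of 0.1 MHz, and
     * 25 * 2 is "2.5 MHz MDC times two" in the same 0.1 MHz units,
     * so no floating point is needed. */
    return clk_hz / 100000 / (25 * 2) - 1;
}

int main(void)
{
    unsigned long clk_hz = 320000000; /* assumed 320 MHz clock */

    /* 320000000 / 100000 = 3200; 3200 / 50 = 64; 64 - 1 = 63 */
    printf("PSMCS = %lu\n", psmcs_from_clk(clk_hz));
    return 0;
}
```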
+5 -2
drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c
···
     int (*parse_data)(struct stm32_dwmac *dwmac,
                       struct device *dev);
     u32 syscfg_eth_mask;
+    bool clk_rx_enable_in_suspend;
 };

 static int stm32_dwmac_init(struct plat_stmmacenet_data *plat_dat)
···
     if (ret)
         return ret;

-    if (!dwmac->dev->power.is_suspended) {
+    if (!dwmac->ops->clk_rx_enable_in_suspend ||
+        !dwmac->dev->power.is_suspended) {
         ret = clk_prepare_enable(dwmac->clk_rx);
         if (ret) {
             clk_disable_unprepare(dwmac->clk_tx);
···
     .suspend = stm32mp1_suspend,
     .resume = stm32mp1_resume,
     .parse_data = stm32mp1_parse_data,
-    .syscfg_eth_mask = SYSCFG_MP1_ETH_MASK
+    .syscfg_eth_mask = SYSCFG_MP1_ETH_MASK,
+    .clk_rx_enable_in_suspend = true
 };

 static const struct of_device_id stm32_dwmac_match[] = {
-30
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···
     return IRQ_HANDLED;
 }

-#ifdef CONFIG_NET_POLL_CONTROLLER
-/* Polling receive - used by NETCONSOLE and other diagnostic tools
- * to allow network I/O with interrupts disabled.
- */
-static void stmmac_poll_controller(struct net_device *dev)
-{
-    struct stmmac_priv *priv = netdev_priv(dev);
-    int i;
-
-    /* If adapter is down, do nothing */
-    if (test_bit(STMMAC_DOWN, &priv->state))
-        return;
-
-    if (priv->plat->flags & STMMAC_FLAG_MULTI_MSI_EN) {
-        for (i = 0; i < priv->plat->rx_queues_to_use; i++)
-            stmmac_msi_intr_rx(0, &priv->dma_conf.rx_queue[i]);
-
-        for (i = 0; i < priv->plat->tx_queues_to_use; i++)
-            stmmac_msi_intr_tx(0, &priv->dma_conf.tx_queue[i]);
-    } else {
-        disable_irq(dev->irq);
-        stmmac_interrupt(dev->irq, dev);
-        enable_irq(dev->irq);
-    }
-}
-#endif
-
 /**
  * stmmac_ioctl - Entry point for the Ioctl
  * @dev: Device pointer.
···
     .ndo_get_stats64 = stmmac_get_stats64,
     .ndo_setup_tc = stmmac_setup_tc,
     .ndo_select_queue = stmmac_select_queue,
-#ifdef CONFIG_NET_POLL_CONTROLLER
-    .ndo_poll_controller = stmmac_poll_controller,
-#endif
     .ndo_set_mac_address = stmmac_set_mac_address,
     .ndo_vlan_rx_add_vid = stmmac_vlan_rx_add_vid,
     .ndo_vlan_rx_kill_vid = stmmac_vlan_rx_kill_vid,
+1 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
···
     struct platform_device *pdev = to_platform_device(dev);
     int ret;

-    ret = stmmac_pltfr_init(pdev, priv->plat->bsp_priv);
+    ret = stmmac_pltfr_init(pdev, priv->plat);
     if (ret)
         return ret;
+2 -1
drivers/net/ethernet/ti/am65-cpsw-nuss.c
···
     }

     tx_chn->irq = k3_udma_glue_tx_get_irq(tx_chn->tx_chn);
-    if (tx_chn->irq <= 0) {
+    if (tx_chn->irq < 0) {
         dev_err(dev, "Failed to get tx dma irq %d\n",
                 tx_chn->irq);
+        ret = tx_chn->irq;
         goto err;
     }
+3 -3
drivers/net/ethernet/ti/icssg/icssg_prueth.c
···
         goto fail;
     }

-    tx_chn->irq = k3_udma_glue_tx_get_irq(tx_chn->tx_chn);
-    if (tx_chn->irq <= 0) {
-        ret = -EINVAL;
+    ret = k3_udma_glue_tx_get_irq(tx_chn->tx_chn);
+    if (ret < 0) {
         netdev_err(ndev, "failed to get tx irq\n");
         goto fail;
     }
+    tx_chn->irq = ret;

     snprintf(tx_chn->name, sizeof(tx_chn->name), "%s-tx%d",
              dev_name(dev), tx_chn->id);
+3 -1
drivers/net/usb/smsc75xx.c
···
     ret = fn(dev, USB_VENDOR_REQUEST_READ_REGISTER, USB_DIR_IN
              | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
              0, index, &buf, 4);
-    if (unlikely(ret < 0)) {
+    if (unlikely(ret < 4)) {
+        ret = ret < 0 ? ret : -ENODATA;
+
         netdev_warn(dev->net, "Failed to read reg index 0x%08x: %d\n",
                     index, ret);
         return ret;
+10 -2
drivers/net/wan/fsl_ucc_hdlc.c
···
 #define TDM_PPPOHT_SLIC_MAXIN
 #define RX_BD_ERRORS (R_CD_S | R_OV_S | R_CR_S | R_AB_S | R_NO_S | R_LG_S)

+static int uhdlc_close(struct net_device *dev);
+
 static struct ucc_tdm_info utdm_primary_info = {
     .uf_info = {
         .tsa = 0,
···
     hdlc_device *hdlc = dev_to_hdlc(dev);
     struct ucc_hdlc_private *priv = hdlc->priv;
     struct ucc_tdm *utdm = priv->utdm;
+    int rc = 0;

     if (priv->hdlc_busy != 1) {
         if (request_irq(priv->ut_info->uf_info.irq,
···
         napi_enable(&priv->napi);
         netdev_reset_queue(dev);
         netif_start_queue(dev);
-        hdlc_open(dev);
+
+        rc = hdlc_open(dev);
+        if (rc)
+            uhdlc_close(dev);
     }

-    return 0;
+    return rc;
 }

 static void uhdlc_memclean(struct ucc_hdlc_private *priv)
···
     netif_stop_queue(dev);
     netdev_reset_queue(dev);
     priv->hdlc_busy = 0;
+
+    hdlc_close(dev);

     return 0;
 }
+7 -2
drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
···
                              * fixed parameter portion is assumed, otherwise
                              * ssid in the fixed portion is ignored
                              */
-    __le16 channel_list[1]; /* list of chanspecs */
+    union {
+        __le16 padding; /* Reserve space for at least 1 entry for abort
+                         * which uses an on stack brcmf_scan_params_v2_le
+                         */
+        DECLARE_FLEX_ARRAY(__le16, channel_list); /* chanspecs */
+    };
 };

 struct brcmf_scan_results {
···
 struct brcmf_chanspec_list {
     __le32 count;       /* # of entries */
-    __le32 element[1];  /* variable length uint32 list */
+    __le32 element[];   /* variable length uint32 list */
 };

 /*
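DECLARE_FLEX_ARRAY() exists because a flexible array member cannot be placed directly inside a union; the kernel macro wraps it in an anonymous struct, and the `padding` member above keeps room for one entry when the struct lives on the stack (the scan-abort path relies on that). A standalone sketch of the same pattern with simplified, hypothetical types (GNU C, since the empty anchor struct is an extension); this is not brcmfmac's real layout:

```c
/* Sketch of the union-with-flexible-array pattern. The anonymous
 * struct with an empty "__empty" anchor mimics what the kernel's
 * DECLARE_FLEX_ARRAY() expands to. */
#include <stdint.h>
#include <stdio.h>

struct scan_params {
    uint32_t n_channels;
    union {
        uint16_t padding;   /* keeps room for one entry on the stack */
        struct {
            struct { } __empty;     /* zero-size anchor (GNU C) */
            uint16_t channel_list[];    /* the real variable part */
        };
    };
};

int main(void)
{
    /* An on-stack instance still reserves space for one element via
     * the padding member, so writing channel_list[0] stays in bounds. */
    struct scan_params p = { .n_channels = 1 };

    p.channel_list[0] = 2412;
    printf("sizeof = %zu, chan0 = %u\n", sizeof(p), p.channel_list[0]);
    return 0;
}
```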
+3 -3
drivers/net/wireless/intel/iwlwifi/fw/error-dump.h
···
 struct iwl_fw_ini_error_dump_range {
     __le32 range_data_size;
     union {
-        __le32 internal_base_addr;
-        __le64 dram_base_addr;
-        __le32 page_num;
+        __le32 internal_base_addr __packed;
+        __le64 dram_base_addr __packed;
+        __le32 page_num __packed;
         struct iwl_fw_ini_fifo_hdr fifo_hdr;
         struct iwl_cmd_header fw_pkt_hdr;
     };
+1 -1
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
···
     mvm->nvm_data->bands[0].n_channels = 1;
     mvm->nvm_data->bands[0].n_bitrates = 1;
     mvm->nvm_data->bands[0].bitrates =
-        (void *)((u8 *)mvm->nvm_data->channels + 1);
+        (void *)(mvm->nvm_data->channels + 1);
     mvm->nvm_data->bands[0].bitrates->hw_value = 10;
 }
+61 -56
drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
···

     mvmvif->associated = vif->cfg.assoc;

-    if (!(changes & BSS_CHANGED_ASSOC))
-        return;
+    if (changes & BSS_CHANGED_ASSOC) {
+        if (vif->cfg.assoc) {
+            /* clear statistics to get clean beacon counter */
+            iwl_mvm_request_statistics(mvm, true);
+            iwl_mvm_sf_update(mvm, vif, false);
+            iwl_mvm_power_vif_assoc(mvm, vif);

-    if (vif->cfg.assoc) {
-        /* clear statistics to get clean beacon counter */
-        iwl_mvm_request_statistics(mvm, true);
-        iwl_mvm_sf_update(mvm, vif, false);
-        iwl_mvm_power_vif_assoc(mvm, vif);
+            for_each_mvm_vif_valid_link(mvmvif, i) {
+                memset(&mvmvif->link[i]->beacon_stats, 0,
+                       sizeof(mvmvif->link[i]->beacon_stats));

-        for_each_mvm_vif_valid_link(mvmvif, i) {
-            memset(&mvmvif->link[i]->beacon_stats, 0,
-                   sizeof(mvmvif->link[i]->beacon_stats));
+                if (vif->p2p) {
+                    iwl_mvm_update_smps(mvm, vif,
+                                        IWL_MVM_SMPS_REQ_PROT,
+                                        IEEE80211_SMPS_DYNAMIC, i);
+                }

-            if (vif->p2p) {
-                iwl_mvm_update_smps(mvm, vif,
-                                    IWL_MVM_SMPS_REQ_PROT,
-                                    IEEE80211_SMPS_DYNAMIC, i);
+                rcu_read_lock();
+                link_conf = rcu_dereference(vif->link_conf[i]);
+                if (link_conf && !link_conf->dtim_period)
+                    protect = true;
+                rcu_read_unlock();
             }

-            rcu_read_lock();
-            link_conf = rcu_dereference(vif->link_conf[i]);
-            if (link_conf && !link_conf->dtim_period)
-                protect = true;
-            rcu_read_unlock();
-        }
+            if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
+                protect) {
+                /* If we're not restarting and still haven't
+                 * heard a beacon (dtim period unknown) then
+                 * make sure we still have enough minimum time
+                 * remaining in the time event, since the auth
+                 * might actually have taken quite a while
+                 * (especially for SAE) and so the remaining
+                 * time could be small without us having heard
+                 * a beacon yet.
+                 */
+                iwl_mvm_protect_assoc(mvm, vif, 0);
+            }

-        if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
-            protect) {
-            /* If we're not restarting and still haven't
-             * heard a beacon (dtim period unknown) then
-             * make sure we still have enough minimum time
-             * remaining in the time event, since the auth
-             * might actually have taken quite a while
-             * (especially for SAE) and so the remaining
-             * time could be small without us having heard
-             * a beacon yet.
-             */
-            iwl_mvm_protect_assoc(mvm, vif, 0);
+            iwl_mvm_sf_update(mvm, vif, false);
+
+            /* FIXME: need to decide about misbehaving AP handling */
+            iwl_mvm_power_vif_assoc(mvm, vif);
+        } else if (iwl_mvm_mld_vif_have_valid_ap_sta(mvmvif)) {
+            iwl_mvm_mei_host_disassociated(mvm);
+
+            /* If update fails - SF might be running in associated
+             * mode while disassociated - which is forbidden.
+             */
+            ret = iwl_mvm_sf_update(mvm, vif, false);
+            WARN_ONCE(ret &&
+                      !test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
+                                &mvm->status),
+                      "Failed to update SF upon disassociation\n");
+
+            /* If we get an assert during the connection (after the
+             * station has been added, but before the vif is set
+             * to associated), mac80211 will re-add the station and
+             * then configure the vif. Since the vif is not
+             * associated, we would remove the station here and
+             * this would fail the recovery.
+             */
+            iwl_mvm_mld_vif_delete_all_stas(mvm, vif);
         }

-        iwl_mvm_sf_update(mvm, vif, false);
-
-        /* FIXME: need to decide about misbehaving AP handling */
-        iwl_mvm_power_vif_assoc(mvm, vif);
-    } else if (iwl_mvm_mld_vif_have_valid_ap_sta(mvmvif)) {
-        iwl_mvm_mei_host_disassociated(mvm);
-
-        /* If update fails - SF might be running in associated
-         * mode while disassociated - which is forbidden.
-         */
-        ret = iwl_mvm_sf_update(mvm, vif, false);
-        WARN_ONCE(ret &&
-                  !test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
-                            &mvm->status),
-                  "Failed to update SF upon disassociation\n");
-
-        /* If we get an assert during the connection (after the
-         * station has been added, but before the vif is set
-         * to associated), mac80211 will re-add the station and
-         * then configure the vif. Since the vif is not
-         * associated, we would remove the station here and
-         * this would fail the recovery.
-         */
-        iwl_mvm_mld_vif_delete_all_stas(mvm, vif);
+        iwl_mvm_bss_info_changed_station_assoc(mvm, vif, changes);
     }

-    iwl_mvm_bss_info_changed_station_assoc(mvm, vif, changes);
+    if (changes & BSS_CHANGED_PS) {
+        ret = iwl_mvm_power_update_mac(mvm);
+        if (ret)
+            IWL_ERR(mvm, "failed to update power mode\n");
+    }
 }

 static void
+1 -1
drivers/net/wireless/intel/iwlwifi/mvm/scan.c
··· 2342 2342 if (gen_flags & IWL_UMAC_SCAN_GEN_FLAGS_V2_FRAGMENTED_LMAC2) 2343 2343 gp->num_of_fragments[SCAN_HB_LMAC_IDX] = IWL_SCAN_NUM_OF_FRAGS; 2344 2344 2345 - if (version < 12) { 2345 + if (version < 16) { 2346 2346 gp->scan_start_mac_or_link_id = scan_vif->id; 2347 2347 } else { 2348 2348 struct iwl_mvm_vif_link_info *link_info;
+3
drivers/net/wireless/intel/iwlwifi/mvm/tx.c
··· 1612 1612 iwl_trans_free_tx_cmd(mvm->trans, info->driver_data[1]); 1613 1613 1614 1614 memset(&info->status, 0, sizeof(info->status)); 1615 + info->flags &= ~(IEEE80211_TX_STAT_ACK | IEEE80211_TX_STAT_TX_FILTERED); 1615 1616 1616 1617 /* inform mac80211 about what happened with the frame */ 1617 1618 switch (status & TX_STATUS_MSK) { ··· 1965 1964 */ 1966 1965 if (!is_flush) 1967 1966 info->flags |= IEEE80211_TX_STAT_ACK; 1967 + else 1968 + info->flags &= ~IEEE80211_TX_STAT_ACK; 1968 1969 } 1969 1970 1970 1971 /*
+19 -3
drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c
···

     mwifiex_dbg_dump(priv->adapter, EVT_D, "RXBA_SYNC event:",
                      event_buf, len);
-    while (tlv_buf_left >= sizeof(*tlv_rxba)) {
+    while (tlv_buf_left > sizeof(*tlv_rxba)) {
         tlv_type = le16_to_cpu(tlv_rxba->header.type);
         tlv_len = le16_to_cpu(tlv_rxba->header.len);
+        if (size_add(sizeof(tlv_rxba->header), tlv_len) > tlv_buf_left) {
+            mwifiex_dbg(priv->adapter, WARN,
+                        "TLV size (%zu) overflows event_buf buf_left=%d\n",
+                        size_add(sizeof(tlv_rxba->header), tlv_len),
+                        tlv_buf_left);
+            return;
+        }
+
         if (tlv_type != TLV_TYPE_RXBA_SYNC) {
             mwifiex_dbg(priv->adapter, ERROR,
                         "Wrong TLV id=0x%x\n", tlv_type);
···
         tlv_seq_num = le16_to_cpu(tlv_rxba->seq_num);
         tlv_bitmap_len = le16_to_cpu(tlv_rxba->bitmap_len);
+        if (size_add(sizeof(*tlv_rxba), tlv_bitmap_len) > tlv_buf_left) {
+            mwifiex_dbg(priv->adapter, WARN,
+                        "TLV size (%zu) overflows event_buf buf_left=%d\n",
+                        size_add(sizeof(*tlv_rxba), tlv_bitmap_len),
+                        tlv_buf_left);
+            return;
+        }
+
         mwifiex_dbg(priv->adapter, INFO,
                     "%pM tid=%d seq_num=%d bitmap_len=%d\n",
                     tlv_rxba->mac, tlv_rxba->tid, tlv_seq_num,
···
             }
         }

-        tlv_buf_left -= (sizeof(*tlv_rxba) + tlv_len);
-        tmp = (u8 *)tlv_rxba + tlv_len + sizeof(*tlv_rxba);
+        tlv_buf_left -= (sizeof(tlv_rxba->header) + tlv_len);
+        tmp = (u8 *)tlv_rxba + sizeof(tlv_rxba->header) + tlv_len;
         tlv_rxba = (struct mwifiex_ie_types_rxba_sync *)tmp;
     }
 }
+1 -1
drivers/net/wireless/marvell/mwifiex/fw.h
···
     u8 reserved;
     __le16 seq_num;
     __le16 bitmap_len;
-    u8 bitmap[1];
+    u8 bitmap[];
 } __packed;

 struct chan_band_param_set {
+9 -7
drivers/net/wireless/marvell/mwifiex/sta_rx.c
···
     rx_pkt_len = le16_to_cpu(local_rx_pd->rx_pkt_length);
     rx_pkt_hdr = (void *)local_rx_pd + rx_pkt_off;

-    if (sizeof(*rx_pkt_hdr) + rx_pkt_off > skb->len) {
+    if (sizeof(rx_pkt_hdr->eth803_hdr) + sizeof(rfc1042_header) +
+        rx_pkt_off > skb->len) {
         mwifiex_dbg(priv->adapter, ERROR,
                     "wrong rx packet offset: len=%d, rx_pkt_off=%d\n",
                     skb->len, rx_pkt_off);
···
         return -1;
     }

-    if ((!memcmp(&rx_pkt_hdr->rfc1042_hdr, bridge_tunnel_header,
-                 sizeof(bridge_tunnel_header))) ||
-        (!memcmp(&rx_pkt_hdr->rfc1042_hdr, rfc1042_header,
-                 sizeof(rfc1042_header)) &&
-         ntohs(rx_pkt_hdr->rfc1042_hdr.snap_type) != ETH_P_AARP &&
-         ntohs(rx_pkt_hdr->rfc1042_hdr.snap_type) != ETH_P_IPX)) {
+    if (sizeof(*rx_pkt_hdr) + rx_pkt_off <= skb->len &&
+        ((!memcmp(&rx_pkt_hdr->rfc1042_hdr, bridge_tunnel_header,
+                  sizeof(bridge_tunnel_header))) ||
+         (!memcmp(&rx_pkt_hdr->rfc1042_hdr, rfc1042_header,
+                  sizeof(rfc1042_header)) &&
+          ntohs(rx_pkt_hdr->rfc1042_hdr.snap_type) != ETH_P_AARP &&
+          ntohs(rx_pkt_hdr->rfc1042_hdr.snap_type) != ETH_P_IPX))) {
         /*
          * Replace the 803 header and rfc1042 header (llc/snap) with an
          * EthernetII header, keep the src/dst and snap_type
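Both mwifiex fixes above come down to the standard bounded TLV walk: before touching a TLV's payload, check that header plus claimed length still fits in the remaining buffer, using an overflow-safe addition (the kernel's size_add()). A standalone sketch of that loop shape; the generic `tlv_hdr` type and the little-endian test buffer are assumptions for illustration, not mwifiex's real structures:

```c
/* Sketch of a bounded TLV walk with an overflow check before each
 * payload access. size_t addition cannot overflow here because the
 * lengths are 16-bit, which stands in for the kernel's size_add(). */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct tlv_hdr {
    uint16_t type;
    uint16_t len;   /* payload length, excluding the header */
};

static void walk_tlvs(const uint8_t *buf, size_t buf_left)
{
    while (buf_left > sizeof(struct tlv_hdr)) {
        struct tlv_hdr hdr;

        memcpy(&hdr, buf, sizeof(hdr)); /* no alignment assumptions */

        /* Reject a TLV whose claimed size runs past the buffer
         * before reading any of its payload. */
        if (sizeof(hdr) + (size_t)hdr.len > buf_left) {
            fprintf(stderr, "TLV overflows buffer, stopping\n");
            return;
        }

        printf("TLV type=%u len=%u\n", hdr.type, hdr.len);

        buf += sizeof(hdr) + hdr.len;
        buf_left -= sizeof(hdr) + hdr.len;
    }
}

int main(void)
{
    /* Little-endian host assumed: one valid TLV (type 1, len 2),
     * then one claiming more payload than remains (type 2, len 9). */
    uint8_t buf[] = { 1, 0, 2, 0, 0xaa, 0xbb,
                      2, 0, 9, 0, 0xcc };
    walk_tlvs(buf, sizeof(buf));
    return 0;
}
```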
+4 -4
drivers/net/wireless/mediatek/mt76/dma.c
···
 {
     struct mt76_txwi_cache *t = NULL;

-    spin_lock(&dev->wed_lock);
+    spin_lock_bh(&dev->wed_lock);
     if (!list_empty(&dev->rxwi_cache)) {
         t = list_first_entry(&dev->rxwi_cache, struct mt76_txwi_cache,
                              list);
         list_del(&t->list);
     }
-    spin_unlock(&dev->wed_lock);
+    spin_unlock_bh(&dev->wed_lock);

     return t;
 }
···
     if (!t)
         return;

-    spin_lock(&dev->wed_lock);
+    spin_lock_bh(&dev->wed_lock);
     list_add(&t->list, &dev->rxwi_cache);
-    spin_unlock(&dev->wed_lock);
+    spin_unlock_bh(&dev->wed_lock);
 }
 EXPORT_SYMBOL_GPL(mt76_put_rxwi);
-7
drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.c
···
                               s8 *lna_2g, s8 *lna_5g,
                               struct ieee80211_channel *chan)
 {
-    u16 val;
     u8 lna;
-
-    val = mt76x02_eeprom_get(dev, MT_EE_NIC_CONF_1);
-    if (val & MT_EE_NIC_CONF_1_LNA_EXT_2G)
-        *lna_2g = 0;
-    if (val & MT_EE_NIC_CONF_1_LNA_EXT_5G)
-        memset(lna_5g, 0, sizeof(s8) * 3);

     if (chan->band == NL80211_BAND_2GHZ)
         lna = *lna_2g;
+11 -2
drivers/net/wireless/mediatek/mt76/mt76x2/eeprom.c
···
     struct ieee80211_channel *chan = dev->mphy.chandef.chan;
     int channel = chan->hw_value;
     s8 lna_5g[3], lna_2g;
-    u8 lna;
+    bool use_lna;
+    u8 lna = 0;
     u16 val;

     if (chan->band == NL80211_BAND_2GHZ)
···
     dev->cal.rx.mcu_gain |= (lna_5g[1] & 0xff) << 16;
     dev->cal.rx.mcu_gain |= (lna_5g[2] & 0xff) << 24;

-    lna = mt76x02_get_lna_gain(dev, &lna_2g, lna_5g, chan);
+    val = mt76x02_eeprom_get(dev, MT_EE_NIC_CONF_1);
+    if (chan->band == NL80211_BAND_2GHZ)
+        use_lna = !(val & MT_EE_NIC_CONF_1_LNA_EXT_2G);
+    else
+        use_lna = !(val & MT_EE_NIC_CONF_1_LNA_EXT_5G);
+
+    if (use_lna)
+        lna = mt76x02_get_lna_gain(dev, &lna_2g, lna_5g, chan);
+
     dev->cal.rx.lna_gain = mt76x02_sign_extend(lna, 8);
 }
 EXPORT_SYMBOL_GPL(mt76x2_read_rx_gain);
+1
drivers/net/wireless/realtek/rtw88/rtw8723d.h
···
     u8 vender_id[2];        /* 0x100 */
     u8 product_id[2];       /* 0x102 */
     u8 usb_option;          /* 0x104 */
+    u8 res5[2];             /* 0x105 */
     u8 mac_addr[ETH_ALEN];  /* 0x107 */
 };
-1
drivers/ptp/ptp_ocp.c
···
     return 0;

 out:
-    ptp_ocp_dev_release(&bp->dev);
     put_device(&bp->dev);
     return err;
 }
+11 -1
drivers/vhost/vringh.c
···
         done += partlen;
         len -= partlen;
         ptr += partlen;
+        iov->consumed += partlen;
+        iov->iov[iov->i].iov_len -= partlen;
+        iov->iov[iov->i].iov_base += partlen;

-        vringh_kiov_advance(iov, partlen);
+        if (!iov->iov[iov->i].iov_len) {
+            /* Fix up old iov element then increment. */
+            iov->iov[iov->i].iov_len = iov->consumed;
+            iov->iov[iov->i].iov_base -= iov->consumed;
+
+            iov->consumed = 0;
+            iov->i++;
+        }
     }
     return done;
 }
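The vringh fix consumes each iov element in place and, once an element is exhausted, restores its original base and length before moving on, so the same vector can be traversed again later. A standalone sketch of that consume-and-restore bookkeeping; the `kiov`/`kiov_iter` types are simplified stand-ins, not the kernel's vringh structures:

```c
/* Sketch of the consume-and-restore iov bookkeeping: elements shrink
 * in place while partially consumed, then snap back to their original
 * base/len once fully consumed. */
#include <stdio.h>
#include <string.h>

struct kiov {
    char  *base;
    size_t len;
};

struct kiov_iter {
    struct kiov iov[4];
    size_t      i;          /* current element */
    size_t      consumed;   /* bytes consumed of iov[i] */
};

static size_t iter_copy_out(struct kiov_iter *it, char *dst, size_t len)
{
    size_t done = 0;

    while (len && it->i < 4 && it->iov[it->i].len) {
        size_t part = len < it->iov[it->i].len ? len : it->iov[it->i].len;

        memcpy(dst + done, it->iov[it->i].base, part);
        done += part;
        len -= part;

        it->consumed += part;
        it->iov[it->i].len -= part;
        it->iov[it->i].base += part;

        if (!it->iov[it->i].len) {
            /* Fix up the old element, then move to the next one. */
            it->iov[it->i].len = it->consumed;
            it->iov[it->i].base -= it->consumed;
            it->consumed = 0;
            it->i++;
        }
    }
    return done;
}

int main(void)
{
    char a[] = "hello", b[] = "world", out[16] = { 0 };
    struct kiov_iter it = { .iov = { { a, 5 }, { b, 5 } } };

    iter_copy_out(&it, out, 8); /* consumes a fully, b partially */
    printf("out=%s, iov[0] restored len=%zu\n", out, it.iov[0].len);
    return 0;
}
```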
+1 -1
include/linux/bpf.h
···
 static inline struct bpf_trampoline *bpf_trampoline_get(u64 key,
                                                         struct bpf_attach_target_info *tgt_info)
 {
-    return ERR_PTR(-EOPNOTSUPP);
+    return NULL;
 }
 static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {}
 #define DEFINE_BPF_DISPATCHER(name)
+1
include/linux/netfilter/nf_conntrack_sctp.h
···
     enum sctp_conntrack state;

     __be32 vtag[IP_CT_DIR_MAX];
+    u8 init[IP_CT_DIR_MAX];
     u8 last_dir;
     u8 flags;
 };
+1 -1
include/net/bluetooth/hci_core.h
···
     struct list_head list;
     struct mutex lock;

-    char name[8];
+    const char *name;
     unsigned long flags;
     __u16 id;
     __u8 bus;
+4 -2
include/net/cfg80211.h
···
  * @event_lock: (private) lock for event list
  * @owner_nlportid: (private) owner socket port ID
  * @nl_owner_dead: (private) owner socket went away
+ * @cqm_rssi_work: (private) CQM RSSI reporting work
  * @cqm_config: (private) nl80211 RSSI monitor state
  * @pmsr_list: (private) peer measurement requests
  * @pmsr_lock: (private) peer measurements requests/results lock
···
     } wext;
 #endif

-    struct cfg80211_cqm_config *cqm_config;
+    struct wiphy_work cqm_rssi_work;
+    struct cfg80211_cqm_config __rcu *cqm_config;

     struct list_head pmsr_list;
     spinlock_t pmsr_lock;
···
     int uapsd_queues;
     const u8 *ap_mld_addr;
     struct {
-        const u8 *addr;
+        u8 addr[ETH_ALEN] __aligned(2);
         struct cfg80211_bss *bss;
         u16 status;
     } links[IEEE80211_MLD_MAX_NUM_LINKS];
+1
include/net/ip_fib.h
···
     int fib_nhs;
     bool fib_nh_is_v6;
     bool nh_updated;
+    bool pfsrc_removed;
     struct nexthop *nh;
     struct rcu_head rcu;
     struct fib_nh fib_nh[];
+3 -2
include/net/mana/mana.h
···

 /* skb data and frags dma mappings */
 struct mana_skb_head {
-    dma_addr_t dma_handle[MAX_SKB_FRAGS + 1];
+    /* GSO pkts may have 2 SGEs for the linear part */
+    dma_addr_t dma_handle[MAX_SKB_FRAGS + 2];

-    u32 size[MAX_SKB_FRAGS + 1];
+    u32 size[MAX_SKB_FRAGS + 2];
 };

 #define MANA_HEADROOM sizeof(struct mana_skb_head)
+1 -1
include/net/neighbour.h
···
         READ_ONCE(hh->hh_len))
         return neigh_hh_output(hh, skb);

-    return n->output(n, skb);
+    return READ_ONCE(n->output)(n, skb);
 }

 static inline struct neighbour *
+3 -3
include/net/page_pool/helpers.h
···
  * page_pool_alloc_pages() call. Drivers should use
  * page_pool_dev_alloc_pages() replacing dev_alloc_pages().
  *
- * API keeps track of in-flight pages, in order to let API user know
+ * The API keeps track of in-flight pages, in order to let API users know
  * when it is safe to free a page_pool object. Thus, API users
  * must call page_pool_put_page() to free the page, or attach
- * the page to a page_pool-aware objects like skbs marked with
+ * the page to a page_pool-aware object like skbs marked with
  * skb_mark_for_recycle().
  *
- * API user must call page_pool_put_page() once on a page, as it
+ * API users must call page_pool_put_page() once on a page, as it
  * will either recycle the page, or in case of refcnt > 1, it will
  * release the DMA mapping and in-flight state accounting.
  */
+4 -2
include/net/tcp.h
···
 struct sk_buff *tcp_stream_alloc_skb(struct sock *sk, gfp_t gfp,
                                      bool force_schedule);

-static inline void tcp_dec_quickack_mode(struct sock *sk,
-                                         const unsigned int pkts)
+static inline void tcp_dec_quickack_mode(struct sock *sk)
 {
     struct inet_connection_sock *icsk = inet_csk(sk);

     if (icsk->icsk_ack.quick) {
+        /* How many ACKs S/ACKing new data have we sent? */
+        const unsigned int pkts = inet_csk_ack_scheduled(sk) ? 1 : 0;
+
         if (pkts >= icsk->icsk_ack.quick) {
             icsk->icsk_ack.quick = 0;
             /* Leaving quickack mode we deflate ATO. */
+19 -25
kernel/bpf/memalloc.c
···
     return !ret ? NULL : ret + LLIST_NODE_SZ;
 }

-/* Most of the logic is taken from setup_kmalloc_cache_index_table() */
 static __init int bpf_mem_cache_adjust_size(void)
 {
-    unsigned int size, index;
+    unsigned int size;

-    /* Normally KMALLOC_MIN_SIZE is 8-bytes, but it can be
-     * up-to 256-bytes.
+    /* Adjusting the indexes in size_index() according to the object_size
+     * of underlying slab cache, so bpf_mem_alloc() will select a
+     * bpf_mem_cache with unit_size equal to the object_size of
+     * the underlying slab cache.
+     *
+     * The maximal value of KMALLOC_MIN_SIZE and __kmalloc_minalign() is
+     * 256-bytes, so only do adjustment for [8-bytes, 192-bytes].
      */
-    size = KMALLOC_MIN_SIZE;
-    if (size <= 192)
-        index = size_index[(size - 1) / 8];
-    else
-        index = fls(size - 1) - 1;
-    for (size = 8; size < KMALLOC_MIN_SIZE && size <= 192; size += 8)
-        size_index[(size - 1) / 8] = index;
+    for (size = 192; size >= 8; size -= 8) {
+        unsigned int kmalloc_size, index;

-    /* The minimal alignment is 64-bytes, so disable 96-bytes cache and
-     * use 128-bytes cache instead.
-     */
-    if (KMALLOC_MIN_SIZE >= 64) {
-        index = size_index[(128 - 1) / 8];
-        for (size = 64 + 8; size <= 96; size += 8)
-            size_index[(size - 1) / 8] = index;
-    }
+        kmalloc_size = kmalloc_size_roundup(size);
+        if (kmalloc_size == size)
+            continue;

-    /* The minimal alignment is 128-bytes, so disable 192-bytes cache and
-     * use 256-bytes cache instead.
-     */
-    if (KMALLOC_MIN_SIZE >= 128) {
-        index = fls(256 - 1) - 1;
-        for (size = 128 + 8; size <= 192; size += 8)
+        if (kmalloc_size <= 192)
+            index = size_index[(kmalloc_size - 1) / 8];
+        else
+            index = fls(kmalloc_size - 1) - 1;
+        /* Only overwrite if necessary */
+        if (size_index[(size - 1) / 8] != index)
             size_index[(size - 1) / 8] = index;
     }
+3
kernel/bpf/mprog.c
···
             goto out;
         }
         idx = tidx;
+    } else if (bpf_mprog_total(entry) == bpf_mprog_max()) {
+        ret = -ERANGE;
+        goto out;
     }
     if (flags & BPF_F_BEFORE) {
         tidx = bpf_mprog_pos_before(entry, &rtuple);
+3 -5
kernel/bpf/verifier.c
···
     bitmap_from_u64(mask, bt_reg_mask(bt));
     for_each_set_bit(i, mask, 32) {
         reg = &st->frame[0]->regs[i];
-        if (reg->type != SCALAR_VALUE) {
-            bt_clear_reg(bt, i);
-            continue;
-        }
-        reg->precise = true;
+        bt_clear_reg(bt, i);
+        if (reg->type == SCALAR_VALUE)
+            reg->precise = true;
     }
     return 0;
 }
+35 -28
net/bluetooth/hci_conn.c
···
     if (!test_bit(HCI_CONN_AUTH, &conn->flags))
         goto auth;

-    /* An authenticated FIPS approved combination key has sufficient
-     * security for security level 4. */
-    if (conn->key_type == HCI_LK_AUTH_COMBINATION_P256 &&
-        sec_level == BT_SECURITY_FIPS)
-        goto encrypt;
-
-    /* An authenticated combination key has sufficient security for
-       security level 3. */
-    if ((conn->key_type == HCI_LK_AUTH_COMBINATION_P192 ||
-         conn->key_type == HCI_LK_AUTH_COMBINATION_P256) &&
-        sec_level == BT_SECURITY_HIGH)
-        goto encrypt;
-
-    /* An unauthenticated combination key has sufficient security for
-       security level 1 and 2. */
-    if ((conn->key_type == HCI_LK_UNAUTH_COMBINATION_P192 ||
-         conn->key_type == HCI_LK_UNAUTH_COMBINATION_P256) &&
-        (sec_level == BT_SECURITY_MEDIUM || sec_level == BT_SECURITY_LOW))
-        goto encrypt;
-
-    /* A combination key has always sufficient security for the security
-       levels 1 or 2. High security level requires the combination key
-       is generated using maximum PIN code length (16).
-       For pre 2.1 units. */
-    if (conn->key_type == HCI_LK_COMBINATION &&
-        (sec_level == BT_SECURITY_MEDIUM || sec_level == BT_SECURITY_LOW ||
-         conn->pin_length == 16))
-        goto encrypt;
+    switch (conn->key_type) {
+    case HCI_LK_AUTH_COMBINATION_P256:
+        /* An authenticated FIPS approved combination key has
+         * sufficient security for security level 4 or lower.
+         */
+        if (sec_level <= BT_SECURITY_FIPS)
+            goto encrypt;
+        break;
+    case HCI_LK_AUTH_COMBINATION_P192:
+        /* An authenticated combination key has sufficient security for
+         * security level 3 or lower.
+         */
+        if (sec_level <= BT_SECURITY_HIGH)
+            goto encrypt;
+        break;
+    case HCI_LK_UNAUTH_COMBINATION_P192:
+    case HCI_LK_UNAUTH_COMBINATION_P256:
+        /* An unauthenticated combination key has sufficient security
+         * for security level 2 or lower.
+         */
+        if (sec_level <= BT_SECURITY_MEDIUM)
+            goto encrypt;
+        break;
+    case HCI_LK_COMBINATION:
+        /* A combination key has always sufficient security for the
+         * security levels 2 or lower. High security level requires the
+         * combination key is generated using maximum PIN code length
+         * (16). For pre 2.1 units.
+         */
+        if (sec_level <= BT_SECURITY_MEDIUM || conn->pin_length == 16)
+            goto encrypt;
+        break;
+    default:
+        break;
+    }

 auth:
     if (test_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags))
+11 -3
net/bluetooth/hci_core.c
···
     if (id < 0)
         return id;

-    snprintf(hdev->name, sizeof(hdev->name), "hci%d", id);
+    error = dev_set_name(&hdev->dev, "hci%u", id);
+    if (error)
+        return error;
+
+    hdev->name = dev_name(&hdev->dev);
     hdev->id = id;

     BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus);
···
     if (!IS_ERR_OR_NULL(bt_debugfs))
         hdev->debugfs = debugfs_create_dir(hdev->name, bt_debugfs);
-
-    dev_set_name(&hdev->dev, "%s", hdev->name);

     error = device_add(&hdev->dev);
     if (error < 0)
···
     hci_conn_params_clear_all(hdev);
     hci_discovery_filter_clear(hdev);
     hci_blocked_keys_clear(hdev);
+    hci_codec_list_clear(&hdev->local_codecs);
     hci_dev_unlock(hdev);

     ida_simple_remove(&hci_index_ida, hdev->id);
···
         if (c->type == type && c->sent) {
             bt_dev_err(hdev, "killing stalled connection %pMR",
                        &c->dst);
+            /* hci_disconnect might sleep, so, we have to release
+             * the RCU read lock before calling it.
+             */
+            rcu_read_unlock();
             hci_disconnect(c, HCI_ERROR_REMOTE_USER_TERM);
+            rcu_read_lock();
         }
     }
+1
net/bluetooth/hci_event.c
···
 
 #include "hci_request.h"
 #include "hci_debugfs.h"
+#include "hci_codec.h"
 #include "a2mp.h"
 #include "amp.h"
 #include "smp.h"
-2
net/bluetooth/hci_request.h
···
 void hci_req_add_le_scan_disable(struct hci_request *req, bool rpa_le_conn);
 void hci_req_add_le_passive_scan(struct hci_request *req);
 
-void hci_req_prepare_suspend(struct hci_dev *hdev, enum suspended_state next);
-
 void hci_request_setup(struct hci_dev *hdev);
 void hci_request_cancel_all(struct hci_dev *hdev);
+5 -9
net/bluetooth/hci_sync.c
···
 				   LE_SCAN_FILTER_DUP_ENABLE);
 }
 
-static int le_scan_restart_sync(struct hci_dev *hdev, void *data)
-{
-	return hci_le_scan_restart_sync(hdev);
-}
-
 static void le_scan_restart(struct work_struct *work)
 {
 	struct hci_dev *hdev = container_of(work, struct hci_dev,
···
 
 	bt_dev_dbg(hdev, "");
 
-	hci_dev_lock(hdev);
-
-	status = hci_cmd_sync_queue(hdev, le_scan_restart_sync, NULL, NULL);
+	status = hci_le_scan_restart_sync(hdev);
 	if (status) {
 		bt_dev_err(hdev, "failed to restart LE scan: status %d",
 			   status);
-		goto unlock;
+		return;
 	}
+
+	hci_dev_lock(hdev);
 
 	if (!test_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks) ||
 	    !hdev->discovery.scan_start)
···
 	memset(hdev->eir, 0, sizeof(hdev->eir));
 	memset(hdev->dev_class, 0, sizeof(hdev->dev_class));
 	bacpy(&hdev->random_addr, BDADDR_ANY);
+	hci_codec_list_clear(&hdev->local_codecs);
 
 	hci_dev_put(hdev);
 	return err;
+6 -3
net/bluetooth/iso.c
···
 }
 
 /* -------- Socket interface ---------- */
-static struct sock *__iso_get_sock_listen_by_addr(bdaddr_t *ba)
+static struct sock *__iso_get_sock_listen_by_addr(bdaddr_t *src, bdaddr_t *dst)
 {
 	struct sock *sk;
 
···
 		if (sk->sk_state != BT_LISTEN)
 			continue;
 
-		if (!bacmp(&iso_pi(sk)->src, ba))
+		if (bacmp(&iso_pi(sk)->dst, dst))
+			continue;
+
+		if (!bacmp(&iso_pi(sk)->src, src))
 			return sk;
 	}
 
···
 
 	write_lock(&iso_sk_list.lock);
 
-	if (__iso_get_sock_listen_by_addr(&iso_pi(sk)->src))
+	if (__iso_get_sock_listen_by_addr(&iso_pi(sk)->src, &iso_pi(sk)->dst))
 		err = -EADDRINUSE;
 
 	write_unlock(&iso_sk_list.lock);
+1 -1
net/bridge/br_netfilter_hooks.c
···
 		/* tell br_dev_xmit to continue with forwarding */
 		nf_bridge->bridged_dnat = 1;
 		/* FIXME Need to refragment */
-		ret = neigh->output(neigh, skb);
+		ret = READ_ONCE(neigh->output)(neigh, skb);
 	}
 	neigh_release(neigh);
 	return ret;
+8 -6
net/core/neighbour.c
···
 	 */
 	__skb_queue_purge(&n->arp_queue);
 	n->arp_queue_len_bytes = 0;
-	n->output = neigh_blackhole;
+	WRITE_ONCE(n->output, neigh_blackhole);
 	if (n->nud_state & NUD_VALID)
 		n->nud_state = NUD_NOARP;
 	else
···
 {
 	neigh_dbg(2, "neigh %p is suspected\n", neigh);
 
-	neigh->output = neigh->ops->output;
+	WRITE_ONCE(neigh->output, neigh->ops->output);
 }
 
 /* Neighbour state is OK;
···
 {
 	neigh_dbg(2, "neigh %p is connected\n", neigh);
 
-	neigh->output = neigh->ops->connected_output;
+	WRITE_ONCE(neigh->output, neigh->ops->connected_output);
 }
 
 static void neigh_periodic_work(struct work_struct *work)
···
 			    (state == NUD_FAILED ||
 			     !time_in_range_open(jiffies, n->used,
 						 n->used + NEIGH_VAR(n->parms, GC_STALETIME)))) {
-				*np = n->next;
+				rcu_assign_pointer(*np,
+					rcu_dereference_protected(n->next,
+						lockdep_is_held(&tbl->lock)));
 				neigh_mark_dead(n);
 				write_unlock(&n->lock);
 				neigh_cleanup_and_release(n);
···
 		if (n2)
 			n1 = n2;
 	}
-	n1->output(n1, skb);
+	READ_ONCE(n1->output)(n1, skb);
 	if (n2)
 		neigh_release(n2);
 	rcu_read_unlock();
···
 			rcu_read_unlock();
 			goto out_kfree_skb;
 		}
-		err = neigh->output(neigh, skb);
+		err = READ_ONCE(neigh->output)(neigh, skb);
 		rcu_read_unlock();
 	}
 	else if (index == NEIGH_LINK_TABLE) {
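Every load and store of neigh->output is now annotated because the pointer can be rewritten while another CPU is calling through it. In portable userspace C the closest analogue to READ_ONCE()/WRITE_ONCE() on a function pointer is a relaxed C11 atomic; a small sketch of the pattern (an analogue, not the kernel macros themselves):

    #include <stdatomic.h>
    #include <stdio.h>

    typedef int (*output_fn)(int pkt);

    static int blackhole(int pkt) { (void)pkt; return -1; }
    static int forward(int pkt)   { return pkt; }

    /* The function pointer may be swapped by another thread, so every
     * access goes through an atomic; plain loads and stores would be a
     * data race, which is what READ_ONCE()/WRITE_ONCE() annotate.
     */
    static _Atomic(output_fn) output = forward;

    int main(void)
    {
            output_fn fn = atomic_load_explicit(&output, memory_order_relaxed);
            printf("out=%d\n", fn(42));
            atomic_store_explicit(&output, blackhole, memory_order_relaxed);
            fn = atomic_load_explicit(&output, memory_order_relaxed);
            printf("out=%d\n", fn(42));
            return 0;
    }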
+4
net/core/sock_map.c
···
 	sk = __sock_map_lookup_elem(map, key);
 	if (unlikely(!sk || !sock_map_redirect_allowed(sk)))
 		return SK_DROP;
+	if (!(flags & BPF_F_INGRESS) && !sk_is_tcp(sk))
+		return SK_DROP;
 
 	msg->flags = flags;
 	msg->sk_redir = sk;
···
 
 	sk = __sock_hash_lookup_elem(map, key);
 	if (unlikely(!sk || !sock_map_redirect_allowed(sk)))
+		return SK_DROP;
+	if (!(flags & BPF_F_INGRESS) && !sk_is_tcp(sk))
 		return SK_DROP;
 
 	msg->flags = flags;
+29 -16
net/ethtool/plca.c
···
 #define PLCA_REPDATA(__reply_base) \
 	container_of(__reply_base, struct plca_reply_data, base)
 
-static void plca_update_sint(int *dst, const struct nlattr *attr,
-			     bool *mod)
-{
-	if (!attr)
-		return;
-
-	*dst = nla_get_u32(attr);
-	*mod = true;
-}
-
 // PLCA get configuration message ------------------------------------------- //
 
 const struct nla_policy ethnl_plca_get_cfg_policy[] = {
 	[ETHTOOL_A_PLCA_HEADER]		=
 		NLA_POLICY_NESTED(ethnl_header_policy),
 };
+
+static void plca_update_sint(int *dst, struct nlattr **tb, u32 attrid,
+			     bool *mod)
+{
+	const struct nlattr *attr = tb[attrid];
+
+	if (!attr ||
+	    WARN_ON_ONCE(attrid >= ARRAY_SIZE(ethnl_plca_set_cfg_policy)))
+		return;
+
+	switch (ethnl_plca_set_cfg_policy[attrid].type) {
+	case NLA_U8:
+		*dst = nla_get_u8(attr);
+		break;
+	case NLA_U32:
+		*dst = nla_get_u32(attr);
+		break;
+	default:
+		WARN_ON_ONCE(1);
+	}
+
+	*mod = true;
+}
···
 		return -EOPNOTSUPP;
 
 	memset(&plca_cfg, 0xff, sizeof(plca_cfg));
-	plca_update_sint(&plca_cfg.enabled, tb[ETHTOOL_A_PLCA_ENABLED], &mod);
-	plca_update_sint(&plca_cfg.node_id, tb[ETHTOOL_A_PLCA_NODE_ID], &mod);
-	plca_update_sint(&plca_cfg.node_cnt, tb[ETHTOOL_A_PLCA_NODE_CNT], &mod);
-	plca_update_sint(&plca_cfg.to_tmr, tb[ETHTOOL_A_PLCA_TO_TMR], &mod);
-	plca_update_sint(&plca_cfg.burst_cnt, tb[ETHTOOL_A_PLCA_BURST_CNT],
+	plca_update_sint(&plca_cfg.enabled, tb, ETHTOOL_A_PLCA_ENABLED, &mod);
+	plca_update_sint(&plca_cfg.node_id, tb, ETHTOOL_A_PLCA_NODE_ID, &mod);
+	plca_update_sint(&plca_cfg.node_cnt, tb, ETHTOOL_A_PLCA_NODE_CNT, &mod);
+	plca_update_sint(&plca_cfg.to_tmr, tb, ETHTOOL_A_PLCA_TO_TMR, &mod);
+	plca_update_sint(&plca_cfg.burst_cnt, tb, ETHTOOL_A_PLCA_BURST_CNT,
 			 &mod);
-	plca_update_sint(&plca_cfg.burst_tmr, tb[ETHTOOL_A_PLCA_BURST_TMR],
+	plca_update_sint(&plca_cfg.burst_tmr, tb, ETHTOOL_A_PLCA_BURST_TMR,
 			 &mod);
 	if (!mod)
 		return 0;
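The rewritten helper reads each attribute at the width its netlink policy declares instead of unconditionally calling nla_get_u32(), which over-read NLA_U8 attributes. A self-contained illustration of the failure mode, using a hypothetical one-byte payload followed by unrelated bytes:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical attribute: the sender put exactly one byte on the wire;
     * the bytes after it belong to something else entirely.
     */
    static const unsigned char payload[] = { 0x01, 0xde, 0xad, 0xbe };

    static uint32_t get_u32(const unsigned char *p)
    {
            uint32_t v;

            memcpy(&v, p, 4);       /* reads 3 bytes too many */
            return v;
    }

    static uint8_t get_u8(const unsigned char *p) { return p[0]; }

    int main(void)
    {
            /* Reading 4 bytes where 1 was meant pulls in unrelated data. */
            printf("as u32: 0x%08x\n", get_u32(payload));
            /* Width taken from the declared policy type: correct. */
            printf("as u8:  0x%02x\n", get_u8(payload));
            return 0;
    }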
+1
net/ipv4/fib_semantics.c
···
 			continue;
 		if (fi->fib_prefsrc == local) {
 			fi->fib_flags |= RTNH_F_DEAD;
+			fi->pfsrc_removed = true;
 			ret++;
 		}
 	}
+4
net/ipv4/fib_trie.c
···
 int fib_table_flush(struct net *net, struct fib_table *tb, bool flush_all)
 {
 	struct trie *t = (struct trie *)tb->tb_data;
+	struct nl_info info = { .nl_net = net };
 	struct key_vector *pn = t->kv;
 	unsigned long cindex = 1;
 	struct hlist_node *tmp;
···
 
 			fib_notify_alias_delete(net, n->key, &n->leaf, fa,
 						NULL);
+			if (fi->pfsrc_removed)
+				rtmsg_fib(RTM_DELROUTE, htonl(n->key), fa,
+					  KEYLENGTH - fa->fa_slen, tb->tb_id, &info, 0);
 			hlist_del_rcu(&fa->fa_list);
 			fib_release_info(fa->fa_info);
 			alias_free_mem_rcu(fa);
+2
net/ipv4/route.c
···
 			    fa->fa_type == fri.type) {
 				fri.offload = READ_ONCE(fa->offload);
 				fri.trap = READ_ONCE(fa->trap);
+				fri.offload_failed =
+					READ_ONCE(fa->offload_failed);
 				break;
 			}
 		}
+2 -8
net/ipv4/tcp.c
···
 
 int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 {
-	struct tcp_sock *tp = tcp_sk(sk);
-	u32 seq = tp->copied_seq;
 	struct sk_buff *skb;
 	int copied = 0;
-	u32 offset;
 
 	if (sk->sk_state == TCP_LISTEN)
 		return -ENOTCONN;
 
-	while ((skb = tcp_recv_skb(sk, seq, &offset)) != NULL) {
+	while ((skb = skb_peek(&sk->sk_receive_queue)) != NULL) {
 		u8 tcp_flags;
 		int used;
 
···
 			copied = used;
 			break;
 		}
-		seq += used;
 		copied += used;
 
-		if (tcp_flags & TCPHDR_FIN) {
-			++seq;
+		if (tcp_flags & TCPHDR_FIN)
 			break;
-		}
 	}
 	return copied;
 }
+3 -1
net/ipv4/tcp_bpf.c
···
 			     int *addr_len)
 {
 	struct tcp_sock *tcp = tcp_sk(sk);
+	int peek = flags & MSG_PEEK;
 	u32 seq = tcp->copied_seq;
 	struct sk_psock *psock;
 	int copied = 0;
···
 		copied = -EAGAIN;
 	}
 out:
-	WRITE_ONCE(tcp->copied_seq, seq);
+	if (!peek)
+		WRITE_ONCE(tcp->copied_seq, seq);
 	tcp_rcv_space_adjust(sk);
 	if (copied > 0)
 		__tcp_cleanup_rbuf(sk, copied);
+13
net/ipv4/tcp_input.c
···
 		if (unlikely(len > icsk->icsk_ack.rcv_mss +
 				   MAX_TCP_OPTION_SPACE))
 			tcp_gro_dev_warn(sk, skb, len);
+		/* If the skb has a len of exactly 1*MSS and has the PSH bit
+		 * set then it is likely the end of an application write. So
+		 * more data may not be arriving soon, and yet the data sender
+		 * may be waiting for an ACK if cwnd-bound or using TX zero
+		 * copy. So we set ICSK_ACK_PUSHED here so that
+		 * tcp_cleanup_rbuf() will send an ACK immediately if the app
+		 * reads all of the data and is not ping-pong. If len > MSS
+		 * then this logic does not matter (and does not hurt) because
+		 * tcp_cleanup_rbuf() will always ACK immediately if the app
+		 * reads data and there is more than an MSS of unACKed data.
+		 */
+		if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_PSH)
+			icsk->icsk_ack.pending |= ICSK_ACK_PUSHED;
 	} else {
 		/* Otherwise, we make more careful check taking into account,
 		 * that SACKs block is variable.
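The comment spells out the heuristic behind this delayed-ACK fix. Reduced to a pure function it is roughly the following (a hypothetical distillation for illustration, not kernel code):

    #include <stdbool.h>
    #include <stdio.h>

    /* Mark the ACK as "pushed" when a segment of one MSS carries PSH,
     * since that likely ends an application write and the sender may be
     * stalled waiting for the ACK.
     */
    static bool ack_should_be_pushed(unsigned int seg_len, unsigned int mss,
                                     bool psh)
    {
            return seg_len == mss && psh;
    }

    int main(void)
    {
            printf("%d\n", ack_should_be_pushed(1448, 1448, true));  /* 1 */
            printf("%d\n", ack_should_be_pushed(1448, 1448, false)); /* 0 */
            printf("%d\n", ack_should_be_pushed(100, 1448, true));   /* 0 */
            return 0;
    }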
+3 -4
net/ipv4/tcp_output.c
···
 }
 
 /* Account for an ACK we sent. */
-static inline void tcp_event_ack_sent(struct sock *sk, unsigned int pkts,
-				      u32 rcv_nxt)
+static inline void tcp_event_ack_sent(struct sock *sk, u32 rcv_nxt)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 
···
 
 	if (unlikely(rcv_nxt != tp->rcv_nxt))
 		return;  /* Special ACK sent by DCTCP to reflect ECN */
-	tcp_dec_quickack_mode(sk, pkts);
+	tcp_dec_quickack_mode(sk);
 	inet_csk_clear_xmit_timer(sk, ICSK_TIME_DACK);
 }
 
···
 			       sk, skb);
 
 	if (likely(tcb->tcp_flags & TCPHDR_ACK))
-		tcp_event_ack_sent(sk, tcp_skb_pcount(skb), rcv_nxt);
+		tcp_event_ack_sent(sk, rcv_nxt);
 
 	if (skb->len != tcp_header_size) {
 		tcp_event_data_sent(tp, sk);
+7 -3
net/ipv6/tcp_ipv6.c
···
 		struct sock *nsk;
 
 		sk = req->rsk_listener;
-		drop_reason = tcp_inbound_md5_hash(sk, skb,
-						   &hdr->saddr, &hdr->daddr,
-						   AF_INET6, dif, sdif);
+		if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb))
+			drop_reason = SKB_DROP_REASON_XFRM_POLICY;
+		else
+			drop_reason = tcp_inbound_md5_hash(sk, skb,
+							   &hdr->saddr, &hdr->daddr,
+							   AF_INET6, dif, sdif);
 		if (drop_reason) {
 			sk_drops_add(sk, skb);
 			reqsk_put(req);
···
 			}
 			goto discard_and_relse;
 		}
+		nf_reset_ct(skb);
 		if (nsk == sk) {
 			reqsk_put(req);
 			tcp_v6_restore_cb(skb);
+1 -1
net/l2tp/l2tp_ip6.c
···
 	 */
 	if (len > INT_MAX - transhdrlen)
 		return -EMSGSIZE;
-	ulen = len + transhdrlen;
 
 	/* Mirror BSD error message compatibility */
 	if (msg->msg_flags & MSG_OOB)
···
 
 back_from_confirm:
 	lock_sock(sk);
+	ulen = len + (skb_queue_empty(&sk->sk_write_queue) ? transhdrlen : 0);
 	err = ip6_append_data(sk, ip_generic_getfrag, msg,
 			      ulen, transhdrlen, &ipc6,
 			      &fl6, (struct rt6_info *)dst,
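One subtlety in the ulen line above: in C the conditional operator binds more loosely than +, so the parentheses around the skb_queue_empty() test are load-bearing. Without them the expression groups as (len + empty) ? transhdrlen : 0. A two-minute check:

    #include <stdio.h>

    int main(void)
    {
            int len = 100, empty = 1, transhdrlen = 8;

            /* Without parentheses: (len + empty) ? transhdrlen : 0 */
            int wrong = len + empty ? transhdrlen : 0;
            /* Intended grouping: add transhdrlen only for the first chunk */
            int right = len + (empty ? transhdrlen : 0);

            printf("wrong=%d right=%d\n", wrong, right); /* wrong=8 right=108 */
            return 0;
    }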
+5 -1
net/mac80211/cfg.c
···
 	}
 
 	err = ieee80211_key_link(key, link, sta);
+	/* KRACK protection, shouldn't happen but just silently accept key */
+	if (err == -EALREADY)
+		err = 0;
 
  out_unlock:
 	mutex_unlock(&local->sta_mtx);
···
 	/* VHT can override some HT caps such as the A-MSDU max length */
 	if (params->vht_capa)
 		ieee80211_vht_cap_ie_to_sta_vht_cap(sdata, sband,
-						    params->vht_capa, link_sta);
+						    params->vht_capa, NULL,
+						    link_sta);
 
 	if (params->he_capa)
 		ieee80211_he_cap_ie_to_sta_he_cap(sdata, sband,
+1 -1
net/mac80211/ibss.c
···
 						   &chandef);
 		memcpy(&cap_ie, elems->vht_cap_elem, sizeof(cap_ie));
 		ieee80211_vht_cap_ie_to_sta_vht_cap(sdata, sband,
-						    &cap_ie,
+						    &cap_ie, NULL,
 						    &sta->deflink);
 		if (memcmp(&cap, &sta->sta.deflink.vht_cap, sizeof(cap)))
 			rates_updated |= true;
+2 -1
net/mac80211/ieee80211_i.h
···
 	struct timer_list mesh_path_root_timer;
 
 	unsigned long wrkq_flags;
-	unsigned long mbss_changed;
+	unsigned long mbss_changed[64 / BITS_PER_LONG];
 
 	bool userspace_handles_dfs;
 
···
 ieee80211_vht_cap_ie_to_sta_vht_cap(struct ieee80211_sub_if_data *sdata,
 				    struct ieee80211_supported_band *sband,
 				    const struct ieee80211_vht_cap *vht_cap_ie,
+				    const struct ieee80211_vht_cap *vht_cap_ie2,
 				    struct link_sta_info *link_sta);
 enum ieee80211_sta_rx_bandwidth
 ieee80211_sta_cap_rx_bw(struct link_sta_info *link_sta);
+16 -6
net/mac80211/key.c
···
 
 void ieee80211_key_free_unused(struct ieee80211_key *key)
 {
+	if (!key)
+		return;
+
 	WARN_ON(key->sdata || key->local);
 	ieee80211_key_free_common(key);
 }
···
 	 * can cause warnings to appear.
 	 */
 	bool delay_tailroom = sdata->vif.type == NL80211_IFTYPE_STATION;
-	int ret = -EOPNOTSUPP;
+	int ret;
 
 	mutex_lock(&sdata->local->key_mtx);
 
···
 	 * the same cipher. Enforce the assumption for pairwise keys.
 	 */
 	if ((alt_key && alt_key->conf.cipher != key->conf.cipher) ||
-	    (old_key && old_key->conf.cipher != key->conf.cipher))
+	    (old_key && old_key->conf.cipher != key->conf.cipher)) {
+		ret = -EOPNOTSUPP;
 		goto out;
+	}
 	} else if (sta) {
 		struct link_sta_info *link_sta = &sta->deflink;
 		int link_id = key->conf.link_id;
···
 
 	/* Non-pairwise keys must also not switch the cipher on rekey */
 	if (!pairwise) {
-		if (old_key && old_key->conf.cipher != key->conf.cipher)
+		if (old_key && old_key->conf.cipher != key->conf.cipher) {
+			ret = -EOPNOTSUPP;
 			goto out;
+		}
 	}
 
 	/*
···
 	 * new version of the key to avoid nonce reuse or replay issues.
 	 */
 	if (ieee80211_key_identical(sdata, old_key, key)) {
-		ieee80211_key_free_unused(key);
-		ret = 0;
-		goto out;
+		ret = -EALREADY;
+		goto unlock;
 	}
 
 	key->local = sdata->local;
···
 		ieee80211_key_free(key, delay_tailroom);
 	}
 
+	key = NULL;
+
  out:
+	ieee80211_key_free_unused(key);
+ unlock:
 	mutex_unlock(&sdata->local->key_mtx);
 
 	return ret;
+4 -4
net/mac80211/mesh.c
···
 
 	/* if we race with running work, worst case this work becomes a noop */
 	for_each_set_bit(bit, &bits, sizeof(changed) * BITS_PER_BYTE)
-		set_bit(bit, &ifmsh->mbss_changed);
+		set_bit(bit, ifmsh->mbss_changed);
 	set_bit(MESH_WORK_MBSS_CHANGED, &ifmsh->wrkq_flags);
 	wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
 }
···
 
 	/* clear any mesh work (for next join) we may have accrued */
 	ifmsh->wrkq_flags = 0;
-	ifmsh->mbss_changed = 0;
+	memset(ifmsh->mbss_changed, 0, sizeof(ifmsh->mbss_changed));
 
 	local->fif_other_bss--;
 	atomic_dec(&local->iff_allmultis);
···
 	u32 bit;
 	u64 changed = 0;
 
-	for_each_set_bit(bit, &ifmsh->mbss_changed,
+	for_each_set_bit(bit, ifmsh->mbss_changed,
 			 sizeof(changed) * BITS_PER_BYTE) {
-		clear_bit(bit, &ifmsh->mbss_changed);
+		clear_bit(bit, ifmsh->mbss_changed);
 		changed |= BIT(bit);
 	}
 
+1 -1
net/mac80211/mesh_plink.c
···
 		changed |= IEEE80211_RC_BW_CHANGED;
 
 	ieee80211_vht_cap_ie_to_sta_vht_cap(sdata, sband,
-					    elems->vht_cap_elem,
+					    elems->vht_cap_elem, NULL,
 					    &sta->deflink);
 
 	ieee80211_he_cap_ie_to_sta_he_cap(sdata, sband, elems->he_cap,
+35 -10
net/mac80211/mlme.c
···
 					  elems->ht_cap_elem,
 					  link_sta);
 
-	if (elems->vht_cap_elem && !(link->u.mgd.conn_flags & IEEE80211_CONN_DISABLE_VHT))
+	if (elems->vht_cap_elem &&
+	    !(link->u.mgd.conn_flags & IEEE80211_CONN_DISABLE_VHT)) {
+		const struct ieee80211_vht_cap *bss_vht_cap = NULL;
+		const struct cfg80211_bss_ies *ies;
+
+		/*
+		 * Cisco AP module 9115 with FW 17.3 has a bug and sends a
+		 * too large maximum MPDU length in the association response
+		 * (indicating 12k) that it cannot actually process ...
+		 * Work around that.
+		 */
+		rcu_read_lock();
+		ies = rcu_dereference(cbss->ies);
+		if (ies) {
+			const struct element *elem;
+
+			elem = cfg80211_find_elem(WLAN_EID_VHT_CAPABILITY,
+						  ies->data, ies->len);
+			if (elem && elem->datalen >= sizeof(*bss_vht_cap))
+				bss_vht_cap = (const void *)elem->data;
+		}
+
 		ieee80211_vht_cap_ie_to_sta_vht_cap(sdata, sband,
 						    elems->vht_cap_elem,
-						    link_sta);
+						    bss_vht_cap, link_sta);
+		rcu_read_unlock();
+	}
 
 	if (elems->he_operation && !(link->u.mgd.conn_flags & IEEE80211_CONN_DISABLE_HE) &&
 	    elems->he_cap) {
···
 			continue;
 
 		valid_links |= BIT(link_id);
-		if (assoc_data->link[link_id].disabled) {
+		if (assoc_data->link[link_id].disabled)
 			dormant_links |= BIT(link_id);
-		} else if (link_id != assoc_data->assoc_link_id) {
+
+		if (link_id != assoc_data->assoc_link_id) {
 			err = ieee80211_sta_allocate_link(sta, link_id);
 			if (err)
 				goto out_err;
···
 		struct ieee80211_link_data *link;
 		struct link_sta_info *link_sta;
 
-		if (!cbss || assoc_data->link[link_id].disabled)
+		if (!cbss)
 			continue;
 
 		link = sdata_dereference(sdata->link[link_id], sdata);
···
 	for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++) {
 		struct ieee80211_link_data *link;
 
-		link = sdata_dereference(sdata->link[link_id], sdata);
-		if (!link)
-			continue;
-
 		if (!assoc_data->link[link_id].bss)
 			continue;
 
 		resp.links[link_id].bss = assoc_data->link[link_id].bss;
-		resp.links[link_id].addr = link->conf->addr;
+		ether_addr_copy(resp.links[link_id].addr,
+				assoc_data->link[link_id].addr);
 		resp.links[link_id].status = assoc_data->link[link_id].status;
+
+		link = sdata_dereference(sdata->link[link_id], sdata);
+		if (!link)
+			continue;
 
 		/* get uapsd queues configuration - same for all links */
 		resp.uapsd_queues = 0;
+2 -1
net/mac80211/tx.c
···
 	}
 
 	if (unlikely(tx->key && tx->key->flags & KEY_FLAG_TAINTED &&
-		     !ieee80211_is_deauth(hdr->frame_control)))
+		     !ieee80211_is_deauth(hdr->frame_control)) &&
+	    tx->skb->protocol != tx->sdata->control_port_protocol)
 		return TX_DROP;
 
 	if (!skip_hw && tx->key &&
+14 -2
net/mac80211/vht.c
···
  *
  * Portions of this file
  * Copyright(c) 2015 - 2016 Intel Deutschland GmbH
- * Copyright (C) 2018 - 2022 Intel Corporation
+ * Copyright (C) 2018 - 2023 Intel Corporation
  */
 
 #include <linux/ieee80211.h>
···
 ieee80211_vht_cap_ie_to_sta_vht_cap(struct ieee80211_sub_if_data *sdata,
 				    struct ieee80211_supported_band *sband,
 				    const struct ieee80211_vht_cap *vht_cap_ie,
+				    const struct ieee80211_vht_cap *vht_cap_ie2,
 				    struct link_sta_info *link_sta)
 {
 	struct ieee80211_sta_vht_cap *vht_cap = &link_sta->pub->vht_cap;
 	struct ieee80211_sta_vht_cap own_cap;
 	u32 cap_info, i;
 	bool have_80mhz;
+	u32 mpdu_len;
 
 	memset(vht_cap, 0, sizeof(*vht_cap));
 
···
 	link_sta->pub->bandwidth = ieee80211_sta_cur_vht_bw(link_sta);
 
 	/*
+	 * Work around the Cisco 9115 FW 17.3 bug by taking the min of
+	 * both reported MPDU lengths.
+	 */
+	mpdu_len = vht_cap->cap & IEEE80211_VHT_CAP_MAX_MPDU_MASK;
+	if (vht_cap_ie2)
+		mpdu_len = min_t(u32, mpdu_len,
+				 le32_get_bits(vht_cap_ie2->vht_cap_info,
+					       IEEE80211_VHT_CAP_MAX_MPDU_MASK));
+
+	/*
 	 * FIXME - should the amsdu len be per link? store per link
 	 * and maintain a minimum?
 	 */
-	switch (vht_cap->cap & IEEE80211_VHT_CAP_MAX_MPDU_MASK) {
+	switch (mpdu_len) {
 	case IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454:
 		link_sta->pub->agg.max_amsdu_len = IEEE80211_MAX_MPDU_LEN_VHT_11454;
 		break;
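The workaround trusts the smaller of the two MPDU-length advertisements: the one from the association response and the one from the beacon/probe-response IEs. As a pure-function sketch (field layout simplified; only the 2-bit max-MPDU code is modelled, with the standard's encoding noted in the comment):

    #include <stdint.h>
    #include <stdio.h>

    /* 2-bit max-MPDU field of the VHT cap info; per 802.11 the codes mean
     * 0 = 3895, 1 = 7991, 2 = 11454 octets.
     */
    #define VHT_CAP_MAX_MPDU_MASK 0x3u

    static uint32_t effective_mpdu_code(uint32_t assoc_cap, uint32_t beacon_cap)
    {
            uint32_t a = assoc_cap & VHT_CAP_MAX_MPDU_MASK;
            uint32_t b = beacon_cap & VHT_CAP_MAX_MPDU_MASK;

            /* Trust the smaller of the two advertisements. */
            return a < b ? a : b;
    }

    int main(void)
    {
            /* A buggy AP claims 11454 (code 2) in the assoc response but
             * only 3895 (code 0) in the beacon: use code 0.
             */
            printf("code=%u\n", effective_mpdu_code(2, 0));
            return 0;
    }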
-6
net/mptcp/pm_userspace.c
···
 		goto create_err;
 	}
 
-	if (addr_l.id == 0) {
-		NL_SET_ERR_MSG_ATTR(info->extack, laddr, "missing local addr id");
-		err = -EINVAL;
-		goto create_err;
-	}
-
 	err = mptcp_pm_parse_addr(raddr, info, &addr_r);
 	if (err < 0) {
 		NL_SET_ERR_MSG_ATTR(info->extack, raddr, "error parsing remote addr");
+14 -14
net/mptcp/protocol.c
···
 	sk_reset_timer(ssk, &icsk->icsk_delack_timer, timeout);
 }
 
-void mptcp_subflow_process_delegated(struct sock *ssk)
+void mptcp_subflow_process_delegated(struct sock *ssk, long status)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	struct sock *sk = subflow->conn;
 
-	if (test_bit(MPTCP_DELEGATE_SEND, &subflow->delegated_status)) {
+	if (status & BIT(MPTCP_DELEGATE_SEND)) {
 		mptcp_data_lock(sk);
 		if (!sock_owned_by_user(sk))
 			__mptcp_subflow_push_pending(sk, ssk, true);
 		else
 			__set_bit(MPTCP_PUSH_PENDING, &mptcp_sk(sk)->cb_flags);
 		mptcp_data_unlock(sk);
-		mptcp_subflow_delegated_done(subflow, MPTCP_DELEGATE_SEND);
 	}
-	if (test_bit(MPTCP_DELEGATE_ACK, &subflow->delegated_status)) {
+	if (status & BIT(MPTCP_DELEGATE_ACK))
 		schedule_3rdack_retransmission(ssk);
-		mptcp_subflow_delegated_done(subflow, MPTCP_DELEGATE_ACK);
-	}
 }
 
 static int mptcp_hash(struct sock *sk)
···
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
 
 		bh_lock_sock_nested(ssk);
-		if (!sock_owned_by_user(ssk) &&
-		    mptcp_subflow_has_delegated_action(subflow))
-			mptcp_subflow_process_delegated(ssk);
-		/* ... elsewhere tcp_release_cb_override already processed
-		 * the action or will do at next release_sock().
-		 * In both case must dequeue the subflow here - on the same
-		 * CPU that scheduled it.
-		 */
+		if (!sock_owned_by_user(ssk)) {
+			mptcp_subflow_process_delegated(ssk, xchg(&subflow->delegated_status, 0));
+		} else {
+			/* tcp_release_cb_override already processed
+			 * the action or will do at next release_sock().
+			 * In both case must dequeue the subflow here - on the same
+			 * CPU that scheduled it.
+			 */
+			smp_wmb();
+			clear_bit(MPTCP_DELEGATE_SCHEDULED, &subflow->delegated_status);
+		}
 		bh_unlock_sock(ssk);
 		sock_put(ssk);
 
+12 -23
net/mptcp/protocol.h
···
 
 DECLARE_PER_CPU(struct mptcp_delegated_action, mptcp_delegated_actions);
 
-#define MPTCP_DELEGATE_SEND		0
-#define MPTCP_DELEGATE_ACK		1
+#define MPTCP_DELEGATE_SCHEDULED	0
+#define MPTCP_DELEGATE_SEND		1
+#define MPTCP_DELEGATE_ACK		2
 
+#define MPTCP_DELEGATE_ACTIONS_MASK	(~BIT(MPTCP_DELEGATE_SCHEDULED))
 /* MPTCP subflow context */
 struct mptcp_subflow_context {
 	struct list_head node;/* conn_list of subflows */
···
 	return subflow->map_seq + mptcp_subflow_get_map_offset(subflow);
 }
 
-void mptcp_subflow_process_delegated(struct sock *ssk);
+void mptcp_subflow_process_delegated(struct sock *ssk, long actions);
 
 static inline void mptcp_subflow_delegate(struct mptcp_subflow_context *subflow, int action)
 {
+	long old, set_bits = BIT(MPTCP_DELEGATE_SCHEDULED) | BIT(action);
 	struct mptcp_delegated_action *delegated;
 	bool schedule;
 
 	/* the caller held the subflow bh socket lock */
 	lockdep_assert_in_softirq();
 
-	/* The implied barrier pairs with mptcp_subflow_delegated_done(), and
-	 * ensures the below list check sees list updates done prior to status
-	 * bit changes
+	/* The implied barrier pairs with tcp_release_cb_override()
+	 * mptcp_napi_poll(), and ensures the below list check sees list
+	 * updates done prior to delegated status bits changes
 	 */
-	if (!test_and_set_bit(action, &subflow->delegated_status)) {
-		/* still on delegated list from previous scheduling */
-		if (!list_empty(&subflow->delegated_node))
+	old = set_mask_bits(&subflow->delegated_status, 0, set_bits);
+	if (!(old & BIT(MPTCP_DELEGATE_SCHEDULED))) {
+		if (WARN_ON_ONCE(!list_empty(&subflow->delegated_node)))
 			return;
 
 		delegated = this_cpu_ptr(&mptcp_delegated_actions);
···
 	ret = list_first_entry(&delegated->head, struct mptcp_subflow_context, delegated_node);
 	list_del_init(&ret->delegated_node);
 	return ret;
-}
-
-static inline bool mptcp_subflow_has_delegated_action(const struct mptcp_subflow_context *subflow)
-{
-	return !!READ_ONCE(subflow->delegated_status);
-}
-
-static inline void mptcp_subflow_delegated_done(struct mptcp_subflow_context *subflow, int action)
-{
-	/* pairs with mptcp_subflow_delegate, ensures delegate_node is updated before
-	 * touching the status bit
-	 */
-	smp_wmb();
-	clear_bit(action, &subflow->delegated_status);
 }
 
 int mptcp_is_enabled(const struct net *net);
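The reworked scheme packs one SCHEDULED bit plus per-action bits into a single word: a scheduler atomically ORs in its action together with SCHEDULED and only enqueues the subflow if SCHEDULED was previously clear, while the consumer harvests all pending action bits in one atomic swap. A C11 analogue of that claim-once pattern (names hypothetical):

    #include <stdatomic.h>
    #include <stdio.h>

    #define BIT(n) (1ul << (n))
    #define F_SCHEDULED BIT(0)
    #define F_SEND      BIT(1)
    #define F_ACK       BIT(2)

    static atomic_ulong status;

    /* Returns 1 if the caller is the one that must enqueue the object. */
    static int delegate(unsigned long action)
    {
            unsigned long old = atomic_fetch_or(&status, F_SCHEDULED | action);

            return !(old & F_SCHEDULED);
    }

    /* Consumer: take all pending bits in one shot. */
    static unsigned long harvest(void)
    {
            return atomic_exchange(&status, 0);
    }

    int main(void)
    {
            printf("enqueue: %d\n", delegate(F_SEND)); /* 1: first scheduler */
            printf("enqueue: %d\n", delegate(F_ACK));  /* 0: already queued */
            printf("pending: %#lx\n", harvest());      /* SCHEDULED|SEND|ACK */
            return 0;
    }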
+8 -2
net/mptcp/subflow.c
···
 static void tcp_release_cb_override(struct sock *ssk)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+	long status;
 
-	if (mptcp_subflow_has_delegated_action(subflow))
-		mptcp_subflow_process_delegated(ssk);
+	/* process and clear all the pending actions, but leave the subflow into
+	 * the napi queue. To respect locking, only the same CPU that originated
+	 * the action can touch the list. mptcp_napi_poll will take care of it.
+	 */
+	status = set_mask_bits(&subflow->delegated_status, MPTCP_DELEGATE_ACTIONS_MASK, 0);
+	if (status)
+		mptcp_subflow_process_delegated(ssk, status);
 
 	tcp_release_cb(ssk);
 }
+4 -4
net/netfilter/ipvs/ip_vs_sync.c
···
 	sin.sin_addr.s_addr  = addr;
 	sin.sin_port         = 0;
 
-	return sock->ops->bind(sock, (struct sockaddr*)&sin, sizeof(sin));
+	return kernel_bind(sock, (struct sockaddr *)&sin, sizeof(sin));
 }
 
 static void get_mcast_sockaddr(union ipvs_sockaddr *sa, int *salen,
···
 	}
 
 	get_mcast_sockaddr(&mcast_addr, &salen, &ipvs->mcfg, id);
-	result = sock->ops->connect(sock, (struct sockaddr *) &mcast_addr,
-				    salen, 0);
+	result = kernel_connect(sock, (struct sockaddr *)&mcast_addr,
+				salen, 0);
 	if (result < 0) {
 		pr_err("Error connecting to the multicast addr\n");
 		goto error;
···
 
 	get_mcast_sockaddr(&mcast_addr, &salen, &ipvs->bcfg, id);
 	sock->sk->sk_bound_dev_if = dev->ifindex;
-	result = sock->ops->bind(sock, (struct sockaddr *)&mcast_addr, salen);
+	result = kernel_bind(sock, (struct sockaddr *)&mcast_addr, salen);
 	if (result < 0) {
 		pr_err("Error binding to the multicast addr\n");
 		goto error;
+33 -10
net/netfilter/nf_conntrack_proto_sctp.c
···
 /* shutdown_ack */ {sSA, sCL, sCW, sCE, sES, sSA, sSA, sSA, sSA},
 /* error        */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},/* Can't have Stale cookie*/
 /* cookie_echo  */ {sCL, sCL, sCE, sCE, sES, sSS, sSR, sSA, sCL},/* 5.2.4 - Big TODO */
-/* cookie_ack   */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},/* Can't come in orig dir */
+/* cookie_ack   */ {sCL, sCL, sCW, sES, sES, sSS, sSR, sSA, sCL},/* Can't come in orig dir */
 /* shutdown_comp*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sCL, sCL},
 /* heartbeat    */ {sHS, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
 /* heartbeat_ack*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
···
 /* shutdown     */ {sIV, sCL, sCW, sCE, sSR, sSS, sSR, sSA, sIV},
 /* shutdown_ack */ {sIV, sCL, sCW, sCE, sES, sSA, sSA, sSA, sIV},
 /* error        */ {sIV, sCL, sCW, sCL, sES, sSS, sSR, sSA, sIV},
-/* cookie_echo  */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV},/* Can't come in reply dir */
+/* cookie_echo  */ {sIV, sCL, sCE, sCE, sES, sSS, sSR, sSA, sIV},/* Can't come in reply dir */
 /* cookie_ack   */ {sIV, sCL, sCW, sES, sES, sSS, sSR, sSA, sIV},
 /* shutdown_comp*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sCL, sIV},
 /* heartbeat    */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
···
 		/* (D) vtag must be same as init_vtag as found in INIT_ACK */
 		if (sh->vtag != ct->proto.sctp.vtag[dir])
 			goto out_unlock;
+	} else if (sch->type == SCTP_CID_COOKIE_ACK) {
+		ct->proto.sctp.init[dir] = 0;
+		ct->proto.sctp.init[!dir] = 0;
 	} else if (sch->type == SCTP_CID_HEARTBEAT) {
 		if (ct->proto.sctp.vtag[dir] == 0) {
 			pr_debug("Setting %d vtag %x for dir %d\n", sch->type, sh->vtag, dir);
···
 	}
 
 	/* If it is an INIT or an INIT ACK note down the vtag */
-	if (sch->type == SCTP_CID_INIT ||
-	    sch->type == SCTP_CID_INIT_ACK) {
-		struct sctp_inithdr _inithdr, *ih;
+	if (sch->type == SCTP_CID_INIT) {
+		struct sctp_inithdr _ih, *ih;
 
-		ih = skb_header_pointer(skb, offset + sizeof(_sch),
-					sizeof(_inithdr), &_inithdr);
-		if (ih == NULL)
+		ih = skb_header_pointer(skb, offset + sizeof(_sch), sizeof(*ih), &_ih);
+		if (!ih)
 			goto out_unlock;
-		pr_debug("Setting vtag %x for dir %d\n",
-			 ih->init_tag, !dir);
+
+		if (ct->proto.sctp.init[dir] && ct->proto.sctp.init[!dir])
+			ct->proto.sctp.init[!dir] = 0;
+		ct->proto.sctp.init[dir] = 1;
+
+		pr_debug("Setting vtag %x for dir %d\n", ih->init_tag, !dir);
 		ct->proto.sctp.vtag[!dir] = ih->init_tag;
 
 		/* don't renew timeout on init retransmit so
···
 		    old_state == SCTP_CONNTRACK_CLOSED &&
 		    nf_ct_is_confirmed(ct))
 			ignore = true;
+	} else if (sch->type == SCTP_CID_INIT_ACK) {
+		struct sctp_inithdr _ih, *ih;
+		__be32 vtag;
+
+		ih = skb_header_pointer(skb, offset + sizeof(_sch), sizeof(*ih), &_ih);
+		if (!ih)
+			goto out_unlock;
+
+		vtag = ct->proto.sctp.vtag[!dir];
+		if (!ct->proto.sctp.init[!dir] && vtag && vtag != ih->init_tag)
+			goto out_unlock;
+		/* collision */
+		if (ct->proto.sctp.init[dir] && ct->proto.sctp.init[!dir] &&
+		    vtag != ih->init_tag)
+			goto out_unlock;
+
+		pr_debug("Setting vtag %x for dir %d\n", ih->init_tag, !dir);
+		ct->proto.sctp.vtag[!dir] = ih->init_tag;
 	}
 
 	ct->proto.sctp.state = new_state;
+28 -16
net/netfilter/nf_tables_api.c
···
 	return nft_delobj(&ctx, obj);
 }
 
-void nft_obj_notify(struct net *net, const struct nft_table *table,
-		    struct nft_object *obj, u32 portid, u32 seq, int event,
-		    u16 flags, int family, int report, gfp_t gfp)
+static void
+__nft_obj_notify(struct net *net, const struct nft_table *table,
+		 struct nft_object *obj, u32 portid, u32 seq, int event,
+		 u16 flags, int family, int report, gfp_t gfp)
 {
 	struct nftables_pernet *nft_net = nft_pernet(net);
 	struct sk_buff *skb;
 	int err;
-	char *buf = kasprintf(gfp, "%s:%u",
-			      table->name, nft_net->base_seq);
-
-	audit_log_nfcfg(buf,
-			family,
-			obj->handle,
-			event == NFT_MSG_NEWOBJ ?
-				AUDIT_NFT_OP_OBJ_REGISTER :
-				AUDIT_NFT_OP_OBJ_UNREGISTER,
-			gfp);
-	kfree(buf);
 
 	if (!report &&
 	    !nfnetlink_has_listeners(net, NFNLGRP_NFTABLES))
···
 err:
 	nfnetlink_set_err(net, portid, NFNLGRP_NFTABLES, -ENOBUFS);
 }
+
+void nft_obj_notify(struct net *net, const struct nft_table *table,
+		    struct nft_object *obj, u32 portid, u32 seq, int event,
+		    u16 flags, int family, int report, gfp_t gfp)
+{
+	struct nftables_pernet *nft_net = nft_pernet(net);
+	char *buf = kasprintf(gfp, "%s:%u",
+			      table->name, nft_net->base_seq);
+
+	audit_log_nfcfg(buf,
+			family,
+			obj->handle,
+			event == NFT_MSG_NEWOBJ ?
+				AUDIT_NFT_OP_OBJ_REGISTER :
+				AUDIT_NFT_OP_OBJ_UNREGISTER,
+			gfp);
+	kfree(buf);
+
+	__nft_obj_notify(net, table, obj, portid, seq, event,
+			 flags, family, report, gfp);
+}
 EXPORT_SYMBOL_GPL(nft_obj_notify);
 
 static void nf_tables_obj_notify(const struct nft_ctx *ctx,
 				 struct nft_object *obj, int event)
 {
-	nft_obj_notify(ctx->net, ctx->table, obj, ctx->portid, ctx->seq, event,
-		       ctx->flags, ctx->family, ctx->report, GFP_KERNEL);
+	__nft_obj_notify(ctx->net, ctx->table, obj, ctx->portid,
+			 ctx->seq, event, ctx->flags, ctx->family,
+			 ctx->report, GFP_KERNEL);
 }
 
 /*
+12 -1
net/netfilter/nft_payload.c
···
 	return pkt->inneroff;
 }
 
+static bool nft_payload_need_vlan_copy(const struct nft_payload *priv)
+{
+	unsigned int len = priv->offset + priv->len;
+
+	/* data past ether src/dst requested, copy needed */
+	if (len > offsetof(struct ethhdr, h_proto))
+		return true;
+
+	return false;
+}
+
 void nft_payload_eval(const struct nft_expr *expr,
 		      struct nft_regs *regs,
 		      const struct nft_pktinfo *pkt)
···
 			goto err;
 
 		if (skb_vlan_tag_present(skb) &&
-		    priv->offset >= offsetof(struct ethhdr, h_proto)) {
+		    nft_payload_need_vlan_copy(priv)) {
 			if (!nft_payload_copy_vlan(dest, skb,
 						   priv->offset, priv->len))
 				goto err;
+29 -17
net/netfilter/nft_set_rbtree.c
···
 	rb_erase(&rbe->node, &priv->root);
 }
 
-static int nft_rbtree_gc_elem(const struct nft_set *__set,
-			      struct nft_rbtree *priv,
-			      struct nft_rbtree_elem *rbe,
-			      u8 genmask)
+static const struct nft_rbtree_elem *
+nft_rbtree_gc_elem(const struct nft_set *__set, struct nft_rbtree *priv,
+		   struct nft_rbtree_elem *rbe, u8 genmask)
 {
 	struct nft_set *set = (struct nft_set *)__set;
 	struct rb_node *prev = rb_prev(&rbe->node);
···
 
 	gc = nft_trans_gc_alloc(set, 0, GFP_ATOMIC);
 	if (!gc)
-		return -ENOMEM;
+		return ERR_PTR(-ENOMEM);
 
 	/* search for end interval coming before this element.
 	 * end intervals don't carry a timeout extension, they
···
 		prev = rb_prev(prev);
 	}
 
+	rbe_prev = NULL;
 	if (prev) {
 		rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
 		nft_rbtree_gc_remove(net, set, priv, rbe_prev);
···
 		 */
 		gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
 		if (WARN_ON_ONCE(!gc))
-			return -ENOMEM;
+			return ERR_PTR(-ENOMEM);
 
 		nft_trans_gc_elem_add(gc, rbe_prev);
 	}
···
 	nft_rbtree_gc_remove(net, set, priv, rbe);
 	gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
 	if (WARN_ON_ONCE(!gc))
-		return -ENOMEM;
+		return ERR_PTR(-ENOMEM);
 
 	nft_trans_gc_elem_add(gc, rbe);
 
 	nft_trans_gc_queue_sync_done(gc);
 
-	return 0;
+	return rbe_prev;
 }
 
 static bool nft_rbtree_update_first(const struct nft_set *set,
···
 	struct nft_rbtree *priv = nft_set_priv(set);
 	u8 cur_genmask = nft_genmask_cur(net);
 	u8 genmask = nft_genmask_next(net);
-	int d, err;
+	int d;
 
 	/* Descend the tree to search for an existing element greater than the
 	 * key value to insert that is greater than the new element. This is the
···
 		 */
 		if (nft_set_elem_expired(&rbe->ext) &&
 		    nft_set_elem_active(&rbe->ext, cur_genmask)) {
-			err = nft_rbtree_gc_elem(set, priv, rbe, genmask);
-			if (err < 0)
-				return err;
+			const struct nft_rbtree_elem *removed_end;
+
+			removed_end = nft_rbtree_gc_elem(set, priv, rbe, genmask);
+			if (IS_ERR(removed_end))
+				return PTR_ERR(removed_end);
+
+			if (removed_end == rbe_le || removed_end == rbe_ge)
+				return -EAGAIN;
 
 			continue;
 		}
···
 	struct nft_rbtree_elem *rbe = elem->priv;
 	int err;
 
-	write_lock_bh(&priv->lock);
-	write_seqcount_begin(&priv->count);
-	err = __nft_rbtree_insert(net, set, rbe, ext);
-	write_seqcount_end(&priv->count);
-	write_unlock_bh(&priv->lock);
+	do {
+		if (fatal_signal_pending(current))
+			return -EINTR;
+
+		cond_resched();
+
+		write_lock_bh(&priv->lock);
+		write_seqcount_begin(&priv->count);
+		err = __nft_rbtree_insert(net, set, rbe, ext);
+		write_seqcount_end(&priv->count);
+		write_unlock_bh(&priv->lock);
+	} while (err == -EAGAIN);
 
 	return err;
 }
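When the garbage collector frees an element the ongoing descent depended on, the insert path now reports -EAGAIN and the caller simply restarts, yielding the CPU and honouring fatal signals between attempts. The retry shape in isolation (toy operation, userspace sketch):

    #include <errno.h>
    #include <sched.h>
    #include <stdio.h>

    /* Hypothetical operation that needs one retry before it succeeds. */
    static int attempts;

    static int try_insert(void)
    {
            return ++attempts < 2 ? -EAGAIN : 0;
    }

    int main(void)
    {
            int err;

            do {
                    /* in the kernel: fatal_signal_pending() check plus
                     * cond_resched() before each retry
                     */
                    sched_yield();
                    err = try_insert();
            } while (err == -EAGAIN);

            printf("done after %d attempts (err=%d)\n", attempts, err);
            return 0;
    }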
+4 -4
net/netlink/af_netlink.c
···
 	if (!nlk_test_bit(RECV_NO_ENOBUFS, sk)) {
 		if (!test_and_set_bit(NETLINK_S_CONGESTED,
 				      &nlk_sk(sk)->state)) {
-			sk->sk_err = ENOBUFS;
+			WRITE_ONCE(sk->sk_err, ENOBUFS);
 			sk_error_report(sk);
 		}
 	}
···
 		goto out;
 	}
 
-	sk->sk_err = p->code;
+	WRITE_ONCE(sk->sk_err, p->code);
 	sk_error_report(sk);
 out:
 	return ret;
···
 	    atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf / 2) {
 		ret = netlink_dump(sk);
 		if (ret) {
-			sk->sk_err = -ret;
+			WRITE_ONCE(sk->sk_err, -ret);
 			sk_error_report(sk);
 		}
 	}
···
 err_bad_put:
 	nlmsg_free(skb);
 err_skb:
-	NETLINK_CB(in_skb).sk->sk_err = ENOBUFS;
+	WRITE_ONCE(NETLINK_CB(in_skb).sk->sk_err, ENOBUFS);
 	sk_error_report(NETLINK_CB(in_skb).sk);
 }
 EXPORT_SYMBOL(netlink_ack);
+2
net/nfc/llcp_core.c
···
 	timer_setup(&local->sdreq_timer, nfc_llcp_sdreq_timer, 0);
 	INIT_WORK(&local->sdreq_timeout_work, nfc_llcp_sdreq_timeout_work);
 
+	spin_lock(&llcp_devices_lock);
 	list_add(&local->list, &llcp_devices);
+	spin_unlock(&llcp_devices_lock);
 
 	return 0;
 }
+2 -2
net/rds/tcp_connect.c
···
 		addrlen = sizeof(sin);
 	}
 
-	ret = sock->ops->bind(sock, addr, addrlen);
+	ret = kernel_bind(sock, addr, addrlen);
 	if (ret) {
 		rdsdebug("bind failed with %d at address %pI6c\n",
 			 ret, &conn->c_laddr);
···
 	 * own the socket
 	 */
 	rds_tcp_set_callbacks(sock, cp);
-	ret = sock->ops->connect(sock, addr, addrlen, O_NONBLOCK);
+	ret = kernel_connect(sock, addr, addrlen, O_NONBLOCK);
 
 	rdsdebug("connect to address %pI6c returned %d\n", &conn->c_faddr, ret);
 	if (ret == -EINPROGRESS)
+1 -1
net/rds/tcp_listen.c
···
 		addr_len = sizeof(*sin);
 	}
 
-	ret = sock->ops->bind(sock, (struct sockaddr *)&ss, addr_len);
+	ret = kernel_bind(sock, (struct sockaddr *)&ss, addr_len);
 	if (ret < 0) {
 		rdsdebug("could not bind %s listener socket: %d\n",
 			 isv6 ? "IPv6" : "IPv4", ret);
+26 -6
net/rfkill/core.c
···
 	bool			persistent;
 	bool			polling_paused;
 	bool			suspended;
+	bool			need_sync;
 
 	const struct rfkill_ops	*ops;
 	void			*data;
···
 
 	if (prev != curr)
 		rfkill_event(rfkill);
+}
+
+static void rfkill_sync(struct rfkill *rfkill)
+{
+	lockdep_assert_held(&rfkill_global_mutex);
+
+	if (!rfkill->need_sync)
+		return;
+
+	rfkill_set_block(rfkill, rfkill_global_states[rfkill->type].cur);
+	rfkill->need_sync = false;
 }
 
 static void rfkill_update_global_state(enum rfkill_type type, bool blocked)
···
 {
 	struct rfkill *rfkill = to_rfkill(dev);
 
+	mutex_lock(&rfkill_global_mutex);
+	rfkill_sync(rfkill);
+	mutex_unlock(&rfkill_global_mutex);
+
 	return sysfs_emit(buf, "%d\n", (rfkill->state & RFKILL_BLOCK_SW) ? 1 : 0);
 }
 
···
 		return -EINVAL;
 
 	mutex_lock(&rfkill_global_mutex);
+	rfkill_sync(rfkill);
 	rfkill_set_block(rfkill, state);
 	mutex_unlock(&rfkill_global_mutex);
 
···
 {
 	struct rfkill *rfkill = to_rfkill(dev);
 
+	mutex_lock(&rfkill_global_mutex);
+	rfkill_sync(rfkill);
+	mutex_unlock(&rfkill_global_mutex);
+
 	return sysfs_emit(buf, "%d\n", user_state_from_blocked(rfkill->state));
 }
 
···
 		return -EINVAL;
 
 	mutex_lock(&rfkill_global_mutex);
+	rfkill_sync(rfkill);
 	rfkill_set_block(rfkill, state == RFKILL_USER_STATE_SOFT_BLOCKED);
 	mutex_unlock(&rfkill_global_mutex);
 
···
 
 static void rfkill_sync_work(struct work_struct *work)
 {
-	struct rfkill *rfkill;
-	bool cur;
-
-	rfkill = container_of(work, struct rfkill, sync_work);
+	struct rfkill *rfkill = container_of(work, struct rfkill, sync_work);
 
 	mutex_lock(&rfkill_global_mutex);
-	cur = rfkill_global_states[rfkill->type].cur;
-	rfkill_set_block(rfkill, cur);
+	rfkill_sync(rfkill);
 	mutex_unlock(&rfkill_global_mutex);
 }
···
 				 round_jiffies_relative(POLL_INTERVAL));
 
 	if (!rfkill->persistent || rfkill_epo_lock_active) {
+		rfkill->need_sync = true;
 		schedule_work(&rfkill->sync_work);
 	} else {
 #ifdef CONFIG_RFKILL_INPUT
···
 		ev = kzalloc(sizeof(*ev), GFP_KERNEL);
 		if (!ev)
 			goto free;
+		rfkill_sync(rfkill);
 		rfkill_fill_event(&ev->ev, rfkill, RFKILL_OP_ADD);
 		list_add_tail(&ev->list, &data->events);
 	}
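Registration now merely records need_sync and defers applying the global rfkill state; every path that can observe the state first calls rfkill_sync() under the global mutex, so the deferred state is applied exactly once before it is ever read. A pthread sketch of the same lazy one-shot initialisation (an analogue, not the rfkill API):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static bool need_sync = true;   /* set at registration time */
    static bool blocked;            /* the state a sync would apply */

    static void sync_state(void)
    {
            /* caller must hold 'lock'; apply the deferred state once */
            if (!need_sync)
                    return;
            blocked = true;         /* stand-in for the global default */
            need_sync = false;
    }

    static bool read_blocked(void)
    {
            bool v;

            pthread_mutex_lock(&lock);
            sync_state();           /* never return a pre-sync value */
            v = blocked;
            pthread_mutex_unlock(&lock);
            return v;
    }

    int main(void)
    {
            printf("blocked=%d\n", read_blocked()); /* 1: synced on first read */
            return 0;
    }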
+1 -2
net/sctp/associola.c
···
 	/* Add any peer addresses from the new association. */
 	list_for_each_entry(trans, &new->peer.transport_addr_list,
 			    transports)
-		if (!sctp_assoc_lookup_paddr(asoc, &trans->ipaddr) &&
-		    !sctp_assoc_add_peer(asoc, &trans->ipaddr,
+		if (!sctp_assoc_add_peer(asoc, &trans->ipaddr,
 					 GFP_ATOMIC, trans->state))
 			return -ENOMEM;
 
+1
net/sctp/socket.c
···
 		if (trans) {
 			trans->hbinterval =
 				msecs_to_jiffies(params->spp_hbinterval);
+			sctp_transport_reset_hb_timer(trans);
 		} else if (asoc) {
 			asoc->hbinterval =
 				msecs_to_jiffies(params->spp_hbinterval);
+29 -7
net/socket.c
···
 	return ret;
 }
 
+static int __sock_sendmsg(struct socket *sock, struct msghdr *msg)
+{
+	int err = security_socket_sendmsg(sock, msg,
+					  msg_data_left(msg));
+
+	return err ?: sock_sendmsg_nosec(sock, msg);
+}
+
 /**
  * sock_sendmsg - send a message through @sock
  * @sock: socket
···
  */
 int sock_sendmsg(struct socket *sock, struct msghdr *msg)
 {
-	int err = security_socket_sendmsg(sock, msg,
-					  msg_data_left(msg));
+	struct sockaddr_storage *save_addr = (struct sockaddr_storage *)msg->msg_name;
+	struct sockaddr_storage address;
+	int ret;
 
-	return err ?: sock_sendmsg_nosec(sock, msg);
+	if (msg->msg_name) {
+		memcpy(&address, msg->msg_name, msg->msg_namelen);
+		msg->msg_name = &address;
+	}
+
+	ret = __sock_sendmsg(sock, msg);
+	msg->msg_name = save_addr;
+
+	return ret;
 }
 EXPORT_SYMBOL(sock_sendmsg);
 
···
 	if (sock->type == SOCK_SEQPACKET)
 		msg.msg_flags |= MSG_EOR;
 
-	res = sock_sendmsg(sock, &msg);
+	res = __sock_sendmsg(sock, &msg);
 	*from = msg.msg_iter;
 	return res;
 }
···
 	if (sock->file->f_flags & O_NONBLOCK)
 		flags |= MSG_DONTWAIT;
 	msg.msg_flags = flags;
-	err = sock_sendmsg(sock, &msg);
+	err = __sock_sendmsg(sock, &msg);
 
 out_put:
 	fput_light(sock->file, fput_needed);
···
 		err = sock_sendmsg_nosec(sock, msg_sys);
 		goto out_freectl;
 	}
-	err = sock_sendmsg(sock, msg_sys);
+	err = __sock_sendmsg(sock, msg_sys);
 	/*
 	 * If this is sendmmsg() and sending to current destination address was
 	 * successful, remember it.
···
 
 int kernel_bind(struct socket *sock, struct sockaddr *addr, int addrlen)
 {
-	return READ_ONCE(sock->ops)->bind(sock, addr, addrlen);
+	struct sockaddr_storage address;
+
+	memcpy(&address, addr, addrlen);
+
+	return READ_ONCE(sock->ops)->bind(sock, (struct sockaddr *)&address,
+					  addrlen);
 }
 EXPORT_SYMBOL(kernel_bind);
 
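kernel_bind() and sock_sendmsg() now copy the caller's address into an on-stack sockaddr_storage before calling into the protocol, because a BPF hook may rewrite the address in place and the caller's buffer must come back untouched. The defensive-copy idea on its own (userspace sketch, types hypothetical):

    #include <stdio.h>
    #include <string.h>

    struct addr { unsigned short port; };

    /* Callee that may scribble on the address it is given, as a BPF hook
     * can do on the kernel side.
     */
    static void rewrite(struct addr *a) { a->port = 9999; }

    static void careful_call(const struct addr *caller_addr)
    {
            struct addr copy;

            memcpy(&copy, caller_addr, sizeof(copy));
            rewrite(&copy);          /* only the copy is modified */
    }

    int main(void)
    {
            struct addr a = { .port = 80 };

            careful_call(&a);
            printf("caller still sees port %u\n", a.port); /* 80 */
            return 0;
    }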
+2 -2
net/tipc/crypto.c
···
 	struct tipc_crypto *tx = tipc_net(net)->crypto_tx;
 	struct tipc_key key;
 
-	spin_lock(&tx->lock);
+	spin_lock_bh(&tx->lock);
 	key = tx->key;
 	WARN_ON(!key.active || tx_key != key.active);
 
 	/* Free the active key */
 	tipc_crypto_key_set_state(tx, key.passive, 0, key.pending);
 	tipc_crypto_key_detach(tx->aead[key.active], &tx->lock);
-	spin_unlock(&tx->lock);
+	spin_unlock_bh(&tx->lock);
 
 	pr_warn("%s: key is revoked\n", tx->name);
 	return -EKEYREVOKED;
+7 -7
net/wireless/core.c
···
 }
 EXPORT_SYMBOL(wiphy_rfkill_set_hw_state_reason);
 
-void cfg80211_cqm_config_free(struct wireless_dev *wdev)
-{
-	kfree(wdev->cqm_config);
-	wdev->cqm_config = NULL;
-}
-
 static void _cfg80211_unregister_wdev(struct wireless_dev *wdev,
 				      bool unregister_netdev)
 {
 	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
+	struct cfg80211_cqm_config *cqm_config;
 	unsigned int link_id;
 
 	ASSERT_RTNL();
···
 	kfree_sensitive(wdev->wext.keys);
 	wdev->wext.keys = NULL;
 #endif
-	cfg80211_cqm_config_free(wdev);
+	wiphy_work_cancel(wdev->wiphy, &wdev->cqm_rssi_work);
+	/* deleted from the list, so can't be found from nl80211 any more */
+	cqm_config = rcu_access_pointer(wdev->cqm_config);
+	kfree_rcu(cqm_config, rcu_head);
 
 	/*
 	 * Ensure that all events have been processed and
···
 	wdev->wext.default_mgmt_key = -1;
 	wdev->wext.connect.auth_type = NL80211_AUTHTYPE_AUTOMATIC;
 #endif
+
+	wiphy_work_init(&wdev->cqm_rssi_work, cfg80211_cqm_rssi_notify_work);
 
 	if (wdev->wiphy->flags & WIPHY_FLAG_PS_ON_BY_DEFAULT)
 		wdev->ps = true;
+5 -2
net/wireless/core.h
···
 };
 
 struct cfg80211_cqm_config {
+	struct rcu_head rcu_head;
 	u32 rssi_hyst;
 	s32 last_rssi_event_value;
+	enum nl80211_cqm_rssi_threshold_event last_rssi_event_type;
 	int n_rssi_thresholds;
 	s32 rssi_thresholds[] __counted_by(n_rssi_thresholds);
 };
+
+void cfg80211_cqm_rssi_notify_work(struct wiphy *wiphy,
+				   struct wiphy_work *work);
 
 void cfg80211_destroy_ifaces(struct cfg80211_registered_device *rdev);
 
···
  */
 #define CFG80211_DEV_WARN_ON(cond)	({bool __r = (cond); __r; })
 #endif
-
-void cfg80211_cqm_config_free(struct wireless_dev *wdev);
 
 void cfg80211_release_pmsr(struct wireless_dev *wdev, u32 portid);
 void cfg80211_pmsr_wdev_down(struct wireless_dev *wdev);
+2 -1
net/wireless/mlme.c
···
 		cr.links[link_id].bssid = data->links[link_id].bss->bssid;
 		cr.links[link_id].addr = data->links[link_id].addr;
 		/* need to have local link addresses for MLO connections */
-		WARN_ON(cr.ap_mld_addr && !cr.links[link_id].addr);
+		WARN_ON(cr.ap_mld_addr &&
+			!is_valid_ether_addr(cr.links[link_id].addr));
 
 		BUG_ON(!cr.links[link_id].bss->channel);
 
+82 -34
net/wireless/nl80211.c
···
 	nlmsg_free(msg);
 }
 
+static int nl80211_validate_ap_phy_operation(struct cfg80211_ap_settings *params)
+{
+	struct ieee80211_channel *channel = params->chandef.chan;
+
+	if ((params->he_cap || params->he_oper) &&
+	    (channel->flags & IEEE80211_CHAN_NO_HE))
+		return -EOPNOTSUPP;
+
+	if ((params->eht_cap || params->eht_oper) &&
+	    (channel->flags & IEEE80211_CHAN_NO_EHT))
+		return -EOPNOTSUPP;
+
+	return 0;
+}
+
 static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
 {
 	struct cfg80211_registered_device *rdev = info->user_ptr[0];
···
 	}
 
 	err = nl80211_calculate_ap_params(params);
+	if (err)
+		goto out_unlock;
+
+	err = nl80211_validate_ap_phy_operation(params);
 	if (err)
 		goto out_unlock;
 
···
 	struct cfg80211_registered_device *rdev = info->user_ptr[0];
 	struct net_device *dev = info->user_ptr[1];
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
-	struct mesh_config cfg;
+	struct mesh_config cfg = {};
 	u32 mask;
 	int err;
 
···
 }
 
 static int cfg80211_cqm_rssi_update(struct cfg80211_registered_device *rdev,
-				    struct net_device *dev)
+				    struct net_device *dev,
+				    struct cfg80211_cqm_config *cqm_config)
 {
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
 	s32 last, low, high;
···
 	int err;
 
 	/* RSSI reporting disabled? */
-	if (!wdev->cqm_config)
+	if (!cqm_config)
 		return rdev_set_cqm_rssi_range_config(rdev, dev, 0, 0);
 
 	/*
···
 	 * connection is established and enough beacons received to calculate
 	 * the average.
 	 */
-	if (!wdev->cqm_config->last_rssi_event_value &&
+	if (!cqm_config->last_rssi_event_value &&
 	    wdev->links[0].client.current_bss &&
 	    rdev->ops->get_station) {
 		struct station_info sinfo = {};
···
 
 		cfg80211_sinfo_release_content(&sinfo);
 		if (sinfo.filled & BIT_ULL(NL80211_STA_INFO_BEACON_SIGNAL_AVG))
-			wdev->cqm_config->last_rssi_event_value =
+			cqm_config->last_rssi_event_value =
 				(s8) sinfo.rx_beacon_signal_avg;
 	}
 
-	last = wdev->cqm_config->last_rssi_event_value;
-	hyst = wdev->cqm_config->rssi_hyst;
-	n = wdev->cqm_config->n_rssi_thresholds;
+	last = cqm_config->last_rssi_event_value;
+	hyst = cqm_config->rssi_hyst;
+	n = cqm_config->n_rssi_thresholds;
 
 	for (i = 0; i < n; i++) {
 		i = array_index_nospec(i, n);
-		if (last < wdev->cqm_config->rssi_thresholds[i])
+		if (last < cqm_config->rssi_thresholds[i])
 			break;
 	}
 
 	low_index = i - 1;
 	if (low_index >= 0) {
 		low_index = array_index_nospec(low_index, n);
-		low = wdev->cqm_config->rssi_thresholds[low_index] - hyst;
+		low = cqm_config->rssi_thresholds[low_index] - hyst;
 	} else {
 		low = S32_MIN;
 	}
 	if (i < n) {
 		i = array_index_nospec(i, n);
-		high = wdev->cqm_config->rssi_thresholds[i] + hyst - 1;
+		high = cqm_config->rssi_thresholds[i] + hyst - 1;
 	} else {
 		high = S32_MAX;
 	}
···
 			 u32 hysteresis)
 {
 	struct cfg80211_registered_device *rdev = info->user_ptr[0];
+	struct cfg80211_cqm_config *cqm_config = NULL, *old;
 	struct net_device *dev = info->user_ptr[1];
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
 	int i, err;
···
 	    wdev->iftype != NL80211_IFTYPE_P2P_CLIENT)
 		return -EOPNOTSUPP;
 
-	wdev_lock(wdev);
-	cfg80211_cqm_config_free(wdev);
-	wdev_unlock(wdev);
-
 	if (n_thresholds <= 1 && rdev->ops->set_cqm_rssi_config) {
 		if (n_thresholds == 0 || thresholds[0] == 0) /* Disabling */
 			return rdev_set_cqm_rssi_config(rdev, dev, 0, 0);
···
 		n_thresholds = 0;
 
 	wdev_lock(wdev);
-	if (n_thresholds) {
-		struct cfg80211_cqm_config *cqm_config;
+	old = rcu_dereference_protected(wdev->cqm_config,
+					lockdep_is_held(&wdev->mtx));
 
+	if (n_thresholds) {
 		cqm_config = kzalloc(struct_size(cqm_config, rssi_thresholds,
 						 n_thresholds),
 				     GFP_KERNEL);
···
 		       flex_array_size(cqm_config, rssi_thresholds,
 				       n_thresholds));
 
-		wdev->cqm_config = cqm_config;
+		rcu_assign_pointer(wdev->cqm_config, cqm_config);
+	} else {
+		RCU_INIT_POINTER(wdev->cqm_config, NULL);
 	}
 
-	err = cfg80211_cqm_rssi_update(rdev, dev);
-
+	err = cfg80211_cqm_rssi_update(rdev, dev, cqm_config);
+	if (err) {
+		rcu_assign_pointer(wdev->cqm_config, old);
+		kfree_rcu(cqm_config, rcu_head);
+	} else {
+		kfree_rcu(old, rcu_head);
+	}
 unlock:
 	wdev_unlock(wdev);
 
···
 			      enum nl80211_cqm_rssi_threshold_event rssi_event,
 			      s32 rssi_level, gfp_t gfp)
 {
-	struct sk_buff *msg;
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
-	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
+	struct cfg80211_cqm_config *cqm_config;
 
 	trace_cfg80211_cqm_rssi_notify(dev, rssi_event, rssi_level);
 
···
 		    rssi_event != NL80211_CQM_RSSI_THRESHOLD_EVENT_HIGH))
 		return;
 
-	if (wdev->cqm_config) {
-		wdev->cqm_config->last_rssi_event_value = rssi_level;
-
-		cfg80211_cqm_rssi_update(rdev, dev);
-
-		if (rssi_level == 0)
-			rssi_level = wdev->cqm_config->last_rssi_event_value;
+	rcu_read_lock();
+	cqm_config = rcu_dereference(wdev->cqm_config);
+	if (cqm_config) {
+		cqm_config->last_rssi_event_value = rssi_level;
+		cqm_config->last_rssi_event_type = rssi_event;
+		wiphy_work_queue(wdev->wiphy, &wdev->cqm_rssi_work);
 	}
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL(cfg80211_cqm_rssi_notify);
 
-	msg = cfg80211_prepare_cqm(dev, NULL, gfp);
+void cfg80211_cqm_rssi_notify_work(struct wiphy *wiphy, struct wiphy_work *work)
+{
+	struct wireless_dev *wdev = container_of(work, struct wireless_dev,
+						 cqm_rssi_work);
+	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+	enum nl80211_cqm_rssi_threshold_event rssi_event;
+	struct cfg80211_cqm_config *cqm_config;
+	struct sk_buff *msg;
+	s32 rssi_level;
+
+	wdev_lock(wdev);
+	cqm_config = rcu_dereference_protected(wdev->cqm_config,
+					       lockdep_is_held(&wdev->mtx));
+	if (!wdev->cqm_config)
+		goto unlock;
+
+	cfg80211_cqm_rssi_update(rdev, wdev->netdev, cqm_config);
+
+	rssi_level = cqm_config->last_rssi_event_value;
+	rssi_event = cqm_config->last_rssi_event_type;
+
+	msg = cfg80211_prepare_cqm(wdev->netdev, NULL, GFP_KERNEL);
 	if (!msg)
-		return;
+		goto unlock;
 
 	if (nla_put_u32(msg, NL80211_ATTR_CQM_RSSI_THRESHOLD_EVENT,
 			rssi_event))
···
 			rssi_level))
 		goto nla_put_failure;
 
-	cfg80211_send_cqm(msg, gfp);
+	cfg80211_send_cqm(msg, GFP_KERNEL);
 
-	return;
+	goto unlock;
 
 nla_put_failure:
 	nlmsg_free(msg);
+unlock:
+	wdev_unlock(wdev);
 }
-EXPORT_SYMBOL(cfg80211_cqm_rssi_notify);
 
 void cfg80211_cqm_txe_notify(struct net_device *dev,
 			     const u8 *peer, u32 num_packets,
net/wireless/scan.c (+4)
···
 		    !cfg80211_find_ssid_match(ap, request))
 			continue;
 
+		if (!is_broadcast_ether_addr(request->bssid) &&
+		    !ether_addr_equal(request->bssid, ap->bssid))
+			continue;
+
 		if (!request->n_ssids && ap->multi_bss && !ap->transmitted_bssid)
 			continue;
 
tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c (+2)
···
 
 	do_test("bpf_cubic", NULL);
 
+	ASSERT_EQ(cubic_skel->bss->bpf_cubic_acked_called, 1, "pkts_acked called");
+
 	bpf_link__destroy(link);
 	bpf_cubic__destroy(cubic_skel);
 }
tools/testing/selftests/bpf/prog_tests/sockmap_basic.c (+51)
···
 	test_sockmap_drop_prog__destroy(drop);
 }
 
+static void test_sockmap_skb_verdict_peek(void)
+{
+	int err, map, verdict, s, c1, p1, zero = 0, sent, recvd, avail;
+	struct test_sockmap_pass_prog *pass;
+	char snd[256] = "0123456789";
+	char rcv[256] = "0";
+
+	pass = test_sockmap_pass_prog__open_and_load();
+	if (!ASSERT_OK_PTR(pass, "open_and_load"))
+		return;
+	verdict = bpf_program__fd(pass->progs.prog_skb_verdict);
+	map = bpf_map__fd(pass->maps.sock_map_rx);
+
+	err = bpf_prog_attach(verdict, map, BPF_SK_SKB_STREAM_VERDICT, 0);
+	if (!ASSERT_OK(err, "bpf_prog_attach"))
+		goto out;
+
+	s = socket_loopback(AF_INET, SOCK_STREAM);
+	if (!ASSERT_GT(s, -1, "socket_loopback(s)"))
+		goto out;
+
+	err = create_pair(s, AF_INET, SOCK_STREAM, &c1, &p1);
+	if (!ASSERT_OK(err, "create_pairs(s)"))
+		goto out;
+
+	err = bpf_map_update_elem(map, &zero, &c1, BPF_NOEXIST);
+	if (!ASSERT_OK(err, "bpf_map_update_elem(c1)"))
+		goto out_close;
+
+	sent = xsend(p1, snd, sizeof(snd), 0);
+	ASSERT_EQ(sent, sizeof(snd), "xsend(p1)");
+	recvd = recv(c1, rcv, sizeof(rcv), MSG_PEEK);
+	ASSERT_EQ(recvd, sizeof(rcv), "recv(c1)");
+	err = ioctl(c1, FIONREAD, &avail);
+	ASSERT_OK(err, "ioctl(FIONREAD) error");
+	ASSERT_EQ(avail, sizeof(snd), "after peek ioctl(FIONREAD)");
+	recvd = recv(c1, rcv, sizeof(rcv), 0);
+	ASSERT_EQ(recvd, sizeof(rcv), "recv(p0)");
+	err = ioctl(c1, FIONREAD, &avail);
+	ASSERT_OK(err, "ioctl(FIONREAD) error");
+	ASSERT_EQ(avail, 0, "after read ioctl(FIONREAD)");
+
+out_close:
+	close(c1);
+	close(p1);
+out:
+	test_sockmap_pass_prog__destroy(pass);
+}
+
 void test_sockmap_basic(void)
 {
 	if (test__start_subtest("sockmap create_update_free"))
···
 		test_sockmap_skb_verdict_fionread(true);
 	if (test__start_subtest("sockmap skb_verdict fionread on drop"))
 		test_sockmap_skb_verdict_fionread(false);
+	if (test__start_subtest("sockmap skb_verdict msg_f_peek"))
+		test_sockmap_skb_verdict_peek();
 }
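The FIONREAD assertions above pin down the invariant the sockmap accounting fix restores: MSG_PEEK must leave the receive queue untouched. A minimal standalone sketch of that invariant on plain sockets (hypothetical program; an AF_UNIX socketpair stands in for the sockmap'd TCP pair, since the semantics being checked are the same):

#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int sv[2], avail;
	char buf[16];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv))
		return 1;
	write(sv[0], "0123456789", 10);

	recv(sv[1], buf, sizeof(buf), MSG_PEEK);	/* peek, don't consume */
	ioctl(sv[1], FIONREAD, &avail);
	printf("after peek: %d\n", avail);		/* still 10 */

	recv(sv[1], buf, sizeof(buf), 0);		/* now actually consume */
	ioctl(sv[1], FIONREAD, &avail);
	printf("after read: %d\n", avail);		/* 0 */

	close(sv[0]);
	close(sv[1]);
	return 0;
}

The bug being tested was that a PEEK on a sockmap socket wrongly reduced the accounted byte count, so the second FIONREAD check in the selftest would have underreported before the fix.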
tools/testing/selftests/bpf/prog_tests/tc_opts.c (+84)
···
 	test_tc_chain_mixed(BPF_TCX_INGRESS);
 	test_tc_chain_mixed(BPF_TCX_EGRESS);
 }
+
+static int generate_dummy_prog(void)
+{
+	const struct bpf_insn prog_insns[] = {
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	};
+	const size_t prog_insn_cnt = sizeof(prog_insns) / sizeof(struct bpf_insn);
+	LIBBPF_OPTS(bpf_prog_load_opts, opts);
+	const size_t log_buf_sz = 256;
+	char *log_buf;
+	int fd = -1;
+
+	log_buf = malloc(log_buf_sz);
+	if (!ASSERT_OK_PTR(log_buf, "log_buf_alloc"))
+		return fd;
+	opts.log_buf = log_buf;
+	opts.log_size = log_buf_sz;
+
+	log_buf[0] = '\0';
+	opts.log_level = 0;
+	fd = bpf_prog_load(BPF_PROG_TYPE_SCHED_CLS, "tcx_prog", "GPL",
+			   prog_insns, prog_insn_cnt, &opts);
+	ASSERT_STREQ(log_buf, "", "log_0");
+	ASSERT_GE(fd, 0, "prog_fd");
+	free(log_buf);
+	return fd;
+}
+
+static void test_tc_opts_max_target(int target, int flags, bool relative)
+{
+	int err, ifindex, i, prog_fd, last_fd = -1;
+	LIBBPF_OPTS(bpf_prog_attach_opts, opta);
+	const int max_progs = 63;
+
+	ASSERT_OK(system("ip link add dev tcx_opts1 type veth peer name tcx_opts2"), "add veth");
+	ifindex = if_nametoindex("tcx_opts1");
+	ASSERT_NEQ(ifindex, 0, "non_zero_ifindex");
+
+	assert_mprog_count_ifindex(ifindex, target, 0);
+
+	for (i = 0; i < max_progs; i++) {
+		prog_fd = generate_dummy_prog();
+		if (!ASSERT_GE(prog_fd, 0, "dummy_prog"))
+			goto cleanup;
+		err = bpf_prog_attach_opts(prog_fd, ifindex, target, &opta);
+		if (!ASSERT_EQ(err, 0, "prog_attach"))
+			goto cleanup;
+		assert_mprog_count_ifindex(ifindex, target, i + 1);
+		if (i == max_progs - 1 && relative)
+			last_fd = prog_fd;
+		else
+			close(prog_fd);
+	}
+
+	prog_fd = generate_dummy_prog();
+	if (!ASSERT_GE(prog_fd, 0, "dummy_prog"))
+		goto cleanup;
+	opta.flags = flags;
+	if (last_fd > 0)
+		opta.relative_fd = last_fd;
+	err = bpf_prog_attach_opts(prog_fd, ifindex, target, &opta);
+	ASSERT_EQ(err, -ERANGE, "prog_64_attach");
+	assert_mprog_count_ifindex(ifindex, target, max_progs);
+	close(prog_fd);
+cleanup:
+	if (last_fd > 0)
+		close(last_fd);
+	ASSERT_OK(system("ip link del dev tcx_opts1"), "del veth");
+	ASSERT_EQ(if_nametoindex("tcx_opts1"), 0, "dev1_removed");
+	ASSERT_EQ(if_nametoindex("tcx_opts2"), 0, "dev2_removed");
+}
+
+void serial_test_tc_opts_max(void)
+{
+	test_tc_opts_max_target(BPF_TCX_INGRESS, 0, false);
+	test_tc_opts_max_target(BPF_TCX_EGRESS, 0, false);
+
+	test_tc_opts_max_target(BPF_TCX_INGRESS, BPF_F_BEFORE, false);
+	test_tc_opts_max_target(BPF_TCX_EGRESS, BPF_F_BEFORE, true);
+
+	test_tc_opts_max_target(BPF_TCX_INGRESS, BPF_F_AFTER, true);
+	test_tc_opts_max_target(BPF_TCX_EGRESS, BPF_F_AFTER, false);
+}
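The new test attaches 63 dummy programs per tcx attach point and expects the next attachment to fail with -ERANGE, covering the mprog maximum-program check fixed in this merge. For reference, a trimmed sketch of the libbpf call it drives (assuming a libbpf recent enough to have tcx attach support; ifindex and anchor_fd here are placeholders, not names from the test):

#include <stdio.h>
#include <errno.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* Attach prog_fd on the tcx ingress hook of ifindex, placed before the
 * already-attached program anchor_fd, mirroring the test's use of
 * BPF_F_BEFORE plus relative_fd. */
static int attach_before(int prog_fd, int ifindex, int anchor_fd)
{
	LIBBPF_OPTS(bpf_prog_attach_opts, opts,
		    .flags = BPF_F_BEFORE,
		    .relative_fd = anchor_fd);
	int err;

	err = bpf_prog_attach_opts(prog_fd, ifindex, BPF_TCX_INGRESS, &opts);
	if (err == -ERANGE)	/* the chain-full case the selftest asserts */
		fprintf(stderr, "mprog chain is full\n");
	return err;
}

libbpf returns -errno from bpf_prog_attach_opts(), which is why the test compares err against -ERANGE directly.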
tools/testing/selftests/bpf/progs/bpf_cubic.c (+3)
···
 	}
 }
 
+int bpf_cubic_acked_called = 0;
+
 void BPF_STRUCT_OPS(bpf_cubic_acked, struct sock *sk,
 		    const struct ack_sample *sample)
 {
···
 	struct bictcp *ca = inet_csk_ca(sk);
 	__u32 delay;
 
+	bpf_cubic_acked_called = 1;
 	/* Some calls are for duplicates without timetamps */
 	if (sample->rtt_us < 0)
 		return;
tools/testing/selftests/netfilter/Makefile (+3 -2)
···
 	nft_concat_range.sh nft_conntrack_helper.sh \
 	nft_queue.sh nft_meta.sh nf_nat_edemux.sh \
 	ipip-conntrack-mtu.sh conntrack_tcp_unreplied.sh \
-	conntrack_vrf.sh nft_synproxy.sh rpath.sh nft_audit.sh
+	conntrack_vrf.sh nft_synproxy.sh rpath.sh nft_audit.sh \
+	conntrack_sctp_collision.sh
 
 HOSTPKG_CONFIG := pkg-config
 
 CFLAGS += $(shell $(HOSTPKG_CONFIG) --cflags libmnl 2>/dev/null)
 LDLIBS += $(shell $(HOSTPKG_CONFIG) --libs libmnl 2>/dev/null || echo -lmnl)
 
-TEST_GEN_FILES = nf-queue connect_close audit_logread
+TEST_GEN_FILES = nf-queue connect_close audit_logread sctp_collision
 
 include ../lib.mk
tools/testing/selftests/netfilter/conntrack_sctp_collision.sh (+89, new file)
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Testing For SCTP COLLISION SCENARIO as Below:
+#
+# 14:35:47.655279 IP CLIENT_IP.PORT > SERVER_IP.PORT: sctp (1) [INIT] [init tag: 2017837359]
+# 14:35:48.353250 IP SERVER_IP.PORT > CLIENT_IP.PORT: sctp (1) [INIT] [init tag: 1187206187]
+# 14:35:48.353275 IP CLIENT_IP.PORT > SERVER_IP.PORT: sctp (1) [INIT ACK] [init tag: 2017837359]
+# 14:35:48.353283 IP SERVER_IP.PORT > CLIENT_IP.PORT: sctp (1) [COOKIE ECHO]
+# 14:35:48.353977 IP CLIENT_IP.PORT > SERVER_IP.PORT: sctp (1) [COOKIE ACK]
+# 14:35:48.855335 IP SERVER_IP.PORT > CLIENT_IP.PORT: sctp (1) [INIT ACK] [init tag: 164579970]
+#
+# TOPO: SERVER_NS (link0)<--->(link1) ROUTER_NS (link2)<--->(link3) CLIENT_NS
+
+CLIENT_NS=$(mktemp -u client-XXXXXXXX)
+CLIENT_IP="198.51.200.1"
+CLIENT_PORT=1234
+
+SERVER_NS=$(mktemp -u server-XXXXXXXX)
+SERVER_IP="198.51.100.1"
+SERVER_PORT=1234
+
+ROUTER_NS=$(mktemp -u router-XXXXXXXX)
+CLIENT_GW="198.51.200.2"
+SERVER_GW="198.51.100.2"
+
+# setup the topo
+setup() {
+	ip net add $CLIENT_NS
+	ip net add $SERVER_NS
+	ip net add $ROUTER_NS
+	ip -n $SERVER_NS link add link0 type veth peer name link1 netns $ROUTER_NS
+	ip -n $CLIENT_NS link add link3 type veth peer name link2 netns $ROUTER_NS
+
+	ip -n $SERVER_NS link set link0 up
+	ip -n $SERVER_NS addr add $SERVER_IP/24 dev link0
+	ip -n $SERVER_NS route add $CLIENT_IP dev link0 via $SERVER_GW
+
+	ip -n $ROUTER_NS link set link1 up
+	ip -n $ROUTER_NS link set link2 up
+	ip -n $ROUTER_NS addr add $SERVER_GW/24 dev link1
+	ip -n $ROUTER_NS addr add $CLIENT_GW/24 dev link2
+	ip net exec $ROUTER_NS sysctl -wq net.ipv4.ip_forward=1
+
+	ip -n $CLIENT_NS link set link3 up
+	ip -n $CLIENT_NS addr add $CLIENT_IP/24 dev link3
+	ip -n $CLIENT_NS route add $SERVER_IP dev link3 via $CLIENT_GW
+
+	# simulate the delay on OVS upcall by setting up a delay for INIT_ACK with
+	# tc on $SERVER_NS side
+	tc -n $SERVER_NS qdisc add dev link0 root handle 1: htb
+	tc -n $SERVER_NS class add dev link0 parent 1: classid 1:1 htb rate 100mbit
+	tc -n $SERVER_NS filter add dev link0 parent 1: protocol ip u32 match ip protocol 132 \
+		0xff match u8 2 0xff at 32 flowid 1:1
+	tc -n $SERVER_NS qdisc add dev link0 parent 1:1 handle 10: netem delay 1200ms
+
+	# simulate the ctstate check on OVS nf_conntrack
+	ip net exec $ROUTER_NS iptables -A FORWARD -m state --state INVALID,UNTRACKED -j DROP
+	ip net exec $ROUTER_NS iptables -A INPUT -p sctp -j DROP
+
+	# use a smaller number for assoc's max_retrans to reproduce the issue
+	modprobe sctp
+	ip net exec $CLIENT_NS sysctl -wq net.sctp.association_max_retrans=3
+}
+
+cleanup() {
+	ip net exec $CLIENT_NS pkill sctp_collision 2>&1 >/dev/null
+	ip net exec $SERVER_NS pkill sctp_collision 2>&1 >/dev/null
+	ip net del "$CLIENT_NS"
+	ip net del "$SERVER_NS"
+	ip net del "$ROUTER_NS"
+}
+
+do_test() {
+	ip net exec $SERVER_NS ./sctp_collision server \
+		$SERVER_IP $SERVER_PORT $CLIENT_IP $CLIENT_PORT &
+	ip net exec $CLIENT_NS ./sctp_collision client \
+		$CLIENT_IP $CLIENT_PORT $SERVER_IP $SERVER_PORT
+}
+
+# NOTE: one way to work around the issue is set a smaller hb_interval
+# ip net exec $CLIENT_NS sysctl -wq net.sctp.hb_interval=3500
+
+# run the test case
+trap cleanup EXIT
+setup && \
+echo "Test for SCTP Collision in nf_conntrack:" && \
+do_test && echo "PASS!"
+exit $?
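A note on the u32 filter in setup() above (my reading; the script itself does not spell it out): "match ip protocol 132 0xff" selects SCTP (IP protocol 132), and "match u8 2 0xff at 32" matches byte 32 from the start of the IPv4 header, which for an unextended 20-byte IP header plus the 12-byte SCTP common header is the first chunk's type field; type 2 is INIT-ACK. So only INIT-ACK packets are steered into the 1200ms netem class, producing the delayed INIT-ACK shown in the tcpdump trace at the top of the script.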
tools/testing/selftests/netfilter/nft_audit.sh (+101 -16)
···
 }
 
 logfile=$(mktemp)
+rulefile=$(mktemp)
 echo "logging into $logfile"
 ./audit_logread >"$logfile" &
 logread_pid=$!
-trap 'kill $logread_pid; rm -f $logfile' EXIT
+trap 'kill $logread_pid; rm -f $logfile $rulefile' EXIT
 exec 3<"$logfile"
 
 do_test() { # (cmd, log)
···
 	res=$(diff -a -u <(echo "$2") - <&3)
 	[ $? -eq 0 ] && { echo "OK"; return; }
 	echo "FAIL"
-	echo "$res"
-	((RC++))
+	grep -v '^\(---\|+++\|@@\)' <<< "$res"
+	((RC--))
 }
 
 nft flush ruleset
+
+# adding tables, chains and rules
 
 for table in t1 t2; do
 do_test "nft add table $table" \
···
 "table=$table family=2 entries=6 op=nft_register_rule"
 done
 
+for ((i = 0; i < 500; i++)); do
+	echo "add rule t2 c3 counter accept comment \"rule $i\""
+done >$rulefile
+do_test "nft -f $rulefile" \
+'table=t2 family=2 entries=500 op=nft_register_rule'
+
+# adding sets and elements
+
+settype='type inet_service; counter'
+setelem='{ 22, 80, 443 }'
+setblock="{ $settype; elements = $setelem; }"
+do_test "nft add set t1 s $setblock" \
+"table=t1 family=2 entries=4 op=nft_register_set"
+
+do_test "nft add set t1 s2 $setblock; add set t1 s3 { $settype; }" \
+"table=t1 family=2 entries=5 op=nft_register_set"
+
+do_test "nft add element t1 s3 $setelem" \
+"table=t1 family=2 entries=3 op=nft_register_setelem"
+
+# adding counters
+
+do_test 'nft add counter t1 c1' \
+'table=t1 family=2 entries=1 op=nft_register_obj'
+
+do_test 'nft add counter t2 c1; add counter t2 c2' \
+'table=t2 family=2 entries=2 op=nft_register_obj'
+
+# adding/updating quotas
+
+do_test 'nft add quota t1 q1 { 10 bytes }' \
+'table=t1 family=2 entries=1 op=nft_register_obj'
+
+do_test 'nft add quota t2 q1 { 10 bytes }; add quota t2 q2 { 10 bytes }' \
+'table=t2 family=2 entries=2 op=nft_register_obj'
+
+# changing the quota value triggers obj update path
+do_test 'nft add quota t1 q1 { 20 bytes }' \
+'table=t1 family=2 entries=1 op=nft_register_obj'
+
+# resetting rules
+
 do_test 'nft reset rules t1 c2' \
 'table=t1 family=2 entries=3 op=nft_reset_rule'
 
···
 'table=t1 family=2 entries=3 op=nft_reset_rule
 table=t1 family=2 entries=3 op=nft_reset_rule
 table=t1 family=2 entries=3 op=nft_reset_rule'
-
-do_test 'nft reset rules' \
-'table=t1 family=2 entries=3 op=nft_reset_rule
-table=t1 family=2 entries=3 op=nft_reset_rule
-table=t1 family=2 entries=3 op=nft_reset_rule
-table=t2 family=2 entries=3 op=nft_reset_rule
-table=t2 family=2 entries=3 op=nft_reset_rule
-table=t2 family=2 entries=3 op=nft_reset_rule'
-
-for ((i = 0; i < 500; i++)); do
-	echo "add rule t2 c3 counter accept comment \"rule $i\""
-done | do_test 'nft -f -' \
-'table=t2 family=2 entries=500 op=nft_register_rule'
 
 do_test 'nft reset rules t2 c3' \
 'table=t2 family=2 entries=189 op=nft_reset_rule
···
 table=t2 family=2 entries=180 op=nft_reset_rule
 table=t2 family=2 entries=188 op=nft_reset_rule
 table=t2 family=2 entries=135 op=nft_reset_rule'
+
+# resetting sets and elements
+
+elem=(22 ,80 ,443)
+relem=""
+for i in {1..3}; do
+	relem+="${elem[((i - 1))]}"
+	do_test "nft reset element t1 s { $relem }" \
+	"table=t1 family=2 entries=$i op=nft_reset_setelem"
+done
+
+do_test 'nft reset set t1 s' \
+'table=t1 family=2 entries=3 op=nft_reset_setelem'
+
+# deleting rules
+
+readarray -t handles < <(nft -a list chain t1 c1 | \
+			 sed -n 's/.*counter.* handle \(.*\)$/\1/p')
+
+do_test "nft delete rule t1 c1 handle ${handles[0]}" \
+'table=t1 family=2 entries=1 op=nft_unregister_rule'
+
+cmd='delete rule t1 c1 handle'
+do_test "nft $cmd ${handles[1]}; $cmd ${handles[2]}" \
+'table=t1 family=2 entries=2 op=nft_unregister_rule'
+
+do_test 'nft flush chain t1 c2' \
+'table=t1 family=2 entries=3 op=nft_unregister_rule'
+
+do_test 'nft flush table t2' \
+'table=t2 family=2 entries=509 op=nft_unregister_rule'
+
+# deleting chains
+
+do_test 'nft delete chain t2 c2' \
+'table=t2 family=2 entries=1 op=nft_unregister_chain'
+
+# deleting sets and elements
+
+do_test 'nft delete element t1 s { 22 }' \
+'table=t1 family=2 entries=1 op=nft_unregister_setelem'
+
+do_test 'nft delete element t1 s { 80, 443 }' \
+'table=t1 family=2 entries=2 op=nft_unregister_setelem'
+
+do_test 'nft flush set t1 s2' \
+'table=t1 family=2 entries=3 op=nft_unregister_setelem'
+
+do_test 'nft delete set t1 s2' \
+'table=t1 family=2 entries=1 op=nft_unregister_set'
+
+do_test 'nft delete set t1 s3' \
+'table=t1 family=2 entries=1 op=nft_unregister_set'
 
 exit $RC
tools/testing/selftests/netfilter/sctp_collision.c (+99, new file)
+// SPDX-License-Identifier: GPL-2.0
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <arpa/inet.h>
+
+int main(int argc, char *argv[])
+{
+	struct sockaddr_in saddr = {}, daddr = {};
+	int sd, ret, len = sizeof(daddr);
+	struct timeval tv = {25, 0};
+	char buf[] = "hello";
+
+	if (argc != 6 || (strcmp(argv[1], "server") && strcmp(argv[1], "client"))) {
+		printf("%s <server|client> <LOCAL_IP> <LOCAL_PORT> <REMOTE_IP> <REMOTE_PORT>\n",
+		       argv[0]);
+		return -1;
+	}
+
+	sd = socket(AF_INET, SOCK_SEQPACKET, IPPROTO_SCTP);
+	if (sd < 0) {
+		printf("Failed to create sd\n");
+		return -1;
+	}
+
+	saddr.sin_family = AF_INET;
+	saddr.sin_addr.s_addr = inet_addr(argv[2]);
+	saddr.sin_port = htons(atoi(argv[3]));
+
+	ret = bind(sd, (struct sockaddr *)&saddr, sizeof(saddr));
+	if (ret < 0) {
+		printf("Failed to bind to address\n");
+		goto out;
+	}
+
+	ret = listen(sd, 5);
+	if (ret < 0) {
+		printf("Failed to listen on port\n");
+		goto out;
+	}
+
+	daddr.sin_family = AF_INET;
+	daddr.sin_addr.s_addr = inet_addr(argv[4]);
+	daddr.sin_port = htons(atoi(argv[5]));
+
+	/* make test shorter than 25s */
+	ret = setsockopt(sd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
+	if (ret < 0) {
+		printf("Failed to setsockopt SO_RCVTIMEO\n");
+		goto out;
+	}
+
+	if (!strcmp(argv[1], "server")) {
+		sleep(1); /* wait a bit for client's INIT */
+		ret = connect(sd, (struct sockaddr *)&daddr, len);
+		if (ret < 0) {
+			printf("Failed to connect to peer\n");
+			goto out;
+		}
+		ret = recvfrom(sd, buf, sizeof(buf), 0, (struct sockaddr *)&daddr, &len);
+		if (ret < 0) {
+			printf("Failed to recv msg %d\n", ret);
+			goto out;
+		}
+		ret = sendto(sd, buf, strlen(buf) + 1, 0, (struct sockaddr *)&daddr, len);
+		if (ret < 0) {
+			printf("Failed to send msg %d\n", ret);
+			goto out;
+		}
+		printf("Server: sent! %d\n", ret);
+	}
+
+	if (!strcmp(argv[1], "client")) {
+		usleep(300000); /* wait a bit for server's listening */
+		ret = connect(sd, (struct sockaddr *)&daddr, len);
+		if (ret < 0) {
+			printf("Failed to connect to peer\n");
+			goto out;
+		}
+		sleep(1); /* wait a bit for server's delayed INIT_ACK to reproduce the issue */
+		ret = sendto(sd, buf, strlen(buf) + 1, 0, (struct sockaddr *)&daddr, len);
+		if (ret < 0) {
+			printf("Failed to send msg %d\n", ret);
+			goto out;
+		}
+		ret = recvfrom(sd, buf, sizeof(buf), 0, (struct sockaddr *)&daddr, &len);
+		if (ret < 0) {
+			printf("Failed to recv msg %d\n", ret);
+			goto out;
+		}
+		printf("Client: rcvd! %d\n", ret);
+	}
+	ret = 0;
+out:
+	close(sd);
+	return ret;
+}