
Merge tag 'net-6.16-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from Bluetooth, CAN, WiFi and Netfilter.

More code here than I would have liked. That said, better now than
next week. Nothing particularly scary stands out. The improvements to
the OpenVPN input validation are a bit large, but better to get them
in before the code makes it to a final release. Some of the changes we
got from sub-trees could have been split better between the fix and
the -next refactoring, IMHO; that has been communicated.

We have one known regression: a TI AM65 board not getting link. The
investigation is going a bit slowly, as a number of people are on
vacation. We'll try to wrap it up, but we don't think it should hold
up the release.

Current release - fix to a fix:

- Bluetooth: L2CAP: fix attempting to adjust outgoing MTU, it broke
some headphones and speakers

Current release - regressions:

- wifi: ath12k: fix packets received in WBM error ring with REO LUT
enabled, fix Rx performance regression

- wifi: iwlwifi:
- fix crash due to a botched indexing conversion
- mask reserved bits in chan_state_active_bitmap, avoid FW assert()

Current release - new code bugs:

- nf_conntrack: fix crash due to removal of uninitialised entry

- eth: airoha: fix potential UaF in airoha_npu_get()

Previous releases - regressions:

- net: fix segmentation after TCP/UDP fraglist GRO

- af_packet: fix the SO_SNDTIMEO constraint not taking effect and a
potential soft lockup waiting for a completion

- rpl: fix UaF in rpl_do_srh_inline() for sneaky skb geometry

- virtio-net: fix recursive rtnl_lock() during probe()

- eth: stmmac: populate entire system_counterval_t in get_time_fn()

- eth: libwx: fix a number of crashes in the driver Rx path

- hv_netvsc: prevent IPv6 addrconf after IFF_SLAVE lost that meaning

Previous releases - always broken:

- mptcp: fix races in handling connection fallback to pure TCP

- rxrpc: assorted error handling and race fixes

- sched: another batch of "security" fixes for qdiscs (QFQ, HTB)

- tls: always refresh the queue when reading sock, avoid UaF

- phy: don't register LEDs for genphy, avoid deadlock

- Bluetooth: btintel: check if controller is ISO capable on
btintel_classify_pkt_type(), work around FW returning incorrect
capabilities

Misc:

- make OpenVPN Netlink input checking more strict before it makes it
to a final release

- wifi: cfg80211: remove scan request n_channels __counted_by, it's
only yielding false positives"

* tag 'net-6.16-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (66 commits)
rxrpc: Fix to use conn aborts for conn-wide failures
rxrpc: Fix transmission of an abort in response to an abort
rxrpc: Fix notification vs call-release vs recvmsg
rxrpc: Fix recv-recv race of completed call
rxrpc: Fix irq-disabled in local_bh_enable()
selftests/tc-testing: Test htb_dequeue_tree with deactivation and row emptying
net/sched: Return NULL when htb_lookup_leaf encounters an empty rbtree
net: bridge: Do not offload IGMP/MLD messages
selftests: Add test cases for vlan_filter modification during runtime
net: vlan: fix VLAN 0 refcount imbalance of toggling filtering during runtime
tls: always refresh the queue when reading sock
virtio-net: fix recursive rtnl_lock() during probe()
net/mlx5: Update the list of the PCI supported devices
hv_netvsc: Set VF priv_flags to IFF_NO_ADDRCONF before open to prevent IPv6 addrconf
phonet/pep: Move call to pn_skb_get_dst_sockaddr() earlier in pep_sock_accept()
Bluetooth: L2CAP: Fix attempting to adjust outgoing MTU
netfilter: nf_conntrack: fix crash due to removal of uninitialised entry
net: fix segmentation after TCP/UDP fraglist GRO
ipv6: mcast: Delay put pmc->idev in mld_del_delrec()
net: airoha: fix potential use-after-free in airoha_npu_get()
...

+1591 -549
+147 -6
Documentation/netlink/specs/ovpn.yaml
···
         type: uint
         doc: Number of packets transmitted at the transport level
   -
+    name: peer-new-input
+    subset-of: peer
+    attributes:
+      -
+        name: id
+      -
+        name: remote-ipv4
+      -
+        name: remote-ipv6
+      -
+        name: remote-ipv6-scope-id
+      -
+        name: remote-port
+      -
+        name: socket
+      -
+        name: vpn-ipv4
+      -
+        name: vpn-ipv6
+      -
+        name: local-ipv4
+      -
+        name: local-ipv6
+      -
+        name: keepalive-interval
+      -
+        name: keepalive-timeout
+  -
+    name: peer-set-input
+    subset-of: peer
+    attributes:
+      -
+        name: id
+      -
+        name: remote-ipv4
+      -
+        name: remote-ipv6
+      -
+        name: remote-ipv6-scope-id
+      -
+        name: remote-port
+      -
+        name: vpn-ipv4
+      -
+        name: vpn-ipv6
+      -
+        name: local-ipv4
+      -
+        name: local-ipv6
+      -
+        name: keepalive-interval
+      -
+        name: keepalive-timeout
+  -
+    name: peer-del-input
+    subset-of: peer
+    attributes:
+      -
+        name: id
+  -
     name: keyconf
     attributes:
       -
···
           obtain the actual cipher IV
         checks:
           exact-len: nonce-tail-size
+
+  -
+    name: keyconf-get
+    subset-of: keyconf
+    attributes:
+      -
+        name: peer-id
+      -
+        name: slot
+      -
+        name: key-id
+      -
+        name: cipher-alg
+  -
+    name: keyconf-swap-input
+    subset-of: keyconf
+    attributes:
+      -
+        name: peer-id
+  -
+    name: keyconf-del-input
+    subset-of: keyconf
+    attributes:
+      -
+        name: peer-id
+      -
+        name: slot
   -
     name: ovpn
     attributes:
···
         type: nest
         doc: Peer specific cipher configuration
         nested-attributes: keyconf
+  -
+    name: ovpn-peer-new-input
+    subset-of: ovpn
+    attributes:
+      -
+        name: ifindex
+      -
+        name: peer
+        nested-attributes: peer-new-input
+  -
+    name: ovpn-peer-set-input
+    subset-of: ovpn
+    attributes:
+      -
+        name: ifindex
+      -
+        name: peer
+        nested-attributes: peer-set-input
+  -
+    name: ovpn-peer-del-input
+    subset-of: ovpn
+    attributes:
+      -
+        name: ifindex
+      -
+        name: peer
+        nested-attributes: peer-del-input
+  -
+    name: ovpn-keyconf-get
+    subset-of: ovpn
+    attributes:
+      -
+        name: ifindex
+      -
+        name: keyconf
+        nested-attributes: keyconf-get
+  -
+    name: ovpn-keyconf-swap-input
+    subset-of: ovpn
+    attributes:
+      -
+        name: ifindex
+      -
+        name: keyconf
+        nested-attributes: keyconf-swap-input
+  -
+    name: ovpn-keyconf-del-input
+    subset-of: ovpn
+    attributes:
+      -
+        name: ifindex
+      -
+        name: keyconf
+        nested-attributes: keyconf-del-input
 
 operations:
   list:
     -
       name: peer-new
-      attribute-set: ovpn
+      attribute-set: ovpn-peer-new-input
       flags: [ admin-perm ]
       doc: Add a remote peer
       do:
···
             - peer
     -
       name: peer-set
-      attribute-set: ovpn
+      attribute-set: ovpn-peer-set-input
       flags: [ admin-perm ]
       doc: modify a remote peer
       do:
···
             - peer
     -
       name: peer-del
-      attribute-set: ovpn
+      attribute-set: ovpn-peer-del-input
       flags: [ admin-perm ]
       doc: Delete existing remote peer
       do:
···
             - keyconf
     -
       name: key-get
-      attribute-set: ovpn
+      attribute-set: ovpn-keyconf-get
       flags: [ admin-perm ]
       doc: Retrieve non-sensitive data about peer key and cipher
       do:
···
             - keyconf
     -
       name: key-swap
-      attribute-set: ovpn
+      attribute-set: ovpn-keyconf-swap-input
       flags: [ admin-perm ]
       doc: Swap primary and secondary session keys for a specific peer
       do:
···
       mcgrp: peers
     -
       name: key-del
-      attribute-set: ovpn
+      attribute-set: ovpn-keyconf-del-input
       flags: [ admin-perm ]
       doc: Delete cipher key for a specific peer
       do:
+1 -1
drivers/bluetooth/bfusb.c
···
 	hdev->flush = bfusb_flush;
 	hdev->send = bfusb_send_frame;
 
-	set_bit(HCI_QUIRK_BROKEN_LOCAL_COMMANDS, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LOCAL_COMMANDS);
 
 	if (hci_register_dev(hdev) < 0) {
 		BT_ERR("Can't register HCI device");
+1 -1
drivers/bluetooth/bpa10x.c
···
 	hdev->send = bpa10x_send_frame;
 	hdev->set_diag = bpa10x_set_diag;
 
-	set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
 
 	err = hci_register_dev(hdev);
 	if (err < 0) {
+4 -4
drivers/bluetooth/btbcm.c
···
 	if (btbcm_set_bdaddr_from_efi(hdev) != 0) {
 		bt_dev_info(hdev, "BCM: Using default device address (%pMR)",
 			    &bda->bdaddr);
-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
 	}
 }
···
 
 	/* Read DMI and disable broken Read LE Min/Max Tx Power */
 	if (dmi_first_match(disable_broken_read_transmit_power))
-		set_bit(HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER);
 
 	return 0;
 }
···
 
 	btbcm_check_bdaddr(hdev);
 
-	set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
 
 	return 0;
 }
···
 		kfree_skb(skb);
 	}
 
-	set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
 
 	return 0;
 }
+15 -15
drivers/bluetooth/btintel.c
···
 	if (!bacmp(&bda->bdaddr, BDADDR_INTEL)) {
 		bt_dev_err(hdev, "Found Intel default device address (%pMR)",
 			   &bda->bdaddr);
-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
 	}
 
 	kfree_skb(skb);
···
 	 */
 	if (!bacmp(&params->otp_bdaddr, BDADDR_ANY)) {
 		bt_dev_info(hdev, "No device address configured");
-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
 	}
 
 download:
···
 	 */
 	if (!bacmp(&ver->otp_bd_addr, BDADDR_ANY)) {
 		bt_dev_info(hdev, "No device address configured");
-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
 	}
 }
···
 	 * Distinguish ISO data packets form ACL data packets
 	 * based on their connection handle value range.
 	 */
-	if (hci_skb_pkt_type(skb) == HCI_ACLDATA_PKT) {
+	if (iso_capable(hdev) && hci_skb_pkt_type(skb) == HCI_ACLDATA_PKT) {
 		__u16 handle = __le16_to_cpu(hci_acl_hdr(skb)->handle);
 
 		if (hci_handle(handle) >= BTINTEL_ISODATA_HANDLE_BASE)
···
 	}
 
 	/* Apply the common HCI quirks for Intel device */
-	set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
-	set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
-	set_bit(HCI_QUIRK_NON_PERSISTENT_DIAG, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
+	hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
+	hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_DIAG);
 
 	/* Set up the quality report callback for Intel devices */
 	hdev->set_quality_report = btintel_set_quality_report;
···
 		 */
 		if (!btintel_test_flag(hdev,
 				       INTEL_ROM_LEGACY_NO_WBS_SUPPORT))
-			set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
-				&hdev->quirks);
+			hci_set_quirk(hdev,
+				      HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 		err = btintel_legacy_rom_setup(hdev, &ver);
 		break;
···
 		 *
 		 * All Legacy bootloader devices support WBS
 		 */
-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
-			&hdev->quirks);
+		hci_set_quirk(hdev,
+			      HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 		/* These variants don't seem to support LE Coded PHY */
-		set_bit(HCI_QUIRK_BROKEN_LE_CODED, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LE_CODED);
 
 		/* Setup MSFT Extension support */
 		btintel_set_msft_opcode(hdev, ver.hw_variant);
···
 	 *
 	 * All Legacy bootloader devices support WBS
 	 */
-	set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 	/* These variants don't seem to support LE Coded PHY */
-	set_bit(HCI_QUIRK_BROKEN_LE_CODED, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LE_CODED);
 
 	/* Setup MSFT Extension support */
 	btintel_set_msft_opcode(hdev, ver.hw_variant);
···
 	 *
 	 * All TLV based devices support WBS
 	 */
-	set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 	/* Setup MSFT Extension support */
 	btintel_set_msft_opcode(hdev,
+4 -4
drivers/bluetooth/btintel_pcie.c
···
 	}
 
 	/* Apply the common HCI quirks for Intel device */
-	set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
-	set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
-	set_bit(HCI_QUIRK_NON_PERSISTENT_DIAG, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
+	hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
+	hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_DIAG);
 
 	/* Set up the quality report callback for Intel devices */
 	hdev->set_quality_report = btintel_set_quality_report;
···
 	 *
 	 * All TLV based devices support WBS
 	 */
-	set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 	/* Setup MSFT Extension support */
 	btintel_set_msft_opcode(hdev,
+2 -2
drivers/bluetooth/btmtksdio.c
···
 	}
 
 	/* Enable WBS with mSBC codec */
-	set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 	/* Enable GPIO reset mechanism */
 	if (bdev->reset) {
···
 	SET_HCIDEV_DEV(hdev, &func->dev);
 
 	hdev->manufacturer = 70;
-	set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP);
 
 	sdio_set_drvdata(func, bdev);
 
+1 -1
drivers/bluetooth/btmtkuart.c
···
 	SET_HCIDEV_DEV(hdev, &serdev->dev);
 
 	hdev->manufacturer = 70;
-	set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP);
 
 	if (btmtkuart_is_standalone(bdev)) {
 		err = clk_prepare_enable(bdev->osc);
+1 -1
drivers/bluetooth/btnxpuart.c
···
 				      "local-bd-address",
 				      (u8 *)&ba, sizeof(ba));
 	if (bacmp(&ba, BDADDR_ANY))
-		set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);
 
 	if (hci_register_dev(hdev) < 0) {
 		dev_err(&serdev->dev, "Can't register HCI device\n");
+1 -1
drivers/bluetooth/btqca.c
···
 
 	bda = (struct hci_rp_read_bd_addr *)skb->data;
 	if (!bacmp(&bda->bdaddr, &config->bdaddr))
-		set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);
 
 	kfree_skb(skb);
 
+1 -1
drivers/bluetooth/btqcomsmd.c
···
 	/* Devices do not have persistent storage for BD address. Retrieve
 	 * it from the firmware node property.
 	 */
-	set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);
 
 	return 0;
 }
+5 -5
drivers/bluetooth/btrtl.c
···
 	/* Enable controller to do both LE scan and BR/EDR inquiry
 	 * simultaneously.
 	 */
-	set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
 
 	/* Enable central-peripheral role (able to create new connections with
 	 * an existing connection in slave role).
···
 	case CHIP_ID_8851B:
 	case CHIP_ID_8922A:
 	case CHIP_ID_8852BT:
-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 		/* RTL8852C needs to transmit mSBC data continuously without
 		 * the zero length of USB packets for the ALT 6 supported chips
···
 		if (btrtl_dev->project_id == CHIP_ID_8852A ||
 		    btrtl_dev->project_id == CHIP_ID_8852B ||
 		    btrtl_dev->project_id == CHIP_ID_8852C)
-			set_bit(HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER, &hdev->quirks);
+			hci_set_quirk(hdev,
+				      HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER);
 
 		hci_set_aosp_capable(hdev);
 		break;
···
 		 * but it doesn't support any features from page 2 -
 		 * it either responds with garbage or with error status
 		 */
-		set_bit(HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2,
-			&hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2);
 		break;
 	default:
 		break;
+1 -1
drivers/bluetooth/btsdio.c
···
 	hdev->send = btsdio_send_frame;
 
 	if (func->vendor == 0x0104 && func->device == 0x00c5)
-		set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
 
 	err = hci_register_dev(hdev);
 	if (err < 0) {
+79 -69
drivers/bluetooth/btusb.c
···
 	 * Probably will need to be expanded in the future;
 	 * without these the controller will lock up.
 	 */
-	set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
-	set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks);
-	set_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks);
-	set_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks);
-	set_bit(HCI_QUIRK_BROKEN_READ_VOICE_SETTING, &hdev->quirks);
-	set_bit(HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_STORED_LINK_KEY);
+	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_ERR_DATA_REPORTING);
+	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL);
+	hci_set_quirk(hdev, HCI_QUIRK_NO_SUSPEND_NOTIFIER);
+	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_READ_VOICE_SETTING);
+	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE);
 
 	/* Clear the reset quirk since this is not an actual
 	 * early Bluetooth 1.1 device from CSR.
 	 */
-	clear_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
-	clear_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+	hci_clear_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
+	hci_clear_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
 
 	/*
 	 * Special workaround for these BT 4.0 chip clones, and potentially more:
···
 	{ 0x00190200, 40, 4, 16 }, /* WCN785x 2.0 */
 };
 
+static u16 qca_extract_board_id(const struct qca_version *ver)
+{
+	u16 flag = le16_to_cpu(ver->flag);
+	u16 board_id = 0;
+
+	if (((flag >> 8) & 0xff) == QCA_FLAG_MULTI_NVM) {
+		/* The board_id should be split into two bytes
+		 * The 1st byte is chip ID, and the 2nd byte is platform ID
+		 * For example, board ID 0x010A, 0x01 is platform ID. 0x0A is chip ID
+		 * we have several platforms, and platform IDs are continuously added
+		 * Platform ID:
+		 * 0x00 is for Mobile
+		 * 0x01 is for X86
+		 * 0x02 is for Automotive
+		 * 0x03 is for Consumer electronic
+		 */
+		board_id = (ver->chip_id << 8) + ver->platform_id;
+	}
+
+	/* Take 0xffff as invalid board ID */
+	if (board_id == 0xffff)
+		board_id = 0;
+
+	return board_id;
+}
+
 static int btusb_qca_send_vendor_req(struct usb_device *udev, u8 request,
 				     void *data, u16 size)
 {
···
 				    const struct qca_version *ver)
 {
 	u32 rom_version = le32_to_cpu(ver->rom_version);
-	u16 flag = le16_to_cpu(ver->flag);
+	const char *variant;
+	int len;
+	u16 board_id;
 
-	if (((flag >> 8) & 0xff) == QCA_FLAG_MULTI_NVM) {
-		/* The board_id should be split into two bytes
-		 * The 1st byte is chip ID, and the 2nd byte is platform ID
-		 * For example, board ID 0x010A, 0x01 is platform ID. 0x0A is chip ID
-		 * we have several platforms, and platform IDs are continuously added
-		 * Platform ID:
-		 * 0x00 is for Mobile
-		 * 0x01 is for X86
-		 * 0x02 is for Automotive
-		 * 0x03 is for Consumer electronic
-		 */
-		u16 board_id = (ver->chip_id << 8) + ver->platform_id;
-		const char *variant;
+	board_id = qca_extract_board_id(ver);
 
-		switch (le32_to_cpu(ver->ram_version)) {
-		case WCN6855_2_0_RAM_VERSION_GF:
-		case WCN6855_2_1_RAM_VERSION_GF:
-			variant = "_gf";
-			break;
-		default:
-			variant = "";
-			break;
-		}
-
-		if (board_id == 0) {
-			snprintf(fwname, max_size, "qca/nvm_usb_%08x%s.bin",
-				rom_version, variant);
-		} else {
-			snprintf(fwname, max_size, "qca/nvm_usb_%08x%s_%04x.bin",
-				rom_version, variant, board_id);
-		}
-	} else {
-		snprintf(fwname, max_size, "qca/nvm_usb_%08x.bin",
-			rom_version);
+	switch (le32_to_cpu(ver->ram_version)) {
+	case WCN6855_2_0_RAM_VERSION_GF:
+	case WCN6855_2_1_RAM_VERSION_GF:
+		variant = "_gf";
+		break;
+	default:
+		variant = NULL;
+		break;
 	}
 
+	len = snprintf(fwname, max_size, "qca/nvm_usb_%08x", rom_version);
+	if (variant)
+		len += snprintf(fwname + len, max_size - len, "%s", variant);
+	if (board_id)
+		len += snprintf(fwname + len, max_size - len, "_%04x", board_id);
+	len += snprintf(fwname + len, max_size - len, ".bin");
 }
···
 	/* Mark HCI_OP_ENHANCED_SETUP_SYNC_CONN as broken as it doesn't seem to
 	 * work with the likes of HSP/HFP mSBC.
 	 */
-	set_bit(HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN);
 
 	return 0;
 }
···
 	}
 #endif
 	if (id->driver_info & BTUSB_CW6622)
-		set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_STORED_LINK_KEY);
 
 	if (id->driver_info & BTUSB_BCM2045)
-		set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_STORED_LINK_KEY);
 
 	if (id->driver_info & BTUSB_BCM92035)
 		hdev->setup = btusb_setup_bcm92035;
···
 		hdev->reset = btmtk_reset_sync;
 		hdev->set_bdaddr = btmtk_set_bdaddr;
 		hdev->send = btusb_send_frame_mtk;
-		set_bit(HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN, &hdev->quirks);
-		set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN);
+		hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP);
 		data->recv_acl = btmtk_usb_recv_acl;
 		data->suspend = btmtk_usb_suspend;
 		data->resume = btmtk_usb_resume;
···
 	}
 
 	if (id->driver_info & BTUSB_SWAVE) {
-		set_bit(HCI_QUIRK_FIXUP_INQUIRY_MODE, &hdev->quirks);
-		set_bit(HCI_QUIRK_BROKEN_LOCAL_COMMANDS, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_FIXUP_INQUIRY_MODE);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LOCAL_COMMANDS);
 	}
 
 	if (id->driver_info & BTUSB_INTEL_BOOT) {
 		hdev->manufacturer = 2;
-		set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_RAW_DEVICE);
 	}
 
 	if (id->driver_info & BTUSB_ATH3012) {
 		data->setup_on_usb = btusb_setup_qca;
 		hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
-		set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
-		set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
+		hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
 	}
 
 	if (id->driver_info & BTUSB_QCA_ROME) {
···
 		hdev->shutdown = btusb_shutdown_qca;
 		hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
 		hdev->reset = btusb_qca_reset;
-		set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
 		btusb_check_needs_reset_resume(intf);
 	}
 
···
 		hdev->shutdown = btusb_shutdown_qca;
 		hdev->set_bdaddr = btusb_set_bdaddr_wcn6855;
 		hdev->reset = btusb_qca_reset;
-		set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
 		hci_set_msft_opcode(hdev, 0xFD70);
 	}
 
···
 
 	if (id->driver_info & BTUSB_ACTIONS_SEMI) {
 		/* Support is advertised, but not implemented */
-		set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks);
-		set_bit(HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER, &hdev->quirks);
-		set_bit(HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT, &hdev->quirks);
-		set_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &hdev->quirks);
-		set_bit(HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE, &hdev->quirks);
-		set_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, &hdev->quirks);
-		set_bit(HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_ERR_DATA_REPORTING);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_EXT_SCAN);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_EXT_CREATE_CONN);
+		hci_set_quirk(hdev,
+			      HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT);
 	}
 
 	if (!reset)
-		set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
 
 	if (force_scofix || id->driver_info & BTUSB_WRONG_SCO_MTU) {
 		if (!disable_scofix)
-			set_bit(HCI_QUIRK_FIXUP_BUFFER_SIZE, &hdev->quirks);
+			hci_set_quirk(hdev, HCI_QUIRK_FIXUP_BUFFER_SIZE);
 	}
 
 	if (id->driver_info & BTUSB_BROKEN_ISOC)
 		data->isoc = NULL;
 
 	if (id->driver_info & BTUSB_WIDEBAND_SPEECH)
-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 	if (id->driver_info & BTUSB_INVALID_LE_STATES)
-		set_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LE_STATES);
 
 	if (id->driver_info & BTUSB_DIGIANSWER) {
 		data->cmdreq_type = USB_TYPE_VENDOR;
-		set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
 	}
 
 	if (id->driver_info & BTUSB_CSR) {
···
 
 		/* Old firmware would otherwise execute USB reset */
 		if (bcdDevice < 0x117)
-			set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+			hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
 
 		/* This must be set first in case we disable it for fakes */
-		set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
 
 		/* Fake CSR devices with broken commands */
 		if (le16_to_cpu(udev->descriptor.idVendor) == 0x0a12 &&
···
 
 		/* New sniffer firmware has crippled HCI interface */
 		if (le16_to_cpu(udev->descriptor.bcdDevice) > 0x997)
-			set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
+			hci_set_quirk(hdev, HCI_QUIRK_RAW_DEVICE);
 	}
 
 	if (id->driver_info & BTUSB_INTEL_BOOT) {
+1 -1
drivers/bluetooth/hci_aml.c
···
 
 	if (!bacmp(&paddr->bdaddr, AML_BDADDR_DEFAULT)) {
 		bt_dev_info(hdev, "amlbt using default bdaddr (%pM)", &paddr->bdaddr);
-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
 	}
 
 exit:
+2 -2
drivers/bluetooth/hci_bcm.c
···
 	 * Allow the bootloader to set a valid address through the
 	 * device tree.
 	 */
-	if (test_bit(HCI_QUIRK_INVALID_BDADDR, &hu->hdev->quirks))
-		set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hu->hdev->quirks);
+	if (hci_test_quirk(hu->hdev, HCI_QUIRK_INVALID_BDADDR))
+		hci_set_quirk(hu->hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);
 
 	if (!bcm_request_irq(bcm))
 		err = bcm_setup_sleep(hu);
+5 -5
drivers/bluetooth/hci_bcm4377.c
···
 
 	bda = (struct hci_rp_read_bd_addr *)skb->data;
 	if (!bcm4377_is_valid_bdaddr(bcm4377, &bda->bdaddr))
-		set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &bcm4377->hdev->quirks);
+		hci_set_quirk(bcm4377->hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);
 
 	kfree_skb(skb);
 	return 0;
···
 	hdev->setup = bcm4377_hci_setup;
 
 	if (bcm4377->hw->broken_mws_transport_config)
-		set_bit(HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG);
 	if (bcm4377->hw->broken_ext_scan)
-		set_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_EXT_SCAN);
 	if (bcm4377->hw->broken_le_coded)
-		set_bit(HCI_QUIRK_BROKEN_LE_CODED, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LE_CODED);
 	if (bcm4377->hw->broken_le_ext_adv_report_phy)
-		set_bit(HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY);
 
 	pci_set_drvdata(pdev, bcm4377);
 	hci_set_drvdata(hdev, bcm4377);
+1 -1
drivers/bluetooth/hci_intel.c
···
 	 */
 	if (!bacmp(&params.otp_bdaddr, BDADDR_ANY)) {
 		bt_dev_info(hdev, "No device address configured");
-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
 	}
 
 	/* With this Intel bootloader only the hardware variant and device
+3 -3
drivers/bluetooth/hci_ldisc.c
···
 	SET_HCIDEV_DEV(hdev, hu->tty->dev);
 
 	if (test_bit(HCI_UART_RAW_DEVICE, &hu->hdev_flags))
-		set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_RAW_DEVICE);
 
 	if (test_bit(HCI_UART_EXT_CONFIG, &hu->hdev_flags))
-		set_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG);
 
 	if (!test_bit(HCI_UART_RESET_ON_INIT, &hu->hdev_flags))
-		set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
 
 	/* Only call open() for the protocol after hdev is fully initialized as
 	 * open() (or a timer/workqueue it starts) may attempt to reference it.
+2 -2
drivers/bluetooth/hci_ll.c
··· 649 649 /* This means that there was an error getting the BD address 650 650 * during probe, so mark the device as having a bad address. 651 651 */ 652 - set_bit(HCI_QUIRK_INVALID_BDADDR, &hu->hdev->quirks); 652 + hci_set_quirk(hu->hdev, HCI_QUIRK_INVALID_BDADDR); 653 653 } else if (bacmp(&lldev->bdaddr, BDADDR_ANY)) { 654 654 err = ll_set_bdaddr(hu->hdev, &lldev->bdaddr); 655 655 if (err) 656 - set_bit(HCI_QUIRK_INVALID_BDADDR, &hu->hdev->quirks); 656 + hci_set_quirk(hu->hdev, HCI_QUIRK_INVALID_BDADDR); 657 657 } 658 658 659 659 /* Operational speed if any */
+1 -1
drivers/bluetooth/hci_nokia.c
··· 439 439 440 440 if (btdev->man_id == NOKIA_ID_BCM2048) { 441 441 hu->hdev->set_bdaddr = btbcm_set_bdaddr; 442 - set_bit(HCI_QUIRK_INVALID_BDADDR, &hu->hdev->quirks); 442 + hci_set_quirk(hu->hdev, HCI_QUIRK_INVALID_BDADDR); 443 443 dev_dbg(dev, "bcm2048 has invalid bluetooth address!"); 444 444 } 445 445
+7 -7
drivers/bluetooth/hci_qca.c
··· 1892 1892 /* Enable controller to do both LE scan and BR/EDR inquiry 1893 1893 * simultaneously. 1894 1894 */ 1895 - set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks); 1895 + hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY); 1896 1896 1897 1897 switch (soc_type) { 1898 1898 case QCA_QCA2066: ··· 1944 1944 case QCA_WCN7850: 1945 1945 qcadev = serdev_device_get_drvdata(hu->serdev); 1946 1946 if (qcadev->bdaddr_property_broken) 1947 - set_bit(HCI_QUIRK_BDADDR_PROPERTY_BROKEN, &hdev->quirks); 1947 + hci_set_quirk(hdev, HCI_QUIRK_BDADDR_PROPERTY_BROKEN); 1948 1948 1949 1949 hci_set_aosp_capable(hdev); 1950 1950 ··· 2487 2487 hdev = qcadev->serdev_hu.hdev; 2488 2488 2489 2489 if (power_ctrl_enabled) { 2490 - set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks); 2490 + hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP); 2491 2491 hdev->shutdown = qca_power_off; 2492 2492 } 2493 2493 ··· 2496 2496 * be queried via hci. Same with the valid le states quirk. 2497 2497 */ 2498 2498 if (data->capabilities & QCA_CAP_WIDEBAND_SPEECH) 2499 - set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, 2500 - &hdev->quirks); 2499 + hci_set_quirk(hdev, 2500 + HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED); 2501 2501 2502 2502 if (!(data->capabilities & QCA_CAP_VALID_LE_STATES)) 2503 - set_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks); 2503 + hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LE_STATES); 2504 2504 } 2505 2505 2506 2506 return 0; ··· 2550 2550 * invoked and the SOC is already in the initial state, so 2551 2551 * don't also need to send the VSC. 2552 2552 */ 2553 - if (test_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks) || 2553 + if (hci_test_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP) || 2554 2554 hci_dev_test_flag(hdev, HCI_SETUP)) 2555 2555 return; 2556 2556
+4 -4
drivers/bluetooth/hci_serdev.c
··· 152 152 * BT SOC is completely powered OFF during BT OFF, holding port 153 153 * open may drain the battery. 154 154 */ 155 - if (test_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks)) { 155 + if (hci_test_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP)) { 156 156 clear_bit(HCI_UART_PROTO_READY, &hu->flags); 157 157 serdev_device_close(hu->serdev); 158 158 } ··· 358 358 SET_HCIDEV_DEV(hdev, &hu->serdev->dev); 359 359 360 360 if (test_bit(HCI_UART_NO_SUSPEND_NOTIFIER, &hu->flags)) 361 - set_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks); 361 + hci_set_quirk(hdev, HCI_QUIRK_NO_SUSPEND_NOTIFIER); 362 362 363 363 if (test_bit(HCI_UART_RAW_DEVICE, &hu->hdev_flags)) 364 - set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks); 364 + hci_set_quirk(hdev, HCI_QUIRK_RAW_DEVICE); 365 365 366 366 if (test_bit(HCI_UART_EXT_CONFIG, &hu->hdev_flags)) 367 - set_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks); 367 + hci_set_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG); 368 368 369 369 if (test_bit(HCI_UART_INIT_PENDING, &hu->hdev_flags)) 370 370 return 0;
+4 -4
drivers/bluetooth/hci_vhci.c
··· 415 415 hdev->get_codec_config_data = vhci_get_codec_config_data; 416 416 hdev->wakeup = vhci_wakeup; 417 417 hdev->setup = vhci_setup; 418 - set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks); 419 - set_bit(HCI_QUIRK_SYNC_FLOWCTL_SUPPORTED, &hdev->quirks); 418 + hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP); 419 + hci_set_quirk(hdev, HCI_QUIRK_SYNC_FLOWCTL_SUPPORTED); 420 420 421 421 /* bit 6 is for external configuration */ 422 422 if (opcode & 0x40) 423 - set_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks); 423 + hci_set_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG); 424 424 425 425 /* bit 7 is for raw device */ 426 426 if (opcode & 0x80) 427 - set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks); 427 + hci_set_quirk(hdev, HCI_QUIRK_RAW_DEVICE); 428 428 429 429 if (hci_register_dev(hdev) < 0) { 430 430 BT_ERR("Can't register HCI device");
+5 -5
drivers/bluetooth/virtio_bt.c
··· 327 327 hdev->setup = virtbt_setup_intel; 328 328 hdev->shutdown = virtbt_shutdown_generic; 329 329 hdev->set_bdaddr = virtbt_set_bdaddr_intel; 330 - set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks); 331 - set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks); 332 - set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks); 330 + hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER); 331 + hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY); 332 + hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED); 333 333 break; 334 334 335 335 case VIRTIO_BT_CONFIG_VENDOR_REALTEK: 336 336 hdev->manufacturer = 93; 337 337 hdev->setup = virtbt_setup_realtek; 338 338 hdev->shutdown = virtbt_shutdown_generic; 339 - set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks); 340 - set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks); 339 + hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY); 340 + hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED); 341 341 break; 342 342 } 343 343 }
+41 -20
drivers/net/can/m_can/tcan4x5x-core.c
··· 343 343 of_property_read_bool(cdev->dev->of_node, "ti,nwkrq-voltage-vio"); 344 344 } 345 345 346 - static int tcan4x5x_get_gpios(struct m_can_classdev *cdev, 347 - const struct tcan4x5x_version_info *version_info) 346 + static int tcan4x5x_get_gpios(struct m_can_classdev *cdev) 348 347 { 349 348 struct tcan4x5x_priv *tcan4x5x = cdev_to_priv(cdev); 350 349 int ret; 351 350 352 - if (version_info->has_wake_pin) { 353 - tcan4x5x->device_wake_gpio = devm_gpiod_get(cdev->dev, "device-wake", 354 - GPIOD_OUT_HIGH); 355 - if (IS_ERR(tcan4x5x->device_wake_gpio)) { 356 - if (PTR_ERR(tcan4x5x->device_wake_gpio) == -EPROBE_DEFER) 357 - return -EPROBE_DEFER; 351 + tcan4x5x->device_wake_gpio = devm_gpiod_get_optional(cdev->dev, 352 + "device-wake", 353 + GPIOD_OUT_HIGH); 354 + if (IS_ERR(tcan4x5x->device_wake_gpio)) { 355 + if (PTR_ERR(tcan4x5x->device_wake_gpio) == -EPROBE_DEFER) 356 + return -EPROBE_DEFER; 358 357 359 - tcan4x5x_disable_wake(cdev); 360 - } 358 + tcan4x5x->device_wake_gpio = NULL; 361 359 } 362 360 363 361 tcan4x5x->reset_gpio = devm_gpiod_get_optional(cdev->dev, "reset", ··· 367 369 if (ret) 368 370 return ret; 369 371 370 - if (version_info->has_state_pin) { 371 - tcan4x5x->device_state_gpio = devm_gpiod_get_optional(cdev->dev, 372 - "device-state", 373 - GPIOD_IN); 374 - if (IS_ERR(tcan4x5x->device_state_gpio)) { 375 - tcan4x5x->device_state_gpio = NULL; 376 - tcan4x5x_disable_state(cdev); 377 - } 372 + tcan4x5x->device_state_gpio = devm_gpiod_get_optional(cdev->dev, 373 + "device-state", 374 + GPIOD_IN); 375 + if (IS_ERR(tcan4x5x->device_state_gpio)) 376 + tcan4x5x->device_state_gpio = NULL; 377 + 378 + return 0; 379 + } 380 + 381 + static int tcan4x5x_check_gpios(struct m_can_classdev *cdev, 382 + const struct tcan4x5x_version_info *version_info) 383 + { 384 + struct tcan4x5x_priv *tcan4x5x = cdev_to_priv(cdev); 385 + int ret; 386 + 387 + if (version_info->has_wake_pin && !tcan4x5x->device_wake_gpio) { 388 + ret = tcan4x5x_disable_wake(cdev); 389 + if (ret) 390 + return ret; 391 + } 392 + 393 + if (version_info->has_state_pin && !tcan4x5x->device_state_gpio) { 394 + ret = tcan4x5x_disable_state(cdev); 395 + if (ret) 396 + return ret; 378 397 } 379 398 380 399 return 0; ··· 483 468 goto out_m_can_class_free_dev; 484 469 } 485 470 471 + ret = tcan4x5x_get_gpios(mcan_class); 472 + if (ret) { 473 + dev_err(&spi->dev, "Getting gpios failed %pe\n", ERR_PTR(ret)); 474 + goto out_power; 475 + } 476 + 486 477 version_info = tcan4x5x_find_version(priv); 487 478 if (IS_ERR(version_info)) { 488 479 ret = PTR_ERR(version_info); 489 480 goto out_power; 490 481 } 491 482 492 - ret = tcan4x5x_get_gpios(mcan_class, version_info); 483 + ret = tcan4x5x_check_gpios(mcan_class, version_info); 493 484 if (ret) { 494 - dev_err(&spi->dev, "Getting gpios failed %pe\n", ERR_PTR(ret)); 485 + dev_err(&spi->dev, "Checking gpios failed %pe\n", ERR_PTR(ret)); 495 486 goto out_power; 496 487 } 497 488
+2 -1
drivers/net/ethernet/airoha/airoha_npu.c
··· 401 401 return ERR_PTR(-ENODEV); 402 402 403 403 pdev = of_find_device_by_node(np); 404 - of_node_put(np); 405 404 406 405 if (!pdev) { 407 406 dev_err(dev, "cannot find device node %s\n", np->name); 407 + of_node_put(np); 408 408 return ERR_PTR(-ENODEV); 409 409 } 410 + of_node_put(np); 410 411 411 412 if (!try_module_get(THIS_MODULE)) { 412 413 dev_err(dev, "failed to get the device driver module\n");
+2 -1
drivers/net/ethernet/intel/fm10k/fm10k.h
··· 189 189 struct fm10k_ring_container rx, tx; 190 190 191 191 struct napi_struct napi; 192 + struct rcu_head rcu; /* to avoid race with update stats on free */ 193 + 192 194 cpumask_t affinity_mask; 193 195 char name[IFNAMSIZ + 9]; 194 196 195 197 #ifdef CONFIG_DEBUG_FS 196 198 struct dentry *dbg_q_vector; 197 199 #endif /* CONFIG_DEBUG_FS */ 198 - struct rcu_head rcu; /* to avoid race with update stats on free */ 199 200 200 201 /* for dynamic allocation of rings associated with this q_vector */ 201 202 struct fm10k_ring ring[] ____cacheline_internodealigned_in_smp;
+1 -1
drivers/net/ethernet/intel/i40e/i40e.h
··· 945 945 u16 reg_idx; /* register index of the interrupt */ 946 946 947 947 struct napi_struct napi; 948 + struct rcu_head rcu; /* to avoid race with update stats on free */ 948 949 949 950 struct i40e_ring_container rx; 950 951 struct i40e_ring_container tx; ··· 956 955 cpumask_t affinity_mask; 957 956 struct irq_affinity_notify affinity_notify; 958 957 959 - struct rcu_head rcu; /* to avoid race with update stats on free */ 960 958 char name[I40E_INT_NAME_STR_LEN]; 961 959 bool arm_wb_state; 962 960 bool in_busy_poll;
+1 -1
drivers/net/ethernet/intel/ice/ice_debugfs.c
··· 606 606 607 607 pf->ice_debugfs_pf_fwlog = debugfs_create_dir("fwlog", 608 608 pf->ice_debugfs_pf); 609 - if (IS_ERR(pf->ice_debugfs_pf)) 609 + if (IS_ERR(pf->ice_debugfs_pf_fwlog)) 610 610 goto err_create_module_files; 611 611 612 612 fw_modules_dir = debugfs_create_dir("modules",
+2 -1
drivers/net/ethernet/intel/ice/ice_lag.c
··· 2226 2226 struct ice_lag *lag = pf->lag; 2227 2227 struct net_device *tmp_nd; 2228 2228 2229 - if (!ice_is_feature_supported(pf, ICE_F_SRIOV_LAG) || !lag) 2229 + if (!ice_is_feature_supported(pf, ICE_F_SRIOV_LAG) || 2230 + !lag || !lag->upper_netdev) 2230 2231 return false; 2231 2232 2232 2233 rcu_read_lock();
+2 -1
drivers/net/ethernet/intel/ixgbe/ixgbe.h
··· 507 507 struct ixgbe_ring_container rx, tx; 508 508 509 509 struct napi_struct napi; 510 + struct rcu_head rcu; /* to avoid race with update stats on free */ 511 + 510 512 cpumask_t affinity_mask; 511 513 int numa_node; 512 - struct rcu_head rcu; /* to avoid race with update stats on free */ 513 514 char name[IFNAMSIZ + 9]; 514 515 515 516 /* for dynamic allocation of rings associated with this q_vector */
+8 -4
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 1154 1154 } 1155 1155 } 1156 1156 1157 - static void mlx5e_lro_update_hdr(struct sk_buff *skb, struct mlx5_cqe64 *cqe, 1158 - u32 cqe_bcnt) 1157 + static unsigned int mlx5e_lro_update_hdr(struct sk_buff *skb, 1158 + struct mlx5_cqe64 *cqe, 1159 + u32 cqe_bcnt) 1159 1160 { 1160 1161 struct ethhdr *eth = (struct ethhdr *)(skb->data); 1161 1162 struct tcphdr *tcp; ··· 1206 1205 tcp->check = tcp_v6_check(payload_len, &ipv6->saddr, 1207 1206 &ipv6->daddr, check); 1208 1207 } 1208 + 1209 + return (unsigned int)((unsigned char *)tcp + tcp->doff * 4 - skb->data); 1209 1210 } 1210 1211 1211 1212 static void *mlx5e_shampo_get_packet_hd(struct mlx5e_rq *rq, u16 header_index) ··· 1564 1561 mlx5e_macsec_offload_handle_rx_skb(netdev, skb, cqe); 1565 1562 1566 1563 if (lro_num_seg > 1) { 1567 - mlx5e_lro_update_hdr(skb, cqe, cqe_bcnt); 1568 - skb_shinfo(skb)->gso_size = DIV_ROUND_UP(cqe_bcnt, lro_num_seg); 1564 + unsigned int hdrlen = mlx5e_lro_update_hdr(skb, cqe, cqe_bcnt); 1565 + 1566 + skb_shinfo(skb)->gso_size = DIV_ROUND_UP(cqe_bcnt - hdrlen, lro_num_seg); 1569 1567 /* Subtract one since we already counted this as one 1570 1568 * "regular" packet in mlx5e_complete_rx_cqe() 1571 1569 */
+1
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 2257 2257 { PCI_VDEVICE(MELLANOX, 0x1021) }, /* ConnectX-7 */ 2258 2258 { PCI_VDEVICE(MELLANOX, 0x1023) }, /* ConnectX-8 */ 2259 2259 { PCI_VDEVICE(MELLANOX, 0x1025) }, /* ConnectX-9 */ 2260 + { PCI_VDEVICE(MELLANOX, 0x1027) }, /* ConnectX-10 */ 2260 2261 { PCI_VDEVICE(MELLANOX, 0xa2d2) }, /* BlueField integrated ConnectX-5 network controller */ 2261 2262 { PCI_VDEVICE(MELLANOX, 0xa2d3), MLX5_PCI_DEV_IS_VF}, /* BlueField integrated ConnectX-5 network controller VF */ 2262 2263 { PCI_VDEVICE(MELLANOX, 0xa2d6) }, /* BlueField-2 integrated ConnectX-6 Dx network controller */
+7 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
··· 433 433 return -ETIMEDOUT; 434 434 } 435 435 436 + *system = (struct system_counterval_t) { 437 + .cycles = 0, 438 + .cs_id = CSID_X86_ART, 439 + .use_nsecs = false, 440 + }; 441 + 436 442 num_snapshot = (readl(ioaddr + GMAC_TIMESTAMP_STATUS) & 437 443 GMAC_TIMESTAMP_ATSNS_MASK) >> 438 444 GMAC_TIMESTAMP_ATSNS_SHIFT; ··· 454 448 } 455 449 456 450 system->cycles *= intel_priv->crossts_adj; 457 - system->cs_id = CSID_X86_ART; 451 + 458 452 priv->plat->flags &= ~STMMAC_FLAG_INT_SNAPSHOT_EN; 459 453 460 454 return 0;
+5 -4
drivers/net/ethernet/wangxun/libwx/wx_hw.c
··· 1912 1912 struct wx_ring *ring) 1913 1913 { 1914 1914 u16 reg_idx = ring->reg_idx; 1915 - union wx_rx_desc *rx_desc; 1916 1915 u64 rdba = ring->dma; 1917 1916 u32 rxdctl; 1918 1917 ··· 1941 1942 memset(ring->rx_buffer_info, 0, 1942 1943 sizeof(struct wx_rx_buffer) * ring->count); 1943 1944 1944 - /* initialize Rx descriptor 0 */ 1945 - rx_desc = WX_RX_DESC(ring, 0); 1946 - rx_desc->wb.upper.length = 0; 1945 + /* reset ntu and ntc to place SW in sync with hardware */ 1946 + ring->next_to_clean = 0; 1947 + ring->next_to_use = 0; 1947 1948 1948 1949 /* enable receive descriptor ring */ 1949 1950 wr32m(wx, WX_PX_RR_CFG(reg_idx), ··· 2777 2778 hwstats->fdirmiss += rd32(wx, WX_RDB_FDIR_MISS); 2778 2779 } 2779 2780 2781 + /* qmprc is not cleared on read, manual reset it */ 2782 + hwstats->qmprc = 0; 2780 2783 for (i = wx->num_vfs * wx->num_rx_queues_per_pool; 2781 2784 i < wx->mac.max_rx_queues; i++) 2782 2785 hwstats->qmprc += rd32(wx, WX_PX_MPRC(i));
+7 -13
drivers/net/ethernet/wangxun/libwx/wx_lib.c
··· 174 174 skb_frag_off(frag), 175 175 skb_frag_size(frag), 176 176 DMA_FROM_DEVICE); 177 - 178 - /* If the page was released, just unmap it. */ 179 - if (unlikely(WX_CB(skb)->page_released)) 180 - page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false); 181 177 } 182 178 183 179 static struct wx_rx_buffer *wx_get_rx_buffer(struct wx_ring *rx_ring, ··· 223 227 struct sk_buff *skb, 224 228 int rx_buffer_pgcnt) 225 229 { 226 - if (!IS_ERR(skb) && WX_CB(skb)->dma == rx_buffer->dma) 227 - /* the page has been released from the ring */ 228 - WX_CB(skb)->page_released = true; 229 - 230 230 /* clear contents of rx_buffer */ 231 231 rx_buffer->page = NULL; 232 232 rx_buffer->skb = NULL; ··· 307 315 return false; 308 316 dma = page_pool_get_dma_addr(page); 309 317 310 - bi->page_dma = dma; 318 + bi->dma = dma; 311 319 bi->page = page; 312 320 bi->page_offset = 0; 313 321 ··· 344 352 DMA_FROM_DEVICE); 345 353 346 354 rx_desc->read.pkt_addr = 347 - cpu_to_le64(bi->page_dma + bi->page_offset); 355 + cpu_to_le64(bi->dma + bi->page_offset); 348 356 349 357 rx_desc++; 350 358 bi++; ··· 357 365 358 366 /* clear the status bits for the next_to_use descriptor */ 359 367 rx_desc->wb.upper.status_error = 0; 368 + /* clear the length for the next_to_use descriptor */ 369 + rx_desc->wb.upper.length = 0; 360 370 361 371 cleaned_count--; 362 372 } while (cleaned_count); ··· 2417 2423 if (rx_buffer->skb) { 2418 2424 struct sk_buff *skb = rx_buffer->skb; 2419 2425 2420 - if (WX_CB(skb)->page_released) 2421 - page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false); 2422 - 2423 2426 dev_kfree_skb(skb); 2424 2427 } 2425 2428 ··· 2439 2448 rx_buffer = rx_ring->rx_buffer_info; 2440 2449 } 2441 2450 } 2451 + 2452 + /* Zero out the descriptor ring */ 2453 + memset(rx_ring->desc, 0, rx_ring->size); 2442 2454 2443 2455 rx_ring->next_to_alloc = 0; 2444 2456 rx_ring->next_to_clean = 0;
-2
drivers/net/ethernet/wangxun/libwx/wx_type.h
··· 909 909 struct wx_cb { 910 910 dma_addr_t dma; 911 911 u16 append_cnt; /* number of skb's appended */ 912 - bool page_released; 913 912 bool dma_released; 914 913 }; 915 914 ··· 997 998 struct wx_rx_buffer { 998 999 struct sk_buff *skb; 999 1000 dma_addr_t dma; 1000 - dma_addr_t page_dma; 1001 1001 struct page *page; 1002 1002 unsigned int page_offset; 1003 1003 };
+1 -1
drivers/net/ethernet/xilinx/xilinx_emaclite.c
··· 286 286 287 287 /* Read the remaining data */ 288 288 for (; length > 0; length--) 289 - *to_u8_ptr = *from_u8_ptr; 289 + *to_u8_ptr++ = *from_u8_ptr++; 290 290 } 291 291 } 292 292
+4 -1
drivers/net/hyperv/netvsc_drv.c
··· 2317 2317 if (!ndev) 2318 2318 return NOTIFY_DONE; 2319 2319 2320 - /* set slave flag before open to prevent IPv6 addrconf */ 2320 + /* Set slave flag and no addrconf flag before open 2321 + * to prevent IPv6 addrconf. 2322 + */ 2321 2323 vf_netdev->flags |= IFF_SLAVE; 2324 + vf_netdev->priv_flags |= IFF_NO_ADDRCONF; 2322 2325 return NOTIFY_DONE; 2323 2326 } 2324 2327
+7
drivers/net/ovpn/io.c
··· 62 62 unsigned int pkt_len; 63 63 int ret; 64 64 65 + /* 66 + * GSO state from the transport layer is not valid for the tunnel/data 67 + * path. Reset all GSO fields to prevent any further GSO processing 68 + * from entering an inconsistent state. 69 + */ 70 + skb_gso_reset(skb); 71 + 65 72 /* we can't guarantee the packet wasn't corrupted before entering the 66 73 * VPN, therefore we give other layers a chance to check that 67 74 */
+43 -8
drivers/net/ovpn/netlink.c
··· 352 352 return -EINVAL; 353 353 354 354 ret = nla_parse_nested(attrs, OVPN_A_PEER_MAX, info->attrs[OVPN_A_PEER], 355 - ovpn_peer_nl_policy, info->extack); 355 + ovpn_peer_new_input_nl_policy, info->extack); 356 356 if (ret) 357 357 return ret; 358 358 ··· 476 476 return -EINVAL; 477 477 478 478 ret = nla_parse_nested(attrs, OVPN_A_PEER_MAX, info->attrs[OVPN_A_PEER], 479 - ovpn_peer_nl_policy, info->extack); 479 + ovpn_peer_set_input_nl_policy, info->extack); 480 480 if (ret) 481 481 return ret; 482 482 ··· 654 654 struct ovpn_peer *peer; 655 655 struct sk_buff *msg; 656 656 u32 peer_id; 657 - int ret; 657 + int ret, i; 658 658 659 659 if (GENL_REQ_ATTR_CHECK(info, OVPN_A_PEER)) 660 660 return -EINVAL; ··· 667 667 if (NL_REQ_ATTR_CHECK(info->extack, info->attrs[OVPN_A_PEER], attrs, 668 668 OVPN_A_PEER_ID)) 669 669 return -EINVAL; 670 + 671 + /* OVPN_CMD_PEER_GET expects only the PEER_ID, therefore 672 + * ensure that the user hasn't specified any other attribute. 673 + * 674 + * Unfortunately this check cannot be performed via netlink 675 + * spec/policy and must be open-coded. 676 + */ 677 + for (i = 0; i < OVPN_A_PEER_MAX + 1; i++) { 678 + if (i == OVPN_A_PEER_ID) 679 + continue; 680 + 681 + if (attrs[i]) { 682 + NL_SET_ERR_MSG_FMT_MOD(info->extack, 683 + "unexpected attribute %u", i); 684 + return -EINVAL; 685 + } 686 + } 670 687 671 688 peer_id = nla_get_u32(attrs[OVPN_A_PEER_ID]); 672 689 peer = ovpn_peer_get_by_id(ovpn, peer_id); ··· 785 768 return -EINVAL; 786 769 787 770 ret = nla_parse_nested(attrs, OVPN_A_PEER_MAX, info->attrs[OVPN_A_PEER], 788 - ovpn_peer_nl_policy, info->extack); 771 + ovpn_peer_del_input_nl_policy, info->extack); 789 772 if (ret) 790 773 return ret; 791 774 ··· 986 969 struct ovpn_peer *peer; 987 970 struct sk_buff *msg; 988 971 u32 peer_id; 989 - int ret; 972 + int ret, i; 990 973 991 974 if (GENL_REQ_ATTR_CHECK(info, OVPN_A_KEYCONF)) 992 975 return -EINVAL; 993 976 994 977 ret = nla_parse_nested(attrs, OVPN_A_KEYCONF_MAX, 995 978 info->attrs[OVPN_A_KEYCONF], 996 - ovpn_keyconf_nl_policy, info->extack); 979 + ovpn_keyconf_get_nl_policy, info->extack); 997 980 if (ret) 998 981 return ret; 999 982 ··· 1004 987 if (NL_REQ_ATTR_CHECK(info->extack, info->attrs[OVPN_A_KEYCONF], attrs, 1005 988 OVPN_A_KEYCONF_SLOT)) 1006 989 return -EINVAL; 990 + 991 + /* OVPN_CMD_KEY_GET expects only the PEER_ID and the SLOT, therefore 992 + * ensure that the user hasn't specified any other attribute. 993 + * 994 + * Unfortunately this check cannot be performed via netlink 995 + * spec/policy and must be open-coded. 996 + */ 997 + for (i = 0; i < OVPN_A_KEYCONF_MAX + 1; i++) { 998 + if (i == OVPN_A_KEYCONF_PEER_ID || 999 + i == OVPN_A_KEYCONF_SLOT) 1000 + continue; 1001 + 1002 + if (attrs[i]) { 1003 + NL_SET_ERR_MSG_FMT_MOD(info->extack, 1004 + "unexpected attribute %u", i); 1005 + return -EINVAL; 1006 + } 1007 + } 1007 1008 1008 1009 peer_id = nla_get_u32(attrs[OVPN_A_KEYCONF_PEER_ID]); 1009 1010 peer = ovpn_peer_get_by_id(ovpn, peer_id); ··· 1072 1037 1073 1038 ret = nla_parse_nested(attrs, OVPN_A_KEYCONF_MAX, 1074 1039 info->attrs[OVPN_A_KEYCONF], 1075 - ovpn_keyconf_nl_policy, info->extack); 1040 + ovpn_keyconf_swap_input_nl_policy, info->extack); 1076 1041 if (ret) 1077 1042 return ret; 1078 1043 ··· 1109 1074 1110 1075 ret = nla_parse_nested(attrs, OVPN_A_KEYCONF_MAX, 1111 1076 info->attrs[OVPN_A_KEYCONF], 1112 - ovpn_keyconf_nl_policy, info->extack); 1077 + ovpn_keyconf_del_input_nl_policy, info->extack); 1113 1078 if (ret) 1114 1079 return ret; 1115 1080
+1
drivers/net/ovpn/udp.c
··· 344 344 int ret; 345 345 346 346 skb->dev = peer->ovpn->dev; 347 + skb->mark = READ_ONCE(sk->sk_mark); 347 348 /* no checksum performed at this layer */ 348 349 skb->ip_summed = CHECKSUM_NONE; 349 350
+4 -2
drivers/net/phy/phy_device.c
··· 3416 3416 /* Get the LEDs from the device tree, and instantiate standard 3417 3417 * LEDs for them. 3418 3418 */ 3419 - if (IS_ENABLED(CONFIG_PHYLIB_LEDS)) 3419 + if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev) && 3420 + !phy_driver_is_genphy_10g(phydev)) 3420 3421 err = of_phy_leds(phydev); 3421 3422 3422 3423 out: ··· 3434 3433 3435 3434 cancel_delayed_work_sync(&phydev->state_queue); 3436 3435 3437 - if (IS_ENABLED(CONFIG_PHYLIB_LEDS)) 3436 + if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev) && 3437 + !phy_driver_is_genphy_10g(phydev)) 3438 3438 phy_leds_unregister(phydev); 3439 3439 3440 3440 phydev->state = PHY_DOWN;
+4
drivers/net/usb/sierra_net.c
··· 689 689 status); 690 690 return -ENODEV; 691 691 } 692 + if (!dev->status) { 693 + dev_err(&dev->udev->dev, "No status endpoint found"); 694 + return -ENODEV; 695 + } 692 696 /* Initialize sierra private data */ 693 697 priv = kzalloc(sizeof *priv, GFP_KERNEL); 694 698 if (!priv)
+1 -1
drivers/net/virtio_net.c
··· 7059 7059 otherwise get link status from config. */ 7060 7060 netif_carrier_off(dev); 7061 7061 if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) { 7062 - virtnet_config_changed_work(&vi->config_work); 7062 + virtio_config_changed(vi->vdev); 7063 7063 } else { 7064 7064 vi->status = VIRTIO_NET_S_LINK_UP; 7065 7065 virtnet_update_settings(vi);
+2 -1
drivers/net/wireless/ath/ath12k/dp_rx.c
··· 1060 1060 } 1061 1061 1062 1062 rx_tid = &peer->rx_tid[tid]; 1063 - paddr_aligned = rx_tid->qbuf.paddr_aligned; 1064 1063 /* Update the tid queue if it is already setup */ 1065 1064 if (rx_tid->active) { 1066 1065 ret = ath12k_peer_rx_tid_reo_update(ar, peer, rx_tid, ··· 1071 1072 } 1072 1073 1073 1074 if (!ab->hw_params->reoq_lut_support) { 1075 + paddr_aligned = rx_tid->qbuf.paddr_aligned; 1074 1076 ret = ath12k_wmi_peer_rx_reorder_queue_setup(ar, vdev_id, 1075 1077 peer_mac, 1076 1078 paddr_aligned, tid, ··· 1098 1098 return ret; 1099 1099 } 1100 1100 1101 + paddr_aligned = rx_tid->qbuf.paddr_aligned; 1101 1102 if (ab->hw_params->reoq_lut_support) { 1102 1103 /* Update the REO queue LUT at the corresponding peer id 1103 1104 * and tid with qaddr.
+3 -2
drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ 2 2 /* 3 - * Copyright (C) 2012-2014, 2018-2024 Intel Corporation 3 + * Copyright (C) 2012-2014, 2018-2025 Intel Corporation 4 4 * Copyright (C) 2013-2015 Intel Mobile Communications GmbH 5 5 * Copyright (C) 2016-2017 Intel Deutschland GmbH 6 6 */ ··· 754 754 * according to the BIOS definitions. 755 755 * For LARI cmd version 11 - bits 0:4 are supported. 756 756 * For LARI cmd version 12 - bits 0:6 are supported and bits 7:31 are 757 - * reserved. No need to mask out the reserved bits. 757 + * reserved. 758 758 * @force_disable_channels_bitmap: Bitmap of disabled bands/channels. 759 759 * Each bit represents a set of channels in a specific band that should be 760 760 * disabled ··· 787 787 /* Activate UNII-1 (5.2GHz) for World Wide */ 788 788 #define ACTIVATE_5G2_IN_WW_MASK BIT(4) 789 789 #define CHAN_STATE_ACTIVE_BITMAP_CMD_V11 0x1F 790 + #define CHAN_STATE_ACTIVE_BITMAP_CMD_V12 0x7F 790 791 791 792 /** 792 793 * struct iwl_pnvm_init_complete_ntfy - PNVM initialization complete
+1
drivers/net/wireless/intel/iwlwifi/fw/regulatory.c
··· 614 614 615 615 ret = iwl_bios_get_dsm(fwrt, DSM_FUNC_ACTIVATE_CHANNEL, &value); 616 616 if (!ret) { 617 + value &= CHAN_STATE_ACTIVE_BITMAP_CMD_V12; 617 618 if (cmd_ver < 8) 618 619 value &= ~ACTIVATE_5G2_IN_WW_MASK; 619 620
+3 -1
drivers/net/wireless/intel/iwlwifi/mld/regulatory.c
··· 251 251 cpu_to_le32(value &= DSM_UNII4_ALLOW_BITMAP); 252 252 253 253 ret = iwl_bios_get_dsm(fwrt, DSM_FUNC_ACTIVATE_CHANNEL, &value); 254 - if (!ret) 254 + if (!ret) { 255 + value &= CHAN_STATE_ACTIVE_BITMAP_CMD_V12; 255 256 cmd.chan_state_active_bitmap = cpu_to_le32(value); 257 + } 256 258 257 259 ret = iwl_bios_get_dsm(fwrt, DSM_FUNC_ENABLE_6E, &value); 258 260 if (!ret)
+4 -2
drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
··· 546 546 } 547 547 548 548 if (WARN_ON(trans->do_top_reset && 549 - trans->mac_cfg->device_family < IWL_DEVICE_FAMILY_SC)) 550 - return -EINVAL; 549 + trans->mac_cfg->device_family < IWL_DEVICE_FAMILY_SC)) { 550 + ret = -EINVAL; 551 + goto out; 552 + } 551 553 552 554 /* we need to wait later - set state */ 553 555 if (trans->do_top_reset)
+4 -4
drivers/net/wireless/intel/iwlwifi/pcie/tx.c
··· 2101 2101 2102 2102 bc_ent = cpu_to_le16(len | (sta_id << 12)); 2103 2103 2104 - scd_bc_tbl[txq_id * BC_TABLE_SIZE + write_ptr].tfd_offset = bc_ent; 2104 + scd_bc_tbl[txq_id * TFD_QUEUE_BC_SIZE + write_ptr].tfd_offset = bc_ent; 2105 2105 2106 2106 if (write_ptr < TFD_QUEUE_SIZE_BC_DUP) 2107 - scd_bc_tbl[txq_id * BC_TABLE_SIZE + TFD_QUEUE_SIZE_MAX + write_ptr].tfd_offset = 2107 + scd_bc_tbl[txq_id * TFD_QUEUE_BC_SIZE + TFD_QUEUE_SIZE_MAX + write_ptr].tfd_offset = 2108 2108 bc_ent; 2109 2109 } 2110 2110 ··· 2328 2328 2329 2329 bc_ent = cpu_to_le16(1 | (sta_id << 12)); 2330 2330 2331 - scd_bc_tbl[txq_id * BC_TABLE_SIZE + read_ptr].tfd_offset = bc_ent; 2331 + scd_bc_tbl[txq_id * TFD_QUEUE_BC_SIZE + read_ptr].tfd_offset = bc_ent; 2332 2332 2333 2333 if (read_ptr < TFD_QUEUE_SIZE_BC_DUP) 2334 - scd_bc_tbl[txq_id * BC_TABLE_SIZE + TFD_QUEUE_SIZE_MAX + read_ptr].tfd_offset = 2334 + scd_bc_tbl[txq_id * TFD_QUEUE_BC_SIZE + TFD_QUEUE_SIZE_MAX + read_ptr].tfd_offset = 2335 2335 bc_ent; 2336 2336 } 2337 2337
+2
include/net/bluetooth/hci.h
··· 377 377 * This quirk must be set before hci_register_dev is called. 378 378 */ 379 379 HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE, 380 + 381 + __HCI_NUM_QUIRKS, 380 382 }; 381 383 382 384 /* HCI device flags */
+27 -23
include/net/bluetooth/hci_core.h
··· 464 464 465 465 unsigned int auto_accept_delay; 466 466 467 - unsigned long quirks; 467 + DECLARE_BITMAP(quirk_flags, __HCI_NUM_QUIRKS); 468 468 469 469 atomic_t cmd_cnt; 470 470 unsigned int acl_cnt; ··· 656 656 u8 (*classify_pkt_type)(struct hci_dev *hdev, struct sk_buff *skb); 657 657 }; 658 658 659 + #define hci_set_quirk(hdev, nr) set_bit((nr), (hdev)->quirk_flags) 660 + #define hci_clear_quirk(hdev, nr) clear_bit((nr), (hdev)->quirk_flags) 661 + #define hci_test_quirk(hdev, nr) test_bit((nr), (hdev)->quirk_flags) 662 + 659 663 #define HCI_PHY_HANDLE(handle) (handle & 0xff) 660 664 661 665 enum conn_reasons { ··· 833 829 #define hci_dev_test_and_clear_flag(hdev, nr) test_and_clear_bit((nr), (hdev)->dev_flags) 834 830 #define hci_dev_test_and_change_flag(hdev, nr) test_and_change_bit((nr), (hdev)->dev_flags) 835 831 836 - #define hci_dev_clear_volatile_flags(hdev) \ 837 - do { \ 838 - hci_dev_clear_flag(hdev, HCI_LE_SCAN); \ 839 - hci_dev_clear_flag(hdev, HCI_LE_ADV); \ 840 - hci_dev_clear_flag(hdev, HCI_LL_RPA_RESOLUTION);\ 841 - hci_dev_clear_flag(hdev, HCI_PERIODIC_INQ); \ 842 - hci_dev_clear_flag(hdev, HCI_QUALITY_REPORT); \ 832 + #define hci_dev_clear_volatile_flags(hdev) \ 833 + do { \ 834 + hci_dev_clear_flag((hdev), HCI_LE_SCAN); \ 835 + hci_dev_clear_flag((hdev), HCI_LE_ADV); \ 836 + hci_dev_clear_flag((hdev), HCI_LL_RPA_RESOLUTION); \ 837 + hci_dev_clear_flag((hdev), HCI_PERIODIC_INQ); \ 838 + hci_dev_clear_flag((hdev), HCI_QUALITY_REPORT); \ 843 839 } while (0) 844 840 845 841 #define hci_dev_le_state_simultaneous(hdev) \ 846 - (!test_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks) && \ 847 - (hdev->le_states[4] & 0x08) && /* Central */ \ 848 - (hdev->le_states[4] & 0x40) && /* Peripheral */ \ 849 - (hdev->le_states[3] & 0x10)) /* Simultaneous */ 842 + (!hci_test_quirk((hdev), HCI_QUIRK_BROKEN_LE_STATES) && \ 843 + ((hdev)->le_states[4] & 0x08) && /* Central */ \ 844 + ((hdev)->le_states[4] & 0x40) && /* Peripheral */ \ 845 + ((hdev)->le_states[3] & 0x10)) /* Simultaneous */ 850 846 851 847 /* ----- HCI interface to upper protocols ----- */ 852 848 int l2cap_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr); ··· 1935 1931 ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_2M)) 1936 1932 1937 1933 #define le_coded_capable(dev) (((dev)->le_features[1] & HCI_LE_PHY_CODED) && \ 1938 - !test_bit(HCI_QUIRK_BROKEN_LE_CODED, \ 1939 - &(dev)->quirks)) 1934 + !hci_test_quirk((dev), \ 1935 + HCI_QUIRK_BROKEN_LE_CODED)) 1940 1936 1941 1937 #define scan_coded(dev) (((dev)->le_tx_def_phys & HCI_LE_SET_PHY_CODED) || \ 1942 1938 ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_CODED)) ··· 1944 1940 #define ll_privacy_capable(dev) ((dev)->le_features[0] & HCI_LE_LL_PRIVACY) 1945 1941 1946 1942 #define privacy_mode_capable(dev) (ll_privacy_capable(dev) && \ 1947 - (hdev->commands[39] & 0x04)) 1943 + ((dev)->commands[39] & 0x04)) 1948 1944 1949 1945 #define read_key_size_capable(dev) \ 1950 1946 ((dev)->commands[20] & 0x10 && \ 1951 - !test_bit(HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE, &hdev->quirks)) 1947 + !hci_test_quirk((dev), HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE)) 1952 1948 1953 1949 #define read_voice_setting_capable(dev) \ 1954 1950 ((dev)->commands[9] & 0x04 && \ 1955 - !test_bit(HCI_QUIRK_BROKEN_READ_VOICE_SETTING, &(dev)->quirks)) 1951 + !hci_test_quirk((dev), HCI_QUIRK_BROKEN_READ_VOICE_SETTING)) 1956 1952 1957 1953 /* Use enhanced synchronous connection if command is supported and its quirk 1958 1954 * has not been set. 1959 1955 */ 1960 1956 #define enhanced_sync_conn_capable(dev) \ 1961 1957 (((dev)->commands[29] & 0x08) && \ 1962 - !test_bit(HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN, &(dev)->quirks)) 1958 + !hci_test_quirk((dev), HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN)) 1963 1959 1964 1960 /* Use ext scanning if set ext scan param and ext scan enable is supported */ 1965 1961 #define use_ext_scan(dev) (((dev)->commands[37] & 0x20) && \ 1966 1962 ((dev)->commands[37] & 0x40) && \ 1967 - !test_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &(dev)->quirks)) 1963 + !hci_test_quirk((dev), HCI_QUIRK_BROKEN_EXT_SCAN)) 1968 1964 1969 1965 /* Use ext create connection if command is supported */ 1970 1966 #define use_ext_conn(dev) (((dev)->commands[37] & 0x80) && \ 1971 - !test_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, &(dev)->quirks)) 1967 + !hci_test_quirk((dev), HCI_QUIRK_BROKEN_EXT_CREATE_CONN)) 1972 1968 /* Extended advertising support */ 1973 1969 #define ext_adv_capable(dev) (((dev)->le_features[1] & HCI_LE_EXT_ADV)) 1974 1970 ··· 1983 1979 */ 1984 1980 #define use_enhanced_conn_complete(dev) ((ll_privacy_capable(dev) || \ 1985 1981 ext_adv_capable(dev)) && \ 1986 - !test_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, \ 1987 - &(dev)->quirks)) 1982 + !hci_test_quirk((dev), \ 1983 + HCI_QUIRK_BROKEN_EXT_CREATE_CONN)) 1988 1984 1989 1985 /* Periodic advertising support */ 1990 1986 #define per_adv_capable(dev) (((dev)->le_features[1] & HCI_LE_PERIODIC_ADV)) ··· 2001 1997 #define sync_recv_capable(dev) ((dev)->le_features[3] & HCI_LE_ISO_SYNC_RECEIVER) 2002 1998 2003 1999 #define mws_transport_config_capable(dev) (((dev)->commands[30] & 0x08) && \ 2004 - (!test_bit(HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG, &(dev)->quirks))) 2000 + (!hci_test_quirk((dev), HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG))) 2005 2001 2006 2002 /* ----- HCI protocols ----- */ 2007 2003 #define HCI_PROTO_DEFER 0x01
+1 -1
include/net/cfg80211.h
··· 2690 2690 s8 tsf_report_link_id; 2691 2691 2692 2692 /* keep last */ 2693 - struct ieee80211_channel *channels[] __counted_by(n_channels); 2693 + struct ieee80211_channel *channels[]; 2694 2694 }; 2695 2695 2696 2696 static inline void get_random_mask_addr(u8 *buf, const u8 *addr, const u8 *mask)
+13 -2
include/net/netfilter/nf_conntrack.h
··· 306 306 /* use after obtaining a reference count */ 307 307 static inline bool nf_ct_should_gc(const struct nf_conn *ct) 308 308 { 309 - return nf_ct_is_expired(ct) && nf_ct_is_confirmed(ct) && 310 - !nf_ct_is_dying(ct); 309 + if (!nf_ct_is_confirmed(ct)) 310 + return false; 311 + 312 + /* load ct->timeout after is_confirmed() test. 313 + * Pairs with __nf_conntrack_confirm() which: 314 + * 1. Increases ct->timeout value 315 + * 2. Inserts ct into rcu hlist 316 + * 3. Sets the confirmed bit 317 + * 4. Unlocks the hlist lock 318 + */ 319 + smp_acquire__after_ctrl_dep(); 320 + 321 + return nf_ct_is_expired(ct) && !nf_ct_is_dying(ct); 311 322 } 312 323 313 324 #define NF_CT_DAY (86400 * HZ)
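The reordered nf_ct_should_gc() above checks the confirmed bit first and only then, past an acquire barrier, looks at ->timeout, so the gc path can never read a timeout that __nf_conntrack_confirm() has not yet published. A minimal userspace sketch of the same check ordering, using a C11 acquire load in place of smp_acquire__after_ctrl_dep() (the struct and helper names here are invented for illustration, not the kernel's):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct nf_conn: only the fields the gc
 * check needs. In the kernel, the confirmed bit is set last, after
 * ct->timeout has been given its real value. */
struct mock_ct {
	atomic_bool confirmed;	/* published after timeout is valid */
	bool dying;
	unsigned long timeout;	/* jiffies-style expiry time */
};

static bool mock_is_expired(struct mock_ct *ct, unsigned long now)
{
	/* signed wraparound comparison, like time_after() */
	return (long)(now - ct->timeout) > 0;
}

/* Mirrors the fixed ordering: bail out before touching ->timeout
 * unless the entry is confirmed; the acquire load stands in for
 * smp_acquire__after_ctrl_dep(). */
static bool mock_should_gc(struct mock_ct *ct, unsigned long now)
{
	if (!atomic_load_explicit(&ct->confirmed, memory_order_acquire))
		return false;

	return mock_is_expired(ct, now) && !ct->dying;
}
```

The point of the ordering is that an unconfirmed entry is rejected without ever evaluating its (possibly uninitialised) timeout.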
-5
include/net/netfilter/nf_tables.h
··· 1142 1142 int nf_tables_bind_chain(const struct nft_ctx *ctx, struct nft_chain *chain); 1143 1143 void nf_tables_unbind_chain(const struct nft_ctx *ctx, struct nft_chain *chain); 1144 1144 1145 - struct nft_hook; 1146 - void nf_tables_chain_device_notify(const struct nft_chain *chain, 1147 - const struct nft_hook *hook, 1148 - const struct net_device *dev, int event); 1149 - 1150 1145 enum nft_chain_types { 1151 1146 NFT_CHAIN_T_DEFAULT = 0, 1152 1147 NFT_CHAIN_T_ROUTE,
+5 -1
include/trace/events/rxrpc.h
··· 322 322 EM(rxrpc_call_put_kernel, "PUT kernel ") \ 323 323 EM(rxrpc_call_put_poke, "PUT poke ") \ 324 324 EM(rxrpc_call_put_recvmsg, "PUT recvmsg ") \ 325 + EM(rxrpc_call_put_release_recvmsg_q, "PUT rls-rcmq") \ 325 326 EM(rxrpc_call_put_release_sock, "PUT rls-sock") \ 326 327 EM(rxrpc_call_put_release_sock_tba, "PUT rls-sk-a") \ 327 328 EM(rxrpc_call_put_sendmsg, "PUT sendmsg ") \ 328 - EM(rxrpc_call_put_unnotify, "PUT unnotify") \ 329 329 EM(rxrpc_call_put_userid_exists, "PUT u-exists") \ 330 330 EM(rxrpc_call_put_userid, "PUT user-id ") \ 331 331 EM(rxrpc_call_see_accept, "SEE accept ") \ 332 332 EM(rxrpc_call_see_activate_client, "SEE act-clnt") \ 333 + EM(rxrpc_call_see_already_released, "SEE alrdy-rl") \ 333 334 EM(rxrpc_call_see_connect_failed, "SEE con-fail") \ 334 335 EM(rxrpc_call_see_connected, "SEE connect ") \ 335 336 EM(rxrpc_call_see_conn_abort, "SEE conn-abt") \ 337 + EM(rxrpc_call_see_discard, "SEE discard ") \ 336 338 EM(rxrpc_call_see_disconnected, "SEE disconn ") \ 337 339 EM(rxrpc_call_see_distribute_error, "SEE dist-err") \ 338 340 EM(rxrpc_call_see_input, "SEE input ") \ 341 + EM(rxrpc_call_see_notify_released, "SEE nfy-rlsd") \ 342 + EM(rxrpc_call_see_recvmsg, "SEE recvmsg ") \ 339 343 EM(rxrpc_call_see_release, "SEE release ") \ 340 344 EM(rxrpc_call_see_userid_exists, "SEE u-exists") \ 341 345 EM(rxrpc_call_see_waiting_call, "SEE q-conn ") \
-10
include/uapi/linux/netfilter/nf_tables.h
··· 142 142 NFT_MSG_DESTROYOBJ, 143 143 NFT_MSG_DESTROYFLOWTABLE, 144 144 NFT_MSG_GETSETELEM_RESET, 145 - NFT_MSG_NEWDEV, 146 - NFT_MSG_DELDEV, 147 145 NFT_MSG_MAX, 148 146 }; 149 147 ··· 1784 1786 * enum nft_device_attributes - nf_tables device netlink attributes 1785 1787 * 1786 1788 * @NFTA_DEVICE_NAME: name of this device (NLA_STRING) 1787 - * @NFTA_DEVICE_TABLE: table containing the flowtable or chain hooking into the device (NLA_STRING) 1788 - * @NFTA_DEVICE_FLOWTABLE: flowtable hooking into the device (NLA_STRING) 1789 - * @NFTA_DEVICE_CHAIN: chain hooking into the device (NLA_STRING) 1790 - * @NFTA_DEVICE_SPEC: hook spec matching the device (NLA_STRING) 1791 1789 */ 1792 1790 enum nft_devices_attributes { 1793 1791 NFTA_DEVICE_UNSPEC, 1794 1792 NFTA_DEVICE_NAME, 1795 - NFTA_DEVICE_TABLE, 1796 - NFTA_DEVICE_FLOWTABLE, 1797 - NFTA_DEVICE_CHAIN, 1798 - NFTA_DEVICE_SPEC, 1799 1793 __NFTA_DEVICE_MAX 1800 1794 }; 1801 1795 #define NFTA_DEVICE_MAX (__NFTA_DEVICE_MAX - 1)
-2
include/uapi/linux/netfilter/nfnetlink.h
··· 25 25 #define NFNLGRP_ACCT_QUOTA NFNLGRP_ACCT_QUOTA 26 26 NFNLGRP_NFTRACE, 27 27 #define NFNLGRP_NFTRACE NFNLGRP_NFTRACE 28 - NFNLGRP_NFT_DEV, 29 - #define NFNLGRP_NFT_DEV NFNLGRP_NFT_DEV 30 28 __NFNLGRP_MAX, 31 29 }; 32 30 #define NFNLGRP_MAX (__NFNLGRP_MAX - 1)
+33 -9
net/8021q/vlan.c
··· 357 357 return err; 358 358 } 359 359 360 + static void vlan_vid0_add(struct net_device *dev) 361 + { 362 + struct vlan_info *vlan_info; 363 + int err; 364 + 365 + if (!(dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) 366 + return; 367 + 368 + pr_info("adding VLAN 0 to HW filter on device %s\n", dev->name); 369 + 370 + err = vlan_vid_add(dev, htons(ETH_P_8021Q), 0); 371 + if (err) 372 + return; 373 + 374 + vlan_info = rtnl_dereference(dev->vlan_info); 375 + vlan_info->auto_vid0 = true; 376 + } 377 + 378 + static void vlan_vid0_del(struct net_device *dev) 379 + { 380 + struct vlan_info *vlan_info = rtnl_dereference(dev->vlan_info); 381 + 382 + if (!vlan_info || !vlan_info->auto_vid0) 383 + return; 384 + 385 + vlan_info->auto_vid0 = false; 386 + vlan_vid_del(dev, htons(ETH_P_8021Q), 0); 387 + } 388 + 360 389 static int vlan_device_event(struct notifier_block *unused, unsigned long event, 361 390 void *ptr) 362 391 { ··· 407 378 return notifier_from_errno(err); 408 379 } 409 380 410 - if ((event == NETDEV_UP) && 411 - (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) { 412 - pr_info("adding VLAN 0 to HW filter on device %s\n", 413 - dev->name); 414 - vlan_vid_add(dev, htons(ETH_P_8021Q), 0); 415 - } 416 - if (event == NETDEV_DOWN && 417 - (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) 418 - vlan_vid_del(dev, htons(ETH_P_8021Q), 0); 381 + if (event == NETDEV_UP) 382 + vlan_vid0_add(dev); 383 + else if (event == NETDEV_DOWN) 384 + vlan_vid0_del(dev); 419 385 420 386 vlan_info = rtnl_dereference(dev->vlan_info); 421 387 if (!vlan_info)
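The vlan_vid0_add()/vlan_vid0_del() split above records whether VLAN 0 was auto-added on NETDEV_UP, so the NETDEV_DOWN path only removes the reference the UP path actually took and leaves a user-installed VID 0 alone. A toy model of that bookkeeping (all struct and function names here are illustrative stand-ins):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for struct vlan_info: a refcount on VID 0 plus the
 * new auto_vid0 flag. */
struct mock_vlan_info {
	int vid0_refs;
	bool auto_vid0;
	bool hw_filter;	/* NETIF_F_HW_VLAN_CTAG_FILTER analogue */
};

static void mock_vid0_add(struct mock_vlan_info *vi)
{
	if (!vi->hw_filter)
		return;
	vi->vid0_refs++;
	vi->auto_vid0 = true;	/* remember that *we* added it */
}

/* Only drop the reference the UP path took; without the flag a
 * DOWN event could remove a VID 0 the user added explicitly. */
static void mock_vid0_del(struct mock_vlan_info *vi)
{
	if (!vi->auto_vid0)
		return;
	vi->auto_vid0 = false;
	vi->vid0_refs--;
}
```

A repeated DOWN (or a DOWN with no prior UP) is now a no-op instead of an unbalanced delete.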
+1
net/8021q/vlan.h
··· 33 33 struct vlan_group grp; 34 34 struct list_head vid_list; 35 35 unsigned int nr_vids; 36 + bool auto_vid0; 36 37 struct rcu_head rcu; 37 38 }; 38 39
+2 -2
net/bluetooth/hci_core.c
··· 2654 2654 /* Devices that are marked for raw-only usage are unconfigured 2655 2655 * and should not be included in normal operation. 2656 2656 */ 2657 - if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks)) 2657 + if (hci_test_quirk(hdev, HCI_QUIRK_RAW_DEVICE)) 2658 2658 hci_dev_set_flag(hdev, HCI_UNCONFIGURED); 2659 2659 2660 2660 /* Mark Remote Wakeup connection flag as supported if driver has wakeup ··· 2784 2784 int ret = 0; 2785 2785 2786 2786 if (!hdev->suspend_notifier.notifier_call && 2787 - !test_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks)) { 2787 + !hci_test_quirk(hdev, HCI_QUIRK_NO_SUSPEND_NOTIFIER)) { 2788 2788 hdev->suspend_notifier.notifier_call = hci_suspend_notifier; 2789 2789 ret = register_pm_notifier(&hdev->suspend_notifier); 2790 2790 }
+4 -4
net/bluetooth/hci_debugfs.c
··· 38 38 struct hci_dev *hdev = file->private_data; \ 39 39 char buf[3]; \ 40 40 \ 41 - buf[0] = test_bit(__quirk, &hdev->quirks) ? 'Y' : 'N'; \ 41 + buf[0] = test_bit(__quirk, hdev->quirk_flags) ? 'Y' : 'N'; \ 42 42 buf[1] = '\n'; \ 43 43 buf[2] = '\0'; \ 44 44 return simple_read_from_buffer(user_buf, count, ppos, buf, 2); \ ··· 59 59 if (err) \ 60 60 return err; \ 61 61 \ 62 - if (enable == test_bit(__quirk, &hdev->quirks)) \ 62 + if (enable == test_bit(__quirk, hdev->quirk_flags)) \ 63 63 return -EALREADY; \ 64 64 \ 65 - change_bit(__quirk, &hdev->quirks); \ 65 + change_bit(__quirk, hdev->quirk_flags); \ 66 66 \ 67 67 return count; \ 68 68 } \ ··· 1356 1356 * for the vendor callback. Instead just store the desired value and 1357 1357 * the setting will be programmed when the controller gets powered on. 1358 1358 */ 1359 - if (test_bit(HCI_QUIRK_NON_PERSISTENT_DIAG, &hdev->quirks) && 1359 + if (hci_test_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_DIAG) && 1360 1360 (!test_bit(HCI_RUNNING, &hdev->flags) || 1361 1361 hci_dev_test_flag(hdev, HCI_USER_CHANNEL))) 1362 1362 goto done;
+9 -10
net/bluetooth/hci_event.c
··· 908 908 return rp->status; 909 909 910 910 if (hdev->max_page < rp->max_page) { 911 - if (test_bit(HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2, 912 - &hdev->quirks)) 911 + if (hci_test_quirk(hdev, 912 + HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2)) 913 913 bt_dev_warn(hdev, "broken local ext features page 2"); 914 914 else 915 915 hdev->max_page = rp->max_page; ··· 936 936 hdev->acl_pkts = __le16_to_cpu(rp->acl_max_pkt); 937 937 hdev->sco_pkts = __le16_to_cpu(rp->sco_max_pkt); 938 938 939 - if (test_bit(HCI_QUIRK_FIXUP_BUFFER_SIZE, &hdev->quirks)) { 939 + if (hci_test_quirk(hdev, HCI_QUIRK_FIXUP_BUFFER_SIZE)) { 940 940 hdev->sco_mtu = 64; 941 941 hdev->sco_pkts = 8; 942 942 } ··· 2971 2971 * state to indicate completion. 2972 2972 */ 2973 2973 if (!hci_dev_test_flag(hdev, HCI_LE_SCAN) || 2974 - !test_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks)) 2974 + !hci_test_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY)) 2975 2975 hci_discovery_set_state(hdev, DISCOVERY_STOPPED); 2976 2976 goto unlock; 2977 2977 } ··· 2990 2990 * state to indicate completion. 2991 2991 */ 2992 2992 if (!hci_dev_test_flag(hdev, HCI_LE_SCAN) || 2993 - !test_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks)) 2993 + !hci_test_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY)) 2994 2994 hci_discovery_set_state(hdev, DISCOVERY_STOPPED); 2995 2995 } 2996 2996 ··· 3614 3614 /* We skip the WRITE_AUTH_PAYLOAD_TIMEOUT for ATS2851 based controllers 3615 3615 * to avoid unexpected SMP command errors when pairing. 3616 3616 */ 3617 - if (test_bit(HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT, 3618 - &hdev->quirks)) 3617 + if (hci_test_quirk(hdev, HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT)) 3619 3618 goto notify; 3620 3619 3621 3620 /* Set the default Authenticated Payload Timeout after ··· 5913 5914 * while we have an existing one in peripheral role. 
5914 5915 */ 5915 5916 if (hdev->conn_hash.le_num_peripheral > 0 && 5916 - (test_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks) || 5917 + (hci_test_quirk(hdev, HCI_QUIRK_BROKEN_LE_STATES) || 5917 5918 !(hdev->le_states[3] & 0x10))) 5918 5919 return NULL; 5919 5920 ··· 6309 6310 evt_type = __le16_to_cpu(info->type) & LE_EXT_ADV_EVT_TYPE_MASK; 6310 6311 legacy_evt_type = ext_evt_type_to_legacy(hdev, evt_type); 6311 6312 6312 - if (test_bit(HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY, 6313 - &hdev->quirks)) { 6313 + if (hci_test_quirk(hdev, 6314 + HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY)) { 6314 6315 info->primary_phy &= 0x1f; 6315 6316 info->secondary_phy &= 0x1f; 6316 6317 }
+31 -32
net/bluetooth/hci_sync.c
··· 393 393 if (hdev->discovery.type != DISCOV_TYPE_INTERLEAVED) 394 394 goto _return; 395 395 396 - if (test_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks)) { 396 + if (hci_test_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY)) { 397 397 if (!test_bit(HCI_INQUIRY, &hdev->flags) && 398 398 hdev->discovery.state != DISCOVERY_RESOLVING) 399 399 goto discov_stopped; ··· 3587 3587 if (ret < 0 || !bacmp(&ba, BDADDR_ANY)) 3588 3588 return; 3589 3589 3590 - if (test_bit(HCI_QUIRK_BDADDR_PROPERTY_BROKEN, &hdev->quirks)) 3590 + if (hci_test_quirk(hdev, HCI_QUIRK_BDADDR_PROPERTY_BROKEN)) 3591 3591 baswap(&hdev->public_addr, &ba); 3592 3592 else 3593 3593 bacpy(&hdev->public_addr, &ba); ··· 3662 3662 bt_dev_dbg(hdev, ""); 3663 3663 3664 3664 /* Reset */ 3665 - if (!test_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks)) { 3665 + if (!hci_test_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE)) { 3666 3666 err = hci_reset_sync(hdev); 3667 3667 if (err) 3668 3668 return err; ··· 3675 3675 { 3676 3676 int err; 3677 3677 3678 - if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks)) 3678 + if (hci_test_quirk(hdev, HCI_QUIRK_RAW_DEVICE)) 3679 3679 return 0; 3680 3680 3681 3681 err = hci_init0_sync(hdev); ··· 3718 3718 * supported commands. 
3719 3719 */ 3720 3720 if (hdev->hci_ver > BLUETOOTH_VER_1_1 && 3721 - !test_bit(HCI_QUIRK_BROKEN_LOCAL_COMMANDS, &hdev->quirks)) 3721 + !hci_test_quirk(hdev, HCI_QUIRK_BROKEN_LOCAL_COMMANDS)) 3722 3722 return __hci_cmd_sync_status(hdev, HCI_OP_READ_LOCAL_COMMANDS, 3723 3723 0, NULL, HCI_CMD_TIMEOUT); 3724 3724 ··· 3732 3732 bt_dev_dbg(hdev, ""); 3733 3733 3734 3734 /* Reset */ 3735 - if (!test_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks)) { 3735 + if (!hci_test_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE)) { 3736 3736 err = hci_reset_sync(hdev); 3737 3737 if (err) 3738 3738 return err; ··· 3795 3795 if (!hci_dev_test_flag(hdev, HCI_BREDR_ENABLED)) 3796 3796 return 0; 3797 3797 3798 - if (test_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks)) 3798 + if (hci_test_quirk(hdev, HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL)) 3799 3799 return 0; 3800 3800 3801 3801 memset(&cp, 0, sizeof(cp)); ··· 3822 3822 * a hci_set_event_filter_sync() call succeeds, but we do 3823 3823 * the check both for parity and as a future reminder. 
3824 3824 */ 3825 - if (test_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks)) 3825 + if (hci_test_quirk(hdev, HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL)) 3826 3826 return 0; 3827 3827 3828 3828 return hci_set_event_filter_sync(hdev, HCI_FLT_CLEAR_ALL, 0x00, ··· 3846 3846 3847 3847 /* Check if the controller supports SCO and HCI_OP_WRITE_SYNC_FLOWCTL */ 3848 3848 if (!lmp_sco_capable(hdev) || !(hdev->commands[10] & BIT(4)) || 3849 - !test_bit(HCI_QUIRK_SYNC_FLOWCTL_SUPPORTED, &hdev->quirks)) 3849 + !hci_test_quirk(hdev, HCI_QUIRK_SYNC_FLOWCTL_SUPPORTED)) 3850 3850 return 0; 3851 3851 3852 3852 memset(&cp, 0, sizeof(cp)); ··· 3921 3921 u8 mode; 3922 3922 3923 3923 if (!lmp_inq_rssi_capable(hdev) && 3924 - !test_bit(HCI_QUIRK_FIXUP_INQUIRY_MODE, &hdev->quirks)) 3924 + !hci_test_quirk(hdev, HCI_QUIRK_FIXUP_INQUIRY_MODE)) 3925 3925 return 0; 3926 3926 3927 3927 /* If Extended Inquiry Result events are supported, then ··· 4111 4111 } 4112 4112 4113 4113 if (lmp_inq_rssi_capable(hdev) || 4114 - test_bit(HCI_QUIRK_FIXUP_INQUIRY_MODE, &hdev->quirks)) 4114 + hci_test_quirk(hdev, HCI_QUIRK_FIXUP_INQUIRY_MODE)) 4115 4115 events[4] |= 0x02; /* Inquiry Result with RSSI */ 4116 4116 4117 4117 if (lmp_ext_feat_capable(hdev)) ··· 4163 4163 struct hci_cp_read_stored_link_key cp; 4164 4164 4165 4165 if (!(hdev->commands[6] & 0x20) || 4166 - test_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks)) 4166 + hci_test_quirk(hdev, HCI_QUIRK_BROKEN_STORED_LINK_KEY)) 4167 4167 return 0; 4168 4168 4169 4169 memset(&cp, 0, sizeof(cp)); ··· 4212 4212 { 4213 4213 if (!(hdev->commands[18] & 0x04) || 4214 4214 !(hdev->features[0][6] & LMP_ERR_DATA_REPORTING) || 4215 - test_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks)) 4215 + hci_test_quirk(hdev, HCI_QUIRK_BROKEN_ERR_DATA_REPORTING)) 4216 4216 return 0; 4217 4217 4218 4218 return __hci_cmd_sync_status(hdev, HCI_OP_READ_DEF_ERR_DATA_REPORTING, ··· 4226 4226 * this command in the bit mask of supported commands. 
4227 4227 */ 4228 4228 if (!(hdev->commands[13] & 0x01) || 4229 - test_bit(HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE, &hdev->quirks)) 4229 + hci_test_quirk(hdev, HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE)) 4230 4230 return 0; 4231 4231 4232 4232 return __hci_cmd_sync_status(hdev, HCI_OP_READ_PAGE_SCAN_TYPE, ··· 4421 4421 static int hci_le_read_tx_power_sync(struct hci_dev *hdev) 4422 4422 { 4423 4423 if (!(hdev->commands[38] & 0x80) || 4424 - test_bit(HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER, &hdev->quirks)) 4424 + hci_test_quirk(hdev, HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER)) 4425 4425 return 0; 4426 4426 4427 4427 return __hci_cmd_sync_status(hdev, HCI_OP_LE_READ_TRANSMIT_POWER, ··· 4464 4464 __le16 timeout = cpu_to_le16(hdev->rpa_timeout); 4465 4465 4466 4466 if (!(hdev->commands[35] & 0x04) || 4467 - test_bit(HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT, &hdev->quirks)) 4467 + hci_test_quirk(hdev, HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT)) 4468 4468 return 0; 4469 4469 4470 4470 return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_RPA_TIMEOUT, ··· 4609 4609 * just disable this command. 
4610 4610 */ 4611 4611 if (!(hdev->commands[6] & 0x80) || 4612 - test_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks)) 4612 + hci_test_quirk(hdev, HCI_QUIRK_BROKEN_STORED_LINK_KEY)) 4613 4613 return 0; 4614 4614 4615 4615 memset(&cp, 0, sizeof(cp)); ··· 4735 4735 4736 4736 if (!(hdev->commands[18] & 0x08) || 4737 4737 !(hdev->features[0][6] & LMP_ERR_DATA_REPORTING) || 4738 - test_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks)) 4738 + hci_test_quirk(hdev, HCI_QUIRK_BROKEN_ERR_DATA_REPORTING)) 4739 4739 return 0; 4740 4740 4741 4741 if (enabled == hdev->err_data_reporting) ··· 4948 4948 size_t i; 4949 4949 4950 4950 if (!hci_dev_test_flag(hdev, HCI_SETUP) && 4951 - !test_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks)) 4951 + !hci_test_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP)) 4952 4952 return 0; 4953 4953 4954 4954 bt_dev_dbg(hdev, ""); ··· 4959 4959 ret = hdev->setup(hdev); 4960 4960 4961 4961 for (i = 0; i < ARRAY_SIZE(hci_broken_table); i++) { 4962 - if (test_bit(hci_broken_table[i].quirk, &hdev->quirks)) 4962 + if (hci_test_quirk(hdev, hci_broken_table[i].quirk)) 4963 4963 bt_dev_warn(hdev, "%s", hci_broken_table[i].desc); 4964 4964 } 4965 4965 ··· 4967 4967 * BD_ADDR invalid before creating the HCI device or in 4968 4968 * its setup callback. 4969 4969 */ 4970 - invalid_bdaddr = test_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks) || 4971 - test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks); 4970 + invalid_bdaddr = hci_test_quirk(hdev, HCI_QUIRK_INVALID_BDADDR) || 4971 + hci_test_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY); 4972 4972 if (!ret) { 4973 - if (test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks) && 4973 + if (hci_test_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY) && 4974 4974 !bacmp(&hdev->public_addr, BDADDR_ANY)) 4975 4975 hci_dev_get_bd_addr_from_property(hdev); 4976 4976 ··· 4992 4992 * In case any of them is set, the controller has to 4993 4993 * start up as unconfigured. 
4994 4994 */ 4995 - if (test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks) || 4995 + if (hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG) || 4996 4996 invalid_bdaddr) 4997 4997 hci_dev_set_flag(hdev, HCI_UNCONFIGURED); 4998 4998 ··· 5052 5052 * then they need to be reprogrammed after the init procedure 5053 5053 * completed. 5054 5054 */ 5055 - if (test_bit(HCI_QUIRK_NON_PERSISTENT_DIAG, &hdev->quirks) && 5055 + if (hci_test_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_DIAG) && 5056 5056 !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) && 5057 5057 hci_dev_test_flag(hdev, HCI_VENDOR_DIAG) && hdev->set_diag) 5058 5058 ret = hdev->set_diag(hdev, true); ··· 5309 5309 /* Reset device */ 5310 5310 skb_queue_purge(&hdev->cmd_q); 5311 5311 atomic_set(&hdev->cmd_cnt, 1); 5312 - if (test_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks) && 5312 + if (hci_test_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE) && 5313 5313 !auto_off && !hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) { 5314 5314 set_bit(HCI_INIT, &hdev->flags); 5315 5315 hci_reset_sync(hdev); ··· 5959 5959 own_addr_type = ADDR_LE_DEV_PUBLIC; 5960 5960 5961 5961 if (hci_is_adv_monitoring(hdev) || 5962 - (test_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks) && 5962 + (hci_test_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER) && 5963 5963 hdev->discovery.result_filtering)) { 5964 5964 /* Duplicate filter should be disabled when some advertisement 5965 5965 * monitor is activated, otherwise AdvMon can only receive one ··· 6022 6022 * and LE scanning are done sequentially with separate 6023 6023 * timeouts. 6024 6024 */ 6025 - if (test_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, 6026 - &hdev->quirks)) { 6025 + if (hci_test_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY)) { 6027 6026 timeout = msecs_to_jiffies(DISCOV_LE_TIMEOUT); 6028 6027 /* During simultaneous discovery, we double LE scan 6029 6028 * interval. 
We must leave some time for the controller ··· 6099 6100 /* Some fake CSR controllers lock up after setting this type of 6100 6101 * filter, so avoid sending the request altogether. 6101 6102 */ 6102 - if (test_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks)) 6103 + if (hci_test_quirk(hdev, HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL)) 6103 6104 return 0; 6104 6105 6105 6106 /* Always clear event filter when starting */ ··· 6814 6815 return 0; 6815 6816 } 6816 6817 6817 - /* No privacy so use a public address. */ 6818 - *own_addr_type = ADDR_LE_DEV_PUBLIC; 6818 + /* No privacy, use the current address */ 6819 + hci_copy_identity_address(hdev, rand_addr, own_addr_type); 6819 6820 6820 6821 return 0; 6821 6822 }
+21 -5
net/bluetooth/l2cap_core.c
··· 3520 3520 /* Configure output options and let the other side know 3521 3521 * which ones we don't like. */ 3522 3522 3523 - /* If MTU is not provided in configure request, use the most recently 3524 - * explicitly or implicitly accepted value for the other direction, 3525 - * or the default value. 3523 + /* If MTU is not provided in configure request, try adjusting it 3524 + * to the current output MTU if it has been set 3525 + * 3526 + * Bluetooth Core 6.1, Vol 3, Part A, Section 4.5 3527 + * 3528 + * Each configuration parameter value (if any is present) in an 3529 + * L2CAP_CONFIGURATION_RSP packet reflects an ‘adjustment’ to a 3530 + * configuration parameter value that has been sent (or, in case 3531 + * of default values, implied) in the corresponding 3532 + * L2CAP_CONFIGURATION_REQ packet. 3526 3533 */ 3527 - if (mtu == 0) 3528 - mtu = chan->imtu ? chan->imtu : L2CAP_DEFAULT_MTU; 3534 + if (!mtu) { 3535 + /* Only adjust for ERTM channels as for older modes the 3536 + * remote stack may not be able to detect the 3537 + * adjustment, causing it to silently drop packets. 3538 + */ 3539 + if (chan->mode == L2CAP_MODE_ERTM && 3540 + chan->omtu && chan->omtu != L2CAP_DEFAULT_MTU) 3541 + mtu = chan->omtu; 3542 + else 3543 + mtu = L2CAP_DEFAULT_MTU; 3544 + } 3529 3545 3530 3546 if (mtu < L2CAP_DEFAULT_MIN_MTU) 3531 3547 result = L2CAP_CONF_UNACCEPT;
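The new MTU selection above reads as a small pure function: when the configure request carries no MTU, fall back to the current output MTU only for ERTM channels where one was actually negotiated, otherwise use the default. A sketch of that decision (the mode enum and function name are illustrative stand-ins; only the 672-byte default matches L2CAP_DEFAULT_MTU):

```c
#include <assert.h>
#include <stdint.h>

#define MOCK_DEFAULT_MTU 672	/* L2CAP_DEFAULT_MTU */

enum mock_mode { MOCK_MODE_BASIC, MOCK_MODE_ERTM };

/* Mirrors the fixed MTU choice in l2cap_parse_conf_req():
 * req_mtu is the value from the request (0 if absent), omtu the
 * channel's current outgoing MTU (0 if never set). */
static uint16_t mock_pick_mtu(uint16_t req_mtu, enum mock_mode mode,
			      uint16_t omtu)
{
	if (req_mtu)
		return req_mtu;

	/* Only ERTM peers reliably notice an adjusted value in the
	 * response; older modes may silently drop oversized frames. */
	if (mode == MOCK_MODE_ERTM && omtu && omtu != MOCK_DEFAULT_MTU)
		return omtu;

	return MOCK_DEFAULT_MTU;
}
```

This is what stops the earlier regression: a basic-mode channel with an absent MTU option no longer inherits a stale adjusted value.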
+3
net/bluetooth/l2cap_sock.c
··· 1703 1703 { 1704 1704 struct sock *sk = chan->data; 1705 1705 1706 + if (!sk) 1707 + return; 1708 + 1706 1709 if (test_and_clear_bit(FLAG_PENDING_SECURITY, &chan->flags)) { 1707 1710 sk->sk_state = BT_CONNECTED; 1708 1711 chan->state = BT_CONNECTED;
+18 -20
net/bluetooth/mgmt.c
··· 464 464 /* Devices marked as raw-only are neither configured 465 465 * nor unconfigured controllers. 466 466 */ 467 - if (test_bit(HCI_QUIRK_RAW_DEVICE, &d->quirks)) 467 + if (hci_test_quirk(d, HCI_QUIRK_RAW_DEVICE)) 468 468 continue; 469 469 470 470 if (!hci_dev_test_flag(d, HCI_UNCONFIGURED)) { ··· 522 522 /* Devices marked as raw-only are neither configured 523 523 * nor unconfigured controllers. 524 524 */ 525 - if (test_bit(HCI_QUIRK_RAW_DEVICE, &d->quirks)) 525 + if (hci_test_quirk(d, HCI_QUIRK_RAW_DEVICE)) 526 526 continue; 527 527 528 528 if (hci_dev_test_flag(d, HCI_UNCONFIGURED)) { ··· 576 576 /* Devices marked as raw-only are neither configured 577 577 * nor unconfigured controllers. 578 578 */ 579 - if (test_bit(HCI_QUIRK_RAW_DEVICE, &d->quirks)) 579 + if (hci_test_quirk(d, HCI_QUIRK_RAW_DEVICE)) 580 580 continue; 581 581 582 582 if (hci_dev_test_flag(d, HCI_UNCONFIGURED)) ··· 612 612 613 613 static bool is_configured(struct hci_dev *hdev) 614 614 { 615 - if (test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks) && 615 + if (hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG) && 616 616 !hci_dev_test_flag(hdev, HCI_EXT_CONFIGURED)) 617 617 return false; 618 618 619 - if ((test_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks) || 620 - test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks)) && 619 + if ((hci_test_quirk(hdev, HCI_QUIRK_INVALID_BDADDR) || 620 + hci_test_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY)) && 621 621 !bacmp(&hdev->public_addr, BDADDR_ANY)) 622 622 return false; 623 623 ··· 628 628 { 629 629 u32 options = 0; 630 630 631 - if (test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks) && 631 + if (hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG) && 632 632 !hci_dev_test_flag(hdev, HCI_EXT_CONFIGURED)) 633 633 options |= MGMT_OPTION_EXTERNAL_CONFIG; 634 634 635 - if ((test_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks) || 636 - test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks)) && 635 + if ((hci_test_quirk(hdev, HCI_QUIRK_INVALID_BDADDR) || 636 + 
hci_test_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY)) && 637 637 !bacmp(&hdev->public_addr, BDADDR_ANY)) 638 638 options |= MGMT_OPTION_PUBLIC_ADDRESS; 639 639 ··· 669 669 memset(&rp, 0, sizeof(rp)); 670 670 rp.manufacturer = cpu_to_le16(hdev->manufacturer); 671 671 672 - if (test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks)) 672 + if (hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG)) 673 673 options |= MGMT_OPTION_EXTERNAL_CONFIG; 674 674 675 675 if (hdev->set_bdaddr) ··· 828 828 if (lmp_sc_capable(hdev)) 829 829 settings |= MGMT_SETTING_SECURE_CONN; 830 830 831 - if (test_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, 832 - &hdev->quirks)) 831 + if (hci_test_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED)) 833 832 settings |= MGMT_SETTING_WIDEBAND_SPEECH; 834 833 } 835 834 ··· 840 841 settings |= MGMT_SETTING_ADVERTISING; 841 842 } 842 843 843 - if (test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks) || 844 - hdev->set_bdaddr) 844 + if (hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG) || hdev->set_bdaddr) 845 845 settings |= MGMT_SETTING_CONFIGURATION; 846 846 847 847 if (cis_central_capable(hdev)) ··· 4305 4307 4306 4308 bt_dev_dbg(hdev, "sock %p", sk); 4307 4309 4308 - if (!test_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks)) 4310 + if (!hci_test_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED)) 4309 4311 return mgmt_cmd_status(sk, hdev->id, 4310 4312 MGMT_OP_SET_WIDEBAND_SPEECH, 4311 4313 MGMT_STATUS_NOT_SUPPORTED); ··· 7933 7935 return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_EXTERNAL_CONFIG, 7934 7936 MGMT_STATUS_INVALID_PARAMS); 7935 7937 7936 - if (!test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks)) 7938 + if (!hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG)) 7937 7939 return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_EXTERNAL_CONFIG, 7938 7940 MGMT_STATUS_NOT_SUPPORTED); 7939 7941 ··· 9336 9338 { 9337 9339 struct mgmt_ev_ext_index ev; 9338 9340 9339 - if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks)) 9341 + if (hci_test_quirk(hdev, 
HCI_QUIRK_RAW_DEVICE)) 9340 9342 return; 9341 9343 9342 9344 if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) { ··· 9360 9362 struct mgmt_ev_ext_index ev; 9361 9363 struct cmd_lookup match = { NULL, hdev, MGMT_STATUS_INVALID_INDEX }; 9362 9364 9363 - if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks)) 9365 + if (hci_test_quirk(hdev, HCI_QUIRK_RAW_DEVICE)) 9364 9366 return; 9365 9367 9366 9368 mgmt_pending_foreach(0, hdev, true, cmd_complete_rsp, &match); ··· 10087 10089 if (hdev->discovery.rssi != HCI_RSSI_INVALID && 10088 10090 (rssi == HCI_RSSI_INVALID || 10089 10091 (rssi < hdev->discovery.rssi && 10090 - !test_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks)))) 10092 + !hci_test_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER)))) 10091 10093 return false; 10092 10094 10093 10095 if (hdev->discovery.uuid_count != 0) { ··· 10105 10107 /* If duplicate filtering does not report RSSI changes, then restart 10106 10108 * scanning to ensure updated result with updated RSSI values. 10107 10109 */ 10108 - if (test_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks)) { 10110 + if (hci_test_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER)) { 10109 10111 /* Validate RSSI value against the RSSI threshold once more. */ 10110 10112 if (hdev->discovery.rssi != HCI_RSSI_INVALID && 10111 10113 rssi < hdev->discovery.rssi)
+1 -1
net/bluetooth/msft.c
··· 989 989 990 990 handle_data = msft_find_handle_data(hdev, ev->monitor_handle, false); 991 991 992 - if (!test_bit(HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER, &hdev->quirks)) { 992 + if (!hci_test_quirk(hdev, HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER)) { 993 993 if (!handle_data) 994 994 return; 995 995 mgmt_handle = handle_data->mgmt_handle;
+19 -2
net/bluetooth/smp.c
··· 1379 1379 1380 1380 bt_dev_dbg(conn->hcon->hdev, "conn %p", conn); 1381 1381 1382 - hci_disconnect(conn->hcon, HCI_ERROR_REMOTE_USER_TERM); 1382 + hci_disconnect(conn->hcon, HCI_ERROR_AUTH_FAILURE); 1383 1383 1384 1384 static struct smp_chan *smp_chan_create(struct l2cap_conn *conn) ··· 2977 2977 if (code > SMP_CMD_MAX) 2978 2978 goto drop; 2979 2979 2980 - if (smp && !test_and_clear_bit(code, &smp->allow_cmd)) 2980 + if (smp && !test_and_clear_bit(code, &smp->allow_cmd)) { 2981 + /* If there is a context and the command is not allowed, consider 2982 + * it a failure so the session is cleaned up properly. 2983 + */ 2984 + switch (code) { 2985 + case SMP_CMD_IDENT_INFO: 2986 + case SMP_CMD_IDENT_ADDR_INFO: 2987 + case SMP_CMD_SIGN_INFO: 2988 + /* 3.6.1. Key distribution and generation 2989 + * 2990 + * A device may reject a distributed key by sending the 2991 + * Pairing Failed command with the reason set to 2992 + * "Key Rejected". 2993 + */ 2994 + smp_failure(conn, SMP_KEY_REJECTED); 2995 + break; 2996 + } 2981 2997 goto drop; 2998 + } 2982 2999 2983 3000 /* If we don't have a context the only allowed commands are 2984 3001 * pairing request and security request.
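The switch added above only turns a disallowed key-distribution PDU into an explicit pairing failure; every other unexpected opcode is still silently dropped. A minimal decision-table sketch of that classification (the opcode values follow the SMP spec's key-distribution commands, Identity Information 0x08 through Signing Information 0x0a; the macro and function names are invented):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* SMP key-distribution opcodes (Bluetooth Core, Vol 3, Part H) */
#define MOCK_SMP_IDENT_INFO      0x08
#define MOCK_SMP_IDENT_ADDR_INFO 0x09
#define MOCK_SMP_SIGN_INFO       0x0a

/* Returns true when a command that failed the allow_cmd check
 * should answer with Pairing Failed / "Key Rejected" (so the
 * session is torn down) rather than being dropped quietly. */
static bool mock_reject_distributed_key(uint8_t code)
{
	switch (code) {
	case MOCK_SMP_IDENT_INFO:
	case MOCK_SMP_IDENT_ADDR_INFO:
	case MOCK_SMP_SIGN_INFO:
		return true;
	default:
		return false;
	}
}
```

Either way the PDU itself is still discarded; the only change is whether the peer is told the key was rejected.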
+1
net/bluetooth/smp.h
··· 138 138 #define SMP_NUMERIC_COMP_FAILED 0x0c 139 139 #define SMP_BREDR_PAIRING_IN_PROGRESS 0x0d 140 140 #define SMP_CROSS_TRANSP_NOT_ALLOWED 0x0e 141 + #define SMP_KEY_REJECTED 0x0f 141 142 142 143 #define SMP_MIN_ENC_KEY_SIZE 7 143 144 #define SMP_MAX_ENC_KEY_SIZE 16
+3
net/bridge/br_switchdev.c
··· 17 17 if (!static_branch_unlikely(&br_switchdev_tx_fwd_offload)) 18 18 return false; 19 19 20 + if (br_multicast_igmp_type(skb)) 21 + return false; 22 + 20 23 return (p->flags & BR_TX_FWD_OFFLOAD) && 21 24 (p->hwdom != BR_INPUT_SKB_CB(skb)->src_hwdom); 22 25 }
+1
net/ipv4/tcp_offload.c
··· 359 359 flush |= skb->ip_summed != p->ip_summed; 360 360 flush |= skb->csum_level != p->csum_level; 361 361 flush |= NAPI_GRO_CB(p)->count >= 64; 362 + skb_set_network_header(skb, skb_gro_receive_network_offset(skb)); 362 363 363 364 if (flush || skb_gro_receive_list(p, skb)) 364 365 mss = 1;
+1
net/ipv4/udp_offload.c
··· 767 767 NAPI_GRO_CB(skb)->flush = 1; 768 768 return NULL; 769 769 } 770 + skb_set_network_header(skb, skb_gro_receive_network_offset(skb)); 770 771 ret = skb_gro_receive_list(p, skb); 771 772 } else { 772 773 skb_gro_postpull_rcsum(skb, uh,
+1 -1
net/ipv6/mcast.c
··· 807 807 } else { 808 808 im->mca_crcount = idev->mc_qrv; 809 809 } 810 - in6_dev_put(pmc->idev); 811 810 ip6_mc_clear_src(pmc); 811 + in6_dev_put(pmc->idev); 812 812 kfree_rcu(pmc, rcu); 813 813 } 814 814 }
+4 -4
net/ipv6/rpl_iptunnel.c
··· 129 129 struct dst_entry *cache_dst) 130 130 { 131 131 struct ipv6_rpl_sr_hdr *isrh, *csrh; 132 - const struct ipv6hdr *oldhdr; 132 + struct ipv6hdr oldhdr; 133 133 struct ipv6hdr *hdr; 134 134 unsigned char *buf; 135 135 size_t hdrlen; 136 136 int err; 137 137 138 - oldhdr = ipv6_hdr(skb); 138 + memcpy(&oldhdr, ipv6_hdr(skb), sizeof(oldhdr)); 139 139 140 140 buf = kcalloc(struct_size(srh, segments.addr, srh->segments_left), 2, GFP_ATOMIC); 141 141 if (!buf) ··· 147 147 memcpy(isrh, srh, sizeof(*isrh)); 148 148 memcpy(isrh->rpl_segaddr, &srh->rpl_segaddr[1], 149 149 (srh->segments_left - 1) * 16); 150 - isrh->rpl_segaddr[srh->segments_left - 1] = oldhdr->daddr; 150 + isrh->rpl_segaddr[srh->segments_left - 1] = oldhdr.daddr; 151 151 152 152 ipv6_rpl_srh_compress(csrh, isrh, &srh->rpl_segaddr[0], 153 153 isrh->segments_left - 1); ··· 169 169 skb_mac_header_rebuild(skb); 170 170 171 171 hdr = ipv6_hdr(skb); 172 - memmove(hdr, oldhdr, sizeof(*hdr)); 172 + memmove(hdr, &oldhdr, sizeof(*hdr)); 173 173 isrh = (void *)hdr + sizeof(*hdr); 174 174 memcpy(isrh, csrh, hdrlen); 175 175
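The rpl_do_srh_inline() fix above replaces a pointer into the skb head with a stack copy, because expanding headroom may reallocate the buffer and leave the old `oldhdr` pointer dangling. The same copy-before-realloc pattern in plain C, with realloc() standing in for the skb head growth (struct and function names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy header: just a 16-byte destination address field. */
struct mock_hdr {
	unsigned char daddr[16];
};

/* Copy the header out BEFORE any call that may reallocate the
 * buffer; after the realloc only the copy is safe to read, since
 * the old buffer pointer may now be dangling. */
static unsigned char *grow_and_keep_hdr(unsigned char *buf, size_t old_len,
					size_t new_len, struct mock_hdr *saved)
{
	memcpy(saved, buf, sizeof(*saved));	/* like: oldhdr = *ipv6_hdr(skb) */

	buf = realloc(buf, new_len);		/* like: skb head expansion */
	if (!buf)
		return NULL;
	memset(buf + old_len, 0, new_len - old_len);

	return buf;	/* caller must drop the old pointer */
}
```

Reading through the stale pointer after the grow is exactly the use-after-free the patch removes; the stack copy costs one struct-sized memcpy and is immune to the reallocation.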
+2 -1
net/mptcp/options.c
··· 978 978 if (subflow->mp_join) 979 979 goto reset; 980 980 subflow->mp_capable = 0; 981 + if (!mptcp_try_fallback(ssk)) 982 + goto reset; 981 983 pr_fallback(msk); 982 - mptcp_do_fallback(ssk); 983 984 return false; 984 985 } 985 986
+7 -1
net/mptcp/pm.c
··· 765 765 766 766 pr_debug("fail_seq=%llu\n", fail_seq); 767 767 768 - if (!READ_ONCE(msk->allow_infinite_fallback)) 768 + /* After accepting the fail, we can't create any other subflows */ 769 + spin_lock_bh(&msk->fallback_lock); 770 + if (!msk->allow_infinite_fallback) { 771 + spin_unlock_bh(&msk->fallback_lock); 769 772 return; 773 + } 774 + msk->allow_subflows = false; 775 + spin_unlock_bh(&msk->fallback_lock); 770 776 771 777 if (!subflow->fail_tout) { 772 778 pr_debug("send MP_FAIL response and infinite map\n");
+48 -8
net/mptcp/protocol.c
···
 
 static void mptcp_dss_corruption(struct mptcp_sock *msk, struct sock *ssk)
 {
-	if (READ_ONCE(msk->allow_infinite_fallback)) {
+	if (mptcp_try_fallback(ssk)) {
 		MPTCP_INC_STATS(sock_net(ssk),
 				MPTCP_MIB_DSSCORRUPTIONFALLBACK);
-		mptcp_do_fallback(ssk);
 	} else {
 		MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DSSCORRUPTIONRESET);
 		mptcp_subflow_reset(ssk);
···
 static void mptcp_subflow_joined(struct mptcp_sock *msk, struct sock *ssk)
 {
 	mptcp_subflow_ctx(ssk)->map_seq = READ_ONCE(msk->ack_seq);
-	WRITE_ONCE(msk->allow_infinite_fallback, false);
+	msk->allow_infinite_fallback = false;
 	mptcp_event(MPTCP_EVENT_SUB_ESTABLISHED, msk, ssk, GFP_ATOMIC);
 }
 
···
 	if (sk->sk_state != TCP_ESTABLISHED)
 		return false;
 
+	spin_lock_bh(&msk->fallback_lock);
+	if (!msk->allow_subflows) {
+		spin_unlock_bh(&msk->fallback_lock);
+		return false;
+	}
+	mptcp_subflow_joined(msk, ssk);
+	spin_unlock_bh(&msk->fallback_lock);
+
 	/* attach to msk socket only after we are sure we will deal with it
 	 * at close time
 	 */
···
 
 	mptcp_subflow_ctx(ssk)->subflow_id = msk->subflow_id++;
 	mptcp_sockopt_sync_locked(msk, ssk);
-	mptcp_subflow_joined(msk, ssk);
 	mptcp_stop_tout_timer(sk);
 	__mptcp_propagate_sndbuf(sk, ssk);
 	return true;
···
 	mpext->infinite_map = 1;
 	mpext->data_len = 0;
 
+	if (!mptcp_try_fallback(ssk)) {
+		mptcp_subflow_reset(ssk);
+		return;
+	}
+
 	MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPTX);
 	mptcp_subflow_ctx(ssk)->send_infinite_map = 0;
 	pr_fallback(msk);
-	mptcp_do_fallback(ssk);
 }
 
 #define MPTCP_MAX_GSO_SIZE	(GSO_LEGACY_MAX_SIZE - (MAX_TCP_HEADER + 1))
···
 
 static void __mptcp_retrans(struct sock *sk)
 {
+	struct mptcp_sendmsg_info info = { .data_lock_held = true, };
 	struct mptcp_sock *msk = mptcp_sk(sk);
 	struct mptcp_subflow_context *subflow;
-	struct mptcp_sendmsg_info info = {};
 	struct mptcp_data_frag *dfrag;
 	struct sock *ssk;
 	int ret, err;
···
 		info.sent = 0;
 		info.limit = READ_ONCE(msk->csum_enabled) ? dfrag->data_len :
 						dfrag->already_sent;
+
+		/*
+		 * make the whole retrans decision, xmit, disallow
+		 * fallback atomic
+		 */
+		spin_lock_bh(&msk->fallback_lock);
+		if (__mptcp_check_fallback(msk)) {
+			spin_unlock_bh(&msk->fallback_lock);
+			release_sock(ssk);
+			return;
+		}
+
 		while (info.sent < info.limit) {
 			ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
 			if (ret <= 0)
···
 			len = max(copied, len);
 			tcp_push(ssk, 0, info.mss_now, tcp_sk(ssk)->nonagle,
 				 info.size_goal);
-			WRITE_ONCE(msk->allow_infinite_fallback, false);
+			msk->allow_infinite_fallback = false;
 		}
+		spin_unlock_bh(&msk->fallback_lock);
 
 		release_sock(ssk);
 	}
···
 	WRITE_ONCE(msk->first, NULL);
 	inet_csk(sk)->icsk_sync_mss = mptcp_sync_mss;
 	WRITE_ONCE(msk->csum_enabled, mptcp_is_checksum_enabled(sock_net(sk)));
-	WRITE_ONCE(msk->allow_infinite_fallback, true);
+	msk->allow_infinite_fallback = true;
+	msk->allow_subflows = true;
 	msk->recovery = false;
 	msk->subflow_id = 1;
 	msk->last_data_sent = tcp_jiffies32;
···
 	msk->last_ack_recv = tcp_jiffies32;
 
 	mptcp_pm_data_init(msk);
+	spin_lock_init(&msk->fallback_lock);
 
 	/* re-use the csk retrans timer for MPTCP-level retrans */
 	timer_setup(&msk->sk.icsk_retransmit_timer, mptcp_retransmit_timer, 0);
···
 	 * subflow
 	 */
 
 	mptcp_destroy_common(msk, MPTCP_CF_FASTCLOSE);
+
+	/* The first subflow is already in TCP_CLOSE status, the following
+	 * can't overlap with a fallback anymore
+	 */
+	spin_lock_bh(&msk->fallback_lock);
+	msk->allow_subflows = true;
+	msk->allow_infinite_fallback = true;
 	WRITE_ONCE(msk->flags, 0);
+	spin_unlock_bh(&msk->fallback_lock);
+
 	msk->cb_flags = 0;
 	msk->recovery = false;
 	WRITE_ONCE(msk->can_ack, false);
···
 
 	/* active subflow, already present inside the conn_list */
 	if (!list_empty(&subflow->node)) {
+		spin_lock_bh(&msk->fallback_lock);
+		if (!msk->allow_subflows) {
+			spin_unlock_bh(&msk->fallback_lock);
+			return false;
+		}
 		mptcp_subflow_joined(msk, ssk);
+		spin_unlock_bh(&msk->fallback_lock);
 		mptcp_propagate_sndbuf(parent, ssk);
 		return true;
 	}
+22 -7
net/mptcp/protocol.h
···
 		u64	rtt_us; /* last maximum rtt of subflows */
 	} rcvq_space;
 	u8	scaling_ratio;
+	bool	allow_subflows;
 
 	u32	subflow_id;
 	u32	setsockopt_seq;
 	char	ca_name[TCP_CA_NAME_MAX];
+
+	spinlock_t fallback_lock;	/* protects fallback,
+					 * allow_infinite_fallback and
+					 * allow_join
+					 */
 };
 
 #define mptcp_data_lock(sk) spin_lock_bh(&(sk)->sk_lock.slock)
···
 	return __mptcp_check_fallback(msk);
 }
 
-static inline void __mptcp_do_fallback(struct mptcp_sock *msk)
+static inline bool __mptcp_try_fallback(struct mptcp_sock *msk)
 {
 	if (__mptcp_check_fallback(msk)) {
 		pr_debug("TCP fallback already done (msk=%p)\n", msk);
-		return;
+		return true;
 	}
-	if (WARN_ON_ONCE(!READ_ONCE(msk->allow_infinite_fallback)))
-		return;
+	spin_lock_bh(&msk->fallback_lock);
+	if (!msk->allow_infinite_fallback) {
+		spin_unlock_bh(&msk->fallback_lock);
+		return false;
+	}
+
+	msk->allow_subflows = false;
 	set_bit(MPTCP_FALLBACK_DONE, &msk->flags);
+	spin_unlock_bh(&msk->fallback_lock);
+	return true;
 }
 
 static inline bool __mptcp_has_initial_subflow(const struct mptcp_sock *msk)
···
 			TCPF_SYN_RECV | TCPF_LISTEN));
 }
 
-static inline void mptcp_do_fallback(struct sock *ssk)
+static inline bool mptcp_try_fallback(struct sock *ssk)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	struct sock *sk = subflow->conn;
 	struct mptcp_sock *msk;
 
 	msk = mptcp_sk(sk);
-	__mptcp_do_fallback(msk);
+	if (!__mptcp_try_fallback(msk))
+		return false;
 	if (READ_ONCE(msk->snd_data_fin_enable) && !(ssk->sk_shutdown & SEND_SHUTDOWN)) {
 		gfp_t saved_allocation = ssk->sk_allocation;
 
···
 		tcp_shutdown(ssk, SEND_SHUTDOWN);
 		ssk->sk_allocation = saved_allocation;
 	}
+	return true;
 }
 
 #define pr_fallback(a) pr_debug("%s:fallback to TCP (msk=%p)\n", __func__, a)
···
 {
 	pr_fallback(msk);
 	subflow->request_mptcp = 0;
-	__mptcp_do_fallback(msk);
+	WARN_ON_ONCE(!__mptcp_try_fallback(msk));
 }
 
 static inline bool mptcp_check_infinite_map(struct sk_buff *skb)
+19 -11
net/mptcp/subflow.c
···
 	mptcp_get_options(skb, &mp_opt);
 	if (subflow->request_mptcp) {
 		if (!(mp_opt.suboptions & OPTION_MPTCP_MPC_SYNACK)) {
+			if (!mptcp_try_fallback(sk))
+				goto do_reset;
+
 			MPTCP_INC_STATS(sock_net(sk),
 					MPTCP_MIB_MPCAPABLEACTIVEFALLBACK);
-			mptcp_do_fallback(sk);
 			pr_fallback(msk);
 			goto fallback;
 		}
···
 	mptcp_schedule_work(sk);
 }
 
-static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
+static bool mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	unsigned long fail_tout;
 
+	/* we are really failing, prevent any later subflow join */
+	spin_lock_bh(&msk->fallback_lock);
+	if (!msk->allow_infinite_fallback) {
+		spin_unlock_bh(&msk->fallback_lock);
+		return false;
+	}
+	msk->allow_subflows = false;
+	spin_unlock_bh(&msk->fallback_lock);
+
 	/* graceful failure can happen only on the MPC subflow */
 	if (WARN_ON_ONCE(ssk != READ_ONCE(msk->first)))
-		return;
+		return false;
 
 	/* since the close timeout take precedence on the fail one,
 	 * no need to start the latter when the first is already set
 	 */
 	if (sock_flag((struct sock *)msk, SOCK_DEAD))
-		return;
+		return true;
 
 	/* we don't need extreme accuracy here, use a zero fail_tout as special
 	 * value meaning no fail timeout at all;
···
 	tcp_send_ack(ssk);
 
 	mptcp_reset_tout_timer(msk, subflow->fail_tout);
+	return true;
 }
 
 static bool subflow_check_data_avail(struct sock *ssk)
···
 		    (subflow->mp_join || subflow->valid_csum_seen)) {
 			subflow->send_mp_fail = 1;
 
-			if (!READ_ONCE(msk->allow_infinite_fallback)) {
+			if (!mptcp_subflow_fail(msk, ssk)) {
 				subflow->reset_transient = 0;
 				subflow->reset_reason = MPTCP_RST_EMIDDLEBOX;
 				goto reset;
 			}
-			mptcp_subflow_fail(msk, ssk);
 			WRITE_ONCE(subflow->data_avail, true);
 			return true;
 		}
 
-		if (!READ_ONCE(msk->allow_infinite_fallback)) {
+		if (!mptcp_try_fallback(ssk)) {
 			/* fatal protocol error, close the socket.
 			 * subflow_error_report() will introduce the appropriate barriers
 			 */
···
 			WRITE_ONCE(subflow->data_avail, false);
 			return false;
 		}
-
-		mptcp_do_fallback(ssk);
 	}
 
 	skb = skb_peek(&ssk->sk_receive_queue);
···
 	/* discard the subflow socket */
 	mptcp_sock_graft(ssk, sk->sk_socket);
 	iput(SOCK_INODE(sf));
-	WRITE_ONCE(msk->allow_infinite_fallback, false);
 	mptcp_stop_tout_timer(sk);
 	return 0;
 
···
 
 	msk = mptcp_sk(parent);
 	if (subflow_simultaneous_connect(sk)) {
-		mptcp_do_fallback(sk);
+		WARN_ON_ONCE(!mptcp_try_fallback(sk));
 		pr_fallback(msk);
 		subflow->conn_finished = 1;
 		mptcp_propagate_state(parent, sk, subflow, NULL);
+20 -6
net/netfilter/nf_conntrack_core.c
···
 
 	hlist_nulls_add_head_rcu(&loser_ct->tuplehash[IP_CT_DIR_REPLY].hnnode,
 				 &nf_conntrack_hash[repl_idx]);
+	/* confirmed bit must be set after hlist add, not before:
+	 * loser_ct can still be visible to other cpu due to
+	 * SLAB_TYPESAFE_BY_RCU.
+	 */
+	smp_mb__before_atomic();
+	set_bit(IPS_CONFIRMED_BIT, &loser_ct->status);
 
 	NF_CT_STAT_INC(net, clash_resolve);
 	return NF_ACCEPT;
···
 	 * user context, else we insert an already 'dead' hash, blocking
 	 * further use of that particular connection -JM.
 	 */
-	ct->status |= IPS_CONFIRMED;
-
 	if (unlikely(nf_ct_is_dying(ct))) {
 		NF_CT_STAT_INC(net, insert_failed);
 		goto dying;
···
 		}
 	}
 
-	/* Timer relative to confirmation time, not original
+	/* Timeout is relative to confirmation time, not original
 	   setting time, otherwise we'd get timer wrap in
 	   weird delay cases. */
 	ct->timeout += nfct_time_stamp;
···
 	__nf_conntrack_insert_prepare(ct);
 
 	/* Since the lookup is lockless, hash insertion must be done after
-	 * starting the timer and setting the CONFIRMED bit. The RCU barriers
-	 * guarantee that no other CPU can find the conntrack before the above
-	 * stores are visible.
+	 * setting ct->timeout. The RCU barriers guarantee that no other CPU
+	 * can find the conntrack before the above stores are visible.
 	 */
 	__nf_conntrack_hash_insert(ct, hash, reply_hash);
 
+	/* IPS_CONFIRMED unset means 'ct not (yet) in hash', conntrack lookups
+	 * skip entries that lack this bit. This happens when a CPU is looking
+	 * at a stale entry that is being recycled due to SLAB_TYPESAFE_BY_RCU
+	 * or when another CPU encounters this entry right after the insertion
+	 * but before the set-confirm-bit below. This bit must not be set until
+	 * after __nf_conntrack_hash_insert().
+	 */
+	smp_mb__before_atomic();
+	set_bit(IPS_CONFIRMED_BIT, &ct->status);
+
 	nf_conntrack_double_unlock(hash, reply_hash);
 	local_bh_enable();
 
-59
net/netfilter/nf_tables_api.c
···
 }
 EXPORT_SYMBOL_GPL(nft_hook_find_ops_rcu);
 
-static void
-nf_tables_device_notify(const struct nft_table *table, int attr,
-			const char *name, const struct nft_hook *hook,
-			const struct net_device *dev, int event)
-{
-	struct net *net = dev_net(dev);
-	struct nlmsghdr *nlh;
-	struct sk_buff *skb;
-	u16 flags = 0;
-
-	if (!nfnetlink_has_listeners(net, NFNLGRP_NFT_DEV))
-		return;
-
-	skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
-	if (!skb)
-		goto err;
-
-	event = event == NETDEV_REGISTER ? NFT_MSG_NEWDEV : NFT_MSG_DELDEV;
-	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
-	nlh = nfnl_msg_put(skb, 0, 0, event, flags, table->family,
-			   NFNETLINK_V0, nft_base_seq(net));
-	if (!nlh)
-		goto err;
-
-	if (nla_put_string(skb, NFTA_DEVICE_TABLE, table->name) ||
-	    nla_put_string(skb, attr, name) ||
-	    nla_put(skb, NFTA_DEVICE_SPEC, hook->ifnamelen, hook->ifname) ||
-	    nla_put_string(skb, NFTA_DEVICE_NAME, dev->name))
-		goto err;
-
-	nlmsg_end(skb, nlh);
-	nfnetlink_send(skb, net, 0, NFNLGRP_NFT_DEV,
-		       nlmsg_report(nlh), GFP_KERNEL);
-	return;
-err:
-	if (skb)
-		kfree_skb(skb);
-	nfnetlink_set_err(net, 0, NFNLGRP_NFT_DEV, -ENOBUFS);
-}
-
-void
-nf_tables_chain_device_notify(const struct nft_chain *chain,
-			      const struct nft_hook *hook,
-			      const struct net_device *dev, int event)
-{
-	nf_tables_device_notify(chain->table, NFTA_DEVICE_CHAIN,
-				chain->name, hook, dev, event);
-}
-
-static void
-nf_tables_flowtable_device_notify(const struct nft_flowtable *ft,
-				  const struct nft_hook *hook,
-				  const struct net_device *dev, int event)
-{
-	nf_tables_device_notify(ft->table, NFTA_DEVICE_FLOWTABLE,
-				ft->name, hook, dev, event);
-}
-
 static int nft_flowtable_event(unsigned long event, struct net_device *dev,
 			       struct nft_flowtable *flowtable, bool changename)
 {
···
 			list_add_tail_rcu(&ops->list, &hook->ops_list);
 			break;
 		}
-		nf_tables_flowtable_device_notify(flowtable, hook, dev, event);
 		break;
 	}
 	return 0;
+3
net/netfilter/nf_tables_trace.c
···
 	if (nla_put_be32(nlskb, NFTA_TRACE_CT_ID, (__force __be32)id))
 		return -1;
 
+	/* Kernel implementation detail, withhold this from userspace for now */
+	status &= ~IPS_NAT_CLASH;
+
 	if (status && nla_put_be32(nlskb, NFTA_TRACE_CT_STATUS, htonl(status)))
 		return -1;
 }
-1
net/netfilter/nfnetlink.c
···
 	[NFNLGRP_NFTABLES]	= NFNL_SUBSYS_NFTABLES,
 	[NFNLGRP_ACCT_QUOTA]	= NFNL_SUBSYS_ACCT,
 	[NFNLGRP_NFTRACE]	= NFNL_SUBSYS_NFTABLES,
-	[NFNLGRP_NFT_DEV]	= NFNL_SUBSYS_NFTABLES,
 };
 
 static struct nfnl_net *nfnl_pernet(struct net *net)
-2
net/netfilter/nft_chain_filter.c
···
 			list_add_tail_rcu(&ops->list, &hook->ops_list);
 			break;
 		}
-		nf_tables_chain_device_notify(&basechain->chain,
-					      hook, dev, event);
 		break;
 	}
 	return 0;
+13 -14
net/packet/af_packet.c
···
 	int len_sum = 0;
 	int status = TP_STATUS_AVAILABLE;
 	int hlen, tlen, copylen = 0;
-	long timeo = 0;
+	long timeo;
 
 	mutex_lock(&po->pg_vec_lock);
 
···
 	if ((size_max > dev->mtu + reserve + VLAN_HLEN) && !vnet_hdr_sz)
 		size_max = dev->mtu + reserve + VLAN_HLEN;
 
+	timeo = sock_sndtimeo(&po->sk, msg->msg_flags & MSG_DONTWAIT);
 	reinit_completion(&po->skb_completion);
 
 	do {
 		ph = packet_current_frame(po, &po->tx_ring,
 					  TP_STATUS_SEND_REQUEST);
 		if (unlikely(ph == NULL)) {
-			if (need_wait && skb) {
-				timeo = sock_sndtimeo(&po->sk, msg->msg_flags & MSG_DONTWAIT);
+			/* Note: packet_read_pending() might be slow if we
+			 * have to call it as it's per_cpu variable, but in
+			 * fast-path we don't have to call it, only when ph
+			 * is NULL, we need to check the pending_refcnt.
+			 */
+			if (need_wait && packet_read_pending(&po->tx_ring)) {
 				timeo = wait_for_completion_interruptible_timeout(&po->skb_completion, timeo);
 				if (timeo <= 0) {
 					err = !timeo ? -ETIMEDOUT : -ERESTARTSYS;
 					goto out_put;
 				}
-			}
-			/* check for additional frames */
-			continue;
+				/* check for additional frames */
+				continue;
+			} else
+				break;
 		}
 
 		skb = NULL;
···
 		}
 		packet_increment_head(&po->tx_ring);
 		len_sum += tp_len;
-	} while (likely((ph != NULL) ||
-		/* Note: packet_read_pending() might be slow if we have
-		 * to call it as it's per_cpu variable, but in fast-path
-		 * we already short-circuit the loop with the first
-		 * condition, and luckily don't have to go that path
-		 * anyway.
-		 */
-		 (need_wait && packet_read_pending(&po->tx_ring))));
+	} while (1);
 
 	err = len_sum;
 	goto out_put;
+1 -1
net/phonet/pep.c
···
 	}
 
 	/* Check for duplicate pipe handle */
+	pn_skb_get_dst_sockaddr(skb, &dst);
 	newsk = pep_find_pipe(&pn->hlist, &dst, pipe_handle);
 	if (unlikely(newsk)) {
 		__sock_put(newsk);
···
 	newsk->sk_destruct = pipe_destruct;
 
 	newpn = pep_sk(newsk);
-	pn_skb_get_dst_sockaddr(skb, &dst);
 	pn_skb_get_src_sockaddr(skb, &src);
 	newpn->pn_sk.sobject = pn_sockaddr_get_object(&dst);
 	newpn->pn_sk.dobject = pn_sockaddr_get_object(&src);
+4
net/rxrpc/ar-internal.h
···
 	RXRPC_SKB_MARK_SERVICE_CONN_SECURED, /* Service connection response has been verified */
 	RXRPC_SKB_MARK_REJECT_BUSY,	/* Reject with BUSY */
 	RXRPC_SKB_MARK_REJECT_ABORT,	/* Reject with ABORT (code in skb->priority) */
+	RXRPC_SKB_MARK_REJECT_CONN_ABORT, /* Reject with connection ABORT (code in skb->priority) */
 };
 
 /*
···
 void rxrpc_error_report(struct sock *);
 bool rxrpc_direct_abort(struct sk_buff *skb, enum rxrpc_abort_reason why,
 			s32 abort_code, int err);
+bool rxrpc_direct_conn_abort(struct sk_buff *skb, enum rxrpc_abort_reason why,
+			     s32 abort_code, int err);
 int rxrpc_io_thread(void *data);
 void rxrpc_post_response(struct rxrpc_connection *conn, struct sk_buff *skb);
 static inline void rxrpc_wake_up_io_thread(struct rxrpc_local *local)
···
 			 const struct sockaddr_rxrpc *);
 struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local,
 				     struct sockaddr_rxrpc *srx, gfp_t gfp);
+void rxrpc_assess_MTU_size(struct rxrpc_local *local, struct rxrpc_peer *peer);
 struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *, gfp_t,
 				    enum rxrpc_peer_trace);
 void rxrpc_new_incoming_peer(struct rxrpc_local *local, struct rxrpc_peer *peer);
+8 -6
net/rxrpc/call_accept.c
···
 	tail = b->call_backlog_tail;
 	while (CIRC_CNT(head, tail, size) > 0) {
 		struct rxrpc_call *call = b->call_backlog[tail];
+		rxrpc_see_call(call, rxrpc_call_see_discard);
 		rcu_assign_pointer(call->socket, rx);
 		if (rx->app_ops &&
 		    rx->app_ops->discard_new_call) {
···
 	spin_lock(&rx->incoming_lock);
 	if (rx->sk.sk_state == RXRPC_SERVER_LISTEN_DISABLED ||
 	    rx->sk.sk_state == RXRPC_CLOSE) {
-		rxrpc_direct_abort(skb, rxrpc_abort_shut_down,
-				   RX_INVALID_OPERATION, -ESHUTDOWN);
+		rxrpc_direct_conn_abort(skb, rxrpc_abort_shut_down,
+					RX_INVALID_OPERATION, -ESHUTDOWN);
 		goto no_call;
 	}
 
···
 
 	spin_unlock(&rx->incoming_lock);
 	read_unlock_irq(&local->services_lock);
+	rxrpc_assess_MTU_size(local, call->peer);
 
 	if (hlist_unhashed(&call->error_link)) {
 		spin_lock_irq(&call->peer->lock);
···
 
 unsupported_service:
 	read_unlock_irq(&local->services_lock);
-	return rxrpc_direct_abort(skb, rxrpc_abort_service_not_offered,
-				  RX_INVALID_OPERATION, -EOPNOTSUPP);
+	return rxrpc_direct_conn_abort(skb, rxrpc_abort_service_not_offered,
+				       RX_INVALID_OPERATION, -EOPNOTSUPP);
 unsupported_security:
 	read_unlock_irq(&local->services_lock);
-	return rxrpc_direct_abort(skb, rxrpc_abort_service_not_offered,
-				  RX_INVALID_OPERATION, -EKEYREJECTED);
+	return rxrpc_direct_conn_abort(skb, rxrpc_abort_service_not_offered,
+				       RX_INVALID_OPERATION, -EKEYREJECTED);
 no_call:
 	spin_unlock(&rx->incoming_lock);
 	read_unlock_irq(&local->services_lock);
+12 -16
net/rxrpc/call_object.c
···
 void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
 {
 	struct rxrpc_connection *conn = call->conn;
-	bool put = false, putu = false;
+	bool putu = false;
 
 	_enter("{%d,%d}", call->debug_id, refcount_read(&call->ref));
 
···
 
 	rxrpc_put_call_slot(call);
 
-	/* Make sure we don't get any more notifications */
+	/* Note that at this point, the call may still be on or may have been
+	 * added back on to the socket receive queue. recvmsg() must discard
+	 * released calls. The CALL_RELEASED flag should prevent further
+	 * notifications.
+	 */
 	spin_lock_irq(&rx->recvmsg_lock);
-
-	if (!list_empty(&call->recvmsg_link)) {
-		_debug("unlinking once-pending call %p { e=%lx f=%lx }",
-		       call, call->events, call->flags);
-		list_del(&call->recvmsg_link);
-		put = true;
-	}
-
-	/* list_empty() must return false in rxrpc_notify_socket() */
-	call->recvmsg_link.next = NULL;
-	call->recvmsg_link.prev = NULL;
-
 	spin_unlock_irq(&rx->recvmsg_lock);
-	if (put)
-		rxrpc_put_call(call, rxrpc_call_put_unnotify);
 
 	write_lock(&rx->call_lock);
 
···
 				  rxrpc_abort_call_sock_release);
 		rxrpc_release_call(rx, call);
 		rxrpc_put_call(call, rxrpc_call_put_release_sock);
+	}
+
+	while ((call = list_first_entry_or_null(&rx->recvmsg_q,
+						struct rxrpc_call, recvmsg_link))) {
+		list_del_init(&call->recvmsg_link);
+		rxrpc_put_call(call, rxrpc_call_put_release_recvmsg_q);
 	}
 
 	_leave("");
+14
net/rxrpc/io_thread.c
···
 		return false;
 	}
 
+/*
+ * Directly produce a connection abort from a packet.
+ */
+bool rxrpc_direct_conn_abort(struct sk_buff *skb, enum rxrpc_abort_reason why,
+			     s32 abort_code, int err)
+{
+	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+
+	trace_rxrpc_abort(0, why, sp->hdr.cid, 0, sp->hdr.seq, abort_code, err);
+	skb->mark = RXRPC_SKB_MARK_REJECT_CONN_ABORT;
+	skb->priority = abort_code;
+	return false;
+}
+
 static bool rxrpc_bad_message(struct sk_buff *skb, enum rxrpc_abort_reason why)
 {
 	return rxrpc_direct_abort(skb, why, RX_PROTOCOL_ERROR, -EBADMSG);
+13 -9
net/rxrpc/output.c
···
 	__be32 code;
 	int ret, ioc;
 
+	if (sp->hdr.type == RXRPC_PACKET_TYPE_ABORT)
+		return; /* Never abort an abort. */
+
 	rxrpc_see_skb(skb, rxrpc_skb_see_reject);
 
 	iov[0].iov_base = &whdr;
···
 	msg.msg_controllen = 0;
 	msg.msg_flags	= 0;
 
-	memset(&whdr, 0, sizeof(whdr));
+	whdr = (struct rxrpc_wire_header) {
+		.epoch		= htonl(sp->hdr.epoch),
+		.cid		= htonl(sp->hdr.cid),
+		.callNumber	= htonl(sp->hdr.callNumber),
+		.serviceId	= htons(sp->hdr.serviceId),
+		.flags		= ~sp->hdr.flags & RXRPC_CLIENT_INITIATED,
+	};
 
 	switch (skb->mark) {
 	case RXRPC_SKB_MARK_REJECT_BUSY:
···
 		size = sizeof(whdr);
 		ioc = 1;
 		break;
+	case RXRPC_SKB_MARK_REJECT_CONN_ABORT:
+		whdr.callNumber	= 0;
+		fallthrough;
 	case RXRPC_SKB_MARK_REJECT_ABORT:
 		whdr.type = RXRPC_PACKET_TYPE_ABORT;
 		code = htonl(skb->priority);
···
 
 	if (rxrpc_extract_addr_from_skb(&srx, skb) == 0) {
 		msg.msg_namelen = srx.transport_len;
-
-		whdr.epoch	= htonl(sp->hdr.epoch);
-		whdr.cid	= htonl(sp->hdr.cid);
-		whdr.callNumber	= htonl(sp->hdr.callNumber);
-		whdr.serviceId	= htons(sp->hdr.serviceId);
-		whdr.flags	= sp->hdr.flags;
-		whdr.flags	^= RXRPC_CLIENT_INITIATED;
-		whdr.flags	&= RXRPC_CLIENT_INITIATED;
 
 		iov_iter_kvec(&msg.msg_iter, WRITE, iov, ioc, size);
 		ret = do_udp_sendmsg(local->socket, &msg, size);
+2 -4
net/rxrpc/peer_object.c
···
 * assess the MTU size for the network interface through which this peer is
 * reached
 */
-static void rxrpc_assess_MTU_size(struct rxrpc_local *local,
-				  struct rxrpc_peer *peer)
+void rxrpc_assess_MTU_size(struct rxrpc_local *local, struct rxrpc_peer *peer)
 {
 	struct net *net = local->net;
 	struct dst_entry *dst;
···
 
 	peer->hdrsize += sizeof(struct rxrpc_wire_header);
 	peer->max_data = peer->if_mtu - peer->hdrsize;
-
-	rxrpc_assess_MTU_size(local, peer);
 }
 
 /*
···
 	if (peer) {
 		memcpy(&peer->srx, srx, sizeof(*srx));
 		rxrpc_init_peer(local, peer, hash_key);
+		rxrpc_assess_MTU_size(local, peer);
 	}
 
 	_leave(" = %p", peer);
+21 -2
net/rxrpc/recvmsg.c
···
 
 	if (!list_empty(&call->recvmsg_link))
 		return;
+	if (test_bit(RXRPC_CALL_RELEASED, &call->flags)) {
+		rxrpc_see_call(call, rxrpc_call_see_notify_released);
+		return;
+	}
 
 	rcu_read_lock();
 
···
 		goto try_again;
 	}
 
+	rxrpc_see_call(call, rxrpc_call_see_recvmsg);
+	if (test_bit(RXRPC_CALL_RELEASED, &call->flags)) {
+		rxrpc_see_call(call, rxrpc_call_see_already_released);
+		list_del_init(&call->recvmsg_link);
+		spin_unlock_irq(&rx->recvmsg_lock);
+		release_sock(&rx->sk);
+		trace_rxrpc_recvmsg(call->debug_id, rxrpc_recvmsg_unqueue, 0);
+		rxrpc_put_call(call, rxrpc_call_put_recvmsg);
+		goto try_again;
+	}
 	if (!(flags & MSG_PEEK))
 		list_del_init(&call->recvmsg_link);
 	else
···
 
 	release_sock(&rx->sk);
 
-	if (test_bit(RXRPC_CALL_RELEASED, &call->flags))
-		BUG();
+	if (test_bit(RXRPC_CALL_RELEASED, &call->flags)) {
+		rxrpc_see_call(call, rxrpc_call_see_already_released);
+		mutex_unlock(&call->user_mutex);
+		if (!(flags & MSG_PEEK))
+			rxrpc_put_call(call, rxrpc_call_put_recvmsg);
+		goto try_again;
+	}
 
 	ret = rxrpc_recvmsg_user_id(call, msg, flags);
 	if (ret < 0)
+4 -4
net/rxrpc/security.c
···
 
 	sec = rxrpc_security_lookup(sp->hdr.securityIndex);
 	if (!sec) {
-		rxrpc_direct_abort(skb, rxrpc_abort_unsupported_security,
-				   RX_INVALID_OPERATION, -EKEYREJECTED);
+		rxrpc_direct_conn_abort(skb, rxrpc_abort_unsupported_security,
+					RX_INVALID_OPERATION, -EKEYREJECTED);
 		return NULL;
 	}
 
 	if (sp->hdr.securityIndex != RXRPC_SECURITY_NONE &&
 	    !rx->securities) {
-		rxrpc_direct_abort(skb, rxrpc_abort_no_service_key,
-				   sec->no_key_abort, -EKEYREJECTED);
+		rxrpc_direct_conn_abort(skb, rxrpc_abort_no_service_key,
+					sec->no_key_abort, -EKEYREJECTED);
 		return NULL;
 	}
 
+3 -1
net/sched/sch_htb.c
···
 		u32 *pid;
 	} stk[TC_HTB_MAXDEPTH], *sp = stk;
 
-	BUG_ON(!hprio->row.rb_node);
+	if (unlikely(!hprio->row.rb_node))
+		return NULL;
+
 	sp->root = hprio->row.rb_node;
 	sp->pptr = &hprio->ptr;
 	sp->pid = &hprio->last_ptr_id;
+21 -9
net/sched/sch_qfq.c
···
 	bool existing = false;
 	struct nlattr *tb[TCA_QFQ_MAX + 1];
 	struct qfq_aggregate *new_agg = NULL;
-	u32 weight, lmax, inv_w;
+	u32 weight, lmax, inv_w, old_weight, old_lmax;
 	int err;
 	int delta_w;
 
···
 	inv_w = ONE_FP / weight;
 	weight = ONE_FP / inv_w;
 
-	if (cl != NULL &&
-	    lmax == cl->agg->lmax &&
-	    weight == cl->agg->class_weight)
-		return 0; /* nothing to change */
+	if (cl != NULL) {
+		sch_tree_lock(sch);
+		old_weight = cl->agg->class_weight;
+		old_lmax = cl->agg->lmax;
+		sch_tree_unlock(sch);
+		if (lmax == old_lmax && weight == old_weight)
+			return 0; /* nothing to change */
+	}
 
-	delta_w = weight - (cl ? cl->agg->class_weight : 0);
+	delta_w = weight - (cl ? old_weight : 0);
 
 	if (q->wsum + delta_w > QFQ_MAX_WSUM) {
 		NL_SET_ERR_MSG_FMT_MOD(extack,
···
 
 	qdisc_purge_queue(cl->qdisc);
 	qdisc_class_hash_remove(&q->clhash, &cl->common);
+	qfq_destroy_class(sch, cl);
 
 	sch_tree_unlock(sch);
 
-	qfq_destroy_class(sch, cl);
 	return 0;
 }
 
···
 {
 	struct qfq_class *cl = (struct qfq_class *)arg;
 	struct nlattr *nest;
+	u32 class_weight, lmax;
 
 	tcm->tcm_parent	= TC_H_ROOT;
 	tcm->tcm_handle	= cl->common.classid;
···
 	nest = nla_nest_start_noflag(skb, TCA_OPTIONS);
 	if (nest == NULL)
 		goto nla_put_failure;
-	if (nla_put_u32(skb, TCA_QFQ_WEIGHT, cl->agg->class_weight) ||
-	    nla_put_u32(skb, TCA_QFQ_LMAX, cl->agg->lmax))
+
+	sch_tree_lock(sch);
+	class_weight	= cl->agg->class_weight;
+	lmax		= cl->agg->lmax;
+	sch_tree_unlock(sch);
+	if (nla_put_u32(skb, TCA_QFQ_WEIGHT, class_weight) ||
+	    nla_put_u32(skb, TCA_QFQ_LMAX, lmax))
 		goto nla_put_failure;
 	return nla_nest_end(skb, nest);
 
···
 
 	memset(&xstats, 0, sizeof(xstats));
 
+	sch_tree_lock(sch);
 	xstats.weight = cl->agg->class_weight;
 	xstats.lmax = cl->agg->lmax;
+	sch_tree_unlock(sch);
 
 	if (gnet_stats_copy_basic(d, NULL, &cl->bstats, true) < 0 ||
 	    gnet_stats_copy_rate_est(d, &cl->rate_est) < 0 ||
+14
net/smc/af_smc.c
···
 #include <linux/splice.h>
 
 #include <net/sock.h>
+#include <net/inet_common.h>
+#if IS_ENABLED(CONFIG_IPV6)
+#include <net/ipv6.h>
+#endif
 #include <net/tcp.h>
 #include <net/smc.h>
 #include <asm/ioctls.h>
···
 		return;
 	if (!sock_flag(sk, SOCK_DEAD))
 		return;
+	switch (sk->sk_family) {
+	case AF_INET:
+		inet_sock_destruct(sk);
+		break;
+#if IS_ENABLED(CONFIG_IPV6)
+	case AF_INET6:
+		inet6_sock_destruct(sk);
+		break;
+#endif
+	}
 }
 
 static struct lock_class_key smc_key;
+4 -4
net/smc/smc.h
···
 };
 
 struct smc_sock {				/* smc sock container */
-	struct sock		sk;
-#if IS_ENABLED(CONFIG_IPV6)
-	struct ipv6_pinfo	*pinet6;
-#endif
+	union {
+		struct sock		sk;
+		struct inet_sock	icsk_inet;
+	};
 	struct socket		*clcsock;	/* internal tcp socket */
 	void			(*clcsk_state_change)(struct sock *sk);
 						/* original stat_change fct. */
+1 -2
net/tls/tls_strp.c
··· 512 512 if (inq < strp->stm.full_len) 513 513 return tls_strp_read_copy(strp, true); 514 514 515 + tls_strp_load_anchor_with_queue(strp, inq); 515 516 if (!strp->stm.full_len) { 516 - tls_strp_load_anchor_with_queue(strp, inq); 517 - 518 517 sz = tls_rx_msg_size(strp, strp->anchor); 519 518 if (sz < 0) { 520 519 tls_strp_abort_strp(strp, sz);
+1
tools/testing/selftests/net/netfilter/.gitignore
··· 5 5 conntrack_reverse_clash 6 6 sctp_collision 7 7 nf_queue 8 + udpclash
+3
tools/testing/selftests/net/netfilter/Makefile
··· 15 15 TEST_PROGS += conntrack_resize.sh 16 16 TEST_PROGS += conntrack_sctp_collision.sh 17 17 TEST_PROGS += conntrack_vrf.sh 18 + TEST_PROGS += conntrack_clash.sh 18 19 TEST_PROGS += conntrack_reverse_clash.sh 19 20 TEST_PROGS += ipvs.sh 20 21 TEST_PROGS += nf_conntrack_packetdrill.sh ··· 45 44 TEST_GEN_FILES += conntrack_dump_flush 46 45 TEST_GEN_FILES += conntrack_reverse_clash 47 46 TEST_GEN_FILES += sctp_collision 47 + TEST_GEN_FILES += udpclash 48 48 49 49 include ../../lib.mk 50 50 ··· 54 52 55 53 $(OUTPUT)/conntrack_dump_flush: CFLAGS += $(MNL_CFLAGS) 56 54 $(OUTPUT)/conntrack_dump_flush: LDLIBS += $(MNL_LDLIBS) 55 + $(OUTPUT)/udpclash: LDLIBS += -lpthread 57 56 58 57 TEST_FILES := lib.sh 59 58 TEST_FILES += packetdrill
+175
tools/testing/selftests/net/netfilter/conntrack_clash.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + source lib.sh 5 + 6 + clash_resolution_active=0 7 + dport=22111 8 + ret=0 9 + 10 + cleanup() 11 + { 12 + # netns cleanup also zaps any remaining socat echo server. 13 + cleanup_all_ns 14 + } 15 + 16 + checktool "nft --version" "run test without nft" 17 + checktool "conntrack --version" "run test without conntrack" 18 + checktool "socat -h" "run test without socat" 19 + 20 + trap cleanup EXIT 21 + 22 + setup_ns nsclient1 nsclient2 nsrouter 23 + 24 + ip netns exec "$nsrouter" nft -f -<<EOF 25 + table ip t { 26 + chain lb { 27 + meta l4proto udp dnat to numgen random mod 3 map { 0 : 10.0.2.1 . 9000, 1 : 10.0.2.1 . 9001, 2 : 10.0.2.1 . 9002 } 28 + } 29 + 30 + chain prerouting { 31 + type nat hook prerouting priority dstnat 32 + 33 + udp dport $dport counter jump lb 34 + } 35 + 36 + chain output { 37 + type nat hook output priority dstnat 38 + 39 + udp dport $dport counter jump lb 40 + } 41 + } 42 + EOF 43 + 44 + load_simple_ruleset() 45 + { 46 + ip netns exec "$1" nft -f -<<EOF 47 + table ip t { 48 + chain forward { 49 + type filter hook forward priority 0 50 + 51 + ct state new counter 52 + } 53 + } 54 + EOF 55 + } 56 + 57 + spawn_servers() 58 + { 59 + local ns="$1" 60 + local ports="9000 9001 9002" 61 + 62 + for port in $ports; do 63 + ip netns exec "$ns" socat UDP-RECVFROM:$port,fork PIPE 2>/dev/null & 64 + done 65 + 66 + for port in $ports; do 67 + wait_local_port_listen "$ns" $port udp 68 + done 69 + } 70 + 71 + add_addr() 72 + { 73 + local ns="$1" 74 + local dev="$2" 75 + local i="$3" 76 + local j="$4" 77 + 78 + ip -net "$ns" link set "$dev" up 79 + ip -net "$ns" addr add "10.0.$i.$j/24" dev "$dev" 80 + } 81 + 82 + ping_test() 83 + { 84 + local ns="$1" 85 + local daddr="$2" 86 + 87 + if ! 
ip netns exec "$ns" ping -q -c 1 $daddr > /dev/null;then 88 + echo "FAIL: ping from $ns to $daddr" 89 + exit 1 90 + fi 91 + } 92 + 93 + run_one_clash_test() 94 + { 95 + local ns="$1" 96 + local daddr="$2" 97 + local dport="$3" 98 + local entries 99 + local cre 100 + 101 + if ! ip netns exec "$ns" ./udpclash $daddr $dport;then 102 + echo "FAIL: did not receive expected number of replies for $daddr:$dport" 103 + ret=1 104 + return 1 105 + fi 106 + 107 + entries=$(conntrack -S | wc -l) 108 + cre=$(conntrack -S | grep -v "clash_resolve=0" | wc -l) 109 + 110 + if [ "$cre" -ne "$entries" ] ;then 111 + clash_resolution_active=1 112 + return 0 113 + fi 114 + 115 + # 1 cpu -> parallel insertion impossible 116 + if [ "$entries" -eq 1 ]; then 117 + return 0 118 + fi 119 + 120 + # not a failure: clash resolution logic did not trigger, but all replies 121 + # were received. With right timing, xmit completed sequentially and 122 + # no parallel insertion occurs. 123 + return $ksft_skip 124 + } 125 + 126 + run_clash_test() 127 + { 128 + local ns="$1" 129 + local daddr="$2" 130 + local dport="$3" 131 + 132 + for i in $(seq 1 10);do 133 + run_one_clash_test "$ns" "$daddr" "$dport" 134 + local rv=$? 
135 + if [ $rv -eq 0 ];then 136 + echo "PASS: clash resolution test for $daddr:$dport on attempt $i" 137 + return 0 138 + elif [ $rv -eq 1 ];then 139 + echo "FAIL: clash resolution test for $daddr:$dport on attempt $i" 140 + return 1 141 + fi 142 + done 143 + } 144 + 145 + ip link add veth0 netns "$nsclient1" type veth peer name veth0 netns "$nsrouter" 146 + ip link add veth0 netns "$nsclient2" type veth peer name veth1 netns "$nsrouter" 147 + add_addr "$nsclient1" veth0 1 1 148 + add_addr "$nsclient2" veth0 2 1 149 + add_addr "$nsrouter" veth0 1 99 150 + add_addr "$nsrouter" veth1 2 99 151 + 152 + ip -net "$nsclient1" route add default via 10.0.1.99 153 + ip -net "$nsclient2" route add default via 10.0.2.99 154 + ip netns exec "$nsrouter" sysctl -q net.ipv4.ip_forward=1 155 + 156 + ping_test "$nsclient1" 10.0.1.99 157 + ping_test "$nsclient1" 10.0.2.1 158 + ping_test "$nsclient2" 10.0.1.1 159 + 160 + spawn_servers "$nsclient2" 161 + 162 + # exercise clash resolution with nat: 163 + # nsrouter is supposed to dnat to 10.0.2.1:900{0,1,2}. 164 + run_clash_test "$nsclient1" 10.0.1.99 "$dport" 165 + 166 + # exercise clash resolution without nat. 167 + load_simple_ruleset "$nsclient2" 168 + run_clash_test "$nsclient2" 127.0.0.1 9001 169 + 170 + if [ $clash_resolution_active -eq 0 ];then 171 + [ "$ret" -eq 0 ] && ret=$ksft_skip 172 + echo "SKIP: Clash resolution did not trigger" 173 + fi 174 + 175 + exit $ret
+92 -5
tools/testing/selftests/net/netfilter/conntrack_resize.sh
··· 12 12 tmpfile_proc="" 13 13 tmpfile_uniq="" 14 14 ret=0 15 + have_socat=0 16 + 17 + socat -h > /dev/null && have_socat=1 15 18 16 19 insert_count=2000 17 20 [ "$KSFT_MACHINE_SLOW" = "yes" ] && insert_count=400 ··· 126 123 done 127 124 } 128 125 129 - ctflood() 126 + ct_pingflood() 130 127 { 131 128 local ns="$1" 132 129 local duration="$2" ··· 155 152 wait 156 153 } 157 154 155 + ct_udpflood() 156 + { 157 + local ns="$1" 158 + local duration="$2" 159 + local now=$(date +%s) 160 + local end=$((now + duration)) 161 + 162 + [ $have_socat -ne "1" ] && return 163 + 164 + while [ $now -lt $end ]; do 165 + ip netns exec "$ns" bash<<"EOF" 166 + for i in $(seq 1 100);do 167 + dport=$(((RANDOM%65536)+1)) 168 + 169 + echo bar | socat -u STDIN UDP:"127.0.0.1:$dport" & 170 + done > /dev/null 2>&1 171 + wait 172 + EOF 173 + now=$(date +%s) 174 + done 175 + } 176 + 177 + ct_udpclash() 178 + { 179 + local ns="$1" 180 + local duration="$2" 181 + local now=$(date +%s) 182 + local end=$((now + duration)) 183 + 184 + [ -x udpclash ] || return 185 + 186 + while [ $now -lt $end ]; do 187 + ip netns exec "$ns" ./udpclash 127.0.0.1 $((RANDOM%65536)) > /dev/null 2>&1 188 + 189 + now=$(date +%s) 190 + done 191 + } 192 + 158 193 # dump to /dev/null. We don't want dumps to cause infinite loops 159 194 # or use-after-free even when conntrack table is altered while dumps 160 195 # are in progress. 
··· 208 167 fi 209 168 210 169 wait 170 + } 171 + 172 + ct_nulldump_loop() 173 + { 174 + local ns="$1" 175 + local duration="$2" 176 + local now=$(date +%s) 177 + local end=$((now + duration)) 178 + 179 + while [ $now -lt $end ]; do 180 + ct_nulldump "$ns" 181 + sleep $((RANDOM%2)) 182 + now=$(date +%s) 183 + done 184 + } 185 + 186 + change_timeouts() 187 + { 188 + local ns="$1" 189 + local r1=$((RANDOM%2)) 190 + local r2=$((RANDOM%2)) 191 + 192 + [ "$r1" -eq 1 ] && ip netns exec "$ns" sysctl -q net.netfilter.nf_conntrack_icmp_timeout=$((RANDOM%5)) 193 + [ "$r2" -eq 1 ] && ip netns exec "$ns" sysctl -q net.netfilter.nf_conntrack_udp_timeout=$((RANDOM%5)) 194 + } 195 + 196 + ct_change_timeouts_loop() 197 + { 198 + local ns="$1" 199 + local duration="$2" 200 + local now=$(date +%s) 201 + local end=$((now + duration)) 202 + 203 + while [ $now -lt $end ]; do 204 + change_timeouts "$ns" 205 + sleep $((RANDOM%2)) 206 + now=$(date +%s) 207 + done 208 + 209 + # restore defaults 210 + ip netns exec "$ns" sysctl -q net.netfilter.nf_conntrack_icmp_timeout=30 211 + ip netns exec "$ns" sysctl -q net.netfilter.nf_conntrack_udp_timeout=30 211 212 } 212 213 213 214 check_taint() ··· 281 198 282 199 r=$((RANDOM%$insert_count)) 283 200 284 - ctflood "$n" "$timeout" "floodresize" & 201 + ct_pingflood "$n" "$timeout" "floodresize" & 202 + ct_udpflood "$n" "$timeout" & 203 + ct_udpclash "$n" "$timeout" & 204 + 285 205 insert_ctnetlink "$n" "$r" & 286 206 ctflush "$n" "$timeout" & 287 - ct_nulldump "$n" & 207 + ct_nulldump_loop "$n" "$timeout" & 208 + ct_change_timeouts_loop "$n" "$timeout" & 288 209 289 210 wait 290 211 } ··· 393 306 394 307 ip netns exec "$nsclient1" sysctl -q net.netfilter.nf_conntrack_icmp_timeout=3600 395 308 396 - ctflood "$nsclient1" $timeout "dumpall" & 309 + ct_pingflood "$nsclient1" $timeout "dumpall" & 397 310 insert_ctnetlink "$nsclient2" $insert_count 398 311 399 312 wait ··· 455 368 ct_flush_once "$nsclient1" 456 369 ct_flush_once "$nsclient2" 457 370 458 
- ctflood "$nsclient1" "$timeout" "conntrack disable" 371 + ct_pingflood "$nsclient1" "$timeout" "conntrack disable" 459 372 ip netns exec "$nsclient2" ping -q -c 1 127.0.0.1 >/dev/null 2>&1 460 373 461 374 # Disabled, should not have picked up any connection.
+3
tools/testing/selftests/net/netfilter/nft_concat_range.sh
··· 1311 1311 # - remove some elements, check that packets don't match anymore 1312 1312 test_correctness_main() { 1313 1313 range_size=1 1314 + 1315 + send_nomatch $((end + 1)) $((end + 1 + src_delta)) || return 1 1316 + 1314 1317 for i in $(seq "${start}" $((start + count))); do 1315 1318 local elem="" 1316 1319
+158
tools/testing/selftests/net/netfilter/udpclash.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + /* Usage: ./udpclash <IP> <PORT> 4 + * 5 + * Emit THREAD_COUNT UDP packets sharing the same saddr:daddr pair. 6 + * 7 + * This mimics DNS resolver libraries that emit A and AAAA requests 8 + * in parallel. 9 + * 10 + * This exercises conntrack clash resolution logic added and later 11 + * refined in 12 + * 13 + * 71d8c47fc653 ("netfilter: conntrack: introduce clash resolution on insertion race") 14 + * ed07d9a021df ("netfilter: nf_conntrack: resolve clash for matching conntracks") 15 + * 6a757c07e51f ("netfilter: conntrack: allow insertion of clashing entries") 16 + */ 17 + #include <stdio.h> 18 + #include <string.h> 19 + #include <stdlib.h> 20 + #include <unistd.h> 21 + #include <arpa/inet.h> 22 + #include <sys/socket.h> 23 + #include <pthread.h> 24 + 25 + #define THREAD_COUNT 128 26 + 27 + struct thread_args { 28 + const struct sockaddr_in *si_remote; 29 + int sockfd; 30 + }; 31 + 32 + static int wait = 1; 33 + 34 + static void *thread_main(void *varg) 35 + { 36 + const struct sockaddr_in *si_remote; 37 + const struct thread_args *args = varg; 38 + static const char msg[] = "foo"; 39 + 40 + si_remote = args->si_remote; 41 + 42 + while (wait == 1) 43 + ; 44 + 45 + if (sendto(args->sockfd, msg, strlen(msg), MSG_NOSIGNAL, 46 + (struct sockaddr *)si_remote, sizeof(*si_remote)) < 0) 47 + exit(111); 48 + 49 + return varg; 50 + } 51 + 52 + static int run_test(int fd, const struct sockaddr_in *si_remote) 53 + { 54 + struct thread_args thread_args = { 55 + .si_remote = si_remote, 56 + .sockfd = fd, 57 + }; 58 + pthread_t *tid = calloc(THREAD_COUNT, sizeof(pthread_t)); 59 + unsigned int repl_count = 0, timeout = 0; 60 + int i; 61 + 62 + if (!tid) { 63 + perror("calloc"); 64 + return 1; 65 + } 66 + 67 + for (i = 0; i < THREAD_COUNT; i++) { 68 + int err = pthread_create(&tid[i], NULL, &thread_main, &thread_args); 69 + 70 + if (err != 0) { 71 + perror("pthread_create"); 72 + exit(1); 73 + } 74 + } 75 + 76 + wait = 0; 77 + 78 
+ for (i = 0; i < THREAD_COUNT; i++) 79 + pthread_join(tid[i], NULL); 80 + 81 + while (repl_count < THREAD_COUNT) { 82 + struct sockaddr_in si_repl; 83 + socklen_t si_repl_len = sizeof(si_repl); 84 + char repl[512]; 85 + ssize_t ret; 86 + 87 + ret = recvfrom(fd, repl, sizeof(repl), MSG_NOSIGNAL, 88 + (struct sockaddr *) &si_repl, &si_repl_len); 89 + if (ret < 0) { 90 + if (timeout++ > 5000) { 91 + fputs("timed out while waiting for reply from thread\n", stderr); 92 + break; 93 + } 94 + 95 + /* give reply time to pass through the stack */ 96 + usleep(1000); 97 + continue; 98 + } 99 + 100 + if (si_repl_len != sizeof(*si_remote)) { 101 + fprintf(stderr, "warning: reply has unexpected repl_len %d vs %d\n", 102 + (int)si_repl_len, (int)sizeof(si_repl)); 103 + } else if (si_remote->sin_addr.s_addr != si_repl.sin_addr.s_addr || 104 + si_remote->sin_port != si_repl.sin_port) { 105 + char a[64], b[64]; 106 + 107 + inet_ntop(AF_INET, &si_remote->sin_addr, a, sizeof(a)); 108 + inet_ntop(AF_INET, &si_repl.sin_addr, b, sizeof(b)); 109 + 110 + fprintf(stderr, "reply from wrong source: want %s:%d got %s:%d\n", 111 + a, ntohs(si_remote->sin_port), b, ntohs(si_repl.sin_port)); 112 + } 113 + 114 + repl_count++; 115 + } 116 + 117 + printf("got %d of %d replies\n", repl_count, THREAD_COUNT); 118 + 119 + free(tid); 120 + 121 + return repl_count == THREAD_COUNT ? 
0 : 1; 122 + } 123 + 124 + int main(int argc, char *argv[]) 125 + { 126 + struct sockaddr_in si_local = { 127 + .sin_family = AF_INET, 128 + }; 129 + struct sockaddr_in si_remote = { 130 + .sin_family = AF_INET, 131 + }; 132 + int fd, ret; 133 + 134 + if (argc < 3) { 135 + fputs("Usage: udpclash <daddr> <dport>\n", stderr); 136 + return 1; 137 + } 138 + 139 + si_remote.sin_port = htons(atoi(argv[2])); 140 + si_remote.sin_addr.s_addr = inet_addr(argv[1]); 141 + 142 + fd = socket(AF_INET, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, IPPROTO_UDP); 143 + if (fd < 0) { 144 + perror("socket"); 145 + return 1; 146 + } 147 + 148 + if (bind(fd, (struct sockaddr *)&si_local, sizeof(si_local)) < 0) { 149 + perror("bind"); 150 + return 1; 151 + } 152 + 153 + ret = run_test(fd, &si_remote); 154 + 155 + close(fd); 156 + 157 + return ret; 158 + }
+4 -4
tools/testing/selftests/net/udpgro.sh
··· 48 48 49 49 cfg_veth 50 50 51 - ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${rx_args} & 51 + ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 100 ${rx_args} & 52 52 local PID1=$! 53 53 54 54 wait_local_port_listen ${PEER_NS} 8000 udp ··· 95 95 # will land on the 'plain' one 96 96 ip netns exec "${PEER_NS}" ./udpgso_bench_rx -G ${family} -b ${addr1} -n 0 & 97 97 local PID1=$! 98 - ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${family} -b ${addr2%/*} ${rx_args} & 98 + ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 100 ${family} -b ${addr2%/*} ${rx_args} & 99 99 local PID2=$! 100 100 101 101 wait_local_port_listen "${PEER_NS}" 8000 udp ··· 117 117 118 118 cfg_veth 119 119 120 - ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${rx_args} -p 12345 & 120 + ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 100 ${rx_args} -p 12345 & 121 121 local PID1=$! 122 - ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 2000 -R 10 ${rx_args} & 122 + ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 2000 -R 100 ${rx_args} & 123 123 local PID2=$! 124 124 125 125 wait_local_port_listen "${PEER_NS}" 12345 udp
+86 -12
tools/testing/selftests/net/vlan_hw_filter.sh
··· 3 3 4 4 readonly NETNS="ns-$(mktemp -u XXXXXX)" 5 5 6 + ALL_TESTS=" 7 + test_vlan_filter_check 8 + test_vlan0_del_crash_01 9 + test_vlan0_del_crash_02 10 + test_vlan0_del_crash_03 11 + test_vid0_memleak 12 + " 13 + 6 14 ret=0 7 15 16 + setup() { 17 + ip netns add ${NETNS} 18 + } 19 + 8 20 cleanup() { 9 - ip netns del $NETNS 21 + ip netns del $NETNS 2>/dev/null 10 22 } 11 23 12 24 trap cleanup EXIT 13 25 14 26 fail() { 15 - echo "ERROR: ${1:-unexpected return code} (ret: $_)" >&2 16 - ret=1 27 + echo "ERROR: ${1:-unexpected return code} (ret: $_)" >&2 28 + ret=1 17 29 } 18 30 19 - ip netns add ${NETNS} 20 - ip netns exec ${NETNS} ip link add bond0 type bond mode 0 21 - ip netns exec ${NETNS} ip link add bond_slave_1 type veth peer veth2 22 - ip netns exec ${NETNS} ip link set bond_slave_1 master bond0 23 - ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter off 24 - ip netns exec ${NETNS} ip link add link bond_slave_1 name bond_slave_1.0 type vlan id 0 25 - ip netns exec ${NETNS} ip link add link bond0 name bond0.0 type vlan id 0 26 - ip netns exec ${NETNS} ip link set bond_slave_1 nomaster 27 - ip netns exec ${NETNS} ip link del veth2 || fail "Please check vlan HW filter function" 31 + tests_run() 32 + { 33 + local current_test 34 + for current_test in ${TESTS:-$ALL_TESTS}; do 35 + $current_test 36 + done 37 + } 28 38 39 + test_vlan_filter_check() { 40 + setup 41 + ip netns exec ${NETNS} ip link add bond0 type bond mode 0 42 + ip netns exec ${NETNS} ip link add bond_slave_1 type veth peer veth2 43 + ip netns exec ${NETNS} ip link set bond_slave_1 master bond0 44 + ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter off 45 + ip netns exec ${NETNS} ip link add link bond_slave_1 name bond_slave_1.0 type vlan id 0 46 + ip netns exec ${NETNS} ip link add link bond0 name bond0.0 type vlan id 0 47 + ip netns exec ${NETNS} ip link set bond_slave_1 nomaster 48 + ip netns exec ${NETNS} ip link del veth2 || fail "Please check vlan HW filter function" 49 + cleanup 50 
+ } 51 + 52 + # toggle the vlan_filter feature of real_dev at runtime while vlan0 exists 53 + test_vlan0_del_crash_01() { 54 + setup 55 + ip netns exec ${NETNS} ip link add bond0 type bond mode 0 56 + ip netns exec ${NETNS} ip link add link bond0 name vlan0 type vlan id 0 protocol 802.1q 57 + ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter off 58 + ip netns exec ${NETNS} ifconfig bond0 up 59 + ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter on 60 + ip netns exec ${NETNS} ifconfig bond0 down 61 + ip netns exec ${NETNS} ifconfig bond0 up 62 + ip netns exec ${NETNS} ip link del vlan0 || fail "Please check vlan HW filter function" 63 + cleanup 64 + } 65 + 66 + # enable the vlan_filter feature, then add vlan0 to real_dev at runtime 67 + test_vlan0_del_crash_02() { 68 + setup 69 + ip netns exec ${NETNS} ip link add bond0 type bond mode 0 70 + ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter off 71 + ip netns exec ${NETNS} ifconfig bond0 up 72 + ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter on 73 + ip netns exec ${NETNS} ip link add link bond0 name vlan0 type vlan id 0 protocol 802.1q 74 + ip netns exec ${NETNS} ifconfig bond0 down 75 + ip netns exec ${NETNS} ifconfig bond0 up 76 + ip netns exec ${NETNS} ip link del vlan0 || fail "Please check vlan HW filter function" 77 + cleanup 78 + } 79 + 80 + # toggle the vlan_filter feature of real_dev at runtime 81 + # exercise the kernel BUG path in vlan unregister 82 + test_vlan0_del_crash_03() { 83 + setup 84 + ip netns exec ${NETNS} ip link add bond0 type bond mode 0 85 + ip netns exec ${NETNS} ip link add link bond0 name vlan0 type vlan id 0 protocol 802.1q 86 + ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter off 87 + ip netns exec ${NETNS} ifconfig bond0 up 88 + ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter on 89 + ip netns exec ${NETNS} ifconfig bond0 down 90 + ip netns exec ${NETNS} ip link del vlan0 || fail "Please check vlan HW filter function" 91 + cleanup 92 + } 93 + 94 + 
test_vid0_memleak() { 95 + setup 96 + ip netns exec ${NETNS} ip link add bond0 up type bond mode 0 97 + ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter off 98 + ip netns exec ${NETNS} ip link del dev bond0 || fail "Please check vlan HW filter function" 99 + cleanup 100 + } 101 + 102 + tests_run 29 103 exit $ret
+92
tools/testing/selftests/tc-testing/tc-tests/infra/qdiscs.json
··· 128 128 ] 129 129 }, 130 130 { 131 + "id": "5456", 132 + "name": "Test htb_dequeue_tree with deactivation and row emptying", 133 + "category": [ 134 + "qdisc", 135 + "htb" 136 + ], 137 + "plugins": { 138 + "requires": "nsPlugin" 139 + }, 140 + "setup": [ 141 + "$IP link set dev $DUMMY up || true", 142 + "$IP addr add 10.10.11.10/24 dev $DUMMY || true", 143 + "$TC qdisc add dev $DUMMY root handle 1: htb default 1", 144 + "$TC class add dev $DUMMY parent 1: classid 1:1 htb rate 64bit ", 145 + "$TC qdisc add dev $DUMMY parent 1:1 handle 2: netem", 146 + "$TC qdisc add dev $DUMMY parent 2:1 handle 3: blackhole" 147 + ], 148 + "cmdUnderTest": "ping -c1 -W0.01 -I $DUMMY 10.10.11.11", 149 + "expExitCode": "1", 150 + "verifyCmd": "$TC -j qdisc show dev $DUMMY", 151 + "matchJSON": [], 152 + "teardown": [ 153 + "$TC qdisc del dev $DUMMY root" 154 + ] 155 + }, 156 + { 131 157 "id": "c024", 132 158 "name": "Test TBF with SKBPRIO - catch qlen corner cases", 133 159 "category": [ ··· 697 671 "matchJSON": [], 698 672 "teardown": [ 699 673 "$TC qdisc del dev $DUMMY root handle 1: drr" 674 + ] 675 + }, 676 + { 677 + "id": "be28", 678 + "name": "Try to add fq_codel qdisc as a child of an hhf qdisc", 679 + "category": [ 680 + "qdisc", 681 + "fq_codel", 682 + "hhf" 683 + ], 684 + "plugins": { 685 + "requires": "nsPlugin" 686 + }, 687 + "setup": [ 688 + "$TC qdisc add dev $DUMMY root handle a: hhf" 689 + ], 690 + "cmdUnderTest": "$TC qdisc add dev $DUMMY parent a: handle b: fq_codel", 691 + "expExitCode": "2", 692 + "verifyCmd": "$TC -j qdisc ls dev $DUMMY handle b:", 693 + "matchJSON": [], 694 + "teardown": [ 695 + "$TC qdisc del dev $DUMMY root" 696 + ] 697 + }, 698 + { 699 + "id": "fcb5", 700 + "name": "Try to add pie qdisc as a child of a drr qdisc", 701 + "category": [ 702 + "qdisc", 703 + "pie", 704 + "drr" 705 + ], 706 + "plugins": { 707 + "requires": "nsPlugin" 708 + }, 709 + "setup": [ 710 + "$TC qdisc add dev $DUMMY root handle a: drr" 711 + ], 712 + "cmdUnderTest": "$TC 
qdisc add dev $DUMMY parent a: handle b: pie", 713 + "expExitCode": "2", 714 + "verifyCmd": "$TC -j qdisc ls dev $DUMMY handle b:", 715 + "matchJSON": [], 716 + "teardown": [ 717 + "$TC qdisc del dev $DUMMY root" 718 + ] 719 + }, 720 + { 721 + "id": "7801", 722 + "name": "Try to add sfq qdisc as a child of a nonexistent hfsc class", 723 + "category": [ 724 + "qdisc", 725 + "sfq", 726 + "hfsc" 727 + ], 728 + "plugins": { 729 + "requires": "nsPlugin" 730 + }, 731 + "setup": [ 732 + "$TC qdisc add dev $DUMMY root handle a: hfsc" 733 + ], 734 + "cmdUnderTest": "$TC qdisc add dev $DUMMY parent a:fff2 sfq limit 4", 735 + "expExitCode": "2", 736 + "verifyCmd": "$TC -j qdisc ls dev $DUMMY handle b:", 737 + "matchJSON": [], 738 + "teardown": [ 739 + "$TC qdisc del dev $DUMMY root" 700 740 ] 701 741 ] 702 742 ]