Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:
"Lots of fixes, here goes:

1) NULL deref in qtnfmac, from Gustavo A. R. Silva.

2) Kernel oops when fw download fails in rtlwifi, from Ping-Ke Shih.

3) Lost completion messages in AF_XDP, from Magnus Karlsson.

4) Correct bogus self-assignment in rhashtable, from Rishabh
Bhatnagar.

5) Fix regression in ipv6 route append handling, from David Ahern.

6) Fix masking in __set_phy_supported(), from Heiner Kallweit.

7) Missing module owner set in x_tables icmp, from Florian Westphal.

8) liquidio's timeouts are HZ dependent, fix from Nicholas Mc Guire.

9) Link setting fixes for sh_eth and ravb, from Vladimir Zapolskiy.

10) Fix NULL deref when using chains in act_csum, from Davide Caratti.

11) XDP_REDIRECT needs to check if the interface is up and whether the
MTU is sufficient. From Toshiaki Makita.

12) Net diag can do a double free when killing TCP_NEW_SYN_RECV
connections, from Lorenzo Colitti.

13) nf_defrag in ipv6 can unnecessarily hold onto dst entries for a
full minute, delaying device unregister. From Eric Dumazet.

14) Update MAC entries in the correct order in ixgbe, from Alexander
Duyck.

15) Don't leave a partially-mangled BPF program behind in jit_subprogs,
from Daniel Borkmann.

16) Fix pfmemalloc SKB state propagation, from Stefano Brivio.

17) Fix ACK handling in DCTCP congestion control, from Yuchung Cheng.

18) Use after free in tun XDP_TX, from Toshiaki Makita.

19) Stale ipv6 header pointer in ipv6 gre code, from Prashant Bhole.

20) Don't reuse remainder of RX page when XDP is set in mlx4, from
Saeed Mahameed.

21) Fix window probe handling of TCP repair sockets, from Stefan
Baranoff.

22) Missing socket locking in smc_ioctl(), from Ursula Braun.

23) IPV6_ILA needs DST_CACHE, from Arnd Bergmann.

24) Spectre v1 fix in cxgb3, from Gustavo A. R. Silva.

25) Two spots in ipv6 do a rol32() on a hash value but ignore the
result. Fixes from Colin Ian King"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (176 commits)
tcp: identify cryptic messages as TCP seq # bugs
ptp: fix missing break in switch
hv_netvsc: Fix napi reschedule while receive completion is busy
MAINTAINERS: Drop inactive Vitaly Bordug's email
net: cavium: Add fine-granular dependencies on PCI
net: qca_spi: Fix log level if probe fails
net: qca_spi: Make sure the QCA7000 reset is triggered
net: qca_spi: Avoid packet drop during initial sync
ipv6: fix useless rol32 call on hash
ipv6: sr: fix useless rol32 call on hash
net: sched: Using NULL instead of plain integer
net: usb: asix: replace mii_nway_restart in resume path
net: cxgb3_main: fix potential Spectre v1
lib/rhashtable: consider param->min_size when setting initial table size
net/smc: reset recv timeout after clc handshake
net/smc: add error handling for get_user()
net/smc: optimize consumer cursor updates
net/nfc: Avoid stalls when nfc_alloc_send_skb() returned NULL.
ipv6: ila: select CONFIG_DST_CACHE
net: usb: rtl8150: demote allmulti message to dev_dbg()
...

+1746 -1135
+1 -1
Documentation/networking/bonding.txt
··· 1490 1490 1491 1491 To configure the interval between learning packet transmits: 1492 1492 # echo 12 > /sys/class/net/bond0/bonding/lp_interval 1493 - NOTE: the lp_inteval is the number of seconds between instances where 1493 + NOTE: the lp_interval is the number of seconds between instances where 1494 1494 the bonding driver sends learning packets to each slaves peer switch. The 1495 1495 default interval is 1 second. 1496 1496
+17 -10
Documentation/networking/e100.rst
··· 47 47 The default value for each parameter is generally the recommended setting, 48 48 unless otherwise noted. 49 49 50 - Rx Descriptors: Number of receive descriptors. A receive descriptor is a data 50 + Rx Descriptors: 51 + Number of receive descriptors. A receive descriptor is a data 51 52 structure that describes a receive buffer and its attributes to the network 52 53 controller. The data in the descriptor is used by the controller to write 53 54 data from the controller to host memory. In the 3.x.x driver the valid range 54 55 for this parameter is 64-256. The default value is 256. This parameter can be 55 56 changed using the command:: 56 57 57 - ethtool -G eth? rx n 58 + ethtool -G eth? rx n 58 59 59 60 Where n is the number of desired Rx descriptors. 60 61 61 - Tx Descriptors: Number of transmit descriptors. A transmit descriptor is a data 62 + Tx Descriptors: 63 + Number of transmit descriptors. A transmit descriptor is a data 62 64 structure that describes a transmit buffer and its attributes to the network 63 65 controller. The data in the descriptor is used by the controller to read 64 66 data from the host memory to the controller. In the 3.x.x driver the valid 65 67 range for this parameter is 64-256. The default value is 128. This parameter 66 68 can be changed using the command:: 67 69 68 - ethtool -G eth? tx n 70 + ethtool -G eth? tx n 69 71 70 72 Where n is the number of desired Tx descriptors. 71 73 72 - Speed/Duplex: The driver auto-negotiates the link speed and duplex settings by 74 + Speed/Duplex: 75 + The driver auto-negotiates the link speed and duplex settings by 73 76 default. The ethtool utility can be used as follows to force speed/duplex.:: 74 77 75 - ethtool -s eth? autoneg off speed {10|100} duplex {full|half} 78 + ethtool -s eth? autoneg off speed {10|100} duplex {full|half} 76 79 77 80 NOTE: setting the speed/duplex to incorrect values will cause the link to 78 81 fail. 
79 82 80 - Event Log Message Level: The driver uses the message level flag to log events 83 + Event Log Message Level: 84 + The driver uses the message level flag to log events 81 85 to syslog. The message level can be set at driver load time. It can also be 82 86 set using the command:: 83 87 84 - ethtool -s eth? msglvl n 88 + ethtool -s eth? msglvl n 85 89 86 90 87 91 Additional Configurations ··· 96 92 97 93 Configuring a network driver to load properly when the system is started 98 94 is distribution dependent. Typically, the configuration process involves 99 - adding an alias line to /etc/modprobe.d/*.conf as well as editing other 95 + adding an alias line to `/etc/modprobe.d/*.conf` as well as editing other 100 96 system startup scripts and/or configuration files. Many popular Linux 101 97 distributions ship with tools to make these changes for you. To learn 102 98 the proper way to configure a network device for your system, refer to ··· 164 160 If you have multiple interfaces in a server, either turn on ARP 165 161 filtering by 166 162 167 - (1) entering:: echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter 163 + (1) entering:: 164 + 165 + echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter 166 + 168 167 (this only works if your kernel's version is higher than 2.4.5), or 169 168 170 169 (2) installing the interfaces in separate broadcast domains (either
+112 -75
Documentation/networking/e1000.rst
··· 34 34 The default value for each parameter is generally the recommended setting, 35 35 unless otherwise noted. 36 36 37 - NOTES: For more information about the AutoNeg, Duplex, and Speed 37 + NOTES: 38 + For more information about the AutoNeg, Duplex, and Speed 38 39 parameters, see the "Speed and Duplex Configuration" section in 39 40 this document. 40 41 ··· 46 45 47 46 AutoNeg 48 47 ------- 48 + 49 49 (Supported only on adapters with copper connections) 50 - Valid Range: 0x01-0x0F, 0x20-0x2F 51 - Default Value: 0x2F 50 + 51 + :Valid Range: 0x01-0x0F, 0x20-0x2F 52 + :Default Value: 0x2F 52 53 53 54 This parameter is a bit-mask that specifies the speed and duplex settings 54 55 advertised by the adapter. When this parameter is used, the Speed and 55 56 Duplex parameters must not be specified. 56 57 57 - NOTE: Refer to the Speed and Duplex section of this readme for more 58 + NOTE: 59 + Refer to the Speed and Duplex section of this readme for more 58 60 information on the AutoNeg parameter. 59 61 60 62 Duplex 61 63 ------ 64 + 62 65 (Supported only on adapters with copper connections) 63 - Valid Range: 0-2 (0=auto-negotiate, 1=half, 2=full) 64 - Default Value: 0 66 + 67 + :Valid Range: 0-2 (0=auto-negotiate, 1=half, 2=full) 68 + :Default Value: 0 65 69 66 70 This defines the direction in which data is allowed to flow. Can be 67 71 either one or two-directional. If both Duplex and the link partner are ··· 76 70 77 71 FlowControl 78 72 ----------- 79 - Valid Range: 0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx) 80 - Default Value: Reads flow control settings from the EEPROM 73 + 74 + :Valid Range: 0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx) 75 + :Default Value: Reads flow control settings from the EEPROM 81 76 82 77 This parameter controls the automatic generation(Tx) and response(Rx) 83 78 to Ethernet PAUSE frames. 
84 79 85 80 InterruptThrottleRate 86 81 --------------------- 82 + 87 83 (not supported on Intel(R) 82542, 82543 or 82544-based adapters) 88 - Valid Range: 0,1,3,4,100-100000 (0=off, 1=dynamic, 3=dynamic conservative, 89 - 4=simplified balancing) 90 - Default Value: 3 84 + 85 + :Valid Range: 86 + 0,1,3,4,100-100000 (0=off, 1=dynamic, 3=dynamic conservative, 87 + 4=simplified balancing) 88 + :Default Value: 3 91 89 92 90 The driver can limit the amount of interrupts per second that the adapter 93 91 will generate for incoming packets. It does this by writing a value to the ··· 145 135 and may improve small packet latency, but is generally not suitable 146 136 for bulk throughput traffic. 147 137 148 - NOTE: InterruptThrottleRate takes precedence over the TxAbsIntDelay and 138 + NOTE: 139 + InterruptThrottleRate takes precedence over the TxAbsIntDelay and 149 140 RxAbsIntDelay parameters. In other words, minimizing the receive 150 141 and/or transmit absolute delays does not force the controller to 151 142 generate more interrupts than what the Interrupt Throttle Rate 152 143 allows. 153 144 154 - CAUTION: If you are using the Intel(R) PRO/1000 CT Network Connection 145 + CAUTION: 146 + If you are using the Intel(R) PRO/1000 CT Network Connection 155 147 (controller 82547), setting InterruptThrottleRate to a value 156 148 greater than 75,000, may hang (stop transmitting) adapters 157 149 under certain network conditions. If this occurs a NETDEV ··· 163 151 hang, ensure that InterruptThrottleRate is set no greater 164 152 than 75,000 and is not set to 0. 165 153 166 - NOTE: When e1000 is loaded with default settings and multiple adapters 154 + NOTE: 155 + When e1000 is loaded with default settings and multiple adapters 167 156 are in use simultaneously, the CPU utilization may increase non- 168 157 linearly. 
In order to limit the CPU utilization without impacting 169 158 the overall throughput, we recommend that you load the driver as ··· 181 168 182 169 RxDescriptors 183 170 ------------- 184 - Valid Range: 48-256 for 82542 and 82543-based adapters 185 - 48-4096 for all other supported adapters 186 - Default Value: 256 171 + 172 + :Valid Range: 173 + - 48-256 for 82542 and 82543-based adapters 174 + - 48-4096 for all other supported adapters 175 + :Default Value: 256 187 176 188 177 This value specifies the number of receive buffer descriptors allocated 189 178 by the driver. Increasing this value allows the driver to buffer more ··· 195 180 descriptor and can be either 2048, 4096, 8192, or 16384 bytes, depending 196 181 on the MTU setting. The maximum MTU size is 16110. 197 182 198 - NOTE: MTU designates the frame size. It only needs to be set for Jumbo 183 + NOTE: 184 + MTU designates the frame size. It only needs to be set for Jumbo 199 185 Frames. Depending on the available system resources, the request 200 186 for a higher number of receive descriptors may be denied. In this 201 187 case, use a lower number. 202 188 203 189 RxIntDelay 204 190 ---------- 205 - Valid Range: 0-65535 (0=off) 206 - Default Value: 0 191 + 192 + :Valid Range: 0-65535 (0=off) 193 + :Default Value: 0 207 194 208 195 This value delays the generation of receive interrupts in units of 1.024 209 196 microseconds. Receive interrupt reduction can improve CPU efficiency if ··· 215 198 may be set too high, causing the driver to run out of available receive 216 199 descriptors. 217 200 218 - CAUTION: When setting RxIntDelay to a value other than 0, adapters may 201 + CAUTION: 202 + When setting RxIntDelay to a value other than 0, adapters may 219 203 hang (stop transmitting) under certain network conditions. If 220 204 this occurs a NETDEV WATCHDOG message is logged in the system 221 205 event log. 
In addition, the controller is automatically reset, ··· 225 207 226 208 RxAbsIntDelay 227 209 ------------- 210 + 228 211 (This parameter is supported only on 82540, 82545 and later adapters.) 229 - Valid Range: 0-65535 (0=off) 230 - Default Value: 128 212 + 213 + :Valid Range: 0-65535 (0=off) 214 + :Default Value: 128 231 215 232 216 This value, in units of 1.024 microseconds, limits the delay in which a 233 217 receive interrupt is generated. Useful only if RxIntDelay is non-zero, ··· 240 220 241 221 Speed 242 222 ----- 223 + 243 224 (This parameter is supported only on adapters with copper connections.) 244 - Valid Settings: 0, 10, 100, 1000 245 - Default Value: 0 (auto-negotiate at all supported speeds) 225 + 226 + :Valid Settings: 0, 10, 100, 1000 227 + :Default Value: 0 (auto-negotiate at all supported speeds) 246 228 247 229 Speed forces the line speed to the specified value in megabits per second 248 230 (Mbps). If this parameter is not specified or is set to 0 and the link ··· 253 231 254 232 TxDescriptors 255 233 ------------- 256 - Valid Range: 48-256 for 82542 and 82543-based adapters 257 - 48-4096 for all other supported adapters 258 - Default Value: 256 234 + 235 + :Valid Range: 236 + - 48-256 for 82542 and 82543-based adapters 237 + - 48-4096 for all other supported adapters 238 + :Default Value: 256 259 239 260 240 This value is the number of transmit descriptors allocated by the driver. 261 241 Increasing this value allows the driver to queue more transmits. Each 262 242 descriptor is 16 bytes. 263 243 264 - NOTE: Depending on the available system resources, the request for a 244 + NOTE: 245 + Depending on the available system resources, the request for a 265 246 higher number of transmit descriptors may be denied. In this case, 266 247 use a lower number. 
267 248 268 249 TxIntDelay 269 250 ---------- 270 - Valid Range: 0-65535 (0=off) 271 - Default Value: 8 251 + 252 + :Valid Range: 0-65535 (0=off) 253 + :Default Value: 8 272 254 273 255 This value delays the generation of transmit interrupts in units of 274 256 1.024 microseconds. Transmit interrupt reduction can improve CPU ··· 282 256 283 257 TxAbsIntDelay 284 258 ------------- 259 + 285 260 (This parameter is supported only on 82540, 82545 and later adapters.) 286 - Valid Range: 0-65535 (0=off) 287 - Default Value: 32 261 + 262 + :Valid Range: 0-65535 (0=off) 263 + :Default Value: 32 288 264 289 265 This value, in units of 1.024 microseconds, limits the delay in which a 290 266 transmit interrupt is generated. Useful only if TxIntDelay is non-zero, ··· 297 269 298 270 XsumRX 299 271 ------ 272 + 300 273 (This parameter is NOT supported on the 82542-based adapter.) 301 - Valid Range: 0-1 302 - Default Value: 1 274 + 275 + :Valid Range: 0-1 276 + :Default Value: 1 303 277 304 278 A value of '1' indicates that the driver should enable IP checksum 305 279 offload for received packets (both UDP and TCP) to the adapter hardware. 306 280 307 281 Copybreak 308 282 --------- 309 - Valid Range: 0-xxxxxxx (0=off) 310 - Default Value: 256 311 - Usage: modprobe e1000.ko copybreak=128 283 + 284 + :Valid Range: 0-xxxxxxx (0=off) 285 + :Default Value: 256 286 + :Usage: modprobe e1000.ko copybreak=128 312 287 313 288 Driver copies all packets below or equaling this size to a fresh RX 314 289 buffer before handing it up the stack. ··· 323 292 324 293 SmartPowerDownEnable 325 294 -------------------- 326 - Valid Range: 0-1 327 - Default Value: 0 (disabled) 295 + 296 + :Valid Range: 0-1 297 + :Default Value: 0 (disabled) 328 298 329 299 Allows PHY to turn off in lower power states. The user can turn off 330 300 this parameter in supported chipsets. ··· 341 309 342 310 For copper-based boards, the keywords interact as follows: 343 311 344 - The default operation is auto-negotiate. 
The board advertises all 312 + - The default operation is auto-negotiate. The board advertises all 345 313 supported speed and duplex combinations, and it links at the highest 346 314 common speed and duplex mode IF the link partner is set to auto-negotiate. 347 315 348 - If Speed = 1000, limited auto-negotiation is enabled and only 1000 Mbps 316 + - If Speed = 1000, limited auto-negotiation is enabled and only 1000 Mbps 349 317 is advertised (The 1000BaseT spec requires auto-negotiation.) 350 318 351 - If Speed = 10 or 100, then both Speed and Duplex should be set. Auto- 319 + - If Speed = 10 or 100, then both Speed and Duplex should be set. Auto- 352 320 negotiation is disabled, and the AutoNeg parameter is ignored. Partner 353 321 SHOULD also be forced. 354 322 ··· 360 328 The parameter may be specified as either a decimal or hexadecimal value as 361 329 determined by the bitmap below. 362 330 331 + ============== ====== ====== ======= ======= ====== ====== ======= ====== 363 332 Bit position 7 6 5 4 3 2 1 0 364 333 Decimal Value 128 64 32 16 8 4 2 1 365 334 Hex value 80 40 20 10 8 4 2 1 366 335 Speed (Mbps) N/A N/A 1000 N/A 100 100 10 10 367 336 Duplex Full Full Half Full Half 337 + ============== ====== ====== ======= ======= ====== ====== ======= ====== 368 338 369 - Some examples of using AutoNeg: 339 + Some examples of using AutoNeg:: 370 340 371 341 modprobe e1000 AutoNeg=0x01 (Restricts autonegotiation to 10 Half) 372 342 modprobe e1000 AutoNeg=1 (Same as above) ··· 391 357 392 358 Jumbo Frames 393 359 ------------ 394 - Jumbo Frames support is enabled by changing the MTU to a value larger 395 - than the default of 1500. Use the ifconfig command to increase the MTU 396 - size. For example:: 360 + 361 + Jumbo Frames support is enabled by changing the MTU to a value larger than 362 + the default of 1500. Use the ifconfig command to increase the MTU size. 
363 + For example:: 397 364 398 365 ifconfig eth<x> mtu 9000 up 399 366 400 - This setting is not saved across reboots. It can be made permanent if 401 - you add:: 367 + This setting is not saved across reboots. It can be made permanent if 368 + you add:: 402 369 403 370 MTU=9000 404 371 405 - to the file /etc/sysconfig/network-scripts/ifcfg-eth<x>. This example 406 - applies to the Red Hat distributions; other distributions may store this 407 - setting in a different location. 372 + to the file /etc/sysconfig/network-scripts/ifcfg-eth<x>. This example 373 + applies to the Red Hat distributions; other distributions may store this 374 + setting in a different location. 408 375 409 - Notes: Degradation in throughput performance may be observed in some 410 - Jumbo frames environments. If this is observed, increasing the 411 - application's socket buffer size and/or increasing the 412 - /proc/sys/net/ipv4/tcp_*mem entry values may help. See the specific 413 - application manual and /usr/src/linux*/Documentation/ 414 - networking/ip-sysctl.txt for more details. 376 + Notes: 377 + Degradation in throughput performance may be observed in some Jumbo frames 378 + environments. If this is observed, increasing the application's socket buffer 379 + size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help. 380 + See the specific application manual and /usr/src/linux*/Documentation/ 381 + networking/ip-sysctl.txt for more details. 415 382 416 - - The maximum MTU setting for Jumbo Frames is 16110. This value 417 - coincides with the maximum Jumbo Frames size of 16128. 383 + - The maximum MTU setting for Jumbo Frames is 16110. This value coincides 384 + with the maximum Jumbo Frames size of 16128. 418 385 419 - - Using Jumbo frames at 10 or 100 Mbps is not supported and may result 420 - in poor performance or loss of link. 386 + - Using Jumbo frames at 10 or 100 Mbps is not supported and may result in 387 + poor performance or loss of link. 
421 388 422 - - Adapters based on the Intel(R) 82542 and 82573V/E controller do not 423 - support Jumbo Frames. These correspond to the following product names: 424 - Intel(R) PRO/1000 Gigabit Server Adapter Intel(R) PRO/1000 PM Network 425 - Connection 389 + - Adapters based on the Intel(R) 82542 and 82573V/E controller do not 390 + support Jumbo Frames. These correspond to the following product names:: 391 + 392 + Intel(R) PRO/1000 Gigabit Server Adapter 393 + Intel(R) PRO/1000 PM Network Connection 426 394 427 395 ethtool 428 396 ------- 429 - The driver utilizes the ethtool interface for driver configuration and 430 - diagnostics, as well as displaying statistical information. The ethtool 431 - version 1.6 or later is required for this functionality. 432 397 433 - The latest release of ethtool can be found from 434 - https://www.kernel.org/pub/software/network/ethtool/ 398 + The driver utilizes the ethtool interface for driver configuration and 399 + diagnostics, as well as displaying statistical information. The ethtool 400 + version 1.6 or later is required for this functionality. 401 + 402 + The latest release of ethtool can be found from 403 + https://www.kernel.org/pub/software/network/ethtool/ 435 404 436 405 Enabling Wake on LAN* (WoL) 437 406 --------------------------- 438 - WoL is configured through the ethtool* utility. 439 407 440 - WoL will be enabled on the system during the next shut down or reboot. 441 - For this driver version, in order to enable WoL, the e1000 driver must be 442 - loaded when shutting down or rebooting the system. 408 + WoL is configured through the ethtool* utility. 443 409 410 + WoL will be enabled on the system during the next shut down or reboot. 411 + For this driver version, in order to enable WoL, the e1000 driver must be 412 + loaded when shutting down or rebooting the system. 444 413 445 414 Support 446 415 =======
+3 -4
MAINTAINERS
··· 2523 2523 F: drivers/scsi/esas2r 2524 2524 2525 2525 ATUSB IEEE 802.15.4 RADIO DRIVER 2526 - M: Stefan Schmidt <stefan@osg.samsung.com> 2526 + M: Stefan Schmidt <stefan@datenfreihafen.org> 2527 2527 L: linux-wpan@vger.kernel.org 2528 2528 S: Maintained 2529 2529 F: drivers/net/ieee802154/atusb.c ··· 5790 5790 5791 5791 FREESCALE SOC FS_ENET DRIVER 5792 5792 M: Pantelis Antoniou <pantelis.antoniou@gmail.com> 5793 - M: Vitaly Bordug <vbordug@ru.mvista.com> 5794 5793 L: linuxppc-dev@lists.ozlabs.org 5795 5794 L: netdev@vger.kernel.org 5796 5795 S: Maintained ··· 6908 6909 6909 6910 IEEE 802.15.4 SUBSYSTEM 6910 6911 M: Alexander Aring <alex.aring@gmail.com> 6911 - M: Stefan Schmidt <stefan@osg.samsung.com> 6912 + M: Stefan Schmidt <stefan@datenfreihafen.org> 6912 6913 L: linux-wpan@vger.kernel.org 6913 6914 W: http://wpan.cakelab.org/ 6914 6915 T: git git://git.kernel.org/pub/scm/linux/kernel/git/sschmidt/wpan.git ··· 8628 8629 M: Amitkumar Karwar <amitkarwar@gmail.com> 8629 8630 M: Nishant Sarmukadam <nishants@marvell.com> 8630 8631 M: Ganapathi Bhat <gbhat@marvell.com> 8631 - M: Xinming Hu <huxm@marvell.com> 8632 + M: Xinming Hu <huxinming820@gmail.com> 8632 8633 L: linux-wireless@vger.kernel.org 8633 8634 S: Maintained 8634 8635 F: drivers/net/wireless/marvell/mwifiex/
-2
drivers/net/ethernet/aquantia/atlantic/aq_cfg.h
··· 63 63 64 64 #define AQ_CFG_NAPI_WEIGHT 64U 65 65 66 - #define AQ_CFG_MULTICAST_ADDRESS_MAX 32U 67 - 68 66 /*#define AQ_CFG_MAC_ADDR_PERMANENT {0x30, 0x0E, 0xE3, 0x12, 0x34, 0x56}*/ 69 67 70 68 #define AQ_NIC_FC_OFF 0U
+3 -1
drivers/net/ethernet/aquantia/atlantic/aq_hw.h
··· 98 98 #define AQ_HW_MEDIA_TYPE_TP 1U 99 99 #define AQ_HW_MEDIA_TYPE_FIBRE 2U 100 100 101 + #define AQ_HW_MULTICAST_ADDRESS_MAX 32U 102 + 101 103 struct aq_hw_s { 102 104 atomic_t flags; 103 105 u8 rbl_enabled:1; ··· 179 177 unsigned int packet_filter); 180 178 181 179 int (*hw_multicast_list_set)(struct aq_hw_s *self, 182 - u8 ar_mac[AQ_CFG_MULTICAST_ADDRESS_MAX] 180 + u8 ar_mac[AQ_HW_MULTICAST_ADDRESS_MAX] 183 181 [ETH_ALEN], 184 182 u32 count); 185 183
+2 -9
drivers/net/ethernet/aquantia/atlantic/aq_main.c
··· 135 135 static void aq_ndev_set_multicast_settings(struct net_device *ndev) 136 136 { 137 137 struct aq_nic_s *aq_nic = netdev_priv(ndev); 138 - int err = 0; 139 138 140 - err = aq_nic_set_packet_filter(aq_nic, ndev->flags); 141 - if (err < 0) 142 - return; 139 + aq_nic_set_packet_filter(aq_nic, ndev->flags); 143 140 144 - if (netdev_mc_count(ndev)) { 145 - err = aq_nic_set_multicast_list(aq_nic, ndev); 146 - if (err < 0) 147 - return; 148 - } 141 + aq_nic_set_multicast_list(aq_nic, ndev); 149 142 } 150 143 151 144 static const struct net_device_ops aq_ndev_ops = {
+30 -23
drivers/net/ethernet/aquantia/atlantic/aq_nic.c
··· 563 563 564 564 int aq_nic_set_multicast_list(struct aq_nic_s *self, struct net_device *ndev) 565 565 { 566 + unsigned int packet_filter = self->packet_filter; 566 567 struct netdev_hw_addr *ha = NULL; 567 568 unsigned int i = 0U; 568 569 569 - self->mc_list.count = 0U; 570 - 571 - netdev_for_each_mc_addr(ha, ndev) { 572 - ether_addr_copy(self->mc_list.ar[i++], ha->addr); 573 - ++self->mc_list.count; 574 - 575 - if (i >= AQ_CFG_MULTICAST_ADDRESS_MAX) 576 - break; 577 - } 578 - 579 - if (i >= AQ_CFG_MULTICAST_ADDRESS_MAX) { 580 - /* Number of filters is too big: atlantic does not support this. 581 - * Force all multi filter to support this. 582 - * With this we disable all UC filters and setup "all pass" 583 - * multicast mask 584 - */ 585 - self->packet_filter |= IFF_ALLMULTI; 586 - self->aq_nic_cfg.mc_list_count = 0; 587 - return self->aq_hw_ops->hw_packet_filter_set(self->aq_hw, 588 - self->packet_filter); 570 + self->mc_list.count = 0; 571 + if (netdev_uc_count(ndev) > AQ_HW_MULTICAST_ADDRESS_MAX) { 572 + packet_filter |= IFF_PROMISC; 589 573 } else { 590 - return self->aq_hw_ops->hw_multicast_list_set(self->aq_hw, 591 - self->mc_list.ar, 592 - self->mc_list.count); 574 + netdev_for_each_uc_addr(ha, ndev) { 575 + ether_addr_copy(self->mc_list.ar[i++], ha->addr); 576 + 577 + if (i >= AQ_HW_MULTICAST_ADDRESS_MAX) 578 + break; 579 + } 593 580 } 581 + 582 + if (i + netdev_mc_count(ndev) > AQ_HW_MULTICAST_ADDRESS_MAX) { 583 + packet_filter |= IFF_ALLMULTI; 584 + } else { 585 + netdev_for_each_mc_addr(ha, ndev) { 586 + ether_addr_copy(self->mc_list.ar[i++], ha->addr); 587 + 588 + if (i >= AQ_HW_MULTICAST_ADDRESS_MAX) 589 + break; 590 + } 591 + } 592 + 593 + if (i > 0 && i < AQ_HW_MULTICAST_ADDRESS_MAX) { 594 + packet_filter |= IFF_MULTICAST; 595 + self->mc_list.count = i; 596 + self->aq_hw_ops->hw_multicast_list_set(self->aq_hw, 597 + self->mc_list.ar, 598 + self->mc_list.count); 599 + } 600 + return aq_nic_set_packet_filter(self, packet_filter); 594 601 } 595 602 
596 603 int aq_nic_set_mtu(struct aq_nic_s *self, int new_mtu)
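The aq_nic_set_multicast_list() rework above shares one 32-entry hardware filter table between the unicast and multicast lists and degrades gracefully on overflow. The decision logic can be sketched as a small userspace model (the flag names and values below are illustrative stand-ins, not the driver's IFF_* or hardware constants):

```c
#include <assert.h>

/* Hypothetical stand-ins mirroring the patch; this is an illustrative
 * userspace sketch, not the driver code itself.
 */
#define HW_MCAST_MAX 32u
#define F_PROMISC   (1u << 0)
#define F_ALLMULTI  (1u << 1)
#define F_MULTICAST (1u << 2)

/* Decide which filter-mode bits to OR in, given how many unicast and
 * multicast addresses the netdev carries versus the 32-slot hardware
 * table shared by both lists.
 */
static unsigned int pick_filter(unsigned int uc_count, unsigned int mc_count)
{
	unsigned int filter = 0, used = 0;

	if (uc_count > HW_MCAST_MAX)
		filter |= F_PROMISC;	/* UC list overflows: pass everything */
	else
		used = uc_count;	/* UC entries occupy table slots */

	if (used + mc_count > HW_MCAST_MAX)
		filter |= F_ALLMULTI;	/* MC would overflow: pass all multicast */
	else
		used += mc_count;

	if (used > 0 && used < HW_MCAST_MAX)
		filter |= F_MULTICAST;	/* exact-match filtering is in effect */

	return filter;
}
```

Unicast overflow forces promiscuous mode, multicast overflow forces all-multicast, and exact-match filtering is only advertised when some entries actually fit in the table.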
+1 -1
drivers/net/ethernet/aquantia/atlantic/aq_nic.h
··· 75 75 struct aq_hw_link_status_s link_status; 76 76 struct { 77 77 u32 count; 78 - u8 ar[AQ_CFG_MULTICAST_ADDRESS_MAX][ETH_ALEN]; 78 + u8 ar[AQ_HW_MULTICAST_ADDRESS_MAX][ETH_ALEN]; 79 79 } mc_list; 80 80 81 81 struct pci_dev *pdev;
+1 -1
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c
··· 765 765 766 766 static int hw_atl_a0_hw_multicast_list_set(struct aq_hw_s *self, 767 767 u8 ar_mac 768 - [AQ_CFG_MULTICAST_ADDRESS_MAX] 768 + [AQ_HW_MULTICAST_ADDRESS_MAX] 769 769 [ETH_ALEN], 770 770 u32 count) 771 771 {
+2 -2
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
··· 784 784 785 785 static int hw_atl_b0_hw_multicast_list_set(struct aq_hw_s *self, 786 786 u8 ar_mac 787 - [AQ_CFG_MULTICAST_ADDRESS_MAX] 787 + [AQ_HW_MULTICAST_ADDRESS_MAX] 788 788 [ETH_ALEN], 789 789 u32 count) 790 790 { ··· 812 812 813 813 hw_atl_rpfl2_uc_flr_en_set(self, 814 814 (self->aq_nic_cfg->is_mc_list_enabled), 815 - HW_ATL_B0_MAC_MIN + i); 815 + HW_ATL_B0_MAC_MIN + i); 816 816 } 817 817 818 818 err = aq_hw_err_from_flags(self);
+2 -2
drivers/net/ethernet/broadcom/bcmsysport.c
··· 1946 1946 if (!priv->is_lite) 1947 1947 priv->crc_fwd = !!(umac_readl(priv, UMAC_CMD) & CMD_CRC_FWD); 1948 1948 else 1949 - priv->crc_fwd = !!(gib_readl(priv, GIB_CONTROL) & 1950 - GIB_FCS_STRIP); 1949 + priv->crc_fwd = !((gib_readl(priv, GIB_CONTROL) & 1950 + GIB_FCS_STRIP) >> GIB_FCS_STRIP_SHIFT); 1951 1951 1952 1952 phydev = of_phy_connect(dev, priv->phy_dn, bcm_sysport_adj_link, 1953 1953 0, priv->phy_interface);
+2 -1
drivers/net/ethernet/broadcom/bcmsysport.h
··· 278 278 #define GIB_GTX_CLK_EXT_CLK (0 << GIB_GTX_CLK_SEL_SHIFT) 279 279 #define GIB_GTX_CLK_125MHZ (1 << GIB_GTX_CLK_SEL_SHIFT) 280 280 #define GIB_GTX_CLK_250MHZ (2 << GIB_GTX_CLK_SEL_SHIFT) 281 - #define GIB_FCS_STRIP (1 << 6) 281 + #define GIB_FCS_STRIP_SHIFT 6 282 + #define GIB_FCS_STRIP (1 << GIB_FCS_STRIP_SHIFT) 282 283 #define GIB_LCL_LOOP_EN (1 << 7) 283 284 #define GIB_LCL_LOOP_TXEN (1 << 8) 284 285 #define GIB_RMT_LOOP_EN (1 << 9)
+17 -7
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 5712 5712 } 5713 5713 vnic->uc_filter_count = 1; 5714 5714 5715 - vnic->rx_mask = CFA_L2_SET_RX_MASK_REQ_MASK_BCAST; 5715 + vnic->rx_mask = 0; 5716 + if (bp->dev->flags & IFF_BROADCAST) 5717 + vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_BCAST; 5716 5718 5717 5719 if ((bp->dev->flags & IFF_PROMISC) && bnxt_promisc_ok(bp)) 5718 5720 vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_PROMISCUOUS; ··· 5919 5917 return min_t(unsigned int, hw_resc->max_irqs, hw_resc->max_cp_rings); 5920 5918 } 5921 5919 5922 - void bnxt_set_max_func_irqs(struct bnxt *bp, unsigned int max_irqs) 5920 + static void bnxt_set_max_func_irqs(struct bnxt *bp, unsigned int max_irqs) 5923 5921 { 5924 5922 bp->hw_resc.max_irqs = max_irqs; 5925 5923 } ··· 6890 6888 rc = bnxt_request_irq(bp); 6891 6889 if (rc) { 6892 6890 netdev_err(bp->dev, "bnxt_request_irq err: %x\n", rc); 6893 - goto open_err; 6891 + goto open_err_irq; 6894 6892 } 6895 6893 } 6896 6894 ··· 6930 6928 open_err: 6931 6929 bnxt_debug_dev_exit(bp); 6932 6930 bnxt_disable_napi(bp); 6931 + 6932 + open_err_irq: 6933 6933 bnxt_del_napi(bp); 6934 6934 6935 6935 open_err_free_mem: ··· 7218 7214 7219 7215 mask &= ~(CFA_L2_SET_RX_MASK_REQ_MASK_PROMISCUOUS | 7220 7216 CFA_L2_SET_RX_MASK_REQ_MASK_MCAST | 7221 - CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST); 7217 + CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST | 7218 + CFA_L2_SET_RX_MASK_REQ_MASK_BCAST); 7222 7219 7223 7220 if ((dev->flags & IFF_PROMISC) && bnxt_promisc_ok(bp)) 7224 7221 mask |= CFA_L2_SET_RX_MASK_REQ_MASK_PROMISCUOUS; 7225 7222 7226 7223 uc_update = bnxt_uc_list_updated(bp); 7227 7224 7225 + if (dev->flags & IFF_BROADCAST) 7226 + mask |= CFA_L2_SET_RX_MASK_REQ_MASK_BCAST; 7228 7227 if (dev->flags & IFF_ALLMULTI) { 7229 7228 mask |= CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST; 7230 7229 vnic->mc_list_count = 0; ··· 8509 8502 int rx, tx, cp; 8510 8503 8511 8504 _bnxt_get_max_rings(bp, &rx, &tx, &cp); 8505 + *max_rx = rx; 8506 + *max_tx = tx; 8512 8507 if (!rx || !tx || !cp) 8513 8508 return -ENOMEM; 
8514 8509 8515 - *max_rx = rx; 8516 - *max_tx = tx; 8517 8510 return bnxt_trim_rings(bp, max_rx, max_tx, cp, shared); 8518 8511 } 8519 8512 ··· 8527 8520 /* Not enough rings, try disabling agg rings. */ 8528 8521 bp->flags &= ~BNXT_FLAG_AGG_RINGS; 8529 8522 rc = bnxt_get_max_rings(bp, max_rx, max_tx, shared); 8530 - if (rc) 8523 + if (rc) { 8524 + /* set BNXT_FLAG_AGG_RINGS back for consistency */ 8525 + bp->flags |= BNXT_FLAG_AGG_RINGS; 8531 8526 return rc; 8527 + } 8532 8528 bp->flags |= BNXT_FLAG_NO_AGG_RINGS; 8533 8529 bp->dev->hw_features &= ~(NETIF_F_LRO | NETIF_F_GRO_HW); 8534 8530 bp->dev->features &= ~(NETIF_F_LRO | NETIF_F_GRO_HW);
-1
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 1470 1470 unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp); 1471 1471 void bnxt_set_max_func_cp_rings(struct bnxt *bp, unsigned int max); 1472 1472 unsigned int bnxt_get_max_func_irqs(struct bnxt *bp); 1473 - void bnxt_set_max_func_irqs(struct bnxt *bp, unsigned int max); 1474 1473 int bnxt_get_avail_msix(struct bnxt *bp, int num); 1475 1474 int bnxt_reserve_rings(struct bnxt *bp); 1476 1475 void bnxt_tx_disable(struct bnxt *bp);
drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c (+27 -3)
··· 27 27 #define BNXT_FID_INVALID 0xffff 28 28 #define VLAN_TCI(vid, prio) ((vid) | ((prio) << VLAN_PRIO_SHIFT)) 29 29 30 + #define is_vlan_pcp_wildcarded(vlan_tci_mask) \ 31 + ((ntohs(vlan_tci_mask) & VLAN_PRIO_MASK) == 0x0000) 32 + #define is_vlan_pcp_exactmatch(vlan_tci_mask) \ 33 + ((ntohs(vlan_tci_mask) & VLAN_PRIO_MASK) == VLAN_PRIO_MASK) 34 + #define is_vlan_pcp_zero(vlan_tci) \ 35 + ((ntohs(vlan_tci) & VLAN_PRIO_MASK) == 0x0000) 36 + #define is_vid_exactmatch(vlan_tci_mask) \ 37 + ((ntohs(vlan_tci_mask) & VLAN_VID_MASK) == VLAN_VID_MASK) 38 + 30 39 /* Return the dst fid of the func for flow forwarding 31 40 * For PFs: src_fid is the fid of the PF 32 41 * For VF-reps: src_fid the fid of the VF ··· 396 387 return false; 397 388 398 389 return true; 390 + } 391 + 392 + static bool is_vlan_tci_allowed(__be16 vlan_tci_mask, 393 + __be16 vlan_tci) 394 + { 395 + /* VLAN priority must be either exactly zero or fully wildcarded and 396 + * VLAN id must be exact match. 397 + */ 398 + if (is_vid_exactmatch(vlan_tci_mask) && 399 + ((is_vlan_pcp_exactmatch(vlan_tci_mask) && 400 + is_vlan_pcp_zero(vlan_tci)) || 401 + is_vlan_pcp_wildcarded(vlan_tci_mask))) 402 + return true; 403 + 404 + return false; 399 405 } 400 406 401 407 static bool bits_set(void *key, int len) ··· 827 803 /* Currently VLAN fields cannot be partial wildcard */ 828 804 if (bits_set(&flow->l2_key.inner_vlan_tci, 829 805 sizeof(flow->l2_key.inner_vlan_tci)) && 830 - !is_exactmatch(&flow->l2_mask.inner_vlan_tci, 831 - sizeof(flow->l2_mask.inner_vlan_tci))) { 832 - netdev_info(bp->dev, "Wildcard match unsupported for VLAN TCI\n"); 806 + !is_vlan_tci_allowed(flow->l2_mask.inner_vlan_tci, 807 + flow->l2_key.inner_vlan_tci)) { 808 + netdev_info(bp->dev, "Unsupported VLAN TCI\n"); 833 809 return false; 834 810 } 835 811 if (bits_set(&flow->l2_key.inner_vlan_tpid,
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c (-2)
··· 169 169 edev->ulp_tbl[ulp_id].msix_requested = avail_msix; 170 170 } 171 171 bnxt_fill_msix_vecs(bp, ent); 172 - bnxt_set_max_func_irqs(bp, bnxt_get_max_func_irqs(bp) - avail_msix); 173 172 bnxt_set_max_func_cp_rings(bp, max_cp_rings - avail_msix); 174 173 edev->flags |= BNXT_EN_FLAG_MSIX_REQUESTED; 175 174 return avail_msix; ··· 191 192 msix_requested = edev->ulp_tbl[ulp_id].msix_requested; 192 193 bnxt_set_max_func_cp_rings(bp, max_cp_rings + msix_requested); 193 194 edev->ulp_tbl[ulp_id].msix_requested = 0; 194 - bnxt_set_max_func_irqs(bp, bnxt_get_max_func_irqs(bp) + msix_requested); 195 195 edev->flags &= ~BNXT_EN_FLAG_MSIX_REQUESTED; 196 196 if (netif_running(dev)) { 197 197 bnxt_close_nic(bp, true, false);
drivers/net/ethernet/broadcom/tg3.c (+13)
··· 6 6 * Copyright (C) 2004 Sun Microsystems Inc. 7 7 * Copyright (C) 2005-2016 Broadcom Corporation. 8 8 * Copyright (C) 2016-2017 Broadcom Limited. 9 + * Copyright (C) 2018 Broadcom. All Rights Reserved. The term "Broadcom" 10 + * refers to Broadcom Inc. and/or its subsidiaries. 9 11 * 10 12 * Firmware is: 11 13 * Derived from proprietary unpublished source code, 12 14 * Copyright (C) 2000-2016 Broadcom Corporation. 13 15 * Copyright (C) 2016-2017 Broadcom Ltd. 16 + * Copyright (C) 2018 Broadcom. All Rights Reserved. The term "Broadcom" 17 + * refers to Broadcom Inc. and/or its subsidiaries. 14 18 * 15 19 * Permission is hereby granted for the distribution of this firmware 16 20 * data in hexadecimal or equivalent format, provided this copyright ··· 9293 9289 } 9294 9290 9295 9291 tg3_restore_clk(tp); 9292 + 9293 + /* Increase the core clock speed to fix tx timeout issue for 5762 9294 + * with 100Mbps link speed. 9295 + */ 9296 + if (tg3_asic_rev(tp) == ASIC_REV_5762) { 9297 + val = tr32(TG3_CPMU_CLCK_ORIDE_ENABLE); 9298 + tw32(TG3_CPMU_CLCK_ORIDE_ENABLE, val | 9299 + TG3_CPMU_MAC_ORIDE_ENABLE); 9300 + } 9296 9301 9297 9302 /* Reprobe ASF enable state. */ 9298 9303 tg3_flag_clear(tp, ENABLE_ASF);
drivers/net/ethernet/broadcom/tg3.h (+2)
··· 7 7 * Copyright (C) 2004 Sun Microsystems Inc. 8 8 * Copyright (C) 2007-2016 Broadcom Corporation. 9 9 * Copyright (C) 2016-2017 Broadcom Limited. 10 + * Copyright (C) 2018 Broadcom. All Rights Reserved. The term "Broadcom" 11 + * refers to Broadcom Inc. and/or its subsidiaries. 10 12 */ 11 13 12 14 #ifndef _T3_H
drivers/net/ethernet/cadence/macb.h (+11)
··· 166 166 #define GEM_DCFG6 0x0294 /* Design Config 6 */ 167 167 #define GEM_DCFG7 0x0298 /* Design Config 7 */ 168 168 #define GEM_DCFG8 0x029C /* Design Config 8 */ 169 + #define GEM_DCFG10 0x02A4 /* Design Config 10 */ 169 170 170 171 #define GEM_TXBDCTRL 0x04cc /* TX Buffer Descriptor control register */ 171 172 #define GEM_RXBDCTRL 0x04d0 /* RX Buffer Descriptor control register */ ··· 491 490 #define GEM_SCR2CMP_OFFSET 0 492 491 #define GEM_SCR2CMP_SIZE 8 493 492 493 + /* Bitfields in DCFG10 */ 494 + #define GEM_TXBD_RDBUFF_OFFSET 12 495 + #define GEM_TXBD_RDBUFF_SIZE 4 496 + #define GEM_RXBD_RDBUFF_OFFSET 8 497 + #define GEM_RXBD_RDBUFF_SIZE 4 498 + 494 499 /* Bitfields in TISUBN */ 495 500 #define GEM_SUBNSINCR_OFFSET 0 496 501 #define GEM_SUBNSINCR_SIZE 16 ··· 642 635 #define MACB_CAPS_USRIO_DISABLED 0x00000010 643 636 #define MACB_CAPS_JUMBO 0x00000020 644 637 #define MACB_CAPS_GEM_HAS_PTP 0x00000040 638 + #define MACB_CAPS_BD_RD_PREFETCH 0x00000080 645 639 #define MACB_CAPS_FIFO_MODE 0x10000000 646 640 #define MACB_CAPS_GIGABIT_MODE_AVAILABLE 0x20000000 647 641 #define MACB_CAPS_SG_DISABLED 0x40000000 ··· 1211 1203 unsigned int max_tuples; 1212 1204 1213 1205 struct tasklet_struct hresp_err_tasklet; 1206 + 1207 + int rx_bd_rd_prefetch; 1208 + int tx_bd_rd_prefetch; 1214 1209 }; 1215 1210 1216 1211 #ifdef CONFIG_MACB_USE_HWSTAMP
drivers/net/ethernet/cadence/macb_main.c (+25 -11)
··· 1811 1811 { 1812 1812 struct macb_queue *queue; 1813 1813 unsigned int q; 1814 + int size; 1814 1815 1815 - queue = &bp->queues[0]; 1816 1816 bp->macbgem_ops.mog_free_rx_buffers(bp); 1817 - if (queue->rx_ring) { 1818 - dma_free_coherent(&bp->pdev->dev, RX_RING_BYTES(bp), 1819 - queue->rx_ring, queue->rx_ring_dma); 1820 - queue->rx_ring = NULL; 1821 - } 1822 1817 1823 1818 for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) { 1824 1819 kfree(queue->tx_skb); 1825 1820 queue->tx_skb = NULL; 1826 1821 if (queue->tx_ring) { 1827 - dma_free_coherent(&bp->pdev->dev, TX_RING_BYTES(bp), 1822 + size = TX_RING_BYTES(bp) + bp->tx_bd_rd_prefetch; 1823 + dma_free_coherent(&bp->pdev->dev, size, 1828 1824 queue->tx_ring, queue->tx_ring_dma); 1829 1825 queue->tx_ring = NULL; 1826 + } 1827 + if (queue->rx_ring) { 1828 + size = RX_RING_BYTES(bp) + bp->rx_bd_rd_prefetch; 1829 + dma_free_coherent(&bp->pdev->dev, size, 1830 + queue->rx_ring, queue->rx_ring_dma); 1831 + queue->rx_ring = NULL; 1830 1832 } 1831 1833 } 1832 1834 } ··· 1876 1874 int size; 1877 1875 1878 1876 for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) { 1879 - size = TX_RING_BYTES(bp); 1877 + size = TX_RING_BYTES(bp) + bp->tx_bd_rd_prefetch; 1880 1878 queue->tx_ring = dma_alloc_coherent(&bp->pdev->dev, size, 1881 1879 &queue->tx_ring_dma, 1882 1880 GFP_KERNEL); ··· 1892 1890 if (!queue->tx_skb) 1893 1891 goto out_err; 1894 1892 1895 - size = RX_RING_BYTES(bp); 1893 + size = RX_RING_BYTES(bp) + bp->rx_bd_rd_prefetch; 1896 1894 queue->rx_ring = dma_alloc_coherent(&bp->pdev->dev, size, 1897 1895 &queue->rx_ring_dma, GFP_KERNEL); 1898 1896 if (!queue->rx_ring) ··· 3799 3797 static const struct macb_config zynqmp_config = { 3800 3798 .caps = MACB_CAPS_GIGABIT_MODE_AVAILABLE | 3801 3799 MACB_CAPS_JUMBO | 3802 - MACB_CAPS_GEM_HAS_PTP, 3800 + MACB_CAPS_GEM_HAS_PTP | MACB_CAPS_BD_RD_PREFETCH, 3803 3801 .dma_burst_length = 16, 3804 3802 .clk_init = macb_clk_init, 3805 3803 .init = macb_init, ··· 
3860 3858 void __iomem *mem; 3861 3859 const char *mac; 3862 3860 struct macb *bp; 3863 - int err; 3861 + int err, val; 3864 3862 3865 3863 regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 3866 3864 mem = devm_ioremap_resource(&pdev->dev, regs); ··· 3948 3946 dev->max_mtu = gem_readl(bp, JML) - ETH_HLEN - ETH_FCS_LEN; 3949 3947 else 3950 3948 dev->max_mtu = ETH_DATA_LEN; 3949 + 3950 + if (bp->caps & MACB_CAPS_BD_RD_PREFETCH) { 3951 + val = GEM_BFEXT(RXBD_RDBUFF, gem_readl(bp, DCFG10)); 3952 + if (val) 3953 + bp->rx_bd_rd_prefetch = (2 << (val - 1)) * 3954 + macb_dma_desc_get_size(bp); 3955 + 3956 + val = GEM_BFEXT(TXBD_RDBUFF, gem_readl(bp, DCFG10)); 3957 + if (val) 3958 + bp->tx_bd_rd_prefetch = (2 << (val - 1)) * 3959 + macb_dma_desc_get_size(bp); 3960 + } 3951 3961 3952 3962 mac = of_get_mac_address(np); 3953 3963 if (mac) {
drivers/net/ethernet/cavium/Kconfig (+6 -6)
··· 15 15 16 16 config THUNDER_NIC_PF 17 17 tristate "Thunder Physical function driver" 18 - depends on 64BIT 18 + depends on 64BIT && PCI 19 19 select THUNDER_NIC_BGX 20 20 ---help--- 21 21 This driver supports Thunder's NIC physical function. ··· 28 28 config THUNDER_NIC_VF 29 29 tristate "Thunder Virtual function driver" 30 30 imply CAVIUM_PTP 31 - depends on 64BIT 31 + depends on 64BIT && PCI 32 32 ---help--- 33 33 This driver supports Thunder's NIC virtual function 34 34 35 35 config THUNDER_NIC_BGX 36 36 tristate "Thunder MAC interface driver (BGX)" 37 - depends on 64BIT 37 + depends on 64BIT && PCI 38 38 select PHYLIB 39 39 select MDIO_THUNDER 40 40 select THUNDER_NIC_RGX ··· 44 44 45 45 config THUNDER_NIC_RGX 46 46 tristate "Thunder MAC interface driver (RGX)" 47 - depends on 64BIT 47 + depends on 64BIT && PCI 48 48 select PHYLIB 49 49 select MDIO_THUNDER 50 50 ---help--- ··· 53 53 54 54 config CAVIUM_PTP 55 55 tristate "Cavium PTP coprocessor as PTP clock" 56 - depends on 64BIT 56 + depends on 64BIT && PCI 57 57 imply PTP_1588_CLOCK 58 58 default y 59 59 ---help--- ··· 65 65 66 66 config LIQUIDIO 67 67 tristate "Cavium LiquidIO support" 68 - depends on 64BIT 68 + depends on 64BIT && PCI 69 69 depends on MAY_USE_DEVLINK 70 70 imply PTP_1588_CLOCK 71 71 select FW_LOADER
drivers/net/ethernet/cavium/liquidio/lio_main.c (+4 -1)
··· 91 91 */ 92 92 #define LIO_SYNC_OCTEON_TIME_INTERVAL_MS 60000 93 93 94 + /* time to wait for possible in-flight requests in milliseconds */ 95 + #define WAIT_INFLIGHT_REQUEST msecs_to_jiffies(1000) 96 + 94 97 struct lio_trusted_vf_ctx { 95 98 struct completion complete; 96 99 int status; ··· 262 259 force_io_queues_off(oct); 263 260 264 261 /* To allow for in-flight requests */ 265 - schedule_timeout_uninterruptible(100); 262 + schedule_timeout_uninterruptible(WAIT_INFLIGHT_REQUEST); 266 263 267 264 if (wait_for_pending_requests(oct)) 268 265 dev_err(&oct->pci_dev->dev, "There were pending requests\n");
drivers/net/ethernet/cavium/octeon/octeon_mgmt.c (+11 -3)
··· 643 643 static int octeon_mgmt_change_mtu(struct net_device *netdev, int new_mtu) 644 644 { 645 645 struct octeon_mgmt *p = netdev_priv(netdev); 646 - int size_without_fcs = new_mtu + OCTEON_MGMT_RX_HEADROOM; 646 + int max_packet = new_mtu + ETH_HLEN + ETH_FCS_LEN; 647 647 648 648 netdev->mtu = new_mtu; 649 649 650 - cvmx_write_csr(p->agl + AGL_GMX_RX_FRM_MAX, size_without_fcs); 650 + /* HW lifts the limit if the frame is VLAN tagged 651 + * (+4 bytes per each tag, up to two tags) 652 + */ 653 + cvmx_write_csr(p->agl + AGL_GMX_RX_FRM_MAX, max_packet); 654 + /* Set the hardware to truncate packets larger than the MTU. The jabber 655 + * register must be set to a multiple of 8 bytes, so round up. JABBER is 656 + * an unconditional limit, so we need to account for two possible VLAN 657 + * tags. 658 + */ 651 659 cvmx_write_csr(p->agl + AGL_GMX_RX_JABBER, 652 - (size_without_fcs + 7) & 0xfff8); 660 + (max_packet + 7 + VLAN_HLEN * 2) & 0xfff8); 653 661 654 662 return 0; 655 663 }
drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c (+2)
··· 51 51 #include <linux/sched.h> 52 52 #include <linux/slab.h> 53 53 #include <linux/uaccess.h> 54 + #include <linux/nospec.h> 54 55 55 56 #include "common.h" 56 57 #include "cxgb3_ioctl.h" ··· 2269 2268 2270 2269 if (t.qset_idx >= nqsets) 2271 2270 return -EINVAL; 2271 + t.qset_idx = array_index_nospec(t.qset_idx, nqsets); 2272 2272 2273 2273 q = &adapter->params.sge.qset[q1 + t.qset_idx]; 2274 2274 t.rspq_size = q->rspq_size;
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c (+13 -22)
··· 8702 8702 }; 8703 8703 8704 8704 unsigned int part, manufacturer; 8705 - unsigned int density, size; 8705 + unsigned int density, size = 0; 8706 8706 u32 flashid = 0; 8707 8707 int ret; 8708 8708 ··· 8772 8772 case 0x22: /* 256MB */ 8773 8773 size = 1 << 28; 8774 8774 break; 8775 - 8776 - default: 8777 - dev_err(adap->pdev_dev, "Micron Flash Part has bad size, ID = %#x, Density code = %#x\n", 8778 - flashid, density); 8779 - return -EINVAL; 8780 8775 } 8781 8776 break; 8782 8777 } ··· 8787 8792 case 0x17: /* 64MB */ 8788 8793 size = 1 << 26; 8789 8794 break; 8790 - default: 8791 - dev_err(adap->pdev_dev, "ISSI Flash Part has bad size, ID = %#x, Density code = %#x\n", 8792 - flashid, density); 8793 - return -EINVAL; 8794 8795 } 8795 8796 break; 8796 8797 } ··· 8802 8811 case 0x18: /* 16MB */ 8803 8812 size = 1 << 24; 8804 8813 break; 8805 - default: 8806 - dev_err(adap->pdev_dev, "Macronix Flash Part has bad size, ID = %#x, Density code = %#x\n", 8807 - flashid, density); 8808 - return -EINVAL; 8809 8814 } 8810 8815 break; 8811 8816 } ··· 8817 8830 case 0x18: /* 16MB */ 8818 8831 size = 1 << 24; 8819 8832 break; 8820 - default: 8821 - dev_err(adap->pdev_dev, "Winbond Flash Part has bad size, ID = %#x, Density code = %#x\n", 8822 - flashid, density); 8823 - return -EINVAL; 8824 8833 } 8825 8834 break; 8826 8835 } 8827 - default: 8828 - dev_err(adap->pdev_dev, "Unsupported Flash Part, ID = %#x\n", 8829 - flashid); 8830 - return -EINVAL; 8836 + } 8837 + 8838 + /* If we didn't recognize the FLASH part, that's no real issue: the 8839 + * Hardware/Software contract says that Hardware will _*ALWAYS*_ 8840 + * use a FLASH part which is at least 4MB in size and has 64KB 8841 + * sectors. The unrecognized FLASH part is likely to be much larger 8842 + * than 4MB, but that's all we really need. 
8843 + */ 8844 + if (size == 0) { 8845 + dev_warn(adap->pdev_dev, "Unknown Flash Part, ID = %#x, assuming 4MB\n", 8846 + flashid); 8847 + size = 1 << 22; 8831 8848 } 8832 8849 8833 8850 /* Store decoded Flash size and fall through into vetting code. */
drivers/net/ethernet/ibm/ibmvnic.c (+29 -14)
··· 329 329 return; 330 330 331 331 failure: 332 - dev_info(dev, "replenish pools failure\n"); 332 + if (lpar_rc != H_PARAMETER && lpar_rc != H_CLOSED) 333 + dev_err_ratelimited(dev, "rx: replenish packet buffer failed\n"); 333 334 pool->free_map[pool->next_free] = index; 334 335 pool->rx_buff[index].skb = NULL; 335 336 ··· 1618 1617 &tx_crq); 1619 1618 } 1620 1619 if (lpar_rc != H_SUCCESS) { 1621 - dev_err(dev, "tx failed with code %ld\n", lpar_rc); 1620 + if (lpar_rc != H_CLOSED && lpar_rc != H_PARAMETER) 1621 + dev_err_ratelimited(dev, "tx: send failed\n"); 1622 1622 dev_kfree_skb_any(skb); 1623 1623 tx_buff->skb = NULL; 1624 1624 ··· 1827 1825 1828 1826 rc = ibmvnic_login(netdev); 1829 1827 if (rc) { 1830 - adapter->state = VNIC_PROBED; 1831 - return 0; 1828 + adapter->state = reset_state; 1829 + return rc; 1832 1830 } 1833 1831 1834 1832 if (adapter->reset_reason == VNIC_RESET_CHANGE_PARAM || ··· 3206 3204 return crq; 3207 3205 } 3208 3206 3207 + static void print_subcrq_error(struct device *dev, int rc, const char *func) 3208 + { 3209 + switch (rc) { 3210 + case H_PARAMETER: 3211 + dev_warn_ratelimited(dev, 3212 + "%s failed: Send request is malformed or adapter failover pending. (rc=%d)\n", 3213 + func, rc); 3214 + break; 3215 + case H_CLOSED: 3216 + dev_warn_ratelimited(dev, 3217 + "%s failed: Backing queue closed. Adapter is down or failover pending. 
(rc=%d)\n", 3218 + func, rc); 3219 + break; 3220 + default: 3221 + dev_err_ratelimited(dev, "%s failed: (rc=%d)\n", func, rc); 3222 + break; 3223 + } 3224 + } 3225 + 3209 3226 static int send_subcrq(struct ibmvnic_adapter *adapter, u64 remote_handle, 3210 3227 union sub_crq *sub_crq) 3211 3228 { ··· 3251 3230 cpu_to_be64(u64_crq[2]), 3252 3231 cpu_to_be64(u64_crq[3])); 3253 3232 3254 - if (rc) { 3255 - if (rc == H_CLOSED) 3256 - dev_warn(dev, "CRQ Queue closed\n"); 3257 - dev_err(dev, "Send error (rc=%d)\n", rc); 3258 - } 3233 + if (rc) 3234 + print_subcrq_error(dev, rc, __func__); 3259 3235 3260 3236 return rc; 3261 3237 } ··· 3270 3252 cpu_to_be64(remote_handle), 3271 3253 ioba, num_entries); 3272 3254 3273 - if (rc) { 3274 - if (rc == H_CLOSED) 3275 - dev_warn(dev, "CRQ Queue closed\n"); 3276 - dev_err(dev, "Send (indirect) error (rc=%d)\n", rc); 3277 - } 3255 + if (rc) 3256 + print_subcrq_error(dev, rc, __func__); 3278 3257 3279 3258 return rc; 3280 3259 }
drivers/net/ethernet/intel/ixgbe/ixgbe_common.c (+11 -1)
··· 1871 1871 if (enable_addr != 0) 1872 1872 rar_high |= IXGBE_RAH_AV; 1873 1873 1874 + /* Record lower 32 bits of MAC address and then make 1875 + * sure that write is flushed to hardware before writing 1876 + * the upper 16 bits and setting the valid bit. 1877 + */ 1874 1878 IXGBE_WRITE_REG(hw, IXGBE_RAL(index), rar_low); 1879 + IXGBE_WRITE_FLUSH(hw); 1875 1880 IXGBE_WRITE_REG(hw, IXGBE_RAH(index), rar_high); 1876 1881 1877 1882 return 0; ··· 1908 1903 rar_high = IXGBE_READ_REG(hw, IXGBE_RAH(index)); 1909 1904 rar_high &= ~(0x0000FFFF | IXGBE_RAH_AV); 1910 1905 1911 - IXGBE_WRITE_REG(hw, IXGBE_RAL(index), 0); 1906 + /* Clear the address valid bit and upper 16 bits of the address 1907 + * before clearing the lower bits. This way we aren't updating 1908 + * a live filter. 1909 + */ 1912 1910 IXGBE_WRITE_REG(hw, IXGBE_RAH(index), rar_high); 1911 + IXGBE_WRITE_FLUSH(hw); 1912 + IXGBE_WRITE_REG(hw, IXGBE_RAL(index), 0); 1913 1913 1914 1914 /* clear VMDq pool/queue selection for this RAR */ 1915 1915 hw->mac.ops.clear_vmdq(hw, index, IXGBE_CLEAR_VMDQ_ALL);
drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c (+1 -1)
··· 839 839 } 840 840 841 841 itd->sa_idx = xs->xso.offload_handle - IXGBE_IPSEC_BASE_TX_INDEX; 842 - if (unlikely(itd->sa_idx > IXGBE_IPSEC_MAX_SA_COUNT)) { 842 + if (unlikely(itd->sa_idx >= IXGBE_IPSEC_MAX_SA_COUNT)) { 843 843 netdev_err(tx_ring->netdev, "%s: bad sa_idx=%d handle=%lu\n", 844 844 __func__, itd->sa_idx, xs->xso.offload_handle); 845 845 return 0;
drivers/net/ethernet/mellanox/mlx4/en_rx.c (+6 -2)
··· 474 474 { 475 475 const struct mlx4_en_frag_info *frag_info = priv->frag_info; 476 476 unsigned int truesize = 0; 477 + bool release = true; 477 478 int nr, frag_size; 478 479 struct page *page; 479 480 dma_addr_t dma; 480 - bool release; 481 481 482 482 /* Collect used fragments while replacing them in the HW descriptors */ 483 483 for (nr = 0;; frags++) { ··· 500 500 release = page_count(page) != 1 || 501 501 page_is_pfmemalloc(page) || 502 502 page_to_nid(page) != numa_mem_id(); 503 - } else { 503 + } else if (!priv->rx_headroom) { 504 + /* rx_headroom for non XDP setup is always 0. 505 + * When XDP is set, the above condition will 506 + * guarantee page is always released. 507 + */ 504 508 u32 sz_align = ALIGN(frag_size, SMP_CACHE_BYTES); 505 509 506 510 frags->page_offset += sz_align;
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c (+24 -24)
··· 4756 4756 kfree(mlxsw_sp_rt6); 4757 4757 } 4758 4758 4759 + static bool mlxsw_sp_fib6_rt_can_mp(const struct fib6_info *rt) 4760 + { 4761 + /* RTF_CACHE routes are ignored */ 4762 + return (rt->fib6_flags & (RTF_GATEWAY | RTF_ADDRCONF)) == RTF_GATEWAY; 4763 + } 4764 + 4759 4765 static struct fib6_info * 4760 4766 mlxsw_sp_fib6_entry_rt(const struct mlxsw_sp_fib6_entry *fib6_entry) 4761 4767 { ··· 4771 4765 4772 4766 static struct mlxsw_sp_fib6_entry * 4773 4767 mlxsw_sp_fib6_node_mp_entry_find(const struct mlxsw_sp_fib_node *fib_node, 4774 - const struct fib6_info *nrt, bool append) 4768 + const struct fib6_info *nrt, bool replace) 4775 4769 { 4776 4770 struct mlxsw_sp_fib6_entry *fib6_entry; 4777 4771 4778 - if (!append) 4772 + if (!mlxsw_sp_fib6_rt_can_mp(nrt) || replace) 4779 4773 return NULL; 4780 4774 4781 4775 list_for_each_entry(fib6_entry, &fib_node->entry_list, common.list) { ··· 4790 4784 break; 4791 4785 if (rt->fib6_metric < nrt->fib6_metric) 4792 4786 continue; 4793 - if (rt->fib6_metric == nrt->fib6_metric) 4787 + if (rt->fib6_metric == nrt->fib6_metric && 4788 + mlxsw_sp_fib6_rt_can_mp(rt)) 4794 4789 return fib6_entry; 4795 4790 if (rt->fib6_metric > nrt->fib6_metric) 4796 4791 break; ··· 5170 5163 mlxsw_sp_fib6_node_entry_find(const struct mlxsw_sp_fib_node *fib_node, 5171 5164 const struct fib6_info *nrt, bool replace) 5172 5165 { 5173 - struct mlxsw_sp_fib6_entry *fib6_entry; 5166 + struct mlxsw_sp_fib6_entry *fib6_entry, *fallback = NULL; 5174 5167 5175 5168 list_for_each_entry(fib6_entry, &fib_node->entry_list, common.list) { 5176 5169 struct fib6_info *rt = mlxsw_sp_fib6_entry_rt(fib6_entry); ··· 5179 5172 continue; 5180 5173 if (rt->fib6_table->tb6_id != nrt->fib6_table->tb6_id) 5181 5174 break; 5182 - if (replace && rt->fib6_metric == nrt->fib6_metric) 5183 - return fib6_entry; 5175 + if (replace && rt->fib6_metric == nrt->fib6_metric) { 5176 + if (mlxsw_sp_fib6_rt_can_mp(rt) == 5177 + mlxsw_sp_fib6_rt_can_mp(nrt)) 5178 + return 
fib6_entry; 5179 + if (mlxsw_sp_fib6_rt_can_mp(nrt)) 5180 + fallback = fallback ?: fib6_entry; 5181 + } 5184 5182 if (rt->fib6_metric > nrt->fib6_metric) 5185 - return fib6_entry; 5183 + return fallback ?: fib6_entry; 5186 5184 } 5187 5185 5188 - return NULL; 5186 + return fallback; 5189 5187 } 5190 5188 5191 5189 static int ··· 5316 5304 } 5317 5305 5318 5306 static int mlxsw_sp_router_fib6_add(struct mlxsw_sp *mlxsw_sp, 5319 - struct fib6_info *rt, bool replace, 5320 - bool append) 5307 + struct fib6_info *rt, bool replace) 5321 5308 { 5322 5309 struct mlxsw_sp_fib6_entry *fib6_entry; 5323 5310 struct mlxsw_sp_fib_node *fib_node; ··· 5342 5331 /* Before creating a new entry, try to append route to an existing 5343 5332 * multipath entry. 5344 5333 */ 5345 - fib6_entry = mlxsw_sp_fib6_node_mp_entry_find(fib_node, rt, append); 5334 + fib6_entry = mlxsw_sp_fib6_node_mp_entry_find(fib_node, rt, replace); 5346 5335 if (fib6_entry) { 5347 5336 err = mlxsw_sp_fib6_entry_nexthop_add(mlxsw_sp, fib6_entry, rt); 5348 5337 if (err) 5349 5338 goto err_fib6_entry_nexthop_add; 5350 5339 return 0; 5351 - } 5352 - 5353 - /* We received an append event, yet did not find any route to 5354 - * append to. 
5355 - */ 5356 - if (WARN_ON(append)) { 5357 - err = -EINVAL; 5358 - goto err_fib6_entry_append; 5359 5340 } 5360 5341 5361 5342 fib6_entry = mlxsw_sp_fib6_entry_create(mlxsw_sp, fib_node, rt); ··· 5367 5364 err_fib6_node_entry_link: 5368 5365 mlxsw_sp_fib6_entry_destroy(mlxsw_sp, fib6_entry); 5369 5366 err_fib6_entry_create: 5370 - err_fib6_entry_append: 5371 5367 err_fib6_entry_nexthop_add: 5372 5368 mlxsw_sp_fib_node_put(mlxsw_sp, fib_node); 5373 5369 return err; ··· 5717 5715 struct mlxsw_sp_fib_event_work *fib_work = 5718 5716 container_of(work, struct mlxsw_sp_fib_event_work, work); 5719 5717 struct mlxsw_sp *mlxsw_sp = fib_work->mlxsw_sp; 5720 - bool replace, append; 5718 + bool replace; 5721 5719 int err; 5722 5720 5723 5721 rtnl_lock(); ··· 5728 5726 case FIB_EVENT_ENTRY_APPEND: /* fall through */ 5729 5727 case FIB_EVENT_ENTRY_ADD: 5730 5728 replace = fib_work->event == FIB_EVENT_ENTRY_REPLACE; 5731 - append = fib_work->event == FIB_EVENT_ENTRY_APPEND; 5732 5729 err = mlxsw_sp_router_fib6_add(mlxsw_sp, 5733 - fib_work->fen6_info.rt, replace, 5734 - append); 5730 + fib_work->fen6_info.rt, replace); 5735 5731 if (err) 5736 5732 mlxsw_sp_router_fib_abort(mlxsw_sp); 5737 5733 mlxsw_sp_rt6_release(fib_work->fen6_info.rt);
drivers/net/ethernet/qlogic/qed/qed.h (+1)
··· 502 502 struct qed_nvm_image_info { 503 503 u32 num_images; 504 504 struct bist_nvm_image_att *image_att; 505 + bool valid; 505 506 }; 506 507 507 508 #define DRV_MODULE_VERSION \
drivers/net/ethernet/qlogic/qed/qed_debug.c (+1 -1)
··· 6723 6723 format_idx = header & MFW_TRACE_EVENTID_MASK; 6724 6724 6725 6725 /* Skip message if its index doesn't exist in the meta data */ 6726 - if (format_idx > s_mcp_trace_meta.formats_num) { 6726 + if (format_idx >= s_mcp_trace_meta.formats_num) { 6727 6727 u8 format_size = 6728 6728 (u8)((header & MFW_TRACE_PRM_SIZE_MASK) >> 6729 6729 MFW_TRACE_PRM_SIZE_SHIFT);
drivers/net/ethernet/qlogic/qed/qed_main.c (+1 -1)
··· 371 371 goto err2; 372 372 } 373 373 374 - DP_INFO(cdev, "qed_probe completed successffuly\n"); 374 + DP_INFO(cdev, "qed_probe completed successfully\n"); 375 375 376 376 return cdev; 377 377
drivers/net/ethernet/qlogic/qed/qed_mcp.c (+27 -12)
··· 592 592 *o_mcp_resp = mb_params.mcp_resp; 593 593 *o_mcp_param = mb_params.mcp_param; 594 594 595 + /* nvm_info needs to be updated */ 596 + p_hwfn->nvm_info.valid = false; 597 + 595 598 return 0; 596 599 } 597 600 ··· 2558 2555 2559 2556 int qed_mcp_nvm_info_populate(struct qed_hwfn *p_hwfn) 2560 2557 { 2561 - struct qed_nvm_image_info *nvm_info = &p_hwfn->nvm_info; 2558 + struct qed_nvm_image_info nvm_info; 2562 2559 struct qed_ptt *p_ptt; 2563 2560 int rc; 2564 2561 u32 i; 2562 + 2563 + if (p_hwfn->nvm_info.valid) 2564 + return 0; 2565 2565 2566 2566 p_ptt = qed_ptt_acquire(p_hwfn); 2567 2567 if (!p_ptt) { ··· 2573 2567 } 2574 2568 2575 2569 /* Acquire from MFW the amount of available images */ 2576 - nvm_info->num_images = 0; 2570 + nvm_info.num_images = 0; 2577 2571 rc = qed_mcp_bist_nvm_get_num_images(p_hwfn, 2578 - p_ptt, &nvm_info->num_images); 2572 + p_ptt, &nvm_info.num_images); 2579 2573 if (rc == -EOPNOTSUPP) { 2580 2574 DP_INFO(p_hwfn, "DRV_MSG_CODE_BIST_TEST is not supported\n"); 2581 2575 goto out; 2582 - } else if (rc || !nvm_info->num_images) { 2576 + } else if (rc || !nvm_info.num_images) { 2583 2577 DP_ERR(p_hwfn, "Failed getting number of images\n"); 2584 2578 goto err0; 2585 2579 } 2586 2580 2587 - nvm_info->image_att = kmalloc_array(nvm_info->num_images, 2588 - sizeof(struct bist_nvm_image_att), 2589 - GFP_KERNEL); 2590 - if (!nvm_info->image_att) { 2581 + nvm_info.image_att = kmalloc_array(nvm_info.num_images, 2582 + sizeof(struct bist_nvm_image_att), 2583 + GFP_KERNEL); 2584 + if (!nvm_info.image_att) { 2591 2585 rc = -ENOMEM; 2592 2586 goto err0; 2593 2587 } 2594 2588 2595 2589 /* Iterate over images and get their attributes */ 2596 - for (i = 0; i < nvm_info->num_images; i++) { 2590 + for (i = 0; i < nvm_info.num_images; i++) { 2597 2591 rc = qed_mcp_bist_nvm_get_image_att(p_hwfn, p_ptt, 2598 - &nvm_info->image_att[i], i); 2592 + &nvm_info.image_att[i], i); 2599 2593 if (rc) { 2600 2594 DP_ERR(p_hwfn, 2601 2595 "Failed getting image 
index %d attributes\n", i); ··· 2603 2597 } 2604 2598 2605 2599 DP_VERBOSE(p_hwfn, QED_MSG_SP, "image index %d, size %x\n", i, 2606 - nvm_info->image_att[i].len); 2600 + nvm_info.image_att[i].len); 2607 2601 } 2608 2602 out: 2603 + /* Update hwfn's nvm_info */ 2604 + if (nvm_info.num_images) { 2605 + p_hwfn->nvm_info.num_images = nvm_info.num_images; 2606 + kfree(p_hwfn->nvm_info.image_att); 2607 + p_hwfn->nvm_info.image_att = nvm_info.image_att; 2608 + p_hwfn->nvm_info.valid = true; 2609 + } 2610 + 2609 2611 qed_ptt_release(p_hwfn, p_ptt); 2610 2612 return 0; 2611 2613 2612 2614 err1: 2613 - kfree(nvm_info->image_att); 2615 + kfree(nvm_info.image_att); 2614 2616 err0: 2615 2617 qed_ptt_release(p_hwfn, p_ptt); 2616 2618 return rc; ··· 2655 2641 return -EINVAL; 2656 2642 } 2657 2643 2644 + qed_mcp_nvm_info_populate(p_hwfn); 2658 2645 for (i = 0; i < p_hwfn->nvm_info.num_images; i++) 2659 2646 if (type == p_hwfn->nvm_info.image_att[i].image_type) 2660 2647 break;
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sysfs.c (+2)
··· 1128 1128 struct qlcnic_adapter *adapter = dev_get_drvdata(dev); 1129 1129 1130 1130 ret = kstrtoul(buf, 16, &data); 1131 + if (ret) 1132 + return ret; 1131 1133 1132 1134 switch (data) { 1133 1135 case QLC_83XX_FLASH_SECTOR_ERASE_CMD:
drivers/net/ethernet/qualcomm/qca_spi.c (+12 -9)
··· 658 658 return ret; 659 659 } 660 660 661 - netif_start_queue(qca->net_dev); 661 + /* SPI thread takes care of TX queue */ 662 662 663 663 return 0; 664 664 } ··· 760 760 qca->net_dev->stats.tx_errors++; 761 761 /* Trigger tx queue flush and QCA7000 reset */ 762 762 qca->sync = QCASPI_SYNC_UNKNOWN; 763 + 764 + if (qca->spi_thread) 765 + wake_up_process(qca->spi_thread); 763 766 } 764 767 765 768 static int ··· 881 878 882 879 if ((qcaspi_clkspeed < QCASPI_CLK_SPEED_MIN) || 883 880 (qcaspi_clkspeed > QCASPI_CLK_SPEED_MAX)) { 884 - dev_info(&spi->dev, "Invalid clkspeed: %d\n", 885 - qcaspi_clkspeed); 881 + dev_err(&spi->dev, "Invalid clkspeed: %d\n", 882 + qcaspi_clkspeed); 886 883 return -EINVAL; 887 884 } 888 885 889 886 if ((qcaspi_burst_len < QCASPI_BURST_LEN_MIN) || 890 887 (qcaspi_burst_len > QCASPI_BURST_LEN_MAX)) { 891 - dev_info(&spi->dev, "Invalid burst len: %d\n", 892 - qcaspi_burst_len); 888 + dev_err(&spi->dev, "Invalid burst len: %d\n", 889 + qcaspi_burst_len); 893 890 return -EINVAL; 894 891 } 895 892 896 893 if ((qcaspi_pluggable < QCASPI_PLUGGABLE_MIN) || 897 894 (qcaspi_pluggable > QCASPI_PLUGGABLE_MAX)) { 898 - dev_info(&spi->dev, "Invalid pluggable: %d\n", 899 - qcaspi_pluggable); 895 + dev_err(&spi->dev, "Invalid pluggable: %d\n", 896 + qcaspi_pluggable); 900 897 return -EINVAL; 901 898 } 902 899 ··· 958 955 } 959 956 960 957 if (register_netdev(qcaspi_devs)) { 961 - dev_info(&spi->dev, "Unable to register net device %s\n", 962 - qcaspi_devs->name); 958 + dev_err(&spi->dev, "Unable to register net device %s\n", 959 + qcaspi_devs->name); 963 960 free_netdev(qcaspi_devs); 964 961 return -EFAULT; 965 962 }
drivers/net/ethernet/realtek/r8169.c (+1)
··· 7789 7789 NETIF_F_HW_VLAN_CTAG_RX; 7790 7790 dev->vlan_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO | 7791 7791 NETIF_F_HIGHDMA; 7792 + dev->priv_flags |= IFF_LIVE_ADDR_CHANGE; 7792 7793 7793 7794 tp->cp_cmd |= RxChkSum | RxVlan; 7794 7795
drivers/net/ethernet/renesas/ravb_main.c (+17 -76)
··· 980 980 struct ravb_private *priv = netdev_priv(ndev); 981 981 struct phy_device *phydev = ndev->phydev; 982 982 bool new_state = false; 983 + unsigned long flags; 984 + 985 + spin_lock_irqsave(&priv->lock, flags); 986 + 987 + /* Disable TX and RX right over here, if E-MAC change is ignored */ 988 + if (priv->no_avb_link) 989 + ravb_rcv_snd_disable(ndev); 983 990 984 991 if (phydev->link) { 985 992 if (phydev->duplex != priv->duplex) { ··· 1004 997 ravb_modify(ndev, ECMR, ECMR_TXF, 0); 1005 998 new_state = true; 1006 999 priv->link = phydev->link; 1007 - if (priv->no_avb_link) 1008 - ravb_rcv_snd_enable(ndev); 1009 1000 } 1010 1001 } else if (priv->link) { 1011 1002 new_state = true; 1012 1003 priv->link = 0; 1013 1004 priv->speed = 0; 1014 1005 priv->duplex = -1; 1015 - if (priv->no_avb_link) 1016 - ravb_rcv_snd_disable(ndev); 1017 1006 } 1007 + 1008 + /* Enable TX and RX right over here, if E-MAC change is ignored */ 1009 + if (priv->no_avb_link && phydev->link) 1010 + ravb_rcv_snd_enable(ndev); 1011 + 1012 + mmiowb(); 1013 + spin_unlock_irqrestore(&priv->lock, flags); 1018 1014 1019 1015 if (new_state && netif_msg_link(priv)) 1020 1016 phy_print_status(phydev); ··· 1104 1094 phy_start(ndev->phydev); 1105 1095 1106 1096 return 0; 1107 - } 1108 - 1109 - static int ravb_get_link_ksettings(struct net_device *ndev, 1110 - struct ethtool_link_ksettings *cmd) 1111 - { 1112 - struct ravb_private *priv = netdev_priv(ndev); 1113 - unsigned long flags; 1114 - 1115 - if (!ndev->phydev) 1116 - return -ENODEV; 1117 - 1118 - spin_lock_irqsave(&priv->lock, flags); 1119 - phy_ethtool_ksettings_get(ndev->phydev, cmd); 1120 - spin_unlock_irqrestore(&priv->lock, flags); 1121 - 1122 - return 0; 1123 - } 1124 - 1125 - static int ravb_set_link_ksettings(struct net_device *ndev, 1126 - const struct ethtool_link_ksettings *cmd) 1127 - { 1128 - struct ravb_private *priv = netdev_priv(ndev); 1129 - unsigned long flags; 1130 - int error; 1131 - 1132 - if (!ndev->phydev) 1133 - return 
-ENODEV; 1134 - 1135 - spin_lock_irqsave(&priv->lock, flags); 1136 - 1137 - /* Disable TX and RX */ 1138 - ravb_rcv_snd_disable(ndev); 1139 - 1140 - error = phy_ethtool_ksettings_set(ndev->phydev, cmd); 1141 - if (error) 1142 - goto error_exit; 1143 - 1144 - if (cmd->base.duplex == DUPLEX_FULL) 1145 - priv->duplex = 1; 1146 - else 1147 - priv->duplex = 0; 1148 - 1149 - ravb_set_duplex(ndev); 1150 - 1151 - error_exit: 1152 - mdelay(1); 1153 - 1154 - /* Enable TX and RX */ 1155 - ravb_rcv_snd_enable(ndev); 1156 - 1157 - mmiowb(); 1158 - spin_unlock_irqrestore(&priv->lock, flags); 1159 - 1160 - return error; 1161 - } 1162 - 1163 - static int ravb_nway_reset(struct net_device *ndev) 1164 - { 1165 - struct ravb_private *priv = netdev_priv(ndev); 1166 - int error = -ENODEV; 1167 - unsigned long flags; 1168 - 1169 - if (ndev->phydev) { 1170 - spin_lock_irqsave(&priv->lock, flags); 1171 - error = phy_start_aneg(ndev->phydev); 1172 - spin_unlock_irqrestore(&priv->lock, flags); 1173 - } 1174 - 1175 - return error; 1176 1097 } 1177 1098 1178 1099 static u32 ravb_get_msglevel(struct net_device *ndev) ··· 1318 1377 } 1319 1378 1320 1379 static const struct ethtool_ops ravb_ethtool_ops = { 1321 - .nway_reset = ravb_nway_reset, 1380 + .nway_reset = phy_ethtool_nway_reset, 1322 1381 .get_msglevel = ravb_get_msglevel, 1323 1382 .set_msglevel = ravb_set_msglevel, 1324 1383 .get_link = ethtool_op_get_link, ··· 1328 1387 .get_ringparam = ravb_get_ringparam, 1329 1388 .set_ringparam = ravb_set_ringparam, 1330 1389 .get_ts_info = ravb_get_ts_info, 1331 - .get_link_ksettings = ravb_get_link_ksettings, 1332 - .set_link_ksettings = ravb_set_link_ksettings, 1390 + .get_link_ksettings = phy_ethtool_get_link_ksettings, 1391 + .set_link_ksettings = phy_ethtool_set_link_ksettings, 1333 1392 .get_wol = ravb_get_wol, 1334 1393 .set_wol = ravb_set_wol, 1335 1394 };
+17 -77
drivers/net/ethernet/renesas/sh_eth.c
··· 1927 1927 { 1928 1928 struct sh_eth_private *mdp = netdev_priv(ndev); 1929 1929 struct phy_device *phydev = ndev->phydev; 1930 + unsigned long flags; 1930 1931 int new_state = 0; 1932 + 1933 + spin_lock_irqsave(&mdp->lock, flags); 1934 + 1935 + /* Disable TX and RX right over here, if E-MAC change is ignored */ 1936 + if (mdp->cd->no_psr || mdp->no_ether_link) 1937 + sh_eth_rcv_snd_disable(ndev); 1931 1938 1932 1939 if (phydev->link) { 1933 1940 if (phydev->duplex != mdp->duplex) { ··· 1954 1947 sh_eth_modify(ndev, ECMR, ECMR_TXF, 0); 1955 1948 new_state = 1; 1956 1949 mdp->link = phydev->link; 1957 - if (mdp->cd->no_psr || mdp->no_ether_link) 1958 - sh_eth_rcv_snd_enable(ndev); 1959 1950 } 1960 1951 } else if (mdp->link) { 1961 1952 new_state = 1; 1962 1953 mdp->link = 0; 1963 1954 mdp->speed = 0; 1964 1955 mdp->duplex = -1; 1965 - if (mdp->cd->no_psr || mdp->no_ether_link) 1966 - sh_eth_rcv_snd_disable(ndev); 1967 1956 } 1957 + 1958 + /* Enable TX and RX right over here, if E-MAC change is ignored */ 1959 + if ((mdp->cd->no_psr || mdp->no_ether_link) && phydev->link) 1960 + sh_eth_rcv_snd_enable(ndev); 1961 + 1962 + mmiowb(); 1963 + spin_unlock_irqrestore(&mdp->lock, flags); 1968 1964 1969 1965 if (new_state && netif_msg_link(mdp)) 1970 1966 phy_print_status(phydev); ··· 2038 2028 phy_start(ndev->phydev); 2039 2029 2040 2030 return 0; 2041 - } 2042 - 2043 - static int sh_eth_get_link_ksettings(struct net_device *ndev, 2044 - struct ethtool_link_ksettings *cmd) 2045 - { 2046 - struct sh_eth_private *mdp = netdev_priv(ndev); 2047 - unsigned long flags; 2048 - 2049 - if (!ndev->phydev) 2050 - return -ENODEV; 2051 - 2052 - spin_lock_irqsave(&mdp->lock, flags); 2053 - phy_ethtool_ksettings_get(ndev->phydev, cmd); 2054 - spin_unlock_irqrestore(&mdp->lock, flags); 2055 - 2056 - return 0; 2057 - } 2058 - 2059 - static int sh_eth_set_link_ksettings(struct net_device *ndev, 2060 - const struct ethtool_link_ksettings *cmd) 2061 - { 2062 - struct sh_eth_private *mdp = 
netdev_priv(ndev); 2063 - unsigned long flags; 2064 - int ret; 2065 - 2066 - if (!ndev->phydev) 2067 - return -ENODEV; 2068 - 2069 - spin_lock_irqsave(&mdp->lock, flags); 2070 - 2071 - /* disable tx and rx */ 2072 - sh_eth_rcv_snd_disable(ndev); 2073 - 2074 - ret = phy_ethtool_ksettings_set(ndev->phydev, cmd); 2075 - if (ret) 2076 - goto error_exit; 2077 - 2078 - if (cmd->base.duplex == DUPLEX_FULL) 2079 - mdp->duplex = 1; 2080 - else 2081 - mdp->duplex = 0; 2082 - 2083 - if (mdp->cd->set_duplex) 2084 - mdp->cd->set_duplex(ndev); 2085 - 2086 - error_exit: 2087 - mdelay(1); 2088 - 2089 - /* enable tx and rx */ 2090 - sh_eth_rcv_snd_enable(ndev); 2091 - 2092 - spin_unlock_irqrestore(&mdp->lock, flags); 2093 - 2094 - return ret; 2095 2031 } 2096 2032 2097 2033 /* If it is ever necessary to increase SH_ETH_REG_DUMP_MAX_REGS, the ··· 2219 2263 pm_runtime_put_sync(&mdp->pdev->dev); 2220 2264 } 2221 2265 2222 - static int sh_eth_nway_reset(struct net_device *ndev) 2223 - { 2224 - struct sh_eth_private *mdp = netdev_priv(ndev); 2225 - unsigned long flags; 2226 - int ret; 2227 - 2228 - if (!ndev->phydev) 2229 - return -ENODEV; 2230 - 2231 - spin_lock_irqsave(&mdp->lock, flags); 2232 - ret = phy_start_aneg(ndev->phydev); 2233 - spin_unlock_irqrestore(&mdp->lock, flags); 2234 - 2235 - return ret; 2236 - } 2237 - 2238 2266 static u32 sh_eth_get_msglevel(struct net_device *ndev) 2239 2267 { 2240 2268 struct sh_eth_private *mdp = netdev_priv(ndev); ··· 2369 2429 static const struct ethtool_ops sh_eth_ethtool_ops = { 2370 2430 .get_regs_len = sh_eth_get_regs_len, 2371 2431 .get_regs = sh_eth_get_regs, 2372 - .nway_reset = sh_eth_nway_reset, 2432 + .nway_reset = phy_ethtool_nway_reset, 2373 2433 .get_msglevel = sh_eth_get_msglevel, 2374 2434 .set_msglevel = sh_eth_set_msglevel, 2375 2435 .get_link = ethtool_op_get_link, ··· 2378 2438 .get_sset_count = sh_eth_get_sset_count, 2379 2439 .get_ringparam = sh_eth_get_ringparam, 2380 2440 .set_ringparam = sh_eth_set_ringparam, 2381 - 
.get_link_ksettings = sh_eth_get_link_ksettings, 2382 - .set_link_ksettings = sh_eth_set_link_ksettings, 2441 + .get_link_ksettings = phy_ethtool_get_link_ksettings, 2442 + .set_link_ksettings = phy_ethtool_set_link_ksettings, 2383 2443 .get_wol = sh_eth_get_wol, 2384 2444 .set_wol = sh_eth_set_wol, 2385 2445 };
+21 -9
drivers/net/ethernet/sfc/ef10.c
··· 4288 4288 return -EPROTONOSUPPORT; 4289 4289 } 4290 4290 4291 - static s32 efx_ef10_filter_insert(struct efx_nic *efx, 4292 - struct efx_filter_spec *spec, 4293 - bool replace_equal) 4291 + static s32 efx_ef10_filter_insert_locked(struct efx_nic *efx, 4292 + struct efx_filter_spec *spec, 4293 + bool replace_equal) 4294 4294 { 4295 4295 DECLARE_BITMAP(mc_rem_map, EFX_EF10_FILTER_SEARCH_LIMIT); 4296 4296 struct efx_ef10_nic_data *nic_data = efx->nic_data; ··· 4307 4307 bool is_mc_recip; 4308 4308 s32 rc; 4309 4309 4310 - down_read(&efx->filter_sem); 4310 + WARN_ON(!rwsem_is_locked(&efx->filter_sem)); 4311 4311 table = efx->filter_state; 4312 4312 down_write(&table->lock); 4313 4313 ··· 4498 4498 if (rss_locked) 4499 4499 mutex_unlock(&efx->rss_lock); 4500 4500 up_write(&table->lock); 4501 - up_read(&efx->filter_sem); 4502 4501 return rc; 4502 + } 4503 + 4504 + static s32 efx_ef10_filter_insert(struct efx_nic *efx, 4505 + struct efx_filter_spec *spec, 4506 + bool replace_equal) 4507 + { 4508 + s32 ret; 4509 + 4510 + down_read(&efx->filter_sem); 4511 + ret = efx_ef10_filter_insert_locked(efx, spec, replace_equal); 4512 + up_read(&efx->filter_sem); 4513 + 4514 + return ret; 4503 4515 } 4504 4516 4505 4517 static void efx_ef10_filter_update_rx_scatter(struct efx_nic *efx) ··· 5297 5285 EFX_WARN_ON_PARANOID(ids[i] != EFX_EF10_FILTER_ID_INVALID); 5298 5286 efx_filter_init_rx(&spec, EFX_FILTER_PRI_AUTO, filter_flags, 0); 5299 5287 efx_filter_set_eth_local(&spec, vlan->vid, addr_list[i].addr); 5300 - rc = efx_ef10_filter_insert(efx, &spec, true); 5288 + rc = efx_ef10_filter_insert_locked(efx, &spec, true); 5301 5289 if (rc < 0) { 5302 5290 if (rollback) { 5303 5291 netif_info(efx, drv, efx->net_dev, ··· 5326 5314 efx_filter_init_rx(&spec, EFX_FILTER_PRI_AUTO, filter_flags, 0); 5327 5315 eth_broadcast_addr(baddr); 5328 5316 efx_filter_set_eth_local(&spec, vlan->vid, baddr); 5329 - rc = efx_ef10_filter_insert(efx, &spec, true); 5317 + rc = 
efx_ef10_filter_insert_locked(efx, &spec, true); 5330 5318 if (rc < 0) { 5331 5319 netif_warn(efx, drv, efx->net_dev, 5332 5320 "Broadcast filter insert failed rc=%d\n", rc); ··· 5382 5370 if (vlan->vid != EFX_FILTER_VID_UNSPEC) 5383 5371 efx_filter_set_eth_local(&spec, vlan->vid, NULL); 5384 5372 5385 - rc = efx_ef10_filter_insert(efx, &spec, true); 5373 + rc = efx_ef10_filter_insert_locked(efx, &spec, true); 5386 5374 if (rc < 0) { 5387 5375 const char *um = multicast ? "Multicast" : "Unicast"; 5388 5376 const char *encap_name = ""; ··· 5442 5430 filter_flags, 0); 5443 5431 eth_broadcast_addr(baddr); 5444 5432 efx_filter_set_eth_local(&spec, vlan->vid, baddr); 5445 - rc = efx_ef10_filter_insert(efx, &spec, true); 5433 + rc = efx_ef10_filter_insert_locked(efx, &spec, true); 5446 5434 if (rc < 0) { 5447 5435 netif_warn(efx, drv, efx->net_dev, 5448 5436 "Broadcast filter insert failed rc=%d\n",
+8 -9
drivers/net/ethernet/sfc/efx.c
··· 1871 1871 up_write(&efx->filter_sem); 1872 1872 } 1873 1873 1874 - static void efx_restore_filters(struct efx_nic *efx) 1875 - { 1876 - down_read(&efx->filter_sem); 1877 - efx->type->filter_table_restore(efx); 1878 - up_read(&efx->filter_sem); 1879 - } 1880 1874 1881 1875 /************************************************************************** 1882 1876 * ··· 2682 2688 efx_disable_interrupts(efx); 2683 2689 2684 2690 mutex_lock(&efx->mac_lock); 2691 + down_write(&efx->filter_sem); 2685 2692 mutex_lock(&efx->rss_lock); 2686 2693 if (efx->port_initialized && method != RESET_TYPE_INVISIBLE && 2687 2694 method != RESET_TYPE_DATAPATH) ··· 2740 2745 if (efx->type->rx_restore_rss_contexts) 2741 2746 efx->type->rx_restore_rss_contexts(efx); 2742 2747 mutex_unlock(&efx->rss_lock); 2743 - down_read(&efx->filter_sem); 2744 - efx_restore_filters(efx); 2745 - up_read(&efx->filter_sem); 2748 + efx->type->filter_table_restore(efx); 2749 + up_write(&efx->filter_sem); 2746 2750 if (efx->type->sriov_reset) 2747 2751 efx->type->sriov_reset(efx); 2748 2752 ··· 2758 2764 efx->port_initialized = false; 2759 2765 2760 2766 mutex_unlock(&efx->rss_lock); 2767 + up_write(&efx->filter_sem); 2761 2768 mutex_unlock(&efx->mac_lock); 2762 2769 2763 2770 return rc; ··· 3468 3473 3469 3474 efx_init_napi(efx); 3470 3475 3476 + down_write(&efx->filter_sem); 3471 3477 rc = efx->type->init(efx); 3478 + up_write(&efx->filter_sem); 3472 3479 if (rc) { 3473 3480 netif_err(efx, probe, efx->net_dev, 3474 3481 "failed to initialise NIC\n"); ··· 3762 3765 rc = efx->type->reset(efx, RESET_TYPE_ALL); 3763 3766 if (rc) 3764 3767 return rc; 3768 + down_write(&efx->filter_sem); 3765 3769 rc = efx->type->init(efx); 3770 + up_write(&efx->filter_sem); 3766 3771 if (rc) 3767 3772 return rc; 3768 3773 rc = efx_pm_thaw(dev);
+1 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
··· 37 37 * is done in the "stmmac files" 38 38 */ 39 39 40 - /* struct emac_variant - Descrive dwmac-sun8i hardware variant 40 + /* struct emac_variant - Describe dwmac-sun8i hardware variant 41 41 * @default_syscon_value: The default value of the EMAC register in syscon 42 42 * This value is used for disabling properly EMAC 43 43 * and used as a good starting value in case of the
-1
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 94 94 /** 95 95 * stmmac_axi_setup - parse DT parameters for programming the AXI register 96 96 * @pdev: platform device 97 - * @priv: driver private struct. 98 97 * Description: 99 98 * if required, from device-tree the AXI internal register can be tuned 100 99 * by using platform parameters.
+10 -7
drivers/net/hyperv/netvsc.c
··· 1274 1274 struct hv_device *device = netvsc_channel_to_device(channel); 1275 1275 struct net_device *ndev = hv_get_drvdata(device); 1276 1276 int work_done = 0; 1277 + int ret; 1277 1278 1278 1279 /* If starting a new interval */ 1279 1280 if (!nvchan->desc) ··· 1286 1285 nvchan->desc = hv_pkt_iter_next(channel, nvchan->desc); 1287 1286 } 1288 1287 1289 - /* If send of pending receive completions suceeded 1290 - * and did not exhaust NAPI budget this time 1291 - * and not doing busy poll 1288 + /* Send any pending receive completions */ 1289 + ret = send_recv_completions(ndev, net_device, nvchan); 1290 + 1291 + /* If it did not exhaust NAPI budget this time 1292 + * and not doing busy poll 1292 1293 * then re-enable host interrupts 1293 - * and reschedule if ring is not empty. 1294 + * and reschedule if ring is not empty 1295 + * or sending receive completion failed. 1294 1296 */ 1295 - if (send_recv_completions(ndev, net_device, nvchan) == 0 && 1296 - work_done < budget && 1297 + if (work_done < budget && 1297 1298 napi_complete_done(napi, work_done) && 1298 - hv_end_read(&channel->inbound) && 1299 + (ret || hv_end_read(&channel->inbound)) && 1299 1300 napi_schedule_prep(napi)) { 1300 1301 hv_begin_read(&channel->inbound); 1301 1302 __napi_schedule(napi);
+1
drivers/net/hyperv/rndis_filter.c
··· 1338 1338 /* setting up multiple channels failed */ 1339 1339 net_device->max_chn = 1; 1340 1340 net_device->num_chn = 1; 1341 + return 0; 1341 1342 1342 1343 err_dev_remv: 1343 1344 rndis_filter_device_remove(dev, net_device);
+32 -2
drivers/net/ieee802154/adf7242.c
··· 275 275 struct spi_message stat_msg; 276 276 struct spi_transfer stat_xfer; 277 277 struct dentry *debugfs_root; 278 + struct delayed_work work; 279 + struct workqueue_struct *wqueue; 278 280 unsigned long flags; 279 281 int tx_stat; 280 282 bool promiscuous; ··· 577 575 /* Wait until the ACK is sent */ 578 576 adf7242_wait_status(lp, RC_STATUS_PHY_RDY, RC_STATUS_MASK, __LINE__); 579 577 adf7242_clear_irqstat(lp); 578 + mod_delayed_work(lp->wqueue, &lp->work, msecs_to_jiffies(400)); 580 579 581 580 return adf7242_cmd(lp, CMD_RC_RX); 581 + } 582 + 583 + static void adf7242_rx_cal_work(struct work_struct *work) 584 + { 585 + struct adf7242_local *lp = 586 + container_of(work, struct adf7242_local, work.work); 587 + 588 + /* Reissuing RC_RX every 400ms - to adjust for offset 589 + * drift in receiver (datasheet page 61, OCL section) 590 + */ 591 + 592 + if (!test_bit(FLAG_XMIT, &lp->flags)) { 593 + adf7242_cmd(lp, CMD_RC_PHY_RDY); 594 + adf7242_cmd_rx(lp); 595 + } 582 596 } 583 597 584 598 static int adf7242_set_txpower(struct ieee802154_hw *hw, int mbm) ··· 704 686 enable_irq(lp->spi->irq); 705 687 set_bit(FLAG_START, &lp->flags); 706 688 707 - return adf7242_cmd(lp, CMD_RC_RX); 689 + return adf7242_cmd_rx(lp); 708 690 } 709 691 710 692 static void adf7242_stop(struct ieee802154_hw *hw) ··· 712 694 struct adf7242_local *lp = hw->priv; 713 695 714 696 disable_irq(lp->spi->irq); 697 + cancel_delayed_work_sync(&lp->work); 715 698 adf7242_cmd(lp, CMD_RC_IDLE); 716 699 clear_bit(FLAG_START, &lp->flags); 717 700 adf7242_clear_irqstat(lp); ··· 738 719 adf7242_write_reg(lp, REG_CH_FREQ1, freq >> 8); 739 720 adf7242_write_reg(lp, REG_CH_FREQ2, freq >> 16); 740 721 741 - return adf7242_cmd(lp, CMD_RC_RX); 722 + if (test_bit(FLAG_START, &lp->flags)) 723 + return adf7242_cmd_rx(lp); 724 + else 725 + return adf7242_cmd(lp, CMD_RC_PHY_RDY); 742 726 } 743 727 744 728 static int adf7242_set_hw_addr_filt(struct ieee802154_hw *hw, ··· 836 814 /* ensure existing instances of the 
IRQ handler have completed */ 837 815 disable_irq(lp->spi->irq); 838 816 set_bit(FLAG_XMIT, &lp->flags); 817 + cancel_delayed_work_sync(&lp->work); 839 818 reinit_completion(&lp->tx_complete); 840 819 adf7242_cmd(lp, CMD_RC_PHY_RDY); 841 820 adf7242_clear_irqstat(lp); ··· 975 952 unsigned int xmit; 976 953 u8 irq1; 977 954 955 + mod_delayed_work(lp->wqueue, &lp->work, msecs_to_jiffies(400)); 978 956 adf7242_read_reg(lp, REG_IRQ1_SRC1, &irq1); 979 957 980 958 if (!(irq1 & (IRQ_RX_PKT_RCVD | IRQ_CSMA_CA))) ··· 1265 1241 spi_message_add_tail(&lp->stat_xfer, &lp->stat_msg); 1266 1242 1267 1243 spi_set_drvdata(spi, lp); 1244 + INIT_DELAYED_WORK(&lp->work, adf7242_rx_cal_work); 1245 + lp->wqueue = alloc_ordered_workqueue(dev_name(&spi->dev), 1246 + WQ_MEM_RECLAIM); 1268 1247 1269 1248 ret = adf7242_hw_init(lp); 1270 1249 if (ret) ··· 1310 1283 1311 1284 if (!IS_ERR_OR_NULL(lp->debugfs_root)) 1312 1285 debugfs_remove_recursive(lp->debugfs_root); 1286 + 1287 + cancel_delayed_work_sync(&lp->work); 1288 + destroy_workqueue(lp->wqueue); 1313 1289 1314 1290 ieee802154_unregister_hw(lp->hw); 1315 1291 mutex_destroy(&lp->bmux);
+5 -10
drivers/net/ieee802154/at86rf230.c
··· 940 940 static int 941 941 at86rf230_ed(struct ieee802154_hw *hw, u8 *level) 942 942 { 943 - BUG_ON(!level); 943 + WARN_ON(!level); 944 944 *level = 0xbe; 945 945 return 0; 946 946 } ··· 1121 1121 if (changed & IEEE802154_AFILT_SADDR_CHANGED) { 1122 1122 u16 addr = le16_to_cpu(filt->short_addr); 1123 1123 1124 - dev_vdbg(&lp->spi->dev, 1125 - "at86rf230_set_hw_addr_filt called for saddr\n"); 1124 + dev_vdbg(&lp->spi->dev, "%s called for saddr\n", __func__); 1126 1125 __at86rf230_write(lp, RG_SHORT_ADDR_0, addr); 1127 1126 __at86rf230_write(lp, RG_SHORT_ADDR_1, addr >> 8); 1128 1127 } ··· 1129 1130 if (changed & IEEE802154_AFILT_PANID_CHANGED) { 1130 1131 u16 pan = le16_to_cpu(filt->pan_id); 1131 1132 1132 - dev_vdbg(&lp->spi->dev, 1133 - "at86rf230_set_hw_addr_filt called for pan id\n"); 1133 + dev_vdbg(&lp->spi->dev, "%s called for pan id\n", __func__); 1134 1134 __at86rf230_write(lp, RG_PAN_ID_0, pan); 1135 1135 __at86rf230_write(lp, RG_PAN_ID_1, pan >> 8); 1136 1136 } ··· 1138 1140 u8 i, addr[8]; 1139 1141 1140 1142 memcpy(addr, &filt->ieee_addr, 8); 1141 - dev_vdbg(&lp->spi->dev, 1142 - "at86rf230_set_hw_addr_filt called for IEEE addr\n"); 1143 + dev_vdbg(&lp->spi->dev, "%s called for IEEE addr\n", __func__); 1143 1144 for (i = 0; i < 8; i++) 1144 1145 __at86rf230_write(lp, RG_IEEE_ADDR_0 + i, addr[i]); 1145 1146 } 1146 1147 1147 1148 if (changed & IEEE802154_AFILT_PANC_CHANGED) { 1148 - dev_vdbg(&lp->spi->dev, 1149 - "at86rf230_set_hw_addr_filt called for panc change\n"); 1149 + dev_vdbg(&lp->spi->dev, "%s called for panc change\n", __func__); 1150 1150 if (filt->pan_coord) 1151 1151 at86rf230_write_subreg(lp, SR_AACK_I_AM_COORD, 1); 1152 1152 else ··· 1247 1251 1248 1252 return at86rf230_write_subreg(lp, SR_CCA_MODE, val); 1249 1253 } 1250 - 1251 1254 1252 1255 static int 1253 1256 at86rf230_set_cca_ed_level(struct ieee802154_hw *hw, s32 mbm)
+1 -1
drivers/net/ieee802154/fakelb.c
··· 49 49 50 50 static int fakelb_hw_ed(struct ieee802154_hw *hw, u8 *level) 51 51 { 52 - BUG_ON(!level); 52 + WARN_ON(!level); 53 53 *level = 0xbe; 54 54 55 55 return 0;
+2 -1
drivers/net/ieee802154/mcr20a.c
··· 15 15 */ 16 16 #include <linux/kernel.h> 17 17 #include <linux/module.h> 18 - #include <linux/gpio.h> 18 + #include <linux/gpio/consumer.h> 19 19 #include <linux/spi/spi.h> 20 20 #include <linux/workqueue.h> 21 21 #include <linux/interrupt.h> 22 + #include <linux/irq.h> 22 23 #include <linux/skbuff.h> 23 24 #include <linux/of_gpio.h> 24 25 #include <linux/regmap.h>
+36 -18
drivers/net/phy/marvell.c
··· 130 130 #define MII_88E1318S_PHY_WOL_CTRL_CLEAR_WOL_STATUS BIT(12) 131 131 #define MII_88E1318S_PHY_WOL_CTRL_MAGIC_PACKET_MATCH_ENABLE BIT(14) 132 132 133 - #define MII_88E1121_PHY_LED_CTRL 16 133 + #define MII_PHY_LED_CTRL 16 134 134 #define MII_88E1121_PHY_LED_DEF 0x0030 135 + #define MII_88E1510_PHY_LED_DEF 0x1177 135 136 136 137 #define MII_M1011_PHY_STATUS 0x11 137 138 #define MII_M1011_PHY_STATUS_1000 0x8000 ··· 633 632 return err; 634 633 } 635 634 635 + static void marvell_config_led(struct phy_device *phydev) 636 + { 637 + u16 def_config; 638 + int err; 639 + 640 + switch (MARVELL_PHY_FAMILY_ID(phydev->phy_id)) { 641 + /* Default PHY LED config: LED[0] .. Link, LED[1] .. Activity */ 642 + case MARVELL_PHY_FAMILY_ID(MARVELL_PHY_ID_88E1121R): 643 + case MARVELL_PHY_FAMILY_ID(MARVELL_PHY_ID_88E1318S): 644 + def_config = MII_88E1121_PHY_LED_DEF; 645 + break; 646 + /* Default PHY LED config: 647 + * LED[0] .. 1000Mbps Link 648 + * LED[1] .. 100Mbps Link 649 + * LED[2] .. Blink, Activity 650 + */ 651 + case MARVELL_PHY_FAMILY_ID(MARVELL_PHY_ID_88E1510): 652 + def_config = MII_88E1510_PHY_LED_DEF; 653 + break; 654 + default: 655 + return; 656 + } 657 + 658 + err = phy_write_paged(phydev, MII_MARVELL_LED_PAGE, MII_PHY_LED_CTRL, 659 + def_config); 660 + if (err < 0) 661 + pr_warn("Failed to configure marvell phy LED.\n"); 662 + } 663 + 636 664 static int marvell_config_init(struct phy_device *phydev) 637 665 { 666 + /* Set default LED */ 667 + marvell_config_led(phydev); 668 + 638 669 /* Set registers from marvell,reg-init DT property */ 639 670 return marvell_of_reg_init(phydev); 640 671 } ··· 846 813 return genphy_soft_reset(phydev); 847 814 } 848 815 849 - static int m88e1121_config_init(struct phy_device *phydev) 850 - { 851 - int err; 852 - 853 - /* Default PHY LED config: LED[0] .. Link, LED[1] .. 
Activity */ 854 - err = phy_write_paged(phydev, MII_MARVELL_LED_PAGE, 855 - MII_88E1121_PHY_LED_CTRL, 856 - MII_88E1121_PHY_LED_DEF); 857 - if (err < 0) 858 - return err; 859 - 860 - /* Set marvell,reg-init configuration from device tree */ 861 - return marvell_config_init(phydev); 862 - } 863 - 864 816 static int m88e1318_config_init(struct phy_device *phydev) 865 817 { 866 818 if (phy_interrupt_is_valid(phydev)) { ··· 859 841 return err; 860 842 } 861 843 862 - return m88e1121_config_init(phydev); 844 + return marvell_config_init(phydev); 863 845 } 864 846 865 847 static int m88e1510_config_init(struct phy_device *phydev) ··· 2105 2087 .features = PHY_GBIT_FEATURES, 2106 2088 .flags = PHY_HAS_INTERRUPT, 2107 2089 .probe = &m88e1121_probe, 2108 - .config_init = &m88e1121_config_init, 2090 + .config_init = &marvell_config_init, 2109 2091 .config_aneg = &m88e1121_config_aneg, 2110 2092 .read_status = &marvell_read_status, 2111 2093 .ack_interrupt = &marvell_ack_interrupt,
+2 -5
drivers/net/phy/phy_device.c
··· 1724 1724 1725 1725 static int __set_phy_supported(struct phy_device *phydev, u32 max_speed) 1726 1726 { 1727 - /* The default values for phydev->supported are provided by the PHY 1728 - * driver "features" member, we want to reset to sane defaults first 1729 - * before supporting higher speeds. 1730 - */ 1731 - phydev->supported &= PHY_DEFAULT_FEATURES; 1727 + phydev->supported &= ~(PHY_1000BT_FEATURES | PHY_100BT_FEATURES | 1728 + PHY_10BT_FEATURES); 1732 1729 1733 1730 switch (max_speed) { 1734 1731 default:
+26 -9
drivers/net/phy/sfp-bus.c
··· 349 349 } 350 350 if (bus->started) 351 351 bus->socket_ops->start(bus->sfp); 352 - bus->netdev->sfp_bus = bus; 353 352 bus->registered = true; 354 353 return 0; 355 354 } ··· 363 364 if (bus->phydev && ops && ops->disconnect_phy) 364 365 ops->disconnect_phy(bus->upstream); 365 366 } 366 - bus->netdev->sfp_bus = NULL; 367 367 bus->registered = false; 368 368 } 369 369 ··· 434 436 } 435 437 EXPORT_SYMBOL_GPL(sfp_upstream_stop); 436 438 439 + static void sfp_upstream_clear(struct sfp_bus *bus) 440 + { 441 + bus->upstream_ops = NULL; 442 + bus->upstream = NULL; 443 + bus->netdev->sfp_bus = NULL; 444 + bus->netdev = NULL; 445 + } 446 + 437 447 /** 438 448 * sfp_register_upstream() - Register the neighbouring device 439 449 * @fwnode: firmware node for the SFP bus ··· 467 461 bus->upstream_ops = ops; 468 462 bus->upstream = upstream; 469 463 bus->netdev = ndev; 464 + ndev->sfp_bus = bus; 470 465 471 - if (bus->sfp) 466 + if (bus->sfp) { 472 467 ret = sfp_register_bus(bus); 468 + if (ret) 469 + sfp_upstream_clear(bus); 470 + } 473 471 rtnl_unlock(); 474 472 } 475 473 ··· 498 488 rtnl_lock(); 499 489 if (bus->sfp) 500 490 sfp_unregister_bus(bus); 501 - bus->upstream = NULL; 502 - bus->netdev = NULL; 491 + sfp_upstream_clear(bus); 503 492 rtnl_unlock(); 504 493 505 494 sfp_bus_put(bus); ··· 570 561 } 571 562 EXPORT_SYMBOL_GPL(sfp_module_remove); 572 563 564 + static void sfp_socket_clear(struct sfp_bus *bus) 565 + { 566 + bus->sfp_dev = NULL; 567 + bus->sfp = NULL; 568 + bus->socket_ops = NULL; 569 + } 570 + 573 571 struct sfp_bus *sfp_register_socket(struct device *dev, struct sfp *sfp, 574 572 const struct sfp_socket_ops *ops) 575 573 { ··· 589 573 bus->sfp = sfp; 590 574 bus->socket_ops = ops; 591 575 592 - if (bus->netdev) 576 + if (bus->netdev) { 593 577 ret = sfp_register_bus(bus); 578 + if (ret) 579 + sfp_socket_clear(bus); 580 + } 594 581 rtnl_unlock(); 595 582 } 596 583 ··· 611 592 rtnl_lock(); 612 593 if (bus->netdev) 613 594 sfp_unregister_bus(bus); 614 - 
bus->sfp_dev = NULL; 615 - bus->sfp = NULL; 616 - bus->socket_ops = NULL; 595 + sfp_socket_clear(bus); 617 596 rtnl_unlock(); 618 597 619 598 sfp_bus_put(bus);
+1 -1
drivers/net/tun.c
··· 1688 1688 case XDP_TX: 1689 1689 get_page(alloc_frag->page); 1690 1690 alloc_frag->offset += buflen; 1691 - if (tun_xdp_tx(tun->dev, &xdp)) 1691 + if (tun_xdp_tx(tun->dev, &xdp) < 0) 1692 1692 goto err_redirect; 1693 1693 rcu_read_unlock(); 1694 1694 local_bh_enable();
+3 -1
drivers/net/usb/asix_devices.c
··· 642 642 priv->presvd_phy_advertise); 643 643 644 644 /* Restore BMCR */ 645 + if (priv->presvd_phy_bmcr & BMCR_ANENABLE) 646 + priv->presvd_phy_bmcr |= BMCR_ANRESTART; 647 + 645 648 asix_mdio_write_nopm(dev->net, dev->mii.phy_id, MII_BMCR, 646 649 priv->presvd_phy_bmcr); 647 650 648 - mii_nway_restart(&dev->mii); 649 651 priv->presvd_phy_advertise = 0; 650 652 priv->presvd_phy_bmcr = 0; 651 653 }
+4 -1
drivers/net/usb/lan78xx.c
··· 3344 3344 pkt_cnt = 0; 3345 3345 count = 0; 3346 3346 length = 0; 3347 + spin_lock_irqsave(&tqp->lock, flags); 3347 3348 for (skb = tqp->next; pkt_cnt < tqp->qlen; skb = skb->next) { 3348 3349 if (skb_is_gso(skb)) { 3349 3350 if (pkt_cnt) { ··· 3353 3352 } 3354 3353 count = 1; 3355 3354 length = skb->len - TX_OVERHEAD; 3356 - skb2 = skb_dequeue(tqp); 3355 + __skb_unlink(skb, tqp); 3356 + spin_unlock_irqrestore(&tqp->lock, flags); 3357 3357 goto gso_skb; 3358 3358 } 3359 3359 ··· 3363 3361 skb_totallen = skb->len + roundup(skb_totallen, sizeof(u32)); 3364 3362 pkt_cnt++; 3365 3363 } 3364 + spin_unlock_irqrestore(&tqp->lock, flags); 3366 3365 3367 3366 /* copy to a single skb */ 3368 3367 skb = alloc_skb(skb_totallen, GFP_ATOMIC);
+1
drivers/net/usb/qmi_wwan.c
··· 1253 1253 {QMI_QUIRK_SET_DTR(0x1e0e, 0x9001, 5)}, /* SIMCom 7100E, 7230E, 7600E ++ */ 1254 1254 {QMI_QUIRK_SET_DTR(0x2c7c, 0x0125, 4)}, /* Quectel EC25, EC20 R2.0 Mini PCIe */ 1255 1255 {QMI_QUIRK_SET_DTR(0x2c7c, 0x0121, 4)}, /* Quectel EC21 Mini PCIe */ 1256 + {QMI_QUIRK_SET_DTR(0x2c7c, 0x0191, 4)}, /* Quectel EG91 */ 1256 1257 {QMI_FIXED_INTF(0x2c7c, 0x0296, 4)}, /* Quectel BG96 */ 1257 1258 {QMI_QUIRK_SET_DTR(0x2c7c, 0x0306, 4)}, /* Quectel EP06 Mini PCIe */ 1258 1259
+1 -1
drivers/net/usb/rtl8150.c
··· 681 681 (netdev->flags & IFF_ALLMULTI)) { 682 682 rx_creg &= 0xfffe; 683 683 rx_creg |= 0x0002; 684 - dev_info(&netdev->dev, "%s: allmulti set\n", netdev->name); 684 + dev_dbg(&netdev->dev, "%s: allmulti set\n", netdev->name); 685 685 } else { 686 686 /* ~RX_MULTICAST, ~RX_PROMISCUOUS */ 687 687 rx_creg &= 0x00fc;
+62
drivers/net/usb/smsc75xx.c
··· 82 82 module_param(turbo_mode, bool, 0644); 83 83 MODULE_PARM_DESC(turbo_mode, "Enable multiple frames per Rx transaction"); 84 84 85 + static int smsc75xx_link_ok_nopm(struct usbnet *dev); 86 + static int smsc75xx_phy_gig_workaround(struct usbnet *dev); 87 + 85 88 static int __must_check __smsc75xx_read_reg(struct usbnet *dev, u32 index, 86 89 u32 *data, int in_pm) 87 90 { ··· 855 852 return -EIO; 856 853 } 857 854 855 + /* phy workaround for gig link */ 856 + smsc75xx_phy_gig_workaround(dev); 857 + 858 858 smsc75xx_mdio_write(dev->net, dev->mii.phy_id, MII_ADVERTISE, 859 859 ADVERTISE_ALL | ADVERTISE_CSMA | ADVERTISE_PAUSE_CAP | 860 860 ADVERTISE_PAUSE_ASYM); ··· 991 985 992 986 netdev_warn(dev->net, "timeout waiting for device ready\n"); 993 987 return -EIO; 988 + } 989 + 990 + static int smsc75xx_phy_gig_workaround(struct usbnet *dev) 991 + { 992 + struct mii_if_info *mii = &dev->mii; 993 + int ret = 0, timeout = 0; 994 + u32 buf, link_up = 0; 995 + 996 + /* Set the phy in Gig loopback */ 997 + smsc75xx_mdio_write(dev->net, mii->phy_id, MII_BMCR, 0x4040); 998 + 999 + /* Wait for the link up */ 1000 + do { 1001 + link_up = smsc75xx_link_ok_nopm(dev); 1002 + usleep_range(10000, 20000); 1003 + timeout++; 1004 + } while ((!link_up) && (timeout < 1000)); 1005 + 1006 + if (timeout >= 1000) { 1007 + netdev_warn(dev->net, "Timeout waiting for PHY link up\n"); 1008 + return -EIO; 1009 + } 1010 + 1011 + /* phy reset */ 1012 + ret = smsc75xx_read_reg(dev, PMT_CTL, &buf); 1013 + if (ret < 0) { 1014 + netdev_warn(dev->net, "Failed to read PMT_CTL: %d\n", ret); 1015 + return ret; 1016 + } 1017 + 1018 + buf |= PMT_CTL_PHY_RST; 1019 + 1020 + ret = smsc75xx_write_reg(dev, PMT_CTL, buf); 1021 + if (ret < 0) { 1022 + netdev_warn(dev->net, "Failed to write PMT_CTL: %d\n", ret); 1023 + return ret; 1024 + } 1025 + 1026 + timeout = 0; 1027 + do { 1028 + usleep_range(10000, 20000); 1029 + ret = smsc75xx_read_reg(dev, PMT_CTL, &buf); 1030 + if (ret < 0) { 1031 + 
netdev_warn(dev->net, "Failed to read PMT_CTL: %d\n", 1032 + ret); 1033 + return ret; 1034 + } 1035 + timeout++; 1036 + } while ((buf & PMT_CTL_PHY_RST) && (timeout < 100)); 1037 + 1038 + if (timeout >= 100) { 1039 + netdev_warn(dev->net, "timeout waiting for PHY Reset\n"); 1040 + return -EIO; 1041 + } 1042 + 1043 + return 0; 994 1044 } 995 1045 996 1046 static int smsc75xx_reset(struct usbnet *dev)
+14 -2
drivers/net/wireless/ath/ath10k/mac.c
··· 6058 6058 ath10k_mac_max_vht_nss(vht_mcs_mask))); 6059 6059 6060 6060 if (changed & IEEE80211_RC_BW_CHANGED) { 6061 - ath10k_dbg(ar, ATH10K_DBG_MAC, "mac update sta %pM peer bw %d\n", 6062 - sta->addr, bw); 6061 + enum wmi_phy_mode mode; 6062 + 6063 + mode = chan_to_phymode(&def); 6064 + ath10k_dbg(ar, ATH10K_DBG_MAC, "mac update sta %pM peer bw %d phymode %d\n", 6065 + sta->addr, bw, mode); 6066 + 6067 + err = ath10k_wmi_peer_set_param(ar, arvif->vdev_id, sta->addr, 6068 + WMI_PEER_PHYMODE, mode); 6069 + if (err) { 6070 + ath10k_warn(ar, "failed to update STA %pM peer phymode %d: %d\n", 6071 + sta->addr, mode, err); 6072 + goto exit; 6073 + } 6063 6074 6064 6075 err = ath10k_wmi_peer_set_param(ar, arvif->vdev_id, sta->addr, 6065 6076 WMI_PEER_CHAN_WIDTH, bw); ··· 6111 6100 sta->addr); 6112 6101 } 6113 6102 6103 + exit: 6114 6104 mutex_unlock(&ar->conf_mutex); 6115 6105 } 6116 6106
+1
drivers/net/wireless/ath/ath10k/wmi.h
··· 6144 6144 WMI_PEER_NSS = 0x5, 6145 6145 WMI_PEER_USE_4ADDR = 0x6, 6146 6146 WMI_PEER_DEBUG = 0xa, 6147 + WMI_PEER_PHYMODE = 0xd, 6147 6148 WMI_PEER_DUMMY_VAR = 0xff, /* dummy parameter for STA PS workaround */ 6148 6149 }; 6149 6150
+1 -1
drivers/net/wireless/ath/wcn36xx/testmode.c
··· 1 - /* 1 + /* 2 2 * Copyright (c) 2018, The Linux Foundation. All rights reserved. 3 3 * 4 4 * Permission to use, copy, modify, and/or distribute this software for any
+7
drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
··· 4296 4296 brcmf_dbg(TRACE, "Enter\n"); 4297 4297 4298 4298 if (bus) { 4299 + /* Stop watchdog task */ 4300 + if (bus->watchdog_tsk) { 4301 + send_sig(SIGTERM, bus->watchdog_tsk, 1); 4302 + kthread_stop(bus->watchdog_tsk); 4303 + bus->watchdog_tsk = NULL; 4304 + } 4305 + 4299 4306 /* De-register interrupt handler */ 4300 4307 brcmf_sdiod_intr_unregister(bus->sdiodev); 4301 4308
+2 -5
drivers/net/wireless/marvell/mwifiex/usb.c
··· 644 644 MWIFIEX_FUNC_SHUTDOWN); 645 645 } 646 646 647 - if (adapter->workqueue) 648 - flush_workqueue(adapter->workqueue); 649 - 650 - mwifiex_usb_free(card); 651 - 652 647 mwifiex_dbg(adapter, FATAL, 653 648 "%s: removing card\n", __func__); 654 649 mwifiex_remove_card(adapter); ··· 1350 1355 static void mwifiex_unregister_dev(struct mwifiex_adapter *adapter) 1351 1356 { 1352 1357 struct usb_card_rec *card = (struct usb_card_rec *)adapter->card; 1358 + 1359 + mwifiex_usb_free(card); 1353 1360 1354 1361 mwifiex_usb_cleanup_tx_aggr(adapter); 1355 1362
+4 -2
drivers/net/wireless/mediatek/mt7601u/phy.c
··· 986 986 */ 987 987 spin_lock_bh(&dev->con_mon_lock); 988 988 avg_rssi = ewma_rssi_read(&dev->avg_rssi); 989 - WARN_ON_ONCE(avg_rssi == 0); 989 + spin_unlock_bh(&dev->con_mon_lock); 990 + if (avg_rssi == 0) 991 + return; 992 + 990 993 avg_rssi = -avg_rssi; 991 994 if (avg_rssi <= -70) 992 995 val -= 0x20; 993 996 else if (avg_rssi <= -60) 994 997 val -= 0x10; 995 - spin_unlock_bh(&dev->con_mon_lock); 996 998 997 999 if (val != mt7601u_bbp_rr(dev, 66)) 998 1000 mt7601u_bbp_wr(dev, 66, val);
+1 -2
drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
··· 654 654 vif = qtnf_mac_get_base_vif(mac); 655 655 if (!vif) { 656 656 pr_err("MAC%u: primary VIF is not configured\n", mac->macid); 657 - ret = -EFAULT; 658 - goto out; 657 + return -EFAULT; 659 658 } 660 659 661 660 if (vif->wdev.iftype != NL80211_IFTYPE_STATION) {
+10 -7
drivers/net/wireless/realtek/rtlwifi/base.c
··· 484 484 485 485 } 486 486 487 - void rtl_deinit_deferred_work(struct ieee80211_hw *hw) 487 + void rtl_deinit_deferred_work(struct ieee80211_hw *hw, bool ips_wq) 488 488 { 489 489 struct rtl_priv *rtlpriv = rtl_priv(hw); 490 490 491 491 del_timer_sync(&rtlpriv->works.watchdog_timer); 492 492 493 - cancel_delayed_work(&rtlpriv->works.watchdog_wq); 494 - cancel_delayed_work(&rtlpriv->works.ips_nic_off_wq); 495 - cancel_delayed_work(&rtlpriv->works.ps_work); 496 - cancel_delayed_work(&rtlpriv->works.ps_rfon_wq); 497 - cancel_delayed_work(&rtlpriv->works.fwevt_wq); 498 - cancel_delayed_work(&rtlpriv->works.c2hcmd_wq); 493 + cancel_delayed_work_sync(&rtlpriv->works.watchdog_wq); 494 + if (ips_wq) 495 + cancel_delayed_work(&rtlpriv->works.ips_nic_off_wq); 496 + else 497 + cancel_delayed_work_sync(&rtlpriv->works.ips_nic_off_wq); 498 + cancel_delayed_work_sync(&rtlpriv->works.ps_work); 499 + cancel_delayed_work_sync(&rtlpriv->works.ps_rfon_wq); 500 + cancel_delayed_work_sync(&rtlpriv->works.fwevt_wq); 501 + cancel_delayed_work_sync(&rtlpriv->works.c2hcmd_wq); 499 502 } 500 503 EXPORT_SYMBOL_GPL(rtl_deinit_deferred_work); 501 504
+1 -1
drivers/net/wireless/realtek/rtlwifi/base.h
···
 void rtl_deinit_rfkill(struct ieee80211_hw *hw);
 
 void rtl_watch_dog_timer_callback(struct timer_list *t);
-void rtl_deinit_deferred_work(struct ieee80211_hw *hw);
+void rtl_deinit_deferred_work(struct ieee80211_hw *hw, bool ips_wq);
 
 bool rtl_action_proc(struct ieee80211_hw *hw, struct sk_buff *skb, u8 is_tx);
 int rtlwifi_rate_mapping(struct ieee80211_hw *hw, bool isht,
+1 -2
drivers/net/wireless/realtek/rtlwifi/core.c
···
 						  firmware->size);
 		rtlpriv->rtlhal.wowlan_fwsize = firmware->size;
 	}
-	rtlpriv->rtlhal.fwsize = firmware->size;
 	release_firmware(firmware);
 }
 
···
 		/* reset sec info */
 		rtl_cam_reset_sec_info(hw);
 
-		rtl_deinit_deferred_work(hw);
+		rtl_deinit_deferred_work(hw, false);
 	}
 	rtlpriv->intf_ops->adapter_stop(hw);
 
+1 -1
drivers/net/wireless/realtek/rtlwifi/pci.c
···
 		ieee80211_unregister_hw(hw);
 		rtlmac->mac80211_registered = 0;
 	} else {
-		rtl_deinit_deferred_work(hw);
+		rtl_deinit_deferred_work(hw, false);
 		rtlpriv->intf_ops->adapter_stop(hw);
 	}
 	rtlpriv->cfg->ops->disable_interrupt(hw);
+2 -2
drivers/net/wireless/realtek/rtlwifi/ps.c
···
 	struct rtl_priv *rtlpriv = rtl_priv(hw);
 
 	/*<1> Stop all timer */
-	rtl_deinit_deferred_work(hw);
+	rtl_deinit_deferred_work(hw, true);
 
 	/*<2> Disable Interrupt */
 	rtlpriv->cfg->ops->disable_interrupt(hw);
···
 	struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw));
 	enum rf_pwrstate rtstate;
 
-	cancel_delayed_work(&rtlpriv->works.ips_nic_off_wq);
+	cancel_delayed_work_sync(&rtlpriv->works.ips_nic_off_wq);
 
 	mutex_lock(&rtlpriv->locks.ips_mutex);
 	if (ppsc->inactiveps) {
+1 -1
drivers/net/wireless/realtek/rtlwifi/usb.c
···
 		ieee80211_unregister_hw(hw);
 		rtlmac->mac80211_registered = 0;
 	} else {
-		rtl_deinit_deferred_work(hw);
+		rtl_deinit_deferred_work(hw, false);
 		rtlpriv->intf_ops->adapter_stop(hw);
 	}
 	/*deinit rfkill */
+1
drivers/ptp/ptp_chardev.c
···
 	case PTP_PF_PHYSYNC:
 		if (chan != 0)
 			return -EINVAL;
+		break;
 	default:
 		return -EINVAL;
 	}
+1
include/linux/bpf-cgroup.h
···
 #ifndef _BPF_CGROUP_H
 #define _BPF_CGROUP_H
 
+#include <linux/errno.h>
 #include <linux/jump_label.h>
 #include <uapi/linux/bpf.h>
 
+3 -3
include/linux/filter.h
···
 struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
 				       const struct bpf_insn *patch, u32 len);
 
-static inline int __xdp_generic_ok_fwd_dev(struct sk_buff *skb,
-					   struct net_device *fwd)
+static inline int xdp_ok_fwd_dev(const struct net_device *fwd,
+				 unsigned int pktlen)
 {
 	unsigned int len;
 
···
 		return -ENETDOWN;
 
 	len = fwd->mtu + fwd->hard_header_len + VLAN_HLEN;
-	if (skb->len > len)
+	if (pktlen > len)
 		return -EMSGSIZE;
 
 	return 0;
+1
include/linux/fsl/guts.h
···
 #define __FSL_GUTS_H__
 
 #include <linux/types.h>
+#include <linux/io.h>
 
 /**
  * Global Utility Registers.
+2 -2
include/linux/if_bridge.h
···
 
 static inline int br_vlan_get_pvid(const struct net_device *dev, u16 *p_pvid)
 {
-	return -1;
+	return -EINVAL;
 }
 
 static inline int br_vlan_get_info(const struct net_device *dev, u16 vid,
 				   struct bridge_vlan_info *p_vinfo)
 {
-	return -1;
+	return -EINVAL;
 }
 #endif
 
+2
include/linux/igmp.h
···
 extern int ip_check_mc_rcu(struct in_device *dev, __be32 mc_addr, __be32 src_addr, u8 proto);
 extern int igmp_rcv(struct sk_buff *);
 extern int ip_mc_join_group(struct sock *sk, struct ip_mreqn *imr);
+extern int ip_mc_join_group_ssm(struct sock *sk, struct ip_mreqn *imr,
+				unsigned int mode);
 extern int ip_mc_leave_group(struct sock *sk, struct ip_mreqn *imr);
 extern void ip_mc_drop_socket(struct sock *sk);
 extern int ip_mc_source(int add, int omode, struct sock *sk,
+2
include/linux/marvell_phy.h
···
  */
 #define MARVELL_PHY_ID_88E6390		0x01410f90
 
+#define MARVELL_PHY_FAMILY_ID(id)	((id) >> 4)
+
 /* struct phy_device dev_flags definitions */
 #define MARVELL_PHY_M1145_FLAGS_RESISTANCE	0x00000001
 #define MARVELL_PHY_M1118_DNS323_LEDS		0x00000002
+5 -5
include/linux/skbuff.h
···
  *	@hash: the packet hash
  *	@queue_mapping: Queue mapping for multiqueue devices
  *	@xmit_more: More SKBs are pending for this queue
+ *	@pfmemalloc: skbuff was allocated from PFMEMALLOC reserves
  *	@ndisc_nodetype: router type (from link layer)
  *	@ooo_okay: allow the mapping of a socket to a queue to be changed
  *	@l4_hash: indicate hash is a canonical 4-tuple hash over transport
···
 				peeked:1,
 				head_frag:1,
 				xmit_more:1,
-				__unused:1; /* one bit hole */
+				pfmemalloc:1;
 
 	/* fields enclosed in headers_start/headers_end are copied
 	 * using a single memcpy() in __copy_skb_header()
···
 
 	__u8			__pkt_type_offset[0];
 	__u8			pkt_type:3;
-	__u8			pfmemalloc:1;
 	__u8			ignore_df:1;
-
 	__u8			nf_trace:1;
 	__u8			ip_summed:2;
 	__u8			ooo_okay:1;
+
 	__u8			l4_hash:1;
 	__u8			sw_hash:1;
 	__u8			wifi_acked_valid:1;
 	__u8			wifi_acked:1;
-
 	__u8			no_fcs:1;
 	/* Indicates the inner headers are valid in the skbuff. */
 	__u8			encapsulation:1;
 	__u8			encap_hdr_csum:1;
 	__u8			csum_valid:1;
+
 	__u8			csum_complete_sw:1;
 	__u8			csum_level:2;
 	__u8			csum_not_inet:1;
-
 	__u8			dst_pending_confirm:1;
 #ifdef CONFIG_IPV6_NDISC_NODETYPE
 	__u8			ndisc_nodetype:2;
 #endif
 	__u8			ipvs_property:1;
+
 	__u8			inner_protocol_type:1;
 	__u8			remcsum_offload:1;
 #ifdef CONFIG_NET_SWITCHDEV
+6
include/net/ip6_route.h
···
 		(IPV6_ADDR_MULTICAST | IPV6_ADDR_LINKLOCAL | IPV6_ADDR_LOOPBACK);
 }
 
+static inline bool rt6_qualify_for_ecmp(const struct fib6_info *f6i)
+{
+	return (f6i->fib6_flags & (RTF_GATEWAY|RTF_ADDRCONF|RTF_DYNAMIC)) ==
+	       RTF_GATEWAY;
+}
+
 void ip6_route_input(struct sk_buff *skb);
 struct dst_entry *ip6_route_input_lookup(struct net *net,
 					 struct net_device *dev,
+4 -9
include/net/ipv6.h
···
 struct ipv6_txoptions *ipv6_renew_options(struct sock *sk,
 					  struct ipv6_txoptions *opt,
 					  int newtype,
-					  struct ipv6_opt_hdr __user *newopt,
-					  int newoptlen);
-struct ipv6_txoptions *
-ipv6_renew_options_kern(struct sock *sk,
-			struct ipv6_txoptions *opt,
-			int newtype,
-			struct ipv6_opt_hdr *newopt,
-			int newoptlen);
+					  struct ipv6_opt_hdr *newopt);
 struct ipv6_txoptions *ipv6_fixup_options(struct ipv6_txoptions *opt_space,
 					  struct ipv6_txoptions *opt);
 
···
 	 * to minimize possbility that any useful information to an
 	 * attacker is leaked. Only lower 20 bits are relevant.
 	 */
-	rol32(hash, 16);
+	hash = rol32(hash, 16);
 
 	flowlabel = (__force __be32)hash & IPV6_FLOWLABEL_MASK;
 
···
 
 int ipv6_sock_mc_join(struct sock *sk, int ifindex,
 		      const struct in6_addr *addr);
+int ipv6_sock_mc_join_ssm(struct sock *sk, int ifindex,
+			  const struct in6_addr *addr, unsigned int mode);
 int ipv6_sock_mc_drop(struct sock *sk, int ifindex,
 		      const struct in6_addr *addr);
 #endif /* _NET_IPV6_H */
+6
include/net/netfilter/nf_tables_core.h
···
 extern struct static_key_false nft_counters_enabled;
 extern struct static_key_false nft_trace_enabled;
 
+extern struct nft_set_type nft_set_rhash_type;
+extern struct nft_set_type nft_set_hash_type;
+extern struct nft_set_type nft_set_hash_fast_type;
+extern struct nft_set_type nft_set_rbtree_type;
+extern struct nft_set_type nft_set_bitmap_type;
+
 #endif /* _NET_NF_TABLES_CORE_H */
+2 -2
include/net/netfilter/nf_tproxy.h
···
  * belonging to established connections going through that one.
  */
 struct sock *
-nf_tproxy_get_sock_v4(struct net *net, struct sk_buff *skb, void *hp,
+nf_tproxy_get_sock_v4(struct net *net, struct sk_buff *skb,
 		      const u8 protocol,
 		      const __be32 saddr, const __be32 daddr,
 		      const __be16 sport, const __be16 dport,
···
 		    struct sock *sk);
 
 struct sock *
-nf_tproxy_get_sock_v6(struct net *net, struct sk_buff *skb, int thoff, void *hp,
+nf_tproxy_get_sock_v6(struct net *net, struct sk_buff *skb, int thoff,
 		      const u8 protocol,
 		      const struct in6_addr *saddr, const struct in6_addr *daddr,
 		      const __be16 sport, const __be16 dport,
-1
include/net/tc_act/tc_csum.h
···
 #include <linux/tc_act/tc_csum.h>
 
 struct tcf_csum_params {
-	int action;
 	u32 update_flags;
 	struct rcu_head rcu;
 };
-1
include/net/tc_act/tc_tunnel_key.h
···
 struct tcf_tunnel_key_params {
 	struct rcu_head rcu;
 	int tcft_action;
-	int action;
 	struct metadata_dst *tcft_enc_metadata;
 };
 
+4 -2
include/net/tcp.h
···
 
 #define TCP_SKB_CB(__skb)	((struct tcp_skb_cb *)&((__skb)->cb[0]))
 
+static inline void bpf_compute_data_end_sk_skb(struct sk_buff *skb)
+{
+	TCP_SKB_CB(skb)->bpf.data_end = skb->data + skb_headlen(skb);
+}
 
 #if IS_ENABLED(CONFIG_IPV6)
 /* This is the variant of inet6_iif() that must be used by TCP,
···
 	CA_EVENT_LOSS,		/* loss timeout */
 	CA_EVENT_ECN_NO_CE,	/* ECT set, but not CE marked */
 	CA_EVENT_ECN_IS_CE,	/* received CE marked IP packet */
-	CA_EVENT_DELAYED_ACK,	/* Delayed ack is sent */
-	CA_EVENT_NON_DELAYED_ACK,
 };
 
 /* Information about inbound ACK, passed to cong_ops->in_ack_event() */
+4
include/net/xdp_sock.h
···
 	bool zc;
 	/* Protects multiple processes in the control path */
 	struct mutex mutex;
+	/* Mutual exclusion of NAPI TX thread and sendmsg error paths
+	 * in the SKB destructor callback.
+	 */
+	spinlock_t tx_completion_lock;
 	u64 rx_dropped;
 };
 
+1 -1
include/uapi/linux/ethtool.h
···
 	ETHTOOL_TX_COPYBREAK,
 	ETHTOOL_PFC_PREVENTION_TOUT, /* timeout in msecs */
 	/*
-	 * Add your fresh new tubale attribute above and remember to update
+	 * Add your fresh new tunable attribute above and remember to update
 	 * tunable_strings[] in net/core/ethtool.c
 	 */
 	__ETHTOOL_TUNABLE_COUNT,
+4
include/uapi/linux/tcp.h
···
 
 #define TCP_CM_INQ		TCP_INQ
 
+#define TCP_REPAIR_ON		1
+#define TCP_REPAIR_OFF		0
+#define TCP_REPAIR_OFF_NO_WP	-1	/* Turn off without window probes */
+
 struct tcp_repair_opt {
 	__u32	opt_code;
 	__u32	opt_val;
+13 -17
kernel/bpf/btf.c
···
 			     void *data, u8 bits_offset,
 			     struct seq_file *m)
 {
+	u16 left_shift_bits, right_shift_bits;
 	u32 int_data = btf_type_int(t);
 	u16 nr_bits = BTF_INT_BITS(int_data);
 	u16 total_bits_offset;
 	u16 nr_copy_bytes;
 	u16 nr_copy_bits;
-	u8 nr_upper_bits;
-	union {
-		u64 u64_num;
-		u8  u8_nums[8];
-	} print_num;
+	u64 print_num;
 
 	total_bits_offset = bits_offset + BTF_INT_OFFSET(int_data);
 	data += BITS_ROUNDDOWN_BYTES(total_bits_offset);
···
 	nr_copy_bits = nr_bits + bits_offset;
 	nr_copy_bytes = BITS_ROUNDUP_BYTES(nr_copy_bits);
 
-	print_num.u64_num = 0;
-	memcpy(&print_num.u64_num, data, nr_copy_bytes);
+	print_num = 0;
+	memcpy(&print_num, data, nr_copy_bytes);
 
-	/* Ditch the higher order bits */
-	nr_upper_bits = BITS_PER_BYTE_MASKED(nr_copy_bits);
-	if (nr_upper_bits) {
-		/* We need to mask out some bits of the upper byte. */
-		u8 mask = (1 << nr_upper_bits) - 1;
+#ifdef __BIG_ENDIAN_BITFIELD
+	left_shift_bits = bits_offset;
+#else
+	left_shift_bits = BITS_PER_U64 - nr_copy_bits;
+#endif
+	right_shift_bits = BITS_PER_U64 - nr_bits;
 
-		print_num.u8_nums[nr_copy_bytes - 1] &= mask;
-	}
+	print_num <<= left_shift_bits;
+	print_num >>= right_shift_bits;
 
-	print_num.u64_num >>= bits_offset;
-
-	seq_printf(m, "0x%llx", print_num.u64_num);
+	seq_printf(m, "0x%llx", print_num);
 }
 
 static void btf_int_seq_show(const struct btf *btf, const struct btf_type *t,
+6 -1
kernel/bpf/devmap.c
···
 {
 	struct net_device *dev = dst->dev;
 	struct xdp_frame *xdpf;
+	int err;
 
 	if (!dev->netdev_ops->ndo_xdp_xmit)
 		return -EOPNOTSUPP;
+
+	err = xdp_ok_fwd_dev(dev, xdp->data_end - xdp->data);
+	if (unlikely(err))
+		return err;
 
 	xdpf = convert_to_xdp_frame(xdp);
 	if (unlikely(!xdpf))
···
 {
 	int err;
 
-	err = __xdp_generic_ok_fwd_dev(skb, dst->dev);
+	err = xdp_ok_fwd_dev(dst->dev, skb->len);
 	if (unlikely(err))
 		return err;
 	skb->dev = dst->dev;
+11 -5
kernel/bpf/hashtab.c
···
 			 * old element will be freed immediately.
 			 * Otherwise return an error
 			 */
-			atomic_dec(&htab->count);
-			return ERR_PTR(-E2BIG);
+			l_new = ERR_PTR(-E2BIG);
+			goto dec_count;
 		}
 		l_new = kmalloc_node(htab->elem_size, GFP_ATOMIC | __GFP_NOWARN,
 				     htab->map.numa_node);
-		if (!l_new)
-			return ERR_PTR(-ENOMEM);
+		if (!l_new) {
+			l_new = ERR_PTR(-ENOMEM);
+			goto dec_count;
+		}
 	}
 
 	memcpy(l_new->key, key, key_size);
···
 					  GFP_ATOMIC | __GFP_NOWARN);
 		if (!pptr) {
 			kfree(l_new);
-			return ERR_PTR(-ENOMEM);
+			l_new = ERR_PTR(-ENOMEM);
+			goto dec_count;
 		}
 	}
 
···
 	}
 
 	l_new->hash = hash;
+	return l_new;
+dec_count:
+	atomic_dec(&htab->count);
 	return l_new;
 }
 
+27 -16
kernel/bpf/sockmap.c
···
 	struct smap_psock *psock;
 	struct sock *osk;
 
+	lock_sock(sk);
 	rcu_read_lock();
 	psock = smap_psock_sk(sk);
 	if (unlikely(!psock)) {
 		rcu_read_unlock();
+		release_sock(sk);
 		return sk->sk_prot->close(sk, timeout);
 	}
 
···
 		e = psock_map_pop(sk, psock);
 	}
 	rcu_read_unlock();
+	release_sock(sk);
 	close_fun(sk, timeout);
 }
 
···
 	while (sg[i].length) {
 		free += sg[i].length;
 		sk_mem_uncharge(sk, sg[i].length);
-		put_page(sg_page(&sg[i]));
+		if (!md->skb)
+			put_page(sg_page(&sg[i]));
 		sg[i].length = 0;
 		sg[i].page_link = 0;
 		sg[i].offset = 0;
···
 		if (i == MAX_SKB_FRAGS)
 			i = 0;
 	}
+	if (md->skb)
+		consume_skb(md->skb);
 
 	return free;
 }
···
 	 */
 	TCP_SKB_CB(skb)->bpf.sk_redir = NULL;
 	skb->sk = psock->sock;
-	bpf_compute_data_pointers(skb);
+	bpf_compute_data_end_sk_skb(skb);
 	preempt_disable();
 	rc = (*prog->bpf_func)(skb, prog->insnsi);
 	preempt_enable();
···
 	 * any socket yet.
 	 */
 	skb->sk = psock->sock;
-	bpf_compute_data_pointers(skb);
+	bpf_compute_data_end_sk_skb(skb);
 	rc = (*prog->bpf_func)(skb, prog->insnsi);
 	skb->sk = NULL;
 	rcu_read_unlock();
···
 		e = kzalloc(sizeof(*e), GFP_ATOMIC | __GFP_NOWARN);
 		if (!e) {
 			err = -ENOMEM;
-			goto out_progs;
+			goto out_free;
 		}
 	}
 
···
 		return -EOPNOTSUPP;
 	}
 
+	lock_sock(skops.sk);
+	preempt_disable();
+	rcu_read_lock();
 	err = sock_map_ctx_update_elem(&skops, map, key, flags);
+	rcu_read_unlock();
+	preempt_enable();
+	release_sock(skops.sk);
 	fput(socket->file);
 	return err;
 }
···
 	if (err)
 		goto err;
 
-	/* bpf_map_update_elem() can be called in_irq() */
+	/* psock is valid here because otherwise above *ctx_update_elem would
+	 * have thrown an error. It is safe to skip error check.
+	 */
+	psock = smap_psock_sk(sock);
 	raw_spin_lock_bh(&b->lock);
 	l_old = lookup_elem_raw(head, hash, key, key_size);
 	if (l_old && map_flags == BPF_NOEXIST) {
···
 	l_new = alloc_sock_hash_elem(htab, key, key_size, hash, sock, l_old);
 	if (IS_ERR(l_new)) {
 		err = PTR_ERR(l_new);
-		goto bucket_err;
-	}
-
-	psock = smap_psock_sk(sock);
-	if (unlikely(!psock)) {
-		err = -EINVAL;
 		goto bucket_err;
 	}
 
···
 	raw_spin_unlock_bh(&b->lock);
 	return 0;
 bucket_err:
+	smap_release_sock(psock, sock);
 	raw_spin_unlock_bh(&b->lock);
 err:
 	kfree(e);
-	psock = smap_psock_sk(sock);
-	if (psock)
-		smap_release_sock(psock, sock);
 	return err;
 }
···
 		return -EINVAL;
 	}
 
+	lock_sock(skops.sk);
+	preempt_disable();
+	rcu_read_lock();
 	err = sock_hash_ctx_update_elem(&skops, map, key, flags);
+	rcu_read_unlock();
+	preempt_enable();
+	release_sock(skops.sk);
 	fput(socket->file);
 	return err;
 }
···
 	b = __select_bucket(htab, hash);
 	head = &b->head;
 
-	raw_spin_lock_bh(&b->lock);
 	l = lookup_elem_raw(head, hash, key, key_size);
 	sk = l ? l->sk : NULL;
-	raw_spin_unlock_bh(&b->lock);
 	return sk;
 }
 
+3 -1
kernel/bpf/syscall.c
···
 	if (bpf_map_is_dev_bound(map)) {
 		err = bpf_map_offload_update_elem(map, key, value, attr->flags);
 		goto out;
-	} else if (map->map_type == BPF_MAP_TYPE_CPUMAP) {
+	} else if (map->map_type == BPF_MAP_TYPE_CPUMAP ||
+		   map->map_type == BPF_MAP_TYPE_SOCKHASH ||
+		   map->map_type == BPF_MAP_TYPE_SOCKMAP) {
 		err = map->ops->map_update_elem(map, key, value, attr->flags);
 		goto out;
 	}
+9 -2
kernel/bpf/verifier.c
···
 		if (insn->code != (BPF_JMP | BPF_CALL) ||
 		    insn->src_reg != BPF_PSEUDO_CALL)
 			continue;
+		/* Upon error here we cannot fall back to interpreter but
+		 * need a hard reject of the program. Thus -EFAULT is
+		 * propagated in any case.
+		 */
 		subprog = find_subprog(env, i + insn->imm + 1);
 		if (subprog < 0) {
 			WARN_ONCE(1, "verifier bug. No program starts at insn %d\n",
···
 
 	func = kcalloc(env->subprog_cnt, sizeof(prog), GFP_KERNEL);
 	if (!func)
-		return -ENOMEM;
+		goto out_undo_insn;
 
 	for (i = 0; i < env->subprog_cnt; i++) {
 		subprog_start = subprog_end;
···
 		tmp = bpf_int_jit_compile(func[i]);
 		if (tmp != func[i] || func[i]->bpf_func != old_bpf_func) {
 			verbose(env, "JIT doesn't support bpf-to-bpf calls\n");
-			err = -EFAULT;
+			err = -ENOTSUPP;
 			goto out_free;
 		}
 		cond_resched();
···
 		if (func[i])
 			bpf_jit_free(func[i]);
 	kfree(func);
+out_undo_insn:
 	/* cleanup main prog to be interpreted */
 	prog->jit_requested = 0;
 	for (i = 0, insn = prog->insnsi; i < prog->len; i++, insn++) {
···
 		err = jit_subprogs(env);
 		if (err == 0)
 			return 0;
+		if (err == -EFAULT)
+			return err;
 	}
 #ifndef CONFIG_BPF_JIT_ALWAYS_ON
 	for (i = 0; i < prog->len; i++, insn++) {
+19 -8
lib/rhashtable.c
···
 			skip++;
 			if (list == iter->list) {
 				iter->p = p;
-				skip = skip;
+				iter->skip = skip;
 				goto found;
 			}
 		}
···
 
 static size_t rounded_hashtable_size(const struct rhashtable_params *params)
 {
-	return max(roundup_pow_of_two(params->nelem_hint * 4 / 3),
-		   (unsigned long)params->min_size);
+	size_t retsize;
+
+	if (params->nelem_hint)
+		retsize = max(roundup_pow_of_two(params->nelem_hint * 4 / 3),
+			      (unsigned long)params->min_size);
+	else
+		retsize = max(HASH_DEFAULT_SIZE,
+			      (unsigned long)params->min_size);
+
+	return retsize;
 }
 
 static u32 rhashtable_jhash2(const void *key, u32 length, u32 seed)
···
 	struct bucket_table *tbl;
 	size_t size;
 
-	size = HASH_DEFAULT_SIZE;
-
 	if ((!params->key_len && !params->obj_hashfn) ||
 	    (params->obj_hashfn && !params->obj_cmpfn))
 		return -EINVAL;
···
 
 	ht->p.min_size = max_t(u16, ht->p.min_size, HASH_MIN_SIZE);
 
-	if (params->nelem_hint)
-		size = rounded_hashtable_size(&ht->p);
+	size = rounded_hashtable_size(&ht->p);
 
 	if (params->locks_mul)
 		ht->p.locks_mul = roundup_pow_of_two(params->locks_mul);
···
 			      void (*free_fn)(void *ptr, void *arg),
 			      void *arg)
 {
-	struct bucket_table *tbl;
+	struct bucket_table *tbl, *next_tbl;
 	unsigned int i;
 
 	cancel_work_sync(&ht->run_work);
 
 	mutex_lock(&ht->mutex);
 	tbl = rht_dereference(ht->tbl, ht);
+restart:
 	if (free_fn) {
 		for (i = 0; i < tbl->size; i++) {
 			struct rhash_head *pos, *next;
···
 		}
 	}
 
+	next_tbl = rht_dereference(tbl->future_tbl, ht);
 	bucket_table_free(tbl);
+	if (next_tbl) {
+		tbl = next_tbl;
+		goto restart;
+	}
 	mutex_unlock(&ht->mutex);
 }
 EXPORT_SYMBOL_GPL(rhashtable_free_and_destroy);
+3 -1
net/batman-adv/bat_iv_ogm.c
···
 {
 	struct batadv_neigh_ifinfo *router_ifinfo = NULL;
 	struct batadv_neigh_node *router;
-	struct batadv_gw_node *curr_gw;
+	struct batadv_gw_node *curr_gw = NULL;
 	int ret = 0;
 	void *hdr;
 
···
 	ret = 0;
 
 out:
+	if (curr_gw)
+		batadv_gw_node_put(curr_gw);
 	if (router_ifinfo)
 		batadv_neigh_ifinfo_put(router_ifinfo);
 	if (router)
+3 -1
net/batman-adv/bat_v.c
···
 {
 	struct batadv_neigh_ifinfo *router_ifinfo = NULL;
 	struct batadv_neigh_node *router;
-	struct batadv_gw_node *curr_gw;
+	struct batadv_gw_node *curr_gw = NULL;
 	int ret = 0;
 	void *hdr;
 
···
 	ret = 0;
 
 out:
+	if (curr_gw)
+		batadv_gw_node_put(curr_gw);
 	if (router_ifinfo)
 		batadv_neigh_ifinfo_put(router_ifinfo);
 	if (router)
+40
net/batman-adv/debugfs.c
···
 #include "debugfs.h"
 #include "main.h"
 
+#include <linux/dcache.h>
 #include <linux/debugfs.h>
 #include <linux/err.h>
 #include <linux/errno.h>
···
 }
 
 /**
+ * batadv_debugfs_rename_hardif() - Fix debugfs path for renamed hardif
+ * @hard_iface: hard interface which was renamed
+ */
+void batadv_debugfs_rename_hardif(struct batadv_hard_iface *hard_iface)
+{
+	const char *name = hard_iface->net_dev->name;
+	struct dentry *dir;
+	struct dentry *d;
+
+	dir = hard_iface->debug_dir;
+	if (!dir)
+		return;
+
+	d = debugfs_rename(dir->d_parent, dir, dir->d_parent, name);
+	if (!d)
+		pr_err("Can't rename debugfs dir to %s\n", name);
+}
+
+/**
  * batadv_debugfs_del_hardif() - delete the base directory for a hard interface
  *  in debugfs.
  * @hard_iface: hard interface which is deleted.
···
 	bat_priv->debug_dir = NULL;
 out:
 	return -ENOMEM;
+}
+
+/**
+ * batadv_debugfs_rename_meshif() - Fix debugfs path for renamed softif
+ * @dev: net_device which was renamed
+ */
+void batadv_debugfs_rename_meshif(struct net_device *dev)
+{
+	struct batadv_priv *bat_priv = netdev_priv(dev);
+	const char *name = dev->name;
+	struct dentry *dir;
+	struct dentry *d;
+
+	dir = bat_priv->debug_dir;
+	if (!dir)
+		return;
+
+	d = debugfs_rename(dir->d_parent, dir, dir->d_parent, name);
+	if (!d)
+		pr_err("Can't rename debugfs dir to %s\n", name);
 }
 
 /**
+11
net/batman-adv/debugfs.h
···
 void batadv_debugfs_init(void);
 void batadv_debugfs_destroy(void);
 int batadv_debugfs_add_meshif(struct net_device *dev);
+void batadv_debugfs_rename_meshif(struct net_device *dev);
 void batadv_debugfs_del_meshif(struct net_device *dev);
 int batadv_debugfs_add_hardif(struct batadv_hard_iface *hard_iface);
+void batadv_debugfs_rename_hardif(struct batadv_hard_iface *hard_iface);
 void batadv_debugfs_del_hardif(struct batadv_hard_iface *hard_iface);
 
 #else
···
 	return 0;
 }
 
+static inline void batadv_debugfs_rename_meshif(struct net_device *dev)
+{
+}
+
 static inline void batadv_debugfs_del_meshif(struct net_device *dev)
 {
 }
···
 int batadv_debugfs_add_hardif(struct batadv_hard_iface *hard_iface)
 {
 	return 0;
+}
+
+static inline
+void batadv_debugfs_rename_hardif(struct batadv_hard_iface *hard_iface)
+{
 }
 
 static inline
+31 -6
net/batman-adv/hard-interface.c
···
 	rtnl_unlock();
 }
 
+/**
+ * batadv_hard_if_event_softif() - Handle events for soft interfaces
+ * @event: NETDEV_* event to handle
+ * @net_dev: net_device which generated an event
+ *
+ * Return: NOTIFY_* result
+ */
+static int batadv_hard_if_event_softif(unsigned long event,
+				       struct net_device *net_dev)
+{
+	struct batadv_priv *bat_priv;
+
+	switch (event) {
+	case NETDEV_REGISTER:
+		batadv_sysfs_add_meshif(net_dev);
+		bat_priv = netdev_priv(net_dev);
+		batadv_softif_create_vlan(bat_priv, BATADV_NO_FLAGS);
+		break;
+	case NETDEV_CHANGENAME:
+		batadv_debugfs_rename_meshif(net_dev);
+		break;
+	}
+
+	return NOTIFY_DONE;
+}
+
 static int batadv_hard_if_event(struct notifier_block *this,
 				unsigned long event, void *ptr)
 {
···
 	struct batadv_hard_iface *primary_if = NULL;
 	struct batadv_priv *bat_priv;
 
-	if (batadv_softif_is_valid(net_dev) && event == NETDEV_REGISTER) {
-		batadv_sysfs_add_meshif(net_dev);
-		bat_priv = netdev_priv(net_dev);
-		batadv_softif_create_vlan(bat_priv, BATADV_NO_FLAGS);
-		return NOTIFY_DONE;
-	}
+	if (batadv_softif_is_valid(net_dev))
+		return batadv_hard_if_event_softif(event, net_dev);
 
 	hard_iface = batadv_hardif_get_by_netdev(net_dev);
 	if (!hard_iface && (event == NETDEV_REGISTER ||
···
 		hard_iface->wifi_flags = batadv_wifi_flags_evaluate(net_dev);
 		if (batadv_is_wifi_hardif(hard_iface))
 			hard_iface->num_bcasts = BATADV_NUM_BCASTS_WIRELESS;
+		break;
+	case NETDEV_CHANGENAME:
+		batadv_debugfs_rename_hardif(hard_iface);
 		break;
 	default:
 		break;
+5 -2
net/batman-adv/translation-table.c
···
 	ether_addr_copy(common->addr, tt_addr);
 	common->vid = vid;
 
-	common->flags = flags;
+	if (!is_multicast_ether_addr(common->addr))
+		common->flags = flags & (~BATADV_TT_SYNC_MASK);
+
 	tt_global_entry->roam_at = 0;
 	/* node must store current time in case of roaming. This is
 	 * needed to purge this entry out on timeout (if nobody claims
···
 		 * TT_CLIENT_TEMP, therefore they have to be copied in the
 		 * client entry
 		 */
-		common->flags |= flags & (~BATADV_TT_SYNC_MASK);
+		if (!is_multicast_ether_addr(common->addr))
+			common->flags |= flags & (~BATADV_TT_SYNC_MASK);
 
 		/* If there is the BATADV_TT_CLIENT_ROAM flag set, there is only
 		 * one originator left in the list and we previously received a
+14 -3
net/bpf/test_run.c
···
 	u32 size = kattr->test.data_size_in;
 	u32 repeat = kattr->test.repeat;
 	u32 retval, duration;
+	int hh_len = ETH_HLEN;
 	struct sk_buff *skb;
 	void *data;
 	int ret;
···
 	skb_reset_network_header(skb);
 
 	if (is_l2)
-		__skb_push(skb, ETH_HLEN);
+		__skb_push(skb, hh_len);
 	if (is_direct_pkt_access)
 		bpf_compute_data_pointers(skb);
 	retval = bpf_test_run(prog, skb, repeat, &duration);
-	if (!is_l2)
-		__skb_push(skb, ETH_HLEN);
+	if (!is_l2) {
+		if (skb_headroom(skb) < hh_len) {
+			int nhead = HH_DATA_ALIGN(hh_len - skb_headroom(skb));
+
+			if (pskb_expand_head(skb, nhead, 0, GFP_USER)) {
+				kfree_skb(skb);
+				return -ENOMEM;
+			}
+		}
+		memset(__skb_push(skb, hh_len), 0, hh_len);
+	}
+
 	size = skb->len;
 	/* bpf program can never convert linear skb to non-linear */
 	if (WARN_ON_ONCE(skb_is_nonlinear(skb)))
+121 -28
net/core/filter.c
···
 	     (!unaligned_ok && offset >= 0 &&
 	      offset + ip_align >= 0 &&
 	      offset + ip_align % size == 0))) {
+		bool ldx_off_ok = offset <= S16_MAX;
+
 		*insn++ = BPF_MOV64_REG(BPF_REG_TMP, BPF_REG_H);
 		*insn++ = BPF_ALU64_IMM(BPF_SUB, BPF_REG_TMP, offset);
-		*insn++ = BPF_JMP_IMM(BPF_JSLT, BPF_REG_TMP, size, 2 + endian);
-		*insn++ = BPF_LDX_MEM(BPF_SIZE(fp->code), BPF_REG_A, BPF_REG_D,
-				      offset);
+		*insn++ = BPF_JMP_IMM(BPF_JSLT, BPF_REG_TMP,
+				      size, 2 + endian + (!ldx_off_ok * 2));
+		if (ldx_off_ok) {
+			*insn++ = BPF_LDX_MEM(BPF_SIZE(fp->code), BPF_REG_A,
+					      BPF_REG_D, offset);
+		} else {
+			*insn++ = BPF_MOV64_REG(BPF_REG_TMP, BPF_REG_D);
+			*insn++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_TMP, offset);
+			*insn++ = BPF_LDX_MEM(BPF_SIZE(fp->code), BPF_REG_A,
+					      BPF_REG_TMP, 0);
+		}
 		if (endian)
 			*insn++ = BPF_ENDIAN(BPF_FROM_BE, BPF_REG_A, size * 8);
 		*insn++ = BPF_JMP_A(8);
···
 	.arg2_type	= ARG_ANYTHING,
 };
 
+static inline int sk_skb_try_make_writable(struct sk_buff *skb,
+					   unsigned int write_len)
+{
+	int err = __bpf_try_make_writable(skb, write_len);
+
+	bpf_compute_data_end_sk_skb(skb);
+	return err;
+}
+
+BPF_CALL_2(sk_skb_pull_data, struct sk_buff *, skb, u32, len)
+{
+	/* Idea is the following: should the needed direct read/write
+	 * test fail during runtime, we can pull in more data and redo
+	 * again, since implicitly, we invalidate previous checks here.
+	 *
+	 * Or, since we know how much we need to make read/writeable,
+	 * this can be done once at the program beginning for direct
+	 * access case. By this we overcome limitations of only current
+	 * headroom being accessible.
+	 */
+	return sk_skb_try_make_writable(skb, len ? : skb_headlen(skb));
+}
+
+static const struct bpf_func_proto sk_skb_pull_data_proto = {
+	.func		= sk_skb_pull_data,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_ANYTHING,
+};
+
 BPF_CALL_5(bpf_l3_csum_replace, struct sk_buff *, skb, u32, offset,
 	   u64, from, u64, to, u64, flags)
 {
···
 
 static u32 __bpf_skb_max_len(const struct sk_buff *skb)
 {
-	return skb->dev->mtu + skb->dev->hard_header_len;
+	return skb->dev ? skb->dev->mtu + skb->dev->hard_header_len :
+			  SKB_MAX_ALLOC;
 }
 
 static int bpf_skb_adjust_net(struct sk_buff *skb, s32 len_diff)
···
 	return __skb_trim_rcsum(skb, new_len);
 }
 
-BPF_CALL_3(bpf_skb_change_tail, struct sk_buff *, skb, u32, new_len,
-	   u64, flags)
+static inline int __bpf_skb_change_tail(struct sk_buff *skb, u32 new_len,
+					u64 flags)
 {
 	u32 max_len = __bpf_skb_max_len(skb);
 	u32 min_len = __bpf_skb_min_len(skb);
···
 		if (!ret && skb_is_gso(skb))
 			skb_gso_reset(skb);
 	}
+	return ret;
+}
+
+BPF_CALL_3(bpf_skb_change_tail, struct sk_buff *, skb, u32, new_len,
+	   u64, flags)
+{
+	int ret = __bpf_skb_change_tail(skb, new_len, flags);
 
 	bpf_compute_data_pointers(skb);
 	return ret;
···
 	.arg3_type	= ARG_ANYTHING,
 };
 
-BPF_CALL_3(bpf_skb_change_head, struct sk_buff *, skb, u32, head_room,
+BPF_CALL_3(sk_skb_change_tail, struct sk_buff *, skb, u32, new_len,
 	   u64, flags)
+{
+	int ret = __bpf_skb_change_tail(skb, new_len, flags);
+
+	bpf_compute_data_end_sk_skb(skb);
+	return ret;
+}
+
+static const struct bpf_func_proto sk_skb_change_tail_proto = {
+	.func		= sk_skb_change_tail,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_ANYTHING,
+	.arg3_type	= ARG_ANYTHING,
+};
+
+static inline int __bpf_skb_change_head(struct sk_buff *skb, u32 head_room,
+					u64 flags)
 {
 	u32 max_len = __bpf_skb_max_len(skb);
 	u32 new_len = skb->len + head_room;
···
 		skb_reset_mac_header(skb);
 	}
 
+	return ret;
+}
+
+BPF_CALL_3(bpf_skb_change_head, struct sk_buff *, skb, u32, head_room,
+	   u64, flags)
+{
+	int ret = __bpf_skb_change_head(skb, head_room, flags);
+
 	bpf_compute_data_pointers(skb);
-	return 0;
+	return ret;
 }
 
 static const struct bpf_func_proto bpf_skb_change_head_proto = {
···
 	.arg3_type	= ARG_ANYTHING,
 };
 
+BPF_CALL_3(sk_skb_change_head, struct sk_buff *, skb, u32, head_room,
+	   u64, flags)
+{
+	int ret = __bpf_skb_change_head(skb, head_room, flags);
+
+	bpf_compute_data_end_sk_skb(skb);
+	return ret;
+}
+
+static const struct bpf_func_proto sk_skb_change_head_proto = {
+	.func		= sk_skb_change_head,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_ANYTHING,
+	.arg3_type	= ARG_ANYTHING,
+};
 static unsigned long xdp_get_metalen(const struct xdp_buff *xdp)
 {
 	return xdp_data_meta_unsupported(xdp) ? 0 :
···
 			    u32 index)
 {
 	struct xdp_frame *xdpf;
-	int sent;
+	int err, sent;
 
 	if (!dev->netdev_ops->ndo_xdp_xmit) {
 		return -EOPNOTSUPP;
 	}
+
+	err = xdp_ok_fwd_dev(dev, xdp->data_end - xdp->data);
+	if (unlikely(err))
+		return err;
 
 	xdpf = convert_to_xdp_frame(xdp);
 	if (unlikely(!xdpf))
···
 		goto err;
 	}
 
-	if (unlikely((err = __xdp_generic_ok_fwd_dev(skb, fwd))))
+	err = xdp_ok_fwd_dev(fwd, skb->len);
+	if (unlikely(err))
 		goto err;
 
 	skb->dev = fwd;
···
 	.arg4_type	= ARG_CONST_SIZE
 };
 
+#if IS_ENABLED(CONFIG_IPV6_SEG6_BPF)
 BPF_CALL_4(bpf_lwt_seg6_store_bytes, struct sk_buff *, skb, u32, offset,
 	   const void *, from, u32, len)
 {
-#if IS_ENABLED(CONFIG_IPV6_SEG6_BPF)
 	struct seg6_bpf_srh_state *srh_state =
 		this_cpu_ptr(&seg6_bpf_srh_states);
 	void *srh_tlvs, *srh_end, *ptr;
···
 
 	memcpy(skb->data + offset, from, len);
 	return 0;
-#else /* CONFIG_IPV6_SEG6_BPF */
-	return -EOPNOTSUPP;
-#endif
 }
 
 static const struct bpf_func_proto bpf_lwt_seg6_store_bytes_proto = {
···
 BPF_CALL_4(bpf_lwt_seg6_action, struct sk_buff *, skb,
 	   u32, action, void *, param, u32, param_len)
 {
-#if IS_ENABLED(CONFIG_IPV6_SEG6_BPF)
 	struct seg6_bpf_srh_state *srh_state =
 		this_cpu_ptr(&seg6_bpf_srh_states);
 	struct ipv6_sr_hdr *srh;
···
 	default:
 		return -EINVAL;
 	}
-#else /* CONFIG_IPV6_SEG6_BPF */
-	return -EOPNOTSUPP;
-#endif
 }
 
 static const struct bpf_func_proto bpf_lwt_seg6_action_proto = {
···
 BPF_CALL_3(bpf_lwt_seg6_adjust_srh, struct sk_buff *, skb, u32, offset,
 	   s32, len)
4554 { 4645 - #if IS_ENABLED(CONFIG_IPV6_SEG6_BPF) 4646 4555 struct seg6_bpf_srh_state *srh_state = 4647 4556 this_cpu_ptr(&seg6_bpf_srh_states); 4648 4557 void *srh_end, *srh_tlvs, *ptr; ··· 4685 4596 srh_state->hdrlen += len; 4686 4597 srh_state->valid = 0; 4687 4598 return 0; 4688 - #else /* CONFIG_IPV6_SEG6_BPF */ 4689 - return -EOPNOTSUPP; 4690 - #endif 4691 4599 } 4692 4600 4693 4601 static const struct bpf_func_proto bpf_lwt_seg6_adjust_srh_proto = { ··· 4695 4609 .arg2_type = ARG_ANYTHING, 4696 4610 .arg3_type = ARG_ANYTHING, 4697 4611 }; 4612 + #endif /* CONFIG_IPV6_SEG6_BPF */ 4698 4613 4699 4614 bool bpf_helper_changes_pkt_data(void *func) 4700 4615 { ··· 4704 4617 func == bpf_skb_store_bytes || 4705 4618 func == bpf_skb_change_proto || 4706 4619 func == bpf_skb_change_head || 4620 + func == sk_skb_change_head || 4707 4621 func == bpf_skb_change_tail || 4622 + func == sk_skb_change_tail || 4708 4623 func == bpf_skb_adjust_room || 4709 4624 func == bpf_skb_pull_data || 4625 + func == sk_skb_pull_data || 4710 4626 func == bpf_clone_redirect || 4711 4627 func == bpf_l3_csum_replace || 4712 4628 func == bpf_l4_csum_replace || ··· 4717 4627 func == bpf_xdp_adjust_meta || 4718 4628 func == bpf_msg_pull_data || 4719 4629 func == bpf_xdp_adjust_tail || 4720 - func == bpf_lwt_push_encap || 4630 + #if IS_ENABLED(CONFIG_IPV6_SEG6_BPF) 4721 4631 func == bpf_lwt_seg6_store_bytes || 4722 4632 func == bpf_lwt_seg6_adjust_srh || 4723 - func == bpf_lwt_seg6_action 4724 - ) 4633 + func == bpf_lwt_seg6_action || 4634 + #endif 4635 + func == bpf_lwt_push_encap) 4725 4636 return true; 4726 4637 4727 4638 return false; ··· 4962 4871 case BPF_FUNC_skb_load_bytes: 4963 4872 return &bpf_skb_load_bytes_proto; 4964 4873 case BPF_FUNC_skb_pull_data: 4965 - return &bpf_skb_pull_data_proto; 4874 + return &sk_skb_pull_data_proto; 4966 4875 case BPF_FUNC_skb_change_tail: 4967 - return &bpf_skb_change_tail_proto; 4876 + return &sk_skb_change_tail_proto; 4968 4877 case 
BPF_FUNC_skb_change_head: 4969 - return &bpf_skb_change_head_proto; 4878 + return &sk_skb_change_head_proto; 4970 4879 case BPF_FUNC_get_socket_cookie: 4971 4880 return &bpf_get_socket_cookie_proto; 4972 4881 case BPF_FUNC_get_socket_uid: ··· 5057 4966 lwt_seg6local_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) 5058 4967 { 5059 4968 switch (func_id) { 4969 + #if IS_ENABLED(CONFIG_IPV6_SEG6_BPF) 5060 4970 case BPF_FUNC_lwt_seg6_store_bytes: 5061 4971 return &bpf_lwt_seg6_store_bytes_proto; 5062 4972 case BPF_FUNC_lwt_seg6_action: 5063 4973 return &bpf_lwt_seg6_action_proto; 5064 4974 case BPF_FUNC_lwt_seg6_adjust_srh: 5065 4975 return &bpf_lwt_seg6_adjust_srh_proto; 4976 + #endif 5066 4977 default: 5067 4978 return lwt_out_func_proto(func_id, prog); 5068 4979 }
+14 -2
net/core/gen_stats.c
··· 77 77 d->lock = lock; 78 78 spin_lock_bh(lock); 79 79 } 80 - if (d->tail) 81 - return gnet_stats_copy(d, type, NULL, 0, padattr); 80 + if (d->tail) { 81 + int ret = gnet_stats_copy(d, type, NULL, 0, padattr); 82 + 83 + /* The initial attribute added in gnet_stats_copy() may be 84 + * preceded by a padding attribute, in which case d->tail will 85 + * end up pointing at the padding instead of the real attribute. 86 + * Fix this so gnet_stats_finish_copy() adjusts the length of 87 + * the right attribute. 88 + */ 89 + if (ret == 0 && d->tail->nla_type == padattr) 90 + d->tail = (struct nlattr *)((char *)d->tail + 91 + NLA_ALIGN(d->tail->nla_len)); 92 + return ret; 93 + } 82 94 83 95 return 0; 84 96 }
+1
net/core/skbuff.c
··· 858 858 n->cloned = 1; 859 859 n->nohdr = 0; 860 860 n->peeked = 0; 861 + C(pfmemalloc); 861 862 n->destructor = NULL; 862 863 C(tail); 863 864 C(end);
+16 -12
net/dns_resolver/dns_key.c
··· 86 86 opt++; 87 87 kdebug("options: '%s'", opt); 88 88 do { 89 + int opt_len, opt_nlen; 89 90 const char *eq; 90 - int opt_len, opt_nlen, opt_vlen, tmp; 91 + char optval[128]; 91 92 92 93 next_opt = memchr(opt, '#', end - opt) ?: end; 93 94 opt_len = next_opt - opt; 94 - if (opt_len <= 0 || opt_len > 128) { 95 + if (opt_len <= 0 || opt_len > sizeof(optval)) { 95 96 pr_warn_ratelimited("Invalid option length (%d) for dns_resolver key\n", 96 97 opt_len); 97 98 return -EINVAL; 98 99 } 99 100 100 - eq = memchr(opt, '=', opt_len) ?: end; 101 - opt_nlen = eq - opt; 102 - eq++; 103 - opt_vlen = next_opt - eq; /* will be -1 if no value */ 101 + eq = memchr(opt, '=', opt_len); 102 + if (eq) { 103 + opt_nlen = eq - opt; 104 + eq++; 105 + memcpy(optval, eq, next_opt - eq); 106 + optval[next_opt - eq] = '\0'; 107 + } else { 108 + opt_nlen = opt_len; 109 + optval[0] = '\0'; 110 + } 104 111 105 - tmp = opt_vlen >= 0 ? opt_vlen : 0; 106 - kdebug("option '%*.*s' val '%*.*s'", 107 - opt_nlen, opt_nlen, opt, tmp, tmp, eq); 112 + kdebug("option '%*.*s' val '%s'", 113 + opt_nlen, opt_nlen, opt, optval); 108 114 109 115 /* see if it's an error number representing a DNS error 110 116 * that's to be recorded as the result in this key */ 111 117 if (opt_nlen == sizeof(DNS_ERRORNO_OPTION) - 1 && 112 118 memcmp(opt, DNS_ERRORNO_OPTION, opt_nlen) == 0) { 113 119 kdebug("dns error number option"); 114 - if (opt_vlen <= 0) 115 - goto bad_option_value; 116 120 117 - ret = kstrtoul(eq, 10, &derrno); 121 + ret = kstrtoul(optval, 10, &derrno); 118 122 if (ret < 0) 119 123 goto bad_option_value; 120 124
+6
net/ieee802154/6lowpan/core.c
··· 90 90 return 0; 91 91 } 92 92 93 + static int lowpan_get_iflink(const struct net_device *dev) 94 + { 95 + return lowpan_802154_dev(dev)->wdev->ifindex; 96 + } 97 + 93 98 static const struct net_device_ops lowpan_netdev_ops = { 94 99 .ndo_init = lowpan_dev_init, 95 100 .ndo_start_xmit = lowpan_xmit, 96 101 .ndo_open = lowpan_open, 97 102 .ndo_stop = lowpan_stop, 98 103 .ndo_neigh_construct = lowpan_neigh_construct, 104 + .ndo_get_iflink = lowpan_get_iflink, 99 105 }; 100 106 101 107 static void lowpan_setup(struct net_device *ldev)
+1
net/ipv4/fib_frontend.c
··· 300 300 if (!ipv4_is_zeronet(ip_hdr(skb)->saddr)) { 301 301 struct flowi4 fl4 = { 302 302 .flowi4_iif = LOOPBACK_IFINDEX, 303 + .flowi4_oif = l3mdev_master_ifindex_rcu(dev), 303 304 .daddr = ip_hdr(skb)->saddr, 304 305 .flowi4_tos = RT_TOS(ip_hdr(skb)->tos), 305 306 .flowi4_scope = scope,
+42 -16
net/ipv4/igmp.c
··· 1200 1200 spin_lock_bh(&im->lock); 1201 1201 if (pmc) { 1202 1202 im->interface = pmc->interface; 1203 - im->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; 1204 1203 im->sfmode = pmc->sfmode; 1205 1204 if (pmc->sfmode == MCAST_INCLUDE) { 1206 1205 im->tomb = pmc->tomb; 1207 1206 im->sources = pmc->sources; 1208 1207 for (psf = im->sources; psf; psf = psf->sf_next) 1209 - psf->sf_crcount = im->crcount; 1208 + psf->sf_crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; 1209 + } else { 1210 + im->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; 1210 1211 } 1211 1212 in_dev_put(pmc->interface); 1212 1213 kfree(pmc); ··· 1289 1288 #endif 1290 1289 } 1291 1290 1292 - static void igmp_group_added(struct ip_mc_list *im) 1291 + static void igmp_group_added(struct ip_mc_list *im, unsigned int mode) 1293 1292 { 1294 1293 struct in_device *in_dev = im->interface; 1295 1294 #ifdef CONFIG_IP_MULTICAST ··· 1317 1316 } 1318 1317 /* else, v3 */ 1319 1318 1320 - im->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; 1319 + /* Based on RFC3376 5.1, for newly added INCLUDE SSM, we should 1320 + * not send filter-mode change record as the mode should be from 1321 + * IN() to IN(A). 1322 + */ 1323 + if (mode == MCAST_EXCLUDE) 1324 + im->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; 1325 + 1321 1326 igmp_ifc_event(in_dev); 1322 1327 #endif 1323 1328 } ··· 1388 1381 /* 1389 1382 * A socket has joined a multicast group on device dev. 
1390 1383 */ 1391 - 1392 - void ip_mc_inc_group(struct in_device *in_dev, __be32 addr) 1384 + void __ip_mc_inc_group(struct in_device *in_dev, __be32 addr, unsigned int mode) 1393 1385 { 1394 1386 struct ip_mc_list *im; 1395 1387 #ifdef CONFIG_IP_MULTICAST ··· 1400 1394 for_each_pmc_rtnl(in_dev, im) { 1401 1395 if (im->multiaddr == addr) { 1402 1396 im->users++; 1403 - ip_mc_add_src(in_dev, &addr, MCAST_EXCLUDE, 0, NULL, 0); 1397 + ip_mc_add_src(in_dev, &addr, mode, 0, NULL, 0); 1404 1398 goto out; 1405 1399 } 1406 1400 } ··· 1414 1408 in_dev_hold(in_dev); 1415 1409 im->multiaddr = addr; 1416 1410 /* initial mode is (EX, empty) */ 1417 - im->sfmode = MCAST_EXCLUDE; 1418 - im->sfcount[MCAST_EXCLUDE] = 1; 1411 + im->sfmode = mode; 1412 + im->sfcount[mode] = 1; 1419 1413 refcount_set(&im->refcnt, 1); 1420 1414 spin_lock_init(&im->lock); 1421 1415 #ifdef CONFIG_IP_MULTICAST ··· 1432 1426 #ifdef CONFIG_IP_MULTICAST 1433 1427 igmpv3_del_delrec(in_dev, im); 1434 1428 #endif 1435 - igmp_group_added(im); 1429 + igmp_group_added(im, mode); 1436 1430 if (!in_dev->dead) 1437 1431 ip_rt_multicast_event(in_dev); 1438 1432 out: 1439 1433 return; 1434 + } 1435 + 1436 + void ip_mc_inc_group(struct in_device *in_dev, __be32 addr) 1437 + { 1438 + __ip_mc_inc_group(in_dev, addr, MCAST_EXCLUDE); 1440 1439 } 1441 1440 EXPORT_SYMBOL(ip_mc_inc_group); 1442 1441 ··· 1699 1688 #ifdef CONFIG_IP_MULTICAST 1700 1689 igmpv3_del_delrec(in_dev, pmc); 1701 1690 #endif 1702 - igmp_group_added(pmc); 1691 + igmp_group_added(pmc, pmc->sfmode); 1703 1692 } 1704 1693 } 1705 1694 ··· 1762 1751 #ifdef CONFIG_IP_MULTICAST 1763 1752 igmpv3_del_delrec(in_dev, pmc); 1764 1753 #endif 1765 - igmp_group_added(pmc); 1754 + igmp_group_added(pmc, pmc->sfmode); 1766 1755 } 1767 1756 } 1768 1757 ··· 2141 2130 2142 2131 /* Join a multicast group 2143 2132 */ 2144 - 2145 - int ip_mc_join_group(struct sock *sk, struct ip_mreqn *imr) 2133 + static int __ip_mc_join_group(struct sock *sk, struct ip_mreqn *imr, 2134 + 
unsigned int mode) 2146 2135 { 2147 2136 __be32 addr = imr->imr_multiaddr.s_addr; 2148 2137 struct ip_mc_socklist *iml, *i; ··· 2183 2172 memcpy(&iml->multi, imr, sizeof(*imr)); 2184 2173 iml->next_rcu = inet->mc_list; 2185 2174 iml->sflist = NULL; 2186 - iml->sfmode = MCAST_EXCLUDE; 2175 + iml->sfmode = mode; 2187 2176 rcu_assign_pointer(inet->mc_list, iml); 2188 - ip_mc_inc_group(in_dev, addr); 2177 + __ip_mc_inc_group(in_dev, addr, mode); 2189 2178 err = 0; 2190 2179 done: 2191 2180 return err; 2192 2181 } 2182 + 2183 + /* Join ASM (Any-Source Multicast) group 2184 + */ 2185 + int ip_mc_join_group(struct sock *sk, struct ip_mreqn *imr) 2186 + { 2187 + return __ip_mc_join_group(sk, imr, MCAST_EXCLUDE); 2188 + } 2193 2189 EXPORT_SYMBOL(ip_mc_join_group); 2190 + 2191 + /* Join SSM (Source-Specific Multicast) group 2192 + */ 2193 + int ip_mc_join_group_ssm(struct sock *sk, struct ip_mreqn *imr, 2194 + unsigned int mode) 2195 + { 2196 + return __ip_mc_join_group(sk, imr, mode); 2197 + } 2194 2198 2195 2199 static int ip_mc_leave_src(struct sock *sk, struct ip_mc_socklist *iml, 2196 2200 struct in_device *in_dev)
+1 -1
net/ipv4/inet_fragment.c
··· 90 90 91 91 void inet_frags_exit_net(struct netns_frags *nf) 92 92 { 93 - nf->low_thresh = 0; /* prevent creation of new frags */ 93 + nf->high_thresh = 0; /* prevent creation of new frags */ 94 94 95 95 rhashtable_free_and_destroy(&nf->rhashtable, inet_frags_free_cb, NULL); 96 96 }
+2 -2
net/ipv4/ip_sockglue.c
··· 984 984 mreq.imr_multiaddr.s_addr = mreqs.imr_multiaddr; 985 985 mreq.imr_address.s_addr = mreqs.imr_interface; 986 986 mreq.imr_ifindex = 0; 987 - err = ip_mc_join_group(sk, &mreq); 987 + err = ip_mc_join_group_ssm(sk, &mreq, MCAST_INCLUDE); 988 988 if (err && err != -EADDRINUSE) 989 989 break; 990 990 omode = MCAST_INCLUDE; ··· 1061 1061 mreq.imr_multiaddr = psin->sin_addr; 1062 1062 mreq.imr_address.s_addr = 0; 1063 1063 mreq.imr_ifindex = greqs.gsr_interface; 1064 - err = ip_mc_join_group(sk, &mreq); 1064 + err = ip_mc_join_group_ssm(sk, &mreq, MCAST_INCLUDE); 1065 1065 if (err && err != -EADDRINUSE) 1066 1066 break; 1067 1067 greqs.gsr_interface = mreq.imr_ifindex;
+1
net/ipv4/netfilter/ip_tables.c
··· 1898 1898 .checkentry = icmp_checkentry, 1899 1899 .proto = IPPROTO_ICMP, 1900 1900 .family = NFPROTO_IPV4, 1901 + .me = THIS_MODULE, 1901 1902 }, 1902 1903 }; 1903 1904
+12 -6
net/ipv4/netfilter/nf_tproxy_ipv4.c
··· 37 37 * to a listener socket if there's one */ 38 38 struct sock *sk2; 39 39 40 - sk2 = nf_tproxy_get_sock_v4(net, skb, hp, iph->protocol, 40 + sk2 = nf_tproxy_get_sock_v4(net, skb, iph->protocol, 41 41 iph->saddr, laddr ? laddr : iph->daddr, 42 42 hp->source, lport ? lport : hp->dest, 43 43 skb->dev, NF_TPROXY_LOOKUP_LISTENER); ··· 71 71 EXPORT_SYMBOL_GPL(nf_tproxy_laddr4); 72 72 73 73 struct sock * 74 - nf_tproxy_get_sock_v4(struct net *net, struct sk_buff *skb, void *hp, 74 + nf_tproxy_get_sock_v4(struct net *net, struct sk_buff *skb, 75 75 const u8 protocol, 76 76 const __be32 saddr, const __be32 daddr, 77 77 const __be16 sport, const __be16 dport, ··· 79 79 const enum nf_tproxy_lookup_t lookup_type) 80 80 { 81 81 struct sock *sk; 82 - struct tcphdr *tcph; 83 82 84 83 switch (protocol) { 85 - case IPPROTO_TCP: 84 + case IPPROTO_TCP: { 85 + struct tcphdr _hdr, *hp; 86 + 87 + hp = skb_header_pointer(skb, ip_hdrlen(skb), 88 + sizeof(struct tcphdr), &_hdr); 89 + if (hp == NULL) 90 + return NULL; 91 + 86 92 switch (lookup_type) { 87 93 case NF_TPROXY_LOOKUP_LISTENER: 88 - tcph = hp; 89 94 sk = inet_lookup_listener(net, &tcp_hashinfo, skb, 90 95 ip_hdrlen(skb) + 91 - __tcp_hdrlen(tcph), 96 + __tcp_hdrlen(hp), 92 97 saddr, sport, 93 98 daddr, dport, 94 99 in->ifindex, 0); ··· 115 110 BUG(); 116 111 } 117 112 break; 113 + } 118 114 case IPPROTO_UDP: 119 115 sk = udp4_lib_lookup(net, saddr, sport, daddr, dport, 120 116 in->ifindex);
+3 -2
net/ipv4/sysctl_net_ipv4.c
··· 189 189 if (write && ret == 0) { 190 190 low = make_kgid(user_ns, urange[0]); 191 191 high = make_kgid(user_ns, urange[1]); 192 - if (!gid_valid(low) || !gid_valid(high) || 193 - (urange[1] < urange[0]) || gid_lt(high, low)) { 192 + if (!gid_valid(low) || !gid_valid(high)) 193 + return -EINVAL; 194 + if (urange[1] < urange[0] || gid_lt(high, low)) { 194 195 low = make_kgid(&init_user_ns, 1); 195 196 high = make_kgid(&init_user_ns, 0); 196 197 }
+10 -6
net/ipv4/tcp.c
··· 1998 1998 * shouldn't happen. 1999 1999 */ 2000 2000 if (WARN(before(*seq, TCP_SKB_CB(skb)->seq), 2001 - "recvmsg bug: copied %X seq %X rcvnxt %X fl %X\n", 2001 + "TCP recvmsg seq # bug: copied %X, seq %X, rcvnxt %X, fl %X\n", 2002 2002 *seq, TCP_SKB_CB(skb)->seq, tp->rcv_nxt, 2003 2003 flags)) 2004 2004 break; ··· 2013 2013 if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) 2014 2014 goto found_fin_ok; 2015 2015 WARN(!(flags & MSG_PEEK), 2016 - "recvmsg bug 2: copied %X seq %X rcvnxt %X fl %X\n", 2016 + "TCP recvmsg seq # bug 2: copied %X, seq %X, rcvnxt %X, fl %X\n", 2017 2017 *seq, TCP_SKB_CB(skb)->seq, tp->rcv_nxt, flags); 2018 2018 } 2019 2019 ··· 2562 2562 2563 2563 tcp_clear_xmit_timers(sk); 2564 2564 __skb_queue_purge(&sk->sk_receive_queue); 2565 + tp->copied_seq = tp->rcv_nxt; 2566 + tp->urg_data = 0; 2565 2567 tcp_write_queue_purge(sk); 2566 2568 tcp_fastopen_active_disable_ofo_check(sk); 2567 2569 skb_rbtree_purge(&tp->out_of_order_queue); ··· 2823 2821 case TCP_REPAIR: 2824 2822 if (!tcp_can_repair_sock(sk)) 2825 2823 err = -EPERM; 2826 - else if (val == 1) { 2824 + else if (val == TCP_REPAIR_ON) { 2827 2825 tp->repair = 1; 2828 2826 sk->sk_reuse = SK_FORCE_REUSE; 2829 2827 tp->repair_queue = TCP_NO_QUEUE; 2830 - } else if (val == 0) { 2828 + } else if (val == TCP_REPAIR_OFF) { 2831 2829 tp->repair = 0; 2832 2830 sk->sk_reuse = SK_NO_REUSE; 2833 2831 tcp_send_window_probe(sk); 2832 + } else if (val == TCP_REPAIR_OFF_NO_WP) { 2833 + tp->repair = 0; 2834 + sk->sk_reuse = SK_NO_REUSE; 2834 2835 } else 2835 2836 err = -EINVAL; 2836 2837 ··· 3725 3720 struct request_sock *req = inet_reqsk(sk); 3726 3721 3727 3722 local_bh_disable(); 3728 - inet_csk_reqsk_queue_drop_and_put(req->rsk_listener, 3729 - req); 3723 + inet_csk_reqsk_queue_drop(req->rsk_listener, req); 3730 3724 local_bh_enable(); 3731 3725 return 0; 3732 3726 }
+4 -27
net/ipv4/tcp_dctcp.c
··· 55 55 u32 dctcp_alpha; 56 56 u32 next_seq; 57 57 u32 ce_state; 58 - u32 delayed_ack_reserved; 59 58 u32 loss_cwnd; 60 59 }; 61 60 ··· 95 96 96 97 ca->dctcp_alpha = min(dctcp_alpha_on_init, DCTCP_MAX_ALPHA); 97 98 98 - ca->delayed_ack_reserved = 0; 99 99 ca->loss_cwnd = 0; 100 100 ca->ce_state = 0; 101 101 ··· 132 134 /* State has changed from CE=0 to CE=1 and delayed 133 135 * ACK has not sent yet. 134 136 */ 135 - if (!ca->ce_state && ca->delayed_ack_reserved) { 137 + if (!ca->ce_state && 138 + inet_csk(sk)->icsk_ack.pending & ICSK_ACK_TIMER) { 136 139 u32 tmp_rcv_nxt; 137 140 138 141 /* Save current rcv_nxt. */ ··· 163 164 /* State has changed from CE=1 to CE=0 and delayed 164 165 * ACK has not sent yet. 165 166 */ 166 - if (ca->ce_state && ca->delayed_ack_reserved) { 167 + if (ca->ce_state && 168 + inet_csk(sk)->icsk_ack.pending & ICSK_ACK_TIMER) { 167 169 u32 tmp_rcv_nxt; 168 170 169 171 /* Save current rcv_nxt. */ ··· 248 248 } 249 249 } 250 250 251 - static void dctcp_update_ack_reserved(struct sock *sk, enum tcp_ca_event ev) 252 - { 253 - struct dctcp *ca = inet_csk_ca(sk); 254 - 255 - switch (ev) { 256 - case CA_EVENT_DELAYED_ACK: 257 - if (!ca->delayed_ack_reserved) 258 - ca->delayed_ack_reserved = 1; 259 - break; 260 - case CA_EVENT_NON_DELAYED_ACK: 261 - if (ca->delayed_ack_reserved) 262 - ca->delayed_ack_reserved = 0; 263 - break; 264 - default: 265 - /* Don't care for the rest. */ 266 - break; 267 - } 268 - } 269 - 270 251 static void dctcp_cwnd_event(struct sock *sk, enum tcp_ca_event ev) 271 252 { 272 253 switch (ev) { ··· 256 275 break; 257 276 case CA_EVENT_ECN_NO_CE: 258 277 dctcp_ce_state_1_to_0(sk); 259 - break; 260 - case CA_EVENT_DELAYED_ACK: 261 - case CA_EVENT_NON_DELAYED_ACK: 262 - dctcp_update_ack_reserved(sk, ev); 263 278 break; 264 279 default: 265 280 /* Don't care for the rest. */
+18 -5
net/ipv4/tcp_ipv4.c
··· 156 156 */ 157 157 if (tcptw->tw_ts_recent_stamp && 158 158 (!twp || (reuse && get_seconds() - tcptw->tw_ts_recent_stamp > 1))) { 159 - tp->write_seq = tcptw->tw_snd_nxt + 65535 + 2; 160 - if (tp->write_seq == 0) 161 - tp->write_seq = 1; 162 - tp->rx_opt.ts_recent = tcptw->tw_ts_recent; 163 - tp->rx_opt.ts_recent_stamp = tcptw->tw_ts_recent_stamp; 159 + /* In case of repair and re-using TIME-WAIT sockets we still 160 + * want to be sure that it is safe as above but honor the 161 + * sequence numbers and time stamps set as part of the repair 162 + * process. 163 + * 164 + * Without this check re-using a TIME-WAIT socket with TCP 165 + * repair would accumulate a -1 on the repair assigned 166 + * sequence number. The first time it is reused the sequence 167 + * is -1, the second time -2, etc. This fixes that issue 168 + * without appearing to create any others. 169 + */ 170 + if (likely(!tp->repair)) { 171 + tp->write_seq = tcptw->tw_snd_nxt + 65535 + 2; 172 + if (tp->write_seq == 0) 173 + tp->write_seq = 1; 174 + tp->rx_opt.ts_recent = tcptw->tw_ts_recent; 175 + tp->rx_opt.ts_recent_stamp = tcptw->tw_ts_recent_stamp; 176 + } 164 177 sock_hold(sktw); 165 178 return 1; 166 179 }
-4
net/ipv4/tcp_output.c
··· 3523 3523 int ato = icsk->icsk_ack.ato; 3524 3524 unsigned long timeout; 3525 3525 3526 - tcp_ca_event(sk, CA_EVENT_DELAYED_ACK); 3527 - 3528 3526 if (ato > TCP_DELACK_MIN) { 3529 3527 const struct tcp_sock *tp = tcp_sk(sk); 3530 3528 int max_ato = HZ / 2; ··· 3578 3580 /* If we have been reset, we may not send again. */ 3579 3581 if (sk->sk_state == TCP_CLOSE) 3580 3582 return; 3581 - 3582 - tcp_ca_event(sk, CA_EVENT_NON_DELAYED_ACK); 3583 3583 3584 3584 /* We are not putting this on the write queue, so 3585 3585 * tcp_transmit_skb() will set the ownership to this
+1
net/ipv6/Kconfig
··· 108 108 config IPV6_ILA 109 109 tristate "IPv6: Identifier Locator Addressing (ILA)" 110 110 depends on NETFILTER 111 + select DST_CACHE 111 112 select LWTUNNEL 112 113 ---help--- 113 114 Support for IPv6 Identifier Locator Addressing (ILA).
+3 -6
net/ipv6/calipso.c
··· 799 799 { 800 800 struct ipv6_txoptions *old = txopt_get(inet6_sk(sk)), *txopts; 801 801 802 - txopts = ipv6_renew_options_kern(sk, old, IPV6_HOPOPTS, 803 - hop, hop ? ipv6_optlen(hop) : 0); 802 + txopts = ipv6_renew_options(sk, old, IPV6_HOPOPTS, hop); 804 803 txopt_put(old); 805 804 if (IS_ERR(txopts)) 806 805 return PTR_ERR(txopts); ··· 1221 1222 if (IS_ERR(new)) 1222 1223 return PTR_ERR(new); 1223 1224 1224 - txopts = ipv6_renew_options_kern(sk, req_inet->ipv6_opt, IPV6_HOPOPTS, 1225 - new, new ? ipv6_optlen(new) : 0); 1225 + txopts = ipv6_renew_options(sk, req_inet->ipv6_opt, IPV6_HOPOPTS, new); 1226 1226 1227 1227 kfree(new); 1228 1228 ··· 1258 1260 if (calipso_opt_del(req_inet->ipv6_opt->hopopt, &new)) 1259 1261 return; /* Nothing to do */ 1260 1262 1261 - txopts = ipv6_renew_options_kern(sk, req_inet->ipv6_opt, IPV6_HOPOPTS, 1262 - new, new ? ipv6_optlen(new) : 0); 1263 + txopts = ipv6_renew_options(sk, req_inet->ipv6_opt, IPV6_HOPOPTS, new); 1263 1264 1264 1265 if (!IS_ERR(txopts)) { 1265 1266 txopts = xchg(&req_inet->ipv6_opt, txopts);
+30 -81
net/ipv6/exthdrs.c
··· 1015 1015 } 1016 1016 EXPORT_SYMBOL_GPL(ipv6_dup_options); 1017 1017 1018 - static int ipv6_renew_option(void *ohdr, 1019 - struct ipv6_opt_hdr __user *newopt, int newoptlen, 1020 - int inherit, 1021 - struct ipv6_opt_hdr **hdr, 1022 - char **p) 1018 + static void ipv6_renew_option(int renewtype, 1019 + struct ipv6_opt_hdr **dest, 1020 + struct ipv6_opt_hdr *old, 1021 + struct ipv6_opt_hdr *new, 1022 + int newtype, char **p) 1023 1023 { 1024 - if (inherit) { 1025 - if (ohdr) { 1026 - memcpy(*p, ohdr, ipv6_optlen((struct ipv6_opt_hdr *)ohdr)); 1027 - *hdr = (struct ipv6_opt_hdr *)*p; 1028 - *p += CMSG_ALIGN(ipv6_optlen(*hdr)); 1029 - } 1030 - } else { 1031 - if (newopt) { 1032 - if (copy_from_user(*p, newopt, newoptlen)) 1033 - return -EFAULT; 1034 - *hdr = (struct ipv6_opt_hdr *)*p; 1035 - if (ipv6_optlen(*hdr) > newoptlen) 1036 - return -EINVAL; 1037 - *p += CMSG_ALIGN(newoptlen); 1038 - } 1039 - } 1040 - return 0; 1024 + struct ipv6_opt_hdr *src; 1025 + 1026 + src = (renewtype == newtype ? 
new : old); 1027 + if (!src) 1028 + return; 1029 + 1030 + memcpy(*p, src, ipv6_optlen(src)); 1031 + *dest = (struct ipv6_opt_hdr *)*p; 1032 + *p += CMSG_ALIGN(ipv6_optlen(*dest)); 1041 1033 } 1042 1034 1043 1035 /** ··· 1055 1063 */ 1056 1064 struct ipv6_txoptions * 1057 1065 ipv6_renew_options(struct sock *sk, struct ipv6_txoptions *opt, 1058 - int newtype, 1059 - struct ipv6_opt_hdr __user *newopt, int newoptlen) 1066 + int newtype, struct ipv6_opt_hdr *newopt) 1060 1067 { 1061 1068 int tot_len = 0; 1062 1069 char *p; 1063 1070 struct ipv6_txoptions *opt2; 1064 - int err; 1065 1071 1066 1072 if (opt) { 1067 1073 if (newtype != IPV6_HOPOPTS && opt->hopopt) ··· 1072 1082 tot_len += CMSG_ALIGN(ipv6_optlen(opt->dst1opt)); 1073 1083 } 1074 1084 1075 - if (newopt && newoptlen) 1076 - tot_len += CMSG_ALIGN(newoptlen); 1085 + if (newopt) 1086 + tot_len += CMSG_ALIGN(ipv6_optlen(newopt)); 1077 1087 1078 1088 if (!tot_len) 1079 1089 return NULL; ··· 1088 1098 opt2->tot_len = tot_len; 1089 1099 p = (char *)(opt2 + 1); 1090 1100 1091 - err = ipv6_renew_option(opt ? opt->hopopt : NULL, newopt, newoptlen, 1092 - newtype != IPV6_HOPOPTS, 1093 - &opt2->hopopt, &p); 1094 - if (err) 1095 - goto out; 1096 - 1097 - err = ipv6_renew_option(opt ? opt->dst0opt : NULL, newopt, newoptlen, 1098 - newtype != IPV6_RTHDRDSTOPTS, 1099 - &opt2->dst0opt, &p); 1100 - if (err) 1101 - goto out; 1102 - 1103 - err = ipv6_renew_option(opt ? opt->srcrt : NULL, newopt, newoptlen, 1104 - newtype != IPV6_RTHDR, 1105 - (struct ipv6_opt_hdr **)&opt2->srcrt, &p); 1106 - if (err) 1107 - goto out; 1108 - 1109 - err = ipv6_renew_option(opt ? opt->dst1opt : NULL, newopt, newoptlen, 1110 - newtype != IPV6_DSTOPTS, 1111 - &opt2->dst1opt, &p); 1112 - if (err) 1113 - goto out; 1101 + ipv6_renew_option(IPV6_HOPOPTS, &opt2->hopopt, 1102 + (opt ? opt->hopopt : NULL), 1103 + newopt, newtype, &p); 1104 + ipv6_renew_option(IPV6_RTHDRDSTOPTS, &opt2->dst0opt, 1105 + (opt ? 
opt->dst0opt : NULL), 1106 + newopt, newtype, &p); 1107 + ipv6_renew_option(IPV6_RTHDR, 1108 + (struct ipv6_opt_hdr **)&opt2->srcrt, 1109 + (opt ? (struct ipv6_opt_hdr *)opt->srcrt : NULL), 1110 + newopt, newtype, &p); 1111 + ipv6_renew_option(IPV6_DSTOPTS, &opt2->dst1opt, 1112 + (opt ? opt->dst1opt : NULL), 1113 + newopt, newtype, &p); 1114 1114 1115 1115 opt2->opt_nflen = (opt2->hopopt ? ipv6_optlen(opt2->hopopt) : 0) + 1116 1116 (opt2->dst0opt ? ipv6_optlen(opt2->dst0opt) : 0) + ··· 1108 1128 opt2->opt_flen = (opt2->dst1opt ? ipv6_optlen(opt2->dst1opt) : 0); 1109 1129 1110 1130 return opt2; 1111 - out: 1112 - sock_kfree_s(sk, opt2, opt2->tot_len); 1113 - return ERR_PTR(err); 1114 - } 1115 - 1116 - /** 1117 - * ipv6_renew_options_kern - replace a specific ext hdr with a new one. 1118 - * 1119 - * @sk: sock from which to allocate memory 1120 - * @opt: original options 1121 - * @newtype: option type to replace in @opt 1122 - * @newopt: new option of type @newtype to replace (kernel-mem) 1123 - * @newoptlen: length of @newopt 1124 - * 1125 - * See ipv6_renew_options(). The difference is that @newopt is 1126 - * kernel memory, rather than user memory. 1127 - */ 1128 - struct ipv6_txoptions * 1129 - ipv6_renew_options_kern(struct sock *sk, struct ipv6_txoptions *opt, 1130 - int newtype, struct ipv6_opt_hdr *newopt, 1131 - int newoptlen) 1132 - { 1133 - struct ipv6_txoptions *ret_val; 1134 - const mm_segment_t old_fs = get_fs(); 1135 - 1136 - set_fs(KERNEL_DS); 1137 - ret_val = ipv6_renew_options(sk, opt, newtype, 1138 - (struct ipv6_opt_hdr __user *)newopt, 1139 - newoptlen); 1140 - set_fs(old_fs); 1141 - return ret_val; 1142 1131 } 1143 1132 1144 1133 struct ipv6_txoptions *ipv6_fixup_options(struct ipv6_txoptions *opt_space,
+89 -73
net/ipv6/ip6_fib.c
···
 {
 	struct fib6_info *leaf = rcu_dereference_protected(fn->leaf,
 				lockdep_is_held(&rt->fib6_table->tb6_lock));
-	enum fib_event_type event = FIB_EVENT_ENTRY_ADD;
-	struct fib6_info *iter = NULL, *match = NULL;
+	struct fib6_info *iter = NULL;
 	struct fib6_info __rcu **ins;
+	struct fib6_info __rcu **fallback_ins = NULL;
 	int replace = (info->nlh &&
 		       (info->nlh->nlmsg_flags & NLM_F_REPLACE));
-	int append = (info->nlh &&
-		      (info->nlh->nlmsg_flags & NLM_F_APPEND));
 	int add = (!info->nlh ||
 		   (info->nlh->nlmsg_flags & NLM_F_CREATE));
 	int found = 0;
+	bool rt_can_ecmp = rt6_qualify_for_ecmp(rt);
 	u16 nlflags = NLM_F_EXCL;
 	int err;
 
-	if (append)
+	if (info->nlh && (info->nlh->nlmsg_flags & NLM_F_APPEND))
 		nlflags |= NLM_F_APPEND;
 
 	ins = &fn->leaf;
···
 
 			nlflags &= ~NLM_F_EXCL;
 			if (replace) {
-				found++;
-				break;
+				if (rt_can_ecmp == rt6_qualify_for_ecmp(iter)) {
+					found++;
+					break;
+				}
+				if (rt_can_ecmp)
+					fallback_ins = fallback_ins ?: ins;
+				goto next_iter;
 			}
 
 			if (rt6_duplicate_nexthop(iter, rt)) {
···
 					fib6_metric_set(iter, RTAX_MTU, rt->fib6_pmtu);
 				return -EEXIST;
 			}
-
-			/* first route that matches */
-			if (!match)
-				match = iter;
+			/* If we have the same destination and the same metric,
+			 * but not the same gateway, then the route we try to
+			 * add is sibling to this route, increment our counter
+			 * of siblings, and later we will add our route to the
+			 * list.
+			 * Only static routes (which don't have flag
+			 * RTF_EXPIRES) are used for ECMPv6.
+			 *
+			 * To avoid long list, we only had siblings if the
+			 * route have a gateway.
+			 */
+			if (rt_can_ecmp &&
+			    rt6_qualify_for_ecmp(iter))
+				rt->fib6_nsiblings++;
 		}
 
 		if (iter->fib6_metric > rt->fib6_metric)
 			break;
 
+next_iter:
 		ins = &iter->fib6_next;
+	}
+
+	if (fallback_ins && !found) {
+		/* No ECMP-able route found, replace first non-ECMP one */
+		ins = fallback_ins;
+		iter = rcu_dereference_protected(*ins,
+				lockdep_is_held(&rt->fib6_table->tb6_lock));
+		found++;
 	}
 
 	/* Reset round-robin state, if necessary */
···
 		fn->rr_ptr = NULL;
 
 	/* Link this route to others same route. */
-	if (append && match) {
+	if (rt->fib6_nsiblings) {
+		unsigned int fib6_nsiblings;
 		struct fib6_info *sibling, *temp_sibling;
 
-		if (rt->fib6_flags & RTF_REJECT) {
-			NL_SET_ERR_MSG(extack,
-				       "Can not append a REJECT route");
-			return -EINVAL;
-		} else if (match->fib6_flags & RTF_REJECT) {
-			NL_SET_ERR_MSG(extack,
-				       "Can not append to a REJECT route");
-			return -EINVAL;
+		/* Find the first route that have the same metric */
+		sibling = leaf;
+		while (sibling) {
+			if (sibling->fib6_metric == rt->fib6_metric &&
+			    rt6_qualify_for_ecmp(sibling)) {
+				list_add_tail(&rt->fib6_siblings,
+					      &sibling->fib6_siblings);
+				break;
+			}
+			sibling = rcu_dereference_protected(sibling->fib6_next,
+				    lockdep_is_held(&rt->fib6_table->tb6_lock));
 		}
-		event = FIB_EVENT_ENTRY_APPEND;
-		rt->fib6_nsiblings = match->fib6_nsiblings;
-		list_add_tail(&rt->fib6_siblings, &match->fib6_siblings);
-		match->fib6_nsiblings++;
-
 		/* For each sibling in the list, increment the counter of
 		 * siblings. BUG() if counters does not match, list of siblings
 		 * is broken!
 		 */
+		fib6_nsiblings = 0;
 		list_for_each_entry_safe(sibling, temp_sibling,
-					 &match->fib6_siblings, fib6_siblings) {
+					 &rt->fib6_siblings, fib6_siblings) {
 			sibling->fib6_nsiblings++;
-			BUG_ON(sibling->fib6_nsiblings != match->fib6_nsiblings);
+			BUG_ON(sibling->fib6_nsiblings != rt->fib6_nsiblings);
+			fib6_nsiblings++;
 		}
-
-		rt6_multipath_rebalance(match);
+		BUG_ON(fib6_nsiblings != rt->fib6_nsiblings);
+		rt6_multipath_rebalance(temp_sibling);
 	}
 
 	/*
···
 add:
 		nlflags |= NLM_F_CREATE;
 
-		err = call_fib6_entry_notifiers(info->nl_net, event, rt,
-						extack);
+		err = call_fib6_entry_notifiers(info->nl_net,
+						FIB_EVENT_ENTRY_ADD,
+						rt, extack);
 		if (err)
 			return err;
···
 		}
 
 	} else {
-		struct fib6_info *tmp;
+		int nsiblings;
 
 		if (!found) {
 			if (add)
···
 		if (err)
 			return err;
 
-		/* if route being replaced has siblings, set tmp to
-		 * last one, otherwise tmp is current route. this is
-		 * used to set fib6_next for new route
-		 */
-		if (iter->fib6_nsiblings)
-			tmp = list_last_entry(&iter->fib6_siblings,
-					      struct fib6_info,
-					      fib6_siblings);
-		else
-			tmp = iter;
-
-		/* insert new route */
 		atomic_inc(&rt->fib6_ref);
 		rcu_assign_pointer(rt->fib6_node, fn);
-		rt->fib6_next = tmp->fib6_next;
+		rt->fib6_next = iter->fib6_next;
 		rcu_assign_pointer(*ins, rt);
-
 		if (!info->skip_notify)
 			inet6_rt_notify(RTM_NEWROUTE, rt, info, NLM_F_REPLACE);
 		if (!(fn->fn_flags & RTN_RTINFO)) {
 			info->nl_net->ipv6.rt6_stats->fib_route_nodes++;
 			fn->fn_flags |= RTN_RTINFO;
 		}
-
-		/* delete old route */
-		rt = iter;
-
-		if (rt->fib6_nsiblings) {
-			struct fib6_info *tmp;
-
-			/* Replacing an ECMP route, remove all siblings */
-			list_for_each_entry_safe(iter, tmp, &rt->fib6_siblings,
-						 fib6_siblings) {
-				iter->fib6_node = NULL;
-				fib6_purge_rt(iter, fn, info->nl_net);
-				if (rcu_access_pointer(fn->rr_ptr) == iter)
-					fn->rr_ptr = NULL;
-				fib6_info_release(iter);
-
-				rt->fib6_nsiblings--;
-				info->nl_net->ipv6.rt6_stats->fib_rt_entries--;
-			}
-		}
-
-		WARN_ON(rt->fib6_nsiblings != 0);
-
-		rt->fib6_node = NULL;
-		fib6_purge_rt(rt, fn, info->nl_net);
-		if (rcu_access_pointer(fn->rr_ptr) == rt)
+		nsiblings = iter->fib6_nsiblings;
+		iter->fib6_node = NULL;
+		fib6_purge_rt(iter, fn, info->nl_net);
+		if (rcu_access_pointer(fn->rr_ptr) == iter)
 			fn->rr_ptr = NULL;
-		fib6_info_release(rt);
+		fib6_info_release(iter);
+
+		if (nsiblings) {
+			/* Replacing an ECMP route, remove all siblings */
+			ins = &rt->fib6_next;
+			iter = rcu_dereference_protected(*ins,
+					lockdep_is_held(&rt->fib6_table->tb6_lock));
+			while (iter) {
+				if (iter->fib6_metric > rt->fib6_metric)
+					break;
+				if (rt6_qualify_for_ecmp(iter)) {
+					*ins = iter->fib6_next;
+					iter->fib6_node = NULL;
+					fib6_purge_rt(iter, fn, info->nl_net);
+					if (rcu_access_pointer(fn->rr_ptr) == iter)
+						fn->rr_ptr = NULL;
+					fib6_info_release(iter);
+					nsiblings--;
+					info->nl_net->ipv6.rt6_stats->fib_rt_entries--;
+				} else {
+					ins = &iter->fib6_next;
+				}
+				iter = rcu_dereference_protected(*ins,
+						lockdep_is_held(&rt->fib6_table->tb6_lock));
+			}
+			WARN_ON(nsiblings != 0);
+		}
 	}
 
 	return 0;
+2 -1
net/ipv6/ip6_gre.c
···
 static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
 					 struct net_device *dev)
 {
-	struct ipv6hdr *ipv6h = ipv6_hdr(skb);
 	struct ip6_tnl *t = netdev_priv(dev);
 	struct dst_entry *dst = skb_dst(skb);
 	struct net_device_stats *stats;
···
 			goto tx_err;
 		}
 	} else {
+		struct ipv6hdr *ipv6h = ipv6_hdr(skb);
+
 		switch (skb->protocol) {
 		case htons(ETH_P_IP):
 			memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt));
+22 -10
net/ipv6/ipv6_sockglue.c
···
 	case IPV6_DSTOPTS:
 	{
 		struct ipv6_txoptions *opt;
+		struct ipv6_opt_hdr *new = NULL;
+
+		/* hop-by-hop / destination options are privileged option */
+		retv = -EPERM;
+		if (optname != IPV6_RTHDR && !ns_capable(net->user_ns, CAP_NET_RAW))
+			break;
 
 		/* remove any sticky options header with a zero option
 		 * length, per RFC3542.
···
 		else if (optlen < sizeof(struct ipv6_opt_hdr) ||
 			 optlen & 0x7 || optlen > 8 * 255)
 			goto e_inval;
-
-		/* hop-by-hop / destination options are privileged option */
-		retv = -EPERM;
-		if (optname != IPV6_RTHDR && !ns_capable(net->user_ns, CAP_NET_RAW))
-			break;
+		else {
+			new = memdup_user(optval, optlen);
+			if (IS_ERR(new)) {
+				retv = PTR_ERR(new);
+				break;
+			}
+			if (unlikely(ipv6_optlen(new) > optlen)) {
+				kfree(new);
+				goto e_inval;
+			}
+		}
 
 		opt = rcu_dereference_protected(np->opt,
 						lockdep_sock_is_held(sk));
-		opt = ipv6_renew_options(sk, opt, optname,
-					 (struct ipv6_opt_hdr __user *)optval,
-					 optlen);
+		opt = ipv6_renew_options(sk, opt, optname, new);
+		kfree(new);
 		if (IS_ERR(opt)) {
 			retv = PTR_ERR(opt);
 			break;
···
 		struct sockaddr_in6 *psin6;
 
 		psin6 = (struct sockaddr_in6 *)&greqs.gsr_group;
-		retv = ipv6_sock_mc_join(sk, greqs.gsr_interface,
-					 &psin6->sin6_addr);
+		retv = ipv6_sock_mc_join_ssm(sk, greqs.gsr_interface,
+					     &psin6->sin6_addr,
+					     MCAST_INCLUDE);
 		/* prior join w/ different source is ok */
 		if (retv && retv != -EADDRINUSE)
 			break;
+45 -19
net/ipv6/mcast.c
···
 			  int delta);
 static int ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *iml,
 			    struct inet6_dev *idev);
+static int __ipv6_dev_mc_inc(struct net_device *dev,
+			     const struct in6_addr *addr, unsigned int mode);
 
 #define MLD_QRV_DEFAULT 2
 /* RFC3810, 9.2. Query Interval */
···
 	return iv > 0 ? iv : 1;
 }
 
-int ipv6_sock_mc_join(struct sock *sk, int ifindex, const struct in6_addr *addr)
+static int __ipv6_sock_mc_join(struct sock *sk, int ifindex,
+			       const struct in6_addr *addr, unsigned int mode)
 {
 	struct net_device *dev = NULL;
 	struct ipv6_mc_socklist *mc_lst;
···
 	}
 
 	mc_lst->ifindex = dev->ifindex;
-	mc_lst->sfmode = MCAST_EXCLUDE;
+	mc_lst->sfmode = mode;
 	rwlock_init(&mc_lst->sflock);
 	mc_lst->sflist = NULL;
···
 	 * now add/increase the group membership on the device
 	 */
 
-	err = ipv6_dev_mc_inc(dev, addr);
+	err = __ipv6_dev_mc_inc(dev, addr, mode);
 
 	if (err) {
 		sock_kfree_s(sk, mc_lst, sizeof(*mc_lst));
···
 
 	return 0;
 }
+
+int ipv6_sock_mc_join(struct sock *sk, int ifindex, const struct in6_addr *addr)
+{
+	return __ipv6_sock_mc_join(sk, ifindex, addr, MCAST_EXCLUDE);
+}
 EXPORT_SYMBOL(ipv6_sock_mc_join);
+
+int ipv6_sock_mc_join_ssm(struct sock *sk, int ifindex,
+			  const struct in6_addr *addr, unsigned int mode)
+{
+	return __ipv6_sock_mc_join(sk, ifindex, addr, mode);
+}
 
 /*
  *	socket leave on multicast group
···
 	return rv;
 }
 
-static void igmp6_group_added(struct ifmcaddr6 *mc)
+static void igmp6_group_added(struct ifmcaddr6 *mc, unsigned int mode)
 {
 	struct net_device *dev = mc->idev->dev;
 	char buf[MAX_ADDR_LEN];
···
 	}
 	/* else v2 */
 
-	mc->mca_crcount = mc->idev->mc_qrv;
+	/* Based on RFC3810 6.1, for newly added INCLUDE SSM, we
+	 * should not send filter-mode change record as the mode
+	 * should be from IN() to IN(A).
+	 */
+	if (mode == MCAST_EXCLUDE)
+		mc->mca_crcount = mc->idev->mc_qrv;
+
 	mld_ifc_event(mc->idev);
 }
···
 	spin_lock_bh(&im->mca_lock);
 	if (pmc) {
 		im->idev = pmc->idev;
-		im->mca_crcount = idev->mc_qrv;
 		im->mca_sfmode = pmc->mca_sfmode;
 		if (pmc->mca_sfmode == MCAST_INCLUDE) {
 			im->mca_tomb = pmc->mca_tomb;
 			im->mca_sources = pmc->mca_sources;
 			for (psf = im->mca_sources; psf; psf = psf->sf_next)
-				psf->sf_crcount = im->mca_crcount;
+				psf->sf_crcount = idev->mc_qrv;
+		} else {
+			im->mca_crcount = idev->mc_qrv;
 		}
 		in6_dev_put(pmc->idev);
 		kfree(pmc);
···
 }
 
 static struct ifmcaddr6 *mca_alloc(struct inet6_dev *idev,
-				   const struct in6_addr *addr)
+				   const struct in6_addr *addr,
+				   unsigned int mode)
 {
 	struct ifmcaddr6 *mc;
···
 	refcount_set(&mc->mca_refcnt, 1);
 	spin_lock_init(&mc->mca_lock);
 
-	/* initial mode is (EX, empty) */
-	mc->mca_sfmode = MCAST_EXCLUDE;
-	mc->mca_sfcount[MCAST_EXCLUDE] = 1;
+	mc->mca_sfmode = mode;
+	mc->mca_sfcount[mode] = 1;
 
 	if (ipv6_addr_is_ll_all_nodes(&mc->mca_addr) ||
 	    IPV6_ADDR_MC_SCOPE(&mc->mca_addr) < IPV6_ADDR_SCOPE_LINKLOCAL)
···
 /*
  *	device multicast group inc (add if not found)
  */
-int ipv6_dev_mc_inc(struct net_device *dev, const struct in6_addr *addr)
+static int __ipv6_dev_mc_inc(struct net_device *dev,
+			     const struct in6_addr *addr, unsigned int mode)
 {
 	struct ifmcaddr6 *mc;
 	struct inet6_dev *idev;
···
 		if (ipv6_addr_equal(&mc->mca_addr, addr)) {
 			mc->mca_users++;
 			write_unlock_bh(&idev->lock);
-			ip6_mc_add_src(idev, &mc->mca_addr, MCAST_EXCLUDE, 0,
-				       NULL, 0);
+			ip6_mc_add_src(idev, &mc->mca_addr, mode, 0, NULL, 0);
 			in6_dev_put(idev);
 			return 0;
 		}
 	}
 
-	mc = mca_alloc(idev, addr);
+	mc = mca_alloc(idev, addr, mode);
 	if (!mc) {
 		write_unlock_bh(&idev->lock);
 		in6_dev_put(idev);
···
 	write_unlock_bh(&idev->lock);
 
 	mld_del_delrec(idev, mc);
-	igmp6_group_added(mc);
+	igmp6_group_added(mc, mode);
 	ma_put(mc);
 	return 0;
+}
+
+int ipv6_dev_mc_inc(struct net_device *dev, const struct in6_addr *addr)
+{
+	return __ipv6_dev_mc_inc(dev, addr, MCAST_EXCLUDE);
 }
 
 /*
···
 
 		psf_next = psf->sf_next;
 
-		if (!is_in(pmc, psf, type, gdeleted, sdeleted)) {
+		if (!is_in(pmc, psf, type, gdeleted, sdeleted) && !crsend) {
 			psf_prev = psf;
 			continue;
 		}
···
 		if (pmc->mca_sfcount[MCAST_EXCLUDE])
 			type = MLD2_CHANGE_TO_EXCLUDE;
 		else
-			type = MLD2_CHANGE_TO_INCLUDE;
+			type = MLD2_ALLOW_NEW_SOURCES;
 		skb = add_grec(skb, pmc, type, 0, 0, 1);
 		spin_unlock_bh(&pmc->mca_lock);
 	}
···
 	ipv6_mc_reset(idev);
 	for (i = idev->mc_list; i; i = i->next) {
 		mld_del_delrec(idev, i);
-		igmp6_group_added(i);
+		igmp6_group_added(i, i->mca_sfmode);
 	}
 	read_unlock_bh(&idev->lock);
 }
+1 -1
net/ipv6/ndisc.c
···
 			return;
 		}
 	}
-	if (ndopts.nd_opts_nonce)
+	if (ndopts.nd_opts_nonce && ndopts.nd_opts_nonce->nd_opt_len == 1)
 		memcpy(&nonce, (u8 *)(ndopts.nd_opts_nonce + 1), 6);
 
 	inc = ipv6_addr_is_multicast(daddr);
+1
net/ipv6/netfilter/ip6_tables.c
···
 		.checkentry = icmp6_checkentry,
 		.proto      = IPPROTO_ICMPV6,
 		.family     = NFPROTO_IPV6,
+		.me	    = THIS_MODULE,
 	},
 };
+2
net/ipv6/netfilter/nf_conntrack_reasm.c
···
 	    fq->q.meat == fq->q.len &&
 	    nf_ct_frag6_reasm(fq, skb, dev))
 		ret = 0;
+	else
+		skb_dst_drop(skb);
 
 out_unlock:
 	spin_unlock_bh(&fq->q.lock);
+12 -6
net/ipv6/netfilter/nf_tproxy_ipv6.c
···
 	 * to a listener socket if there's one */
 	struct sock *sk2;
 
-	sk2 = nf_tproxy_get_sock_v6(net, skb, thoff, hp, tproto,
+	sk2 = nf_tproxy_get_sock_v6(net, skb, thoff, tproto,
 				    &iph->saddr,
 				    nf_tproxy_laddr6(skb, laddr, &iph->daddr),
 				    hp->source,
···
 EXPORT_SYMBOL_GPL(nf_tproxy_handle_time_wait6);
 
 struct sock *
-nf_tproxy_get_sock_v6(struct net *net, struct sk_buff *skb, int thoff, void *hp,
+nf_tproxy_get_sock_v6(struct net *net, struct sk_buff *skb, int thoff,
 		      const u8 protocol,
 		      const struct in6_addr *saddr, const struct in6_addr *daddr,
 		      const __be16 sport, const __be16 dport,
···
 		      const enum nf_tproxy_lookup_t lookup_type)
 {
 	struct sock *sk;
-	struct tcphdr *tcph;
 
 	switch (protocol) {
-	case IPPROTO_TCP:
+	case IPPROTO_TCP: {
+		struct tcphdr _hdr, *hp;
+
+		hp = skb_header_pointer(skb, thoff,
+					sizeof(struct tcphdr), &_hdr);
+		if (hp == NULL)
+			return NULL;
+
 		switch (lookup_type) {
 		case NF_TPROXY_LOOKUP_LISTENER:
-			tcph = hp;
 			sk = inet6_lookup_listener(net, &tcp_hashinfo, skb,
-						   thoff + __tcp_hdrlen(tcph),
+						   thoff + __tcp_hdrlen(hp),
 						   saddr, sport,
 						   daddr, ntohs(dport),
 						   in->ifindex, 0);
···
 			BUG();
 		}
 		break;
+		}
 	case IPPROTO_UDP:
 		sk = udp6_lib_lookup(net, saddr, sport, daddr, dport,
 				     in->ifindex);
+8 -2
net/ipv6/route.c
···
 			lockdep_is_held(&rt->fib6_table->tb6_lock));
 	while (iter) {
 		if (iter->fib6_metric == rt->fib6_metric &&
-		    iter->fib6_nsiblings)
+		    rt6_qualify_for_ecmp(iter))
 			return iter;
 		iter = rcu_dereference_protected(iter->fib6_next,
 				lockdep_is_held(&rt->fib6_table->tb6_lock));
···
 			rt = NULL;
 			goto cleanup;
 		}
+		if (!rt6_qualify_for_ecmp(rt)) {
+			err = -EINVAL;
+			NL_SET_ERR_MSG(extack,
+				       "Device only routes can not be added for IPv6 using the multipath API.");
+			fib6_info_release(rt);
+			goto cleanup;
+		}
 
 		rt->fib6_nh.nh_weight = rtnh->rtnh_hops + 1;
···
 		 */
 		cfg->fc_nlinfo.nlh->nlmsg_flags &= ~(NLM_F_EXCL |
 						     NLM_F_REPLACE);
-		cfg->fc_nlinfo.nlh->nlmsg_flags |= NLM_F_APPEND;
 		nhn++;
 	}
+1 -1
net/ipv6/seg6_iptunnel.c
···
 
 	if (do_flowlabel > 0) {
 		hash = skb_get_hash(skb);
-		rol32(hash, 16);
+		hash = rol32(hash, 16);
 		flowlabel = (__force __be32)hash & IPV6_FLOWLABEL_MASK;
 	} else if (!do_flowlabel && skb->protocol == htons(ETH_P_IPV6)) {
 		flowlabel = ip6_flowlabel(inner_hdr);
+7 -18
net/netfilter/Kconfig
···
 
 if NF_TABLES
 
+config NF_TABLES_SET
+	tristate "Netfilter nf_tables set infrastructure"
+	help
+	  This option enables the nf_tables set infrastructure that allows to
+	  look up for elements in a set and to build one-way mappings between
+	  matchings and actions.
+
 config NF_TABLES_INET
 	depends on IPV6
 	select NF_TABLES_IPV4
···
 	help
 	  This option adds the "flow_offload" expression that you can use to
 	  choose what flows are placed into the hardware.
-
-config NFT_SET_RBTREE
-	tristate "Netfilter nf_tables rbtree set module"
-	help
-	  This option adds the "rbtree" set type (Red Black tree) that is used
-	  to build interval-based sets.
-
-config NFT_SET_HASH
-	tristate "Netfilter nf_tables hash set module"
-	help
-	  This option adds the "hash" set type that is used to build one-way
-	  mappings between matchings and actions.
-
-config NFT_SET_BITMAP
-	tristate "Netfilter nf_tables bitmap set module"
-	help
-	  This option adds the "bitmap" set type that is used to build sets
-	  whose keys are smaller or equal to 16 bits.
 
 config NFT_COUNTER
 	tristate "Netfilter nf_tables counter module"
+4 -3
net/netfilter/Makefile
···
 		  nft_bitwise.o nft_byteorder.o nft_payload.o nft_lookup.o \
 		  nft_dynset.o nft_meta.o nft_rt.o nft_exthdr.o
 
+nf_tables_set-objs := nf_tables_set_core.o \
+		      nft_set_hash.o nft_set_bitmap.o nft_set_rbtree.o
+
 obj-$(CONFIG_NF_TABLES)		+= nf_tables.o
+obj-$(CONFIG_NF_TABLES_SET)	+= nf_tables_set.o
 obj-$(CONFIG_NFT_COMPAT)	+= nft_compat.o
 obj-$(CONFIG_NFT_CONNLIMIT)	+= nft_connlimit.o
 obj-$(CONFIG_NFT_NUMGEN)	+= nft_numgen.o
···
 obj-$(CONFIG_NFT_QUOTA)		+= nft_quota.o
 obj-$(CONFIG_NFT_REJECT)	+= nft_reject.o
 obj-$(CONFIG_NFT_REJECT_INET)	+= nft_reject_inet.o
-obj-$(CONFIG_NFT_SET_RBTREE)	+= nft_set_rbtree.o
-obj-$(CONFIG_NFT_SET_HASH)	+= nft_set_hash.o
-obj-$(CONFIG_NFT_SET_BITMAP)	+= nft_set_bitmap.o
 obj-$(CONFIG_NFT_COUNTER)	+= nft_counter.o
 obj-$(CONFIG_NFT_LOG)		+= nft_log.o
 obj-$(CONFIG_NFT_MASQ)		+= nft_masq.o
+1 -1
net/netfilter/nf_conntrack_core.c
···
 		return -EOPNOTSUPP;
 
 	/* On boot, we can set this without any fancy locking. */
-	if (!nf_conntrack_htable_size)
+	if (!nf_conntrack_hash)
 		return param_set_uint(val, kp);
 
 	rc = kstrtouint(val, 0, &hashsize);
+28
net/netfilter/nf_tables_set_core.c
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <net/netfilter/nf_tables_core.h>
+
+static int __init nf_tables_set_module_init(void)
+{
+	nft_register_set(&nft_set_hash_fast_type);
+	nft_register_set(&nft_set_hash_type);
+	nft_register_set(&nft_set_rhash_type);
+	nft_register_set(&nft_set_bitmap_type);
+	nft_register_set(&nft_set_rbtree_type);
+
+	return 0;
+}
+
+static void __exit nf_tables_set_module_exit(void)
+{
+	nft_unregister_set(&nft_set_rbtree_type);
+	nft_unregister_set(&nft_set_bitmap_type);
+	nft_unregister_set(&nft_set_rhash_type);
+	nft_unregister_set(&nft_set_hash_type);
+	nft_unregister_set(&nft_set_hash_fast_type);
+}
+
+module_init(nf_tables_set_module_init);
+module_exit(nf_tables_set_module_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_NFT_SET();
+13
net/netfilter/nft_compat.c
···
 	rev = ntohl(nla_get_be32(tb[NFTA_TARGET_REV]));
 	family = ctx->family;
 
+	if (strcmp(tg_name, XT_ERROR_TARGET) == 0 ||
+	    strcmp(tg_name, XT_STANDARD_TARGET) == 0 ||
+	    strcmp(tg_name, "standard") == 0)
+		return ERR_PTR(-EINVAL);
+
 	/* Re-use the existing target if it's already loaded. */
 	list_for_each_entry(nft_target, &nft_target_list, head) {
 		struct xt_target *target = nft_target->ops.data;
+
+		if (!target->target)
+			continue;
 
 		if (nft_target_cmp(target, tg_name, rev, family))
 			return &nft_target->ops;
···
 	target = xt_request_find_target(family, tg_name, rev);
 	if (IS_ERR(target))
 		return ERR_PTR(-ENOENT);
+
+	if (!target->target) {
+		err = -EINVAL;
+		goto err;
+	}
 
 	if (target->targetsize > nla_len(tb[NFTA_TARGET_INFO])) {
 		err = -EINVAL;
+1 -18
net/netfilter/nft_set_bitmap.c
···
 	return true;
 }
 
-static struct nft_set_type nft_bitmap_type __read_mostly = {
+struct nft_set_type nft_set_bitmap_type __read_mostly = {
 	.owner		= THIS_MODULE,
 	.ops		= {
 		.privsize	= nft_bitmap_privsize,
···
 		.get	= nft_bitmap_get,
 	},
 };
-
-static int __init nft_bitmap_module_init(void)
-{
-	return nft_register_set(&nft_bitmap_type);
-}
-
-static void __exit nft_bitmap_module_exit(void)
-{
-	nft_unregister_set(&nft_bitmap_type);
-}
-
-module_init(nft_bitmap_module_init);
-module_exit(nft_bitmap_module_exit);
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>");
-MODULE_ALIAS_NFT_SET();
+3 -26
net/netfilter/nft_set_hash.c
···
 	return true;
 }
 
-static struct nft_set_type nft_rhash_type __read_mostly = {
+struct nft_set_type nft_set_rhash_type __read_mostly = {
 	.owner		= THIS_MODULE,
 	.features	= NFT_SET_MAP | NFT_SET_OBJECT |
 			  NFT_SET_TIMEOUT | NFT_SET_EVAL,
···
 	},
 };
 
-static struct nft_set_type nft_hash_type __read_mostly = {
+struct nft_set_type nft_set_hash_type __read_mostly = {
 	.owner		= THIS_MODULE,
 	.features	= NFT_SET_MAP | NFT_SET_OBJECT,
 	.ops		= {
···
 	},
 };
 
-static struct nft_set_type nft_hash_fast_type __read_mostly = {
+struct nft_set_type nft_set_hash_fast_type __read_mostly = {
 	.owner		= THIS_MODULE,
 	.features	= NFT_SET_MAP | NFT_SET_OBJECT,
 	.ops		= {
···
 		.get	= nft_hash_get,
 	},
 };
-
-static int __init nft_hash_module_init(void)
-{
-	if (nft_register_set(&nft_hash_fast_type) ||
-	    nft_register_set(&nft_hash_type) ||
-	    nft_register_set(&nft_rhash_type))
-		return 1;
-	return 0;
-}
-
-static void __exit nft_hash_module_exit(void)
-{
-	nft_unregister_set(&nft_rhash_type);
-	nft_unregister_set(&nft_hash_type);
-	nft_unregister_set(&nft_hash_fast_type);
-}
-
-module_init(nft_hash_module_init);
-module_exit(nft_hash_module_exit);
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>");
-MODULE_ALIAS_NFT_SET();
+1 -18
net/netfilter/nft_set_rbtree.c
···
 	return true;
 }
 
-static struct nft_set_type nft_rbtree_type __read_mostly = {
+struct nft_set_type nft_set_rbtree_type __read_mostly = {
 	.owner		= THIS_MODULE,
 	.features	= NFT_SET_INTERVAL | NFT_SET_MAP | NFT_SET_OBJECT | NFT_SET_TIMEOUT,
 	.ops		= {
···
 		.get	= nft_rbtree_get,
 	},
 };
-
-static int __init nft_rbtree_module_init(void)
-{
-	return nft_register_set(&nft_rbtree_type);
-}
-
-static void __exit nft_rbtree_module_exit(void)
-{
-	nft_unregister_set(&nft_rbtree_type);
-}
-
-module_init(nft_rbtree_module_init);
-module_exit(nft_rbtree_module_exit);
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>");
-MODULE_ALIAS_NFT_SET();
+4 -4
net/netfilter/xt_TPROXY.c
···
 	 * addresses, this happens if the redirect already happened
 	 * and the current packet belongs to an already established
 	 * connection */
-	sk = nf_tproxy_get_sock_v4(net, skb, hp, iph->protocol,
+	sk = nf_tproxy_get_sock_v4(net, skb, iph->protocol,
 				   iph->saddr, iph->daddr,
 				   hp->source, hp->dest,
 				   skb->dev, NF_TPROXY_LOOKUP_ESTABLISHED);
···
 	else if (!sk)
 		/* no, there's no established connection, check if
 		 * there's a listener on the redirected addr/port */
-		sk = nf_tproxy_get_sock_v4(net, skb, hp, iph->protocol,
+		sk = nf_tproxy_get_sock_v4(net, skb, iph->protocol,
 					   iph->saddr, laddr,
 					   hp->source, lport,
 					   skb->dev, NF_TPROXY_LOOKUP_LISTENER);
···
 	 * addresses, this happens if the redirect already happened
 	 * and the current packet belongs to an already established
 	 * connection */
-	sk = nf_tproxy_get_sock_v6(xt_net(par), skb, thoff, hp, tproto,
+	sk = nf_tproxy_get_sock_v6(xt_net(par), skb, thoff, tproto,
 				   &iph->saddr, &iph->daddr,
 				   hp->source, hp->dest,
 				   xt_in(par), NF_TPROXY_LOOKUP_ESTABLISHED);
···
 	else if (!sk)
 		/* no there's no established connection, check if
 		 * there's a listener on the redirected addr/port */
-		sk = nf_tproxy_get_sock_v6(xt_net(par), skb, thoff, hp,
+		sk = nf_tproxy_get_sock_v6(xt_net(par), skb, thoff,
 					   tproto, &iph->saddr, laddr,
 					   hp->source, lport,
 					   xt_in(par), NF_TPROXY_LOOKUP_LISTENER);
+6 -3
net/nfc/llcp_commands.c
···
 		pr_debug("Fragment %zd bytes remaining %zd",
 			 frag_len, remaining_len);
 
-		pdu = nfc_alloc_send_skb(sock->dev, &sock->sk, MSG_DONTWAIT,
+		pdu = nfc_alloc_send_skb(sock->dev, &sock->sk, 0,
 					 frag_len + LLCP_HEADER_SIZE, &err);
 		if (pdu == NULL) {
-			pr_err("Could not allocate PDU\n");
-			continue;
+			pr_err("Could not allocate PDU (error=%d)\n", err);
+			len -= remaining_len;
+			if (len == 0)
+				len = err;
+			break;
 		}
 
 		pdu = llcp_add_header(pdu, dsap, ssap, LLCP_PDU_UI);
+1 -1
net/nsh/nsh.c
···
 	__skb_pull(skb, nsh_len);
 
 	skb_reset_mac_header(skb);
-	skb_reset_mac_len(skb);
+	skb->mac_len = proto == htons(ETH_P_TEB) ? ETH_HLEN : 0;
 	skb->protocol = proto;
 
 	features &= NETIF_F_SG;
+2
net/packet/af_packet.c
···
 		goto out_free;
 	} else if (reserve) {
 		skb_reserve(skb, -reserve);
+		if (len < reserve)
+			skb_reset_network_header(skb);
 	}
 
 	/* Returns -EFAULT on error */
+11 -2
net/qrtr/qrtr.c
···
 	hdr->type = cpu_to_le32(type);
 	hdr->src_node_id = cpu_to_le32(from->sq_node);
 	hdr->src_port_id = cpu_to_le32(from->sq_port);
-	hdr->dst_node_id = cpu_to_le32(to->sq_node);
-	hdr->dst_port_id = cpu_to_le32(to->sq_port);
+	if (to->sq_port == QRTR_PORT_CTRL) {
+		hdr->dst_node_id = cpu_to_le32(node->nid);
+		hdr->dst_port_id = cpu_to_le32(QRTR_NODE_BCAST);
+	} else {
+		hdr->dst_node_id = cpu_to_le32(to->sq_node);
+		hdr->dst_port_id = cpu_to_le32(to->sq_port);
+	}
 
 	hdr->size = cpu_to_le32(len);
 	hdr->confirm_rx = 0;
···
 	node = NULL;
 	if (addr->sq_node == QRTR_NODE_BCAST) {
 		enqueue_fn = qrtr_bcast_enqueue;
+		if (addr->sq_port != QRTR_PORT_CTRL) {
+			release_sock(sk);
+			return -ENOTCONN;
+		}
 	} else if (addr->sq_node == ipc->us.sq_node) {
 		enqueue_fn = qrtr_local_enqueue;
 	} else {
+3 -3
net/sched/act_csum.c
···
 	}
 	params_old = rtnl_dereference(p->params);
 
-	params_new->action = parm->action;
+	p->tcf_action = parm->action;
 	params_new->update_flags = parm->update_flags;
 	rcu_assign_pointer(p->params, params_new);
 	if (params_old)
···
 	tcf_lastuse_update(&p->tcf_tm);
 	bstats_cpu_update(this_cpu_ptr(p->common.cpu_bstats), skb);
 
-	action = params->action;
+	action = READ_ONCE(p->tcf_action);
 	if (unlikely(action == TC_ACT_SHOT))
 		goto drop_stats;
···
 		.index   = p->tcf_index,
 		.refcnt  = p->tcf_refcnt - ref,
 		.bindcnt = p->tcf_bindcnt - bind,
+		.action  = p->tcf_action,
 	};
 	struct tcf_t t;
 
 	params = rtnl_dereference(p->params);
-	opt.action = params->action;
 	opt.update_flags = params->update_flags;
 
 	if (nla_put(skb, TCA_CSUM_PARMS, sizeof(opt), &opt))
+3 -3
net/sched/act_tunnel_key.c
···
 
 	tcf_lastuse_update(&t->tcf_tm);
 	bstats_cpu_update(this_cpu_ptr(t->common.cpu_bstats), skb);
-	action = params->action;
+	action = READ_ONCE(t->tcf_action);
 
 	switch (params->tcft_action) {
 	case TCA_TUNNEL_KEY_ACT_RELEASE:
···
 
 	params_old = rtnl_dereference(t->params);
 
-	params_new->action = parm->action;
+	t->tcf_action = parm->action;
 	params_new->tcft_action = parm->t_action;
 	params_new->tcft_enc_metadata = metadata;
···
 		.index    = t->tcf_index,
 		.refcnt   = t->tcf_refcnt - ref,
 		.bindcnt  = t->tcf_bindcnt - bind,
+		.action   = t->tcf_action,
 	};
 	struct tcf_t tm;
 
 	params = rtnl_dereference(t->params);
 
 	opt.t_action = params->tcft_action;
-	opt.action = params->action;
 
 	if (nla_put(skb, TCA_TUNNEL_KEY_PARMS, sizeof(opt), &opt))
 		goto nla_put_failure;
+2 -2
net/sched/cls_api.c
···
 	for (tp = rtnl_dereference(chain->filter_chain);
 	     tp; tp = rtnl_dereference(tp->next))
 		tfilter_notify(net, oskb, n, tp, block,
-			       q, parent, 0, event, false);
+			       q, parent, NULL, event, false);
 }
 
 static int tc_new_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
···
 		memset(&cb->args[1], 0,
 		       sizeof(cb->args) - sizeof(cb->args[0]));
 		if (cb->args[1] == 0) {
-			if (tcf_fill_node(net, skb, tp, block, q, parent, 0,
+			if (tcf_fill_node(net, skb, tp, block, q, parent, NULL,
 					  NETLINK_CB(cb->skb).portid,
 					  cb->nlh->nlmsg_seq, NLM_F_MULTI,
 					  RTM_NEWTFILTER) <= 0)
+18 -7
net/sched/sch_fq_codel.c
···
 	q->cparams.mtu = psched_mtu(qdisc_dev(sch));
 
 	if (opt) {
-		int err = fq_codel_change(sch, opt, extack);
+		err = fq_codel_change(sch, opt, extack);
 		if (err)
-			return err;
+			goto init_failure;
 	}
 
 	err = tcf_block_get(&q->block, &q->filter_list, sch, extack);
 	if (err)
-		return err;
+		goto init_failure;
 
 	if (!q->flows) {
 		q->flows = kvcalloc(q->flows_cnt,
 				    sizeof(struct fq_codel_flow),
 				    GFP_KERNEL);
-		if (!q->flows)
-			return -ENOMEM;
+		if (!q->flows) {
+			err = -ENOMEM;
+			goto init_failure;
+		}
 		q->backlogs = kvcalloc(q->flows_cnt, sizeof(u32), GFP_KERNEL);
-		if (!q->backlogs)
-			return -ENOMEM;
+		if (!q->backlogs) {
+			err = -ENOMEM;
+			goto alloc_failure;
+		}
 		for (i = 0; i < q->flows_cnt; i++) {
 			struct fq_codel_flow *flow = q->flows + i;
···
 	else
 		sch->flags &= ~TCQ_F_CAN_BYPASS;
 	return 0;
+
+alloc_failure:
+	kvfree(q->flows);
+	q->flows = NULL;
+init_failure:
+	q->flows_cnt = 0;
+	return err;
 }
 
 static int fq_codel_dump(struct Qdisc *sch, struct sk_buff *skb)
+1 -1
net/sctp/transport.c
···
 
 	if (dst) {
 		/* Re-fetch, as under layers may have a higher minimum size */
-		pmtu = SCTP_TRUNC4(dst_mtu(dst));
+		pmtu = sctp_dst_mtu(dst);
 		change = t->pathmtu != pmtu;
 	}
 	t->pathmtu = pmtu;
+27 -10
net/smc/af_smc.c
···
 		smc->clcsock = NULL;
 	}
 	if (smc->use_fallback) {
-		sock_put(sk); /* passive closing */
+		if (sk->sk_state != SMC_LISTEN && sk->sk_state != SMC_INIT)
+			sock_put(sk); /* passive closing */
 		sk->sk_state = SMC_CLOSED;
 		sk->sk_state_change(sk);
 	}
···
 {
 	int rc;
 
-	if (reason_code < 0) /* error, fallback is not possible */
+	if (reason_code < 0) { /* error, fallback is not possible */
+		if (smc->sk.sk_state == SMC_INIT)
+			sock_put(&smc->sk); /* passive closing */
 		return reason_code;
+	}
 	if (reason_code != SMC_CLC_DECL_REPLY) {
 		rc = smc_clc_send_decline(smc, reason_code);
-		if (rc < 0)
+		if (rc < 0) {
+			if (smc->sk.sk_state == SMC_INIT)
+				sock_put(&smc->sk); /* passive closing */
 			return rc;
+		}
 	}
 	return smc_connect_fallback(smc);
 }
···
 	smc_lgr_forget(smc->conn.lgr);
 	mutex_unlock(&smc_create_lgr_pending);
 	smc_conn_free(&smc->conn);
-	if (reason_code < 0 && smc->sk.sk_state == SMC_INIT)
-		sock_put(&smc->sk); /* passive closing */
 	return reason_code;
 }
···
 
 	if (optlen < sizeof(int))
 		return -EINVAL;
-	get_user(val, (int __user *)optval);
+	if (get_user(val, (int __user *)optval))
+		return -EFAULT;
 
 	lock_sock(sk);
 	switch (optname) {
···
 			return -EBADF;
 		return smc->clcsock->ops->ioctl(smc->clcsock, cmd, arg);
 	}
+	lock_sock(&smc->sk);
 	switch (cmd) {
 	case SIOCINQ: /* same as FIONREAD */
-		if (smc->sk.sk_state == SMC_LISTEN)
+		if (smc->sk.sk_state == SMC_LISTEN) {
+			release_sock(&smc->sk);
 			return -EINVAL;
+		}
 		if (smc->sk.sk_state == SMC_INIT ||
 		    smc->sk.sk_state == SMC_CLOSED)
 			answ = 0;
···
 		break;
 	case SIOCOUTQ:
 		/* output queue size (not send + not acked) */
-		if (smc->sk.sk_state == SMC_LISTEN)
+		if (smc->sk.sk_state == SMC_LISTEN) {
+			release_sock(&smc->sk);
 			return -EINVAL;
+		}
 		if (smc->sk.sk_state == SMC_INIT ||
 		    smc->sk.sk_state == SMC_CLOSED)
 			answ = 0;
···
 		break;
 	case SIOCOUTQNSD:
 		/* output queue size (not send only) */
-		if (smc->sk.sk_state == SMC_LISTEN)
+		if (smc->sk.sk_state == SMC_LISTEN) {
+			release_sock(&smc->sk);
 			return -EINVAL;
+		}
 		if (smc->sk.sk_state == SMC_INIT ||
 		    smc->sk.sk_state == SMC_CLOSED)
 			answ = 0;
···
 			answ = smc_tx_prepared_sends(&smc->conn);
 		break;
 	case SIOCATMARK:
-		if (smc->sk.sk_state == SMC_LISTEN)
+		if (smc->sk.sk_state == SMC_LISTEN) {
+			release_sock(&smc->sk);
 			return -EINVAL;
+		}
 		if (smc->sk.sk_state == SMC_INIT ||
 		    smc->sk.sk_state == SMC_CLOSED) {
 			answ = 0;
···
 		}
 		break;
 	default:
+		release_sock(&smc->sk);
 		return -ENOIOCTLCMD;
 	}
+	release_sock(&smc->sk);
 
 	return put_user(answ, (int __user *)arg);
 }
+2 -1
net/smc/smc_clc.c
··· 250 250
 int smc_clc_wait_msg(struct smc_sock *smc, void *buf, int buflen,
 		     u8 expected_type)
 {
+	long rcvtimeo = smc->clcsock->sk->sk_rcvtimeo;
 	struct sock *clc_sk = smc->clcsock->sk;
 	struct smc_clc_msg_hdr *clcm = buf;
 	struct msghdr msg = {NULL, 0};
··· 307 306
 	memset(&msg, 0, sizeof(struct msghdr));
 	iov_iter_kvec(&msg.msg_iter, READ | ITER_KVEC, &vec, 1, datlen);
 	krflags = MSG_WAITALL;
-	smc->clcsock->sk->sk_rcvtimeo = CLC_WAIT_TIME;
 	len = sock_recvmsg(smc->clcsock, &msg, krflags);
 	if (len < datlen || !smc_clc_msg_hdr_valid(clcm)) {
 		smc->sk.sk_err = EPROTO;
··· 322 322
 	}

 out:
+	smc->clcsock->sk->sk_rcvtimeo = rcvtimeo;
 	return reason_code;
 }
+2
net/smc/smc_close.c
··· 107 107
 	}
 	switch (sk->sk_state) {
 	case SMC_INIT:
+		sk->sk_state = SMC_PEERABORTWAIT;
+		break;
 	case SMC_ACTIVE:
 		sk->sk_state = SMC_PEERABORTWAIT;
 		release_sock(sk);
+10 -2
net/smc/smc_tx.c
··· 495 495

 void smc_tx_consumer_update(struct smc_connection *conn, bool force)
 {
-	union smc_host_cursor cfed, cons;
+	union smc_host_cursor cfed, cons, prod;
+	int sender_free = conn->rmb_desc->len;
 	int to_confirm;

 	smc_curs_write(&cons,
··· 506 505
 		       smc_curs_read(&conn->rx_curs_confirmed, conn),
 		       conn);
 	to_confirm = smc_curs_diff(conn->rmb_desc->len, &cfed, &cons);
+	if (to_confirm > conn->rmbe_update_limit) {
+		smc_curs_write(&prod,
+			       smc_curs_read(&conn->local_rx_ctrl.prod, conn),
+			       conn);
+		sender_free = conn->rmb_desc->len -
+			      smc_curs_diff(conn->rmb_desc->len, &prod, &cfed);
+	}

 	if (conn->local_rx_ctrl.prod_flags.cons_curs_upd_req ||
 	    force ||
 	    ((to_confirm > conn->rmbe_update_limit) &&
-	     ((to_confirm > (conn->rmb_desc->len / 2)) ||
+	     ((sender_free <= (conn->rmb_desc->len / 2)) ||
 	      conn->local_rx_ctrl.prod_flags.write_blocked))) {
 		if ((smc_cdc_get_slot_and_msg_send(conn) < 0) &&
 		    conn->alert_token_local) { /* connection healthy */
+11 -7
net/tipc/discover.c
··· 133 133
 }

 /* tipc_disc_addr_trial(): - handle an address uniqueness trial from peer
+ * Returns true if message should be dropped by caller, i.e., if it is a
+ * trial message or we are inside trial period. Otherwise false.
  */
 static bool tipc_disc_addr_trial_msg(struct tipc_discoverer *d,
 				     struct tipc_media_addr *maddr,
··· 170 168
 		msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);
 	}

+	/* Accept regular link requests/responses only after trial period */
 	if (mtyp != DSC_TRIAL_MSG)
-		return false;
+		return trial;

 	sugg_addr = tipc_node_try_addr(net, peer_id, src);
 	if (sugg_addr)
··· 287 284
 {
 	struct tipc_discoverer *d = from_timer(d, t, timer);
 	struct tipc_net *tn = tipc_net(d->net);
-	u32 self = tipc_own_addr(d->net);
 	struct tipc_media_addr maddr;
 	struct sk_buff *skb = NULL;
 	struct net *net = d->net;
··· 300 298
 		goto exit;
 	}

-	/* Did we just leave the address trial period ? */
-	if (!self && !time_before(jiffies, tn->addr_trial_end)) {
-		self = tn->trial_addr;
-		tipc_net_finalize(net, self);
-		msg_set_prevnode(buf_msg(d->skb), self);
+	/* Trial period over ? */
+	if (!time_before(jiffies, tn->addr_trial_end)) {
+		/* Did we just leave it ? */
+		if (!tipc_own_addr(net))
+			tipc_net_finalize(net, tn->trial_addr);
+
 		msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);
+		msg_set_prevnode(buf_msg(d->skb), tipc_own_addr(net));
 	}

 	/* Adjust timeout interval according to discovery phase */
+11 -6
net/tipc/net.c
··· 121 121

 void tipc_net_finalize(struct net *net, u32 addr)
 {
-	tipc_set_node_addr(net, addr);
-	smp_mb();
-	tipc_named_reinit(net);
-	tipc_sk_reinit(net);
-	tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr,
-			     TIPC_CLUSTER_SCOPE, 0, addr);
+	struct tipc_net *tn = tipc_net(net);
+
+	spin_lock_bh(&tn->node_list_lock);
+	if (!tipc_own_addr(net)) {
+		tipc_set_node_addr(net, addr);
+		tipc_named_reinit(net);
+		tipc_sk_reinit(net);
+		tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr,
+				     TIPC_CLUSTER_SCOPE, 0, addr);
+	}
+	spin_unlock_bh(&tn->node_list_lock);
 }

 void tipc_net_stop(struct net *net)
+5 -2
net/tipc/node.c
··· 797 797
 }

 /* tipc_node_try_addr(): Check if addr can be used by peer, suggest other if not
+ * Returns suggested address if any, otherwise 0
  */
 u32 tipc_node_try_addr(struct net *net, u8 *id, u32 addr)
 {
··· 820 819
 	if (n) {
 		addr = n->addr;
 		tipc_node_put(n);
+		return addr;
 	}
-	/* Even this node may be in trial phase */
+
+	/* Even this node may be in conflict */
 	if (tn->trial_addr == addr)
 		return tipc_node_suggest_addr(net, addr);

-	return addr;
+	return 0;
 }

 void tipc_node_check_dest(struct net *net, u32 addr,
+6 -1
net/tls/tls_sw.c
··· 440 440
 		ret = tls_push_record(sk, msg->msg_flags, record_type);
 		if (!ret)
 			continue;
-		if (ret == -EAGAIN)
+		if (ret < 0)
 			goto send_end;

 		copied -= try_to_copy;
··· 701 701
 	nsg = skb_to_sgvec(skb, &sgin[1],
 			   rxm->offset + tls_ctx->rx.prepend_size,
 			   rxm->full_len - tls_ctx->rx.prepend_size);
+	if (nsg < 0) {
+		ret = nsg;
+		goto out;
+	}

 	tls_make_aad(ctx->rx_aad_ciphertext,
 		     rxm->full_len - tls_ctx->rx.overhead_size,
··· 716 712
 			rxm->full_len - tls_ctx->rx.overhead_size,
 			skb, sk->sk_allocation);

+out:
 	if (sgin != &sgin_arr[0])
 		kfree(sgin);
+13 -19
net/xdp/xsk.c
··· 199 199
 {
 	u64 addr = (u64)(long)skb_shinfo(skb)->destructor_arg;
 	struct xdp_sock *xs = xdp_sk(skb->sk);
+	unsigned long flags;

+	spin_lock_irqsave(&xs->tx_completion_lock, flags);
 	WARN_ON_ONCE(xskq_produce_addr(xs->umem->cq, addr));
+	spin_unlock_irqrestore(&xs->tx_completion_lock, flags);

 	sock_wfree(skb);
 }
··· 218 215
 	struct sk_buff *skb;
 	int err = 0;

-	if (unlikely(!xs->tx))
-		return -ENOBUFS;
-
 	mutex_lock(&xs->mutex);

 	while (xskq_peek_desc(xs->tx, &desc)) {
··· 230 230
 			goto out;
 		}

-		if (xskq_reserve_addr(xs->umem->cq)) {
-			err = -EAGAIN;
+		if (xskq_reserve_addr(xs->umem->cq))
 			goto out;
-		}
+
+		if (xs->queue_id >= xs->dev->real_num_tx_queues)
+			goto out;

 		len = desc.len;
-		if (unlikely(len > xs->dev->mtu)) {
-			err = -EMSGSIZE;
-			goto out;
-		}
-
-		if (xs->queue_id >= xs->dev->real_num_tx_queues) {
-			err = -ENXIO;
-			goto out;
-		}
-
 		skb = sock_alloc_send_skb(sk, len, 1, &err);
 		if (unlikely(!skb)) {
 			err = -EAGAIN;
··· 259 268
 		skb->destructor = xsk_destruct_skb;

 		err = dev_direct_xmit(skb, xs->queue_id);
+		xskq_discard_desc(xs->tx);
 		/* Ignore NET_XMIT_CN as packet might have been sent */
 		if (err == NET_XMIT_DROP || err == NETDEV_TX_BUSY) {
-			err = -EAGAIN;
-			/* SKB consumed by dev_direct_xmit() */
+			/* SKB completed but not sent */
+			err = -EBUSY;
 			goto out;
 		}

 		sent_frame = true;
-		xskq_discard_desc(xs->tx);
 	}

 out:
··· 288 297
 		return -ENXIO;
 	if (unlikely(!(xs->dev->flags & IFF_UP)))
 		return -ENETDOWN;
+	if (unlikely(!xs->tx))
+		return -ENOBUFS;
 	if (need_wait)
 		return -EOPNOTSUPP;
··· 748 755

 	xs = xdp_sk(sk);
 	mutex_init(&xs->mutex);
+	spin_lock_init(&xs->tx_completion_lock);

 	local_bh_disable();
 	sock_prot_inuse_add(net, &xsk_proto, 1);
+2 -7
net/xdp/xsk_queue.h
··· 62 62
 	return (entries > dcnt) ? dcnt : entries;
 }

-static inline u32 xskq_nb_free_lazy(struct xsk_queue *q, u32 producer)
-{
-	return q->nentries - (producer - q->cons_tail);
-}
-
 static inline u32 xskq_nb_free(struct xsk_queue *q, u32 producer, u32 dcnt)
 {
-	u32 free_entries = xskq_nb_free_lazy(q, producer);
+	u32 free_entries = q->nentries - (producer - q->cons_tail);

 	if (free_entries >= dcnt)
 		return free_entries;
··· 124 129
 {
 	struct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring;

-	if (xskq_nb_free(q, q->prod_tail, LAZY_UPDATE_THRESHOLD) == 0)
+	if (xskq_nb_free(q, q->prod_tail, 1) == 0)
 		return -ENOSPC;

 	ring->desc[q->prod_tail++ & q->ring_mask] = addr;
+49
samples/bpf/.gitignore
··· 1
+cpustat
+fds_example
+lathist
+load_sock_ops
+lwt_len_hist
+map_perf_test
+offwaketime
+per_socket_stats_example
+sampleip
+sock_example
+sockex1
+sockex2
+sockex3
+spintest
+syscall_nrs.h
+syscall_tp
+task_fd_query
+tc_l2_redirect
+test_cgrp2_array_pin
+test_cgrp2_attach
+test_cgrp2_attach2
+test_cgrp2_sock
+test_cgrp2_sock2
+test_current_task_under_cgroup
+test_lru_dist
+test_map_in_map
+test_overhead
+test_probe_write_user
+trace_event
+trace_output
+tracex1
+tracex2
+tracex3
+tracex4
+tracex5
+tracex6
+tracex7
+xdp1
+xdp2
+xdp_adjust_tail
+xdp_fwd
+xdp_monitor
+xdp_redirect
+xdp_redirect_cpu
+xdp_redirect_map
+xdp_router_ipv4
+xdp_rxq_info
+xdp_tx_iptunnel
+xdpsock
+1 -5
samples/bpf/parse_varlen.c
··· 6 6
  */
 #define KBUILD_MODNAME "foo"
 #include <linux/if_ether.h>
+#include <linux/if_vlan.h>
 #include <linux/ip.h>
 #include <linux/ipv6.h>
 #include <linux/in.h>
··· 108 107
 		return udp(data, nh_off + ihl_len, data_end);
 	return 0;
 }
-
-struct vlan_hdr {
-	uint16_t h_vlan_TCI;
-	uint16_t h_vlan_encapsulated_proto;
-};

 SEC("varlen")
 int handle_ingress(struct __sk_buff *skb)
+15 -4
samples/bpf/test_overhead_user.c
··· 6 6
  */
 #define _GNU_SOURCE
 #include <sched.h>
+#include <errno.h>
 #include <stdio.h>
 #include <sys/types.h>
 #include <asm/unistd.h>
··· 45 44
 		exit(1);
 	}
 	start_time = time_get_ns();
-	for (i = 0; i < MAX_CNT; i++)
-		write(fd, buf, sizeof(buf));
+	for (i = 0; i < MAX_CNT; i++) {
+		if (write(fd, buf, sizeof(buf)) < 0) {
+			printf("task rename failed: %s\n", strerror(errno));
+			close(fd);
+			return;
+		}
+	}
 	printf("task_rename:%d: %lld events per sec\n",
 	       cpu, MAX_CNT * 1000000000ll / (time_get_ns() - start_time));
 	close(fd);
··· 69 63
 		exit(1);
 	}
 	start_time = time_get_ns();
-	for (i = 0; i < MAX_CNT; i++)
-		read(fd, buf, sizeof(buf));
+	for (i = 0; i < MAX_CNT; i++) {
+		if (read(fd, buf, sizeof(buf)) < 0) {
+			printf("failed to read from /dev/urandom: %s\n", strerror(errno));
+			close(fd);
+			return;
+		}
+	}
 	printf("urandom_read:%d: %lld events per sec\n",
 	       cpu, MAX_CNT * 1000000000ll / (time_get_ns() - start_time));
 	close(fd);
+24 -3
samples/bpf/trace_event_user.c
··· 122 122
 	}
 }

+static inline int generate_load(void)
+{
+	if (system("dd if=/dev/zero of=/dev/null count=5000k status=none") < 0) {
+		printf("failed to generate some load with dd: %s\n", strerror(errno));
+		return -1;
+	}
+
+	return 0;
+}
+
 static void test_perf_event_all_cpu(struct perf_event_attr *attr)
 {
 	int nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
··· 152 142
 		assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_SET_BPF, prog_fd[0]) == 0);
 		assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_ENABLE) == 0);
 	}
-	system("dd if=/dev/zero of=/dev/null count=5000k status=none");
+
+	if (generate_load() < 0) {
+		error = 1;
+		goto all_cpu_err;
+	}
 	print_stacks();
 all_cpu_err:
 	for (i--; i >= 0; i--) {
··· 170 156

 static void test_perf_event_task(struct perf_event_attr *attr)
 {
-	int pmu_fd;
+	int pmu_fd, error = 0;

 	/* per task perf event, enable inherit so the "dd ..." command can be traced properly.
 	 * Enabling inherit will cause bpf_perf_prog_read_time helper failure.
··· 185 171
 	}
 	assert(ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd[0]) == 0);
 	assert(ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE) == 0);
-	system("dd if=/dev/zero of=/dev/null count=5000k status=none");
+
+	if (generate_load() < 0) {
+		error = 1;
+		goto err;
+	}
 	print_stacks();
+err:
 	ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
 	close(pmu_fd);
+	if (error)
+		int_exit(0);
 }

 static void test_bpf_perf_event(void)
+3 -3
samples/bpf/xdp2skb_meta.sh
··· 16 16
 BPF_FILE=xdp2skb_meta_kern.o
 DIR=$(dirname $0)

-export TC=/usr/sbin/tc
-export IP=/usr/sbin/ip
+[ -z "$TC" ] && TC=tc
+[ -z "$IP" ] && IP=ip

 function usage() {
     echo ""
··· 53 53
     local allow_fail="$2"
     shift 2
     if [[ -n "$VERBOSE" ]]; then
-	echo "$(basename $cmd) $@"
+	echo "$cmd $@"
     fi
     if [[ -n "$DRYRUN" ]]; then
	return
+1 -1
samples/bpf/xdpsock_user.c
··· 729 729
 	int ret;

 	ret = sendto(fd, NULL, 0, MSG_DONTWAIT, NULL, 0);
-	if (ret >= 0 || errno == ENOBUFS || errno == EAGAIN)
+	if (ret >= 0 || errno == ENOBUFS || errno == EAGAIN || errno == EBUSY)
 		return;
 	lassert(0);
 }
+1
scripts/tags.sh
··· 152 152
 )
 regex_c=(
 	'/^SYSCALL_DEFINE[0-9](\([[:alnum:]_]*\).*/sys_\1/'
+	'/^BPF_CALL_[0-9](\([[:alnum:]_]*\).*/\1/'
 	'/^COMPAT_SYSCALL_DEFINE[0-9](\([[:alnum:]_]*\).*/compat_sys_\1/'
 	'/^TRACE_EVENT(\([[:alnum:]_]*\).*/trace_\1/'
 	'/^TRACE_EVENT(\([[:alnum:]_]*\).*/trace_\1_rcuidle/'
+22 -1
tools/testing/selftests/bpf/test_verifier.c
··· 4975 4975
 		.prog_type = BPF_PROG_TYPE_LWT_XMIT,
 	},
 	{
+		"make headroom for LWT_XMIT",
+		.insns = {
+			BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+			BPF_MOV64_IMM(BPF_REG_2, 34),
+			BPF_MOV64_IMM(BPF_REG_3, 0),
+			BPF_EMIT_CALL(BPF_FUNC_skb_change_head),
+			/* split for s390 to succeed */
+			BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+			BPF_MOV64_IMM(BPF_REG_2, 42),
+			BPF_MOV64_IMM(BPF_REG_3, 0),
+			BPF_EMIT_CALL(BPF_FUNC_skb_change_head),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.result = ACCEPT,
+		.prog_type = BPF_PROG_TYPE_LWT_XMIT,
+	},
+	{
 		"invalid access of tc_classid for LWT_IN",
 		.insns = {
 			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
··· 12572 12554
 	}

 	if (fd_prog >= 0) {
+		__u8 tmp[TEST_DATA_LEN << 2];
+		__u32 size_tmp = sizeof(tmp);
+
 		err = bpf_prog_test_run(fd_prog, 1, test->data,
-					sizeof(test->data), NULL, NULL,
+					sizeof(test->data), tmp, &size_tmp,
 					&retval, NULL);
 		if (err && errno != 524/*ENOTSUPP*/ && errno != EPERM) {
 			printf("Unexpected bpf_prog_test_run error\n");
-41
tools/testing/selftests/net/fib_tests.sh
··· 740 740
 run_cmd "$IP -6 ro add unreachable 2001:db8:104::/64"
 log_test $? 2 "Attempt to add duplicate route - reject route"

-# iproute2 prepend only sets NLM_F_CREATE
-# - adds a new route; does NOT convert existing route to ECMP
-add_route6 "2001:db8:104::/64" "via 2001:db8:101::2"
-run_cmd "$IP -6 ro prepend 2001:db8:104::/64 via 2001:db8:103::2"
-check_route6 "2001:db8:104::/64 via 2001:db8:101::2 dev veth1 metric 1024 2001:db8:104::/64 via 2001:db8:103::2 dev veth3 metric 1024"
-log_test $? 0 "Add new route for existing prefix (w/o NLM_F_EXCL)"
-
 # route append with same prefix adds a new route
 # - iproute2 sets NLM_F_CREATE | NLM_F_APPEND
 add_route6 "2001:db8:104::/64" "via 2001:db8:101::2"
 run_cmd "$IP -6 ro append 2001:db8:104::/64 via 2001:db8:103::2"
 check_route6 "2001:db8:104::/64 metric 1024 nexthop via 2001:db8:101::2 dev veth1 weight 1 nexthop via 2001:db8:103::2 dev veth3 weight 1"
 log_test $? 0 "Append nexthop to existing route - gw"
-
-add_route6 "2001:db8:104::/64" "via 2001:db8:101::2"
-run_cmd "$IP -6 ro append 2001:db8:104::/64 dev veth3"
-check_route6 "2001:db8:104::/64 metric 1024 nexthop via 2001:db8:101::2 dev veth1 weight 1 nexthop dev veth3 weight 1"
-log_test $? 0 "Append nexthop to existing route - dev only"
-
-# multipath route can not have a nexthop that is a reject route
-add_route6 "2001:db8:104::/64" "via 2001:db8:101::2"
-run_cmd "$IP -6 ro append unreachable 2001:db8:104::/64"
-log_test $? 2 "Append nexthop to existing route - reject route"
-
-# reject route can not be converted to multipath route
-run_cmd "$IP -6 ro flush 2001:db8:104::/64"
-run_cmd "$IP -6 ro add unreachable 2001:db8:104::/64"
-run_cmd "$IP -6 ro append 2001:db8:104::/64 via 2001:db8:103::2"
-log_test $? 2 "Append nexthop to existing reject route - gw"
-
-run_cmd "$IP -6 ro flush 2001:db8:104::/64"
-run_cmd "$IP -6 ro add unreachable 2001:db8:104::/64"
-run_cmd "$IP -6 ro append 2001:db8:104::/64 dev veth3"
-log_test $? 2 "Append nexthop to existing reject route - dev only"

 # insert mpath directly
 add_route6 "2001:db8:104::/64" "nexthop via 2001:db8:101::2 nexthop via 2001:db8:103::2"
··· 790 818
 run_cmd "$IP -6 ro replace 2001:db8:104::/64 nexthop via 2001:db8:101::3 nexthop via 2001:db8:103::2"
 check_route6 "2001:db8:104::/64 metric 1024 nexthop via 2001:db8:101::3 dev veth1 weight 1 nexthop via 2001:db8:103::2 dev veth3 weight 1"
 log_test $? 0 "Single path with multipath"
-
-# single path with reject
-#
-add_initial_route6 "nexthop via 2001:db8:101::2"
-run_cmd "$IP -6 ro replace unreachable 2001:db8:104::/64"
-check_route6 "unreachable 2001:db8:104::/64 dev lo metric 1024"
-log_test $? 0 "Single path with reject route"

 # single path with single path using MULTIPATH attribute
 #
··· 837 872
 run_cmd "$IP -6 ro replace 2001:db8:104::/64 nexthop via 2001:db8:101::3"
 check_route6 "2001:db8:104::/64 via 2001:db8:101::3 dev veth1 metric 1024"
 log_test $? 0 "Multipath with single path via multipath attribute"
-
-# multipath with reject
-add_initial_route6 "nexthop via 2001:db8:101::2 nexthop via 2001:db8:103::2"
-run_cmd "$IP -6 ro replace unreachable 2001:db8:104::/64"
-check_route6 "unreachable 2001:db8:104::/64 dev lo metric 1024"
-log_test $? 0 "Multipath with reject route"

 # route replace fails - invalid nexthop 1
 add_initial_route6 "nexthop via 2001:db8:101::2 nexthop via 2001:db8:103::2"
-3
tools/testing/selftests/net/udpgso_bench.sh
··· 35 35

 echo "udp gso"
 run_in_netns ${args} -S
-
-echo "udp gso zerocopy"
-run_in_netns ${args} -S -z
 }

 run_tcp() {