Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:
"Several fixups, of note:

1) Fix unlock of not held spinlock in RXRPC code, from Alexey
Khoroshilov.

2) Call pci_disable_device() from the correct shutdown path in bnx2x
driver, from Yuval Mintz.

3) Fix qeth build on s390 for some configurations, from Eugene
Crosser.

4) Cure locking bugs in bond_loadbalance_arp_mon(), from Ding
Tianhong.

5) Must do netif_napi_add() before registering netdevice in sky2
driver, from Stanislaw Gruszka.

6) Fix lost bug fix during merge due to code movement in ieee802154,
noticed and fixed by the eagle eyed Stephen Rothwell.

7) Get rid of resource leak in xen-netfront driver, from Annie Li.

8) Bounds checks in qlcnic driver are off by one, from Manish Chopra.

9) TPROXY can leak sockets when TCP early demux is enabled, fix from
Holger Eitzenberger"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (32 commits)
qeth: fix build of s390 allmodconfig
bonding: fix locking in bond_loadbalance_arp_mon()
tun: add device name(iff) field to proc fdinfo entry
DT: net: davinci_emac: "ti,davinci-no-bd-ram" property is actually optional
DT: net: davinci_emac: "ti,davinci-rmii-en" property is actually optional
bnx2x: Fix generic option settings
net: Fix warning on make htmldocs caused by skbuff.c
llc: remove noisy WARN from llc_mac_hdr_init
qlcnic: Fix loopback test failure
qlcnic: Fix tx timeout.
qlcnic: Fix initialization of vlan list.
qlcnic: Correct off-by-one errors in bounds checks
net: Document promote_secondaries
net: gre: use icmp_hdr() to get inner ip header
i40e: Add missing braces to i40e_dcb_need_reconfig()
xen-netfront: fix resource leak in netfront
net: 6lowpan: fixup for code movement
hyperv: Add support for physically discontinuous receive buffer
sky2: initialize napi before registering device
net: Fix memory leak if TPROXY used with TCP early demux
...

+306 -228
+2 -2
Documentation/devicetree/bindings/net/davinci_emac.txt
···
 - ti,davinci-ctrl-mod-reg-offset: offset to control module register
 - ti,davinci-ctrl-ram-offset: offset to control module ram
 - ti,davinci-ctrl-ram-size: size of control module ram
-- ti,davinci-rmii-en: use RMII
-- ti,davinci-no-bd-ram: has the emac controller BD RAM
 - interrupts: interrupt mapping for the davinci emac interrupts sources:
              4 sources: <Receive Threshold Interrupt
                          Receive Interrupt
···
 - phy-handle: Contains a phandle to an Ethernet PHY.
               If absent, davinci_emac driver defaults to 100/FULL.
 - local-mac-address : 6 bytes, mac address
+- ti,davinci-rmii-en: 1 byte, 1 means use RMII
+- ti,davinci-no-bd-ram: boolean, does EMAC have BD RAM?

 Example (enbw_cmc board):
 eth0: emac@1e20000 {
+6
Documentation/networking/ip-sysctl.txt
···
 	IGMPv3 report retransmit will take place.
 	Default: 1000 (1 seconds)

+promote_secondaries - BOOLEAN
+	When a primary IP address is removed from this interface
+	promote a corresponding secondary IP address instead of
+	removing all the corresponding secondary IP addresses.
+
+
 tag - INTEGER
 	Allows you to write a number, which can be used as required.
 	Default value is 0.
+1
Documentation/networking/packet_mmap.txt
···
   - PACKET_FANOUT_CPU: schedule to socket by CPU packet arrives on
   - PACKET_FANOUT_RND: schedule to socket by random selection
   - PACKET_FANOUT_ROLLOVER: if one socket is full, rollover to another
+  - PACKET_FANOUT_QM: schedule to socket by skbs recorded queue_mapping

 Minimal example code by David S. Miller (try things like "./test eth0 hash",
 "./test eth0 lb", etc.):
+8 -6
drivers/hv/channel.c
···
 {
 	int i;
 	int pagecount;
-	unsigned long long pfn;
 	struct vmbus_channel_gpadl_header *gpadl_header;
 	struct vmbus_channel_gpadl_body *gpadl_body;
 	struct vmbus_channel_msginfo *msgheader;
···
 	int pfnsum, pfncount, pfnleft, pfncurr, pfnsize;

 	pagecount = size >> PAGE_SHIFT;
-	pfn = virt_to_phys(kbuffer) >> PAGE_SHIFT;

 	/* do we need a gpadl body msg */
 	pfnsize = MAX_SIZE_CHANNEL_MESSAGE -
···
 		gpadl_header->range[0].byte_offset = 0;
 		gpadl_header->range[0].byte_count = size;
 		for (i = 0; i < pfncount; i++)
-			gpadl_header->range[0].pfn_array[i] = pfn+i;
+			gpadl_header->range[0].pfn_array[i] = slow_virt_to_phys(
+				kbuffer + PAGE_SIZE * i) >> PAGE_SHIFT;
 		*msginfo = msgheader;
 		*messagecount = 1;
···
 		 * so the hypervisor gurantees that this is ok.
 		 */
 		for (i = 0; i < pfncurr; i++)
-			gpadl_body->pfn[i] = pfn + pfnsum + i;
+			gpadl_body->pfn[i] = slow_virt_to_phys(
+				kbuffer + PAGE_SIZE * (pfnsum + i)) >>
+				PAGE_SHIFT;

 		/* add to msg header */
 		list_add_tail(&msgbody->msglistentry,
···
 		gpadl_header->range[0].byte_offset = 0;
 		gpadl_header->range[0].byte_count = size;
 		for (i = 0; i < pagecount; i++)
-			gpadl_header->range[0].pfn_array[i] = pfn+i;
+			gpadl_header->range[0].pfn_array[i] = slow_virt_to_phys(
+				kbuffer + PAGE_SIZE * i) >> PAGE_SHIFT;

 		*msginfo = msgheader;
 		*messagecount = 1;
···
  * vmbus_establish_gpadl - Estabish a GPADL for the specified buffer
  *
  * @channel: a channel
- * @kbuffer: from kmalloc
+ * @kbuffer: from kmalloc or vmalloc
  * @size: page-size multiple
  * @gpadl_handle: some funky thing
  */
+56 -40
drivers/net/bonding/bond_main.c
···
 					    arp_work.work);
 	struct slave *slave, *oldcurrent;
 	struct list_head *iter;
-	int do_failover = 0;
+	int do_failover = 0, slave_state_changed = 0;

 	if (!bond_has_slaves(bond))
 		goto re_arm;
···
 		    bond_time_in_interval(bond, slave->dev->last_rx, 1)) {

 				slave->link  = BOND_LINK_UP;
-				bond_set_active_slave(slave);
+				slave_state_changed = 1;

 				/* primary_slave has no meaning in round-robin
 				 * mode. the window of a slave being up and
···
 			    !bond_time_in_interval(bond, slave->dev->last_rx, 2)) {

 				slave->link = BOND_LINK_DOWN;
-				bond_set_backup_slave(slave);
+				slave_state_changed = 1;

 				if (slave->link_failure_count < UINT_MAX)
 					slave->link_failure_count++;
···

 	rcu_read_unlock();

-	if (do_failover) {
-		/* the bond_select_active_slave must hold RTNL
-		 * and curr_slave_lock for write.
-		 */
+	if (do_failover || slave_state_changed) {
 		if (!rtnl_trylock())
 			goto re_arm;
-		block_netpoll_tx();
-		write_lock_bh(&bond->curr_slave_lock);

-		bond_select_active_slave(bond);
+		if (slave_state_changed) {
+			bond_slave_state_change(bond);
+		} else if (do_failover) {
+			/* the bond_select_active_slave must hold RTNL
+			 * and curr_slave_lock for write.
+			 */
+			block_netpoll_tx();
+			write_lock_bh(&bond->curr_slave_lock);

-		write_unlock_bh(&bond->curr_slave_lock);
-		unblock_netpoll_tx();
+			bond_select_active_slave(bond);
+
+			write_unlock_bh(&bond->curr_slave_lock);
+			unblock_netpoll_tx();
+		}
 		rtnl_unlock();
 	}
···

 /*
  * Send ARP probes for active-backup mode ARP monitor.
- *
- * Called with rcu_read_lock hold.
  */
-static void bond_ab_arp_probe(struct bonding *bond)
+static bool bond_ab_arp_probe(struct bonding *bond)
 {
 	struct slave *slave, *before = NULL, *new_slave = NULL,
-	       *curr_arp_slave = rcu_dereference(bond->current_arp_slave);
+	       *curr_arp_slave, *curr_active_slave;
 	struct list_head *iter;
 	bool found = false;

-	read_lock(&bond->curr_slave_lock);
+	rcu_read_lock();
+	curr_arp_slave = rcu_dereference(bond->current_arp_slave);
+	curr_active_slave = rcu_dereference(bond->curr_active_slave);

-	if (curr_arp_slave && bond->curr_active_slave)
+	if (curr_arp_slave && curr_active_slave)
 		pr_info("PROBE: c_arp %s && cas %s BAD\n",
 			curr_arp_slave->dev->name,
-			bond->curr_active_slave->dev->name);
+			curr_active_slave->dev->name);

-	if (bond->curr_active_slave) {
-		bond_arp_send_all(bond, bond->curr_active_slave);
-		read_unlock(&bond->curr_slave_lock);
-		return;
+	if (curr_active_slave) {
+		bond_arp_send_all(bond, curr_active_slave);
+		rcu_read_unlock();
+		return true;
 	}
-
-	read_unlock(&bond->curr_slave_lock);
+	rcu_read_unlock();

 	/* if we don't have a curr_active_slave, search for the next available
 	 * backup slave from the current_arp_slave and make it the candidate
 	 * for becoming the curr_active_slave
 	 */

+	if (!rtnl_trylock())
+		return false;
+	/* curr_arp_slave might have gone away */
+	curr_arp_slave = ACCESS_ONCE(bond->current_arp_slave);
+
 	if (!curr_arp_slave) {
-		curr_arp_slave = bond_first_slave_rcu(bond);
-		if (!curr_arp_slave)
-			return;
+		curr_arp_slave = bond_first_slave(bond);
+		if (!curr_arp_slave) {
+			rtnl_unlock();
+			return true;
+		}
 	}

 	bond_set_slave_inactive_flags(curr_arp_slave);

-	bond_for_each_slave_rcu(bond, slave, iter) {
+	bond_for_each_slave(bond, slave, iter) {
 		if (!found && !before && IS_UP(slave->dev))
 			before = slave;
···
 	if (!new_slave && before)
 		new_slave = before;

-	if (!new_slave)
-		return;
+	if (!new_slave) {
+		rtnl_unlock();
+		return true;
+	}

 	new_slave->link = BOND_LINK_BACK;
 	bond_set_slave_active_flags(new_slave);
 	bond_arp_send_all(bond, new_slave);
 	new_slave->jiffies = jiffies;
 	rcu_assign_pointer(bond->current_arp_slave, new_slave);
+	rtnl_unlock();
+
+	return true;
 }

 static void bond_activebackup_arp_mon(struct work_struct *work)
 {
 	struct bonding *bond = container_of(work, struct bonding,
 					    arp_work.work);
-	bool should_notify_peers = false;
+	bool should_notify_peers = false, should_commit = false;
 	int delta_in_ticks;

 	delta_in_ticks = msecs_to_jiffies(bond->params.arp_interval);
···
 		goto re_arm;

 	rcu_read_lock();
-
 	should_notify_peers = bond_should_notify_peers(bond);
+	should_commit = bond_ab_arp_inspect(bond);
+	rcu_read_unlock();

-	if (bond_ab_arp_inspect(bond)) {
-		rcu_read_unlock();
-
+	if (should_commit) {
 		/* Race avoidance with bond_close flush of workqueue */
 		if (!rtnl_trylock()) {
 			delta_in_ticks = 1;
···
 		}

 		bond_ab_arp_commit(bond);
-
 		rtnl_unlock();
-		rcu_read_lock();
 	}

-	bond_ab_arp_probe(bond);
-	rcu_read_unlock();
+	if (!bond_ab_arp_probe(bond)) {
+		/* rtnl locking failed, re-arm */
+		delta_in_ticks = 1;
+		should_notify_peers = false;
+	}

 re_arm:
 	if (bond->params.arp_interval)
+13
drivers/net/bonding/bonding.h
···
 	}
 }

+static inline void bond_slave_state_change(struct bonding *bond)
+{
+	struct list_head *iter;
+	struct slave *tmp;
+
+	bond_for_each_slave(bond, tmp, iter) {
+		if (tmp->link == BOND_LINK_UP)
+			bond_set_active_slave(tmp);
+		else if (tmp->link == BOND_LINK_DOWN)
+			bond_set_backup_slave(tmp);
+	}
+}
+
 static inline int bond_slave_state(struct slave *slave)
 {
 	return slave->backup;
-1
drivers/net/ethernet/8390/apne.c
···
 	int neX000, ctron;
 #endif
 	static unsigned version_printed;
-	struct ei_device *ei_local = netdev_priv(dev);

 	if ((apne_msg_enable & NETIF_MSG_DRV) && (version_printed++ == 0))
 		netdev_info(dev, version);
+38 -40
drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
···
 	cfg_idx = bnx2x_get_link_cfg_idx(bp);
 	old_multi_phy_config = bp->link_params.multi_phy_config;
-	switch (cmd->port) {
-	case PORT_TP:
-		if (bp->port.supported[cfg_idx] & SUPPORTED_TP)
-			break; /* no port change */
-
-		if (!(bp->port.supported[0] & SUPPORTED_TP ||
-		      bp->port.supported[1] & SUPPORTED_TP)) {
+	if (cmd->port != bnx2x_get_port_type(bp)) {
+		switch (cmd->port) {
+		case PORT_TP:
+			if (!(bp->port.supported[0] & SUPPORTED_TP ||
+			      bp->port.supported[1] & SUPPORTED_TP)) {
+				DP(BNX2X_MSG_ETHTOOL,
+				   "Unsupported port type\n");
+				return -EINVAL;
+			}
+			bp->link_params.multi_phy_config &=
+				~PORT_HW_CFG_PHY_SELECTION_MASK;
+			if (bp->link_params.multi_phy_config &
+			    PORT_HW_CFG_PHY_SWAPPED_ENABLED)
+				bp->link_params.multi_phy_config |=
+				PORT_HW_CFG_PHY_SELECTION_SECOND_PHY;
+			else
+				bp->link_params.multi_phy_config |=
+				PORT_HW_CFG_PHY_SELECTION_FIRST_PHY;
+			break;
+		case PORT_FIBRE:
+		case PORT_DA:
+			if (!(bp->port.supported[0] & SUPPORTED_FIBRE ||
+			      bp->port.supported[1] & SUPPORTED_FIBRE)) {
+				DP(BNX2X_MSG_ETHTOOL,
+				   "Unsupported port type\n");
+				return -EINVAL;
+			}
+			bp->link_params.multi_phy_config &=
+				~PORT_HW_CFG_PHY_SELECTION_MASK;
+			if (bp->link_params.multi_phy_config &
+			    PORT_HW_CFG_PHY_SWAPPED_ENABLED)
+				bp->link_params.multi_phy_config |=
+				PORT_HW_CFG_PHY_SELECTION_FIRST_PHY;
+			else
+				bp->link_params.multi_phy_config |=
+				PORT_HW_CFG_PHY_SELECTION_SECOND_PHY;
+			break;
+		default:
 			DP(BNX2X_MSG_ETHTOOL, "Unsupported port type\n");
 			return -EINVAL;
 		}
-		bp->link_params.multi_phy_config &=
-			~PORT_HW_CFG_PHY_SELECTION_MASK;
-		if (bp->link_params.multi_phy_config &
-		    PORT_HW_CFG_PHY_SWAPPED_ENABLED)
-			bp->link_params.multi_phy_config |=
-			PORT_HW_CFG_PHY_SELECTION_SECOND_PHY;
-		else
-			bp->link_params.multi_phy_config |=
-			PORT_HW_CFG_PHY_SELECTION_FIRST_PHY;
-		break;
-	case PORT_FIBRE:
-	case PORT_DA:
-		if (bp->port.supported[cfg_idx] & SUPPORTED_FIBRE)
-			break; /* no port change */
-
-		if (!(bp->port.supported[0] & SUPPORTED_FIBRE ||
-		      bp->port.supported[1] & SUPPORTED_FIBRE)) {
-			DP(BNX2X_MSG_ETHTOOL, "Unsupported port type\n");
-			return -EINVAL;
-		}
-		bp->link_params.multi_phy_config &=
-			~PORT_HW_CFG_PHY_SELECTION_MASK;
-		if (bp->link_params.multi_phy_config &
-		    PORT_HW_CFG_PHY_SWAPPED_ENABLED)
-			bp->link_params.multi_phy_config |=
-				PORT_HW_CFG_PHY_SELECTION_FIRST_PHY;
-		else
-			bp->link_params.multi_phy_config |=
-				PORT_HW_CFG_PHY_SELECTION_SECOND_PHY;
-		break;
-	default:
-		DP(BNX2X_MSG_ETHTOOL, "Unsupported port type\n");
-		return -EINVAL;
 	}
 	/* Save new config in case command complete successfully */
 	new_multi_phy_config = bp->link_params.multi_phy_config;
+2 -2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
···

 		if (atomic_read(&pdev->enable_cnt) == 1)
 			pci_release_regions(pdev);
-	}

-	pci_disable_device(pdev);
+		pci_disable_device(pdev);
+	}
 }

 static void bnx2x_remove_one(struct pci_dev *pdev)
+2 -1
drivers/net/ethernet/intel/i40e/i40e_main.c
···
 	/* Check if APP Table has changed */
 	if (memcmp(&new_cfg->app,
 		   &old_cfg->app,
-		   sizeof(new_cfg->app)))
+		   sizeof(new_cfg->app))) {
 		need_reconfig = true;
 		dev_info(&pf->pdev->dev, "APP Table change detected.\n");
+	}

 	return need_reconfig;
 }
+2 -2
drivers/net/ethernet/marvell/sky2.c
···
 		}
 	}

+	netif_napi_add(dev, &hw->napi, sky2_poll, NAPI_WEIGHT);
+
 	err = register_netdev(dev);
 	if (err) {
 		dev_err(&pdev->dev, "cannot register net device\n");
···
 	}

 	netif_carrier_off(dev);
-
-	netif_napi_add(dev, &hw->napi, sky2_poll, NAPI_WEIGHT);

 	sky2_show_addr(dev);
+12 -7
drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
···
 		adapter->ahw->linkup = 0;
 		netif_carrier_off(netdev);
 	} else if (!adapter->ahw->linkup && linkup) {
-		/* Do not advertise Link up if the port is in loopback mode */
-		if (qlcnic_83xx_check(adapter) && adapter->ahw->lb_mode)
+		adapter->ahw->linkup = 1;
+
+		/* Do not advertise Link up to the stack if device
+		 * is in loopback mode
+		 */
+		if (qlcnic_83xx_check(adapter) && adapter->ahw->lb_mode) {
+			netdev_info(netdev, "NIC Link is up for loopback test\n");
 			return;
+		}

 		netdev_info(netdev, "NIC Link is up\n");
-		adapter->ahw->linkup = 1;
 		netif_carrier_on(netdev);
 	}
 }
···
 	u16 lro_length, length, data_offset, t_vid, vid = 0xffff;
 	u32 seq_number;

-	if (unlikely(ring > adapter->max_rds_rings))
+	if (unlikely(ring >= adapter->max_rds_rings))
 		return NULL;

 	rds_ring = &recv_ctx->rds_rings[ring];

 	index = qlcnic_get_lro_sts_refhandle(sts_data0);
-	if (unlikely(index > rds_ring->num_desc))
+	if (unlikely(index >= rds_ring->num_desc))
 		return NULL;

 	buffer = &rds_ring->rx_buf_arr[index];
···
 	u16 vid = 0xffff;
 	int err;

-	if (unlikely(ring > adapter->max_rds_rings))
+	if (unlikely(ring >= adapter->max_rds_rings))
 		return NULL;

 	rds_ring = &recv_ctx->rds_rings[ring];

 	index = qlcnic_83xx_hndl(sts_data[0]);
-	if (unlikely(index > rds_ring->num_desc))
+	if (unlikely(index >= rds_ring->num_desc))
 		return NULL;

 	buffer = &rds_ring->rx_buf_arr[index];
+2 -7
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
···
 	qlcnic_linkevent_request(adapter, 1);

 	adapter->ahw->reset_context = 0;
+	netif_tx_start_all_queues(netdev);
 	return 0;
 }
···

 	err = __qlcnic_up(adapter, netdev);
 	if (err)
-		goto err_out;
+		qlcnic_detach(adapter);

-	netif_tx_start_all_queues(netdev);
-
-	return 0;
-
-err_out:
-	qlcnic_detach(adapter);
 	return err;
 }
+5 -6
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
···
 	return 0;
 }

-static int qlcnic_sriov_get_vf_acl(struct qlcnic_adapter *adapter,
-				   struct qlcnic_info *info)
+static int qlcnic_sriov_get_vf_acl(struct qlcnic_adapter *adapter)
 {
 	struct qlcnic_sriov *sriov = adapter->ahw->sriov;
 	struct qlcnic_cmd_args cmd;
···
 	err = qlcnic_get_nic_info(adapter, &nic_info, ahw->pci_func);
 	if (err)
 		return -EIO;
-
-	err = qlcnic_sriov_get_vf_acl(adapter, &nic_info);
-	if (err)
-		return err;

 	if (qlcnic_83xx_get_port_info(adapter))
 		return -EIO;
···
 		goto err_out_disable_bc_intr;

 	err = qlcnic_sriov_vf_init_driver(adapter);
+	if (err)
+		goto err_out_send_channel_term;
+
+	err = qlcnic_sriov_get_vf_acl(adapter);
 	if (err)
 		goto err_out_send_channel_term;
+3 -3
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···
 					     priv->dev->dev_addr, 0);
 		if (!is_valid_ether_addr(priv->dev->dev_addr))
 			eth_hw_addr_random(priv->dev);
+		pr_info("%s: device MAC address %pM\n", priv->dev->name,
+			priv->dev->dev_addr);
 	}
-	pr_warn("%s: device MAC address %pM\n", priv->dev->name,
-		priv->dev->dev_addr);
 }

 /**
···
 	stmmac_mmc_setup(priv);

 	ret = stmmac_init_ptp(priv);
-	if (ret)
+	if (ret && ret != -EOPNOTSUPP)
 		pr_warn("%s: failed PTP initialisation\n", __func__);

 #ifdef CONFIG_STMMAC_DEBUG_FS
+1 -1
drivers/net/hyperv/hyperv_net.h
···

 #define NETVSC_MTU 65536

-#define NETVSC_RECEIVE_BUFFER_SIZE	(1024*1024*2)	/* 2MB */
+#define NETVSC_RECEIVE_BUFFER_SIZE	(1024*1024*16)	/* 16MB */

 #define NETVSC_RECEIVE_BUFFER_ID	0xcafe
+2 -5
drivers/net/hyperv/netvsc.c
···

 	if (net_device->recv_buf) {
 		/* Free up the receive buffer */
-		free_pages((unsigned long)net_device->recv_buf,
-			get_order(net_device->recv_buf_size));
+		vfree(net_device->recv_buf);
 		net_device->recv_buf = NULL;
 	}
···
 		return -ENODEV;
 	ndev = net_device->ndev;

-	net_device->recv_buf =
-		(void *)__get_free_pages(GFP_KERNEL|__GFP_ZERO,
-			get_order(net_device->recv_buf_size));
+	net_device->recv_buf = vzalloc(net_device->recv_buf_size);
 	if (!net_device->recv_buf) {
 		netdev_err(ndev, "unable to allocate receive "
 			"buffer of size %d\n", net_device->recv_buf_size);
+26 -1
drivers/net/tun.c
···
 #include <net/netns/generic.h>
 #include <net/rtnetlink.h>
 #include <net/sock.h>
+#include <linux/seq_file.h>

 #include <asm/uaccess.h>
···
 	return 0;
 }

+#ifdef CONFIG_PROC_FS
+static int tun_chr_show_fdinfo(struct seq_file *m, struct file *f)
+{
+	struct tun_struct *tun;
+	struct ifreq ifr;
+
+	memset(&ifr, 0, sizeof(ifr));
+
+	rtnl_lock();
+	tun = tun_get(f);
+	if (tun)
+		tun_get_iff(current->nsproxy->net_ns, tun, &ifr);
+	rtnl_unlock();
+
+	if (tun)
+		tun_put(tun);
+
+	return seq_printf(m, "iff:\t%s\n", ifr.ifr_name);
+}
+#endif
+
 static const struct file_operations tun_fops = {
 	.owner	= THIS_MODULE,
 	.llseek = no_llseek,
···
 #endif
 	.open	= tun_chr_open,
 	.release = tun_chr_close,
-	.fasync = tun_chr_fasync
+	.fasync = tun_chr_fasync,
+#ifdef CONFIG_PROC_FS
+	.show_fdinfo = tun_chr_show_fdinfo,
+#endif
 };

 static struct miscdevice tun_miscdev = {
+29 -65
drivers/net/xen-netfront.c
···
 	} tx_skbs[NET_TX_RING_SIZE];
 	grant_ref_t gref_tx_head;
 	grant_ref_t grant_tx_ref[NET_TX_RING_SIZE];
+	struct page *grant_tx_page[NET_TX_RING_SIZE];
 	unsigned tx_skb_freelist;

 	spinlock_t   rx_lock ____cacheline_aligned_in_smp;
···
 		gnttab_release_grant_reference(
 			&np->gref_tx_head, np->grant_tx_ref[id]);
 		np->grant_tx_ref[id] = GRANT_INVALID_REF;
+		np->grant_tx_page[id] = NULL;
 		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
 		dev_kfree_skb_irq(skb);
···
 		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);

+		np->grant_tx_page[id] = virt_to_page(data);
 		tx->gref = np->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
···
 						np->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);

+		np->grant_tx_page[id] = page;
 		tx->gref = np->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = bytes;
···
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
 		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	np->grant_tx_page[id] = virt_to_page(data);
 	tx->gref = np->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
···
 			continue;

 		skb = np->tx_skbs[i].skb;
-		gnttab_end_foreign_access_ref(np->grant_tx_ref[i],
-					      GNTMAP_readonly);
-		gnttab_release_grant_reference(&np->gref_tx_head,
-					       np->grant_tx_ref[i]);
+		get_page(np->grant_tx_page[i]);
+		gnttab_end_foreign_access(np->grant_tx_ref[i],
+					  GNTMAP_readonly,
+					  (unsigned long)page_address(np->grant_tx_page[i]));
+		np->grant_tx_page[i] = NULL;
 		np->grant_tx_ref[i] = GRANT_INVALID_REF;
 		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
···

 static void xennet_release_rx_bufs(struct netfront_info *np)
 {
-	struct mmu_update *mmu = np->rx_mmu;
-	struct multicall_entry *mcl = np->rx_mcl;
-	struct sk_buff_head free_list;
-	struct sk_buff *skb;
-	unsigned long mfn;
-	int xfer = 0, noxfer = 0, unused = 0;
 	int id, ref;
-
-	dev_warn(&np->netdev->dev, "%s: fix me for copying receiver.\n",
-			__func__);
-	return;
-
-	skb_queue_head_init(&free_list);

 	spin_lock_bh(&np->rx_lock);

 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
-		ref = np->grant_rx_ref[id];
-		if (ref == GRANT_INVALID_REF) {
-			unused++;
-			continue;
-		}
+		struct sk_buff *skb;
+		struct page *page;

 		skb = np->rx_skbs[id];
-		mfn = gnttab_end_foreign_transfer_ref(ref);
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
+		if (!skb)
+			continue;
+
+		ref = np->grant_rx_ref[id];
+		if (ref == GRANT_INVALID_REF)
+			continue;
+
+		page = skb_frag_page(&skb_shinfo(skb)->frags[0]);
+
+		/* gnttab_end_foreign_access() needs a page ref until
+		 * foreign access is ended (which may be deferred).
+		 */
+		get_page(page);
+		gnttab_end_foreign_access(ref, 0,
+					  (unsigned long)page_address(page));
 		np->grant_rx_ref[id] = GRANT_INVALID_REF;

-		if (0 == mfn) {
-			skb_shinfo(skb)->nr_frags = 0;
-			dev_kfree_skb(skb);
-			noxfer++;
-			continue;
-		}
-
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Remap the page. */
-			const struct page *page =
-				skb_frag_page(&skb_shinfo(skb)->frags[0]);
-			unsigned long pfn = page_to_pfn(page);
-			void *vaddr = page_address(page);
-
-			MULTI_update_va_mapping(mcl, (unsigned long)vaddr,
-						mfn_pte(mfn, PAGE_KERNEL),
-						0);
-			mcl++;
-			mmu->ptr = ((u64)mfn << PAGE_SHIFT)
-				| MMU_MACHPHYS_UPDATE;
-			mmu->val = pfn;
-			mmu++;
-
-			set_phys_to_machine(pfn, mfn);
-		}
-		__skb_queue_tail(&free_list, skb);
-		xfer++;
+		kfree_skb(skb);
 	}
-
-	dev_info(&np->netdev->dev, "%s: %d xfer, %d noxfer, %d unused\n",
-		 __func__, xfer, noxfer, unused);
-
-	if (xfer) {
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Do all the remapping work and M2P updates. */
-			MULTI_mmu_update(mcl, np->rx_mmu, mmu - np->rx_mmu,
-					 NULL, DOMID_SELF);
-			mcl++;
-			HYPERVISOR_multicall(np->rx_mcl, mcl - np->rx_mcl);
-		}
-	}
-
-	__skb_queue_purge(&free_list);

 	spin_unlock_bh(&np->rx_lock);
 }
···
 	for (i = 0; i < NET_RX_RING_SIZE; i++) {
 		np->rx_skbs[i] = NULL;
 		np->grant_rx_ref[i] = GRANT_INVALID_REF;
+		np->grant_tx_page[i] = NULL;
 	}

 	/* A grant for every tx ring slot */
+2 -3
drivers/s390/net/qeth_core.h
···
 	int (*freeze)(struct ccwgroup_device *);
 	int (*thaw) (struct ccwgroup_device *);
 	int (*restore)(struct ccwgroup_device *);
+	int (*control_event_handler)(struct qeth_card *card,
+				     struct qeth_ipa_cmd *cmd);
 };

 struct qeth_vlan_vid {
···
 int qeth_send_control_data(struct qeth_card *, int, struct qeth_cmd_buffer *,
 	int (*reply_cb)(struct qeth_card *, struct qeth_reply*, unsigned long),
 	void *reply_param);
-void qeth_bridge_state_change(struct qeth_card *card, struct qeth_ipa_cmd *cmd);
-void qeth_bridgeport_query_support(struct qeth_card *card);
 int qeth_bridgeport_query_ports(struct qeth_card *card,
 	enum qeth_sbp_roles *role, enum qeth_sbp_states *state);
 int qeth_bridgeport_setrole(struct qeth_card *card, enum qeth_sbp_roles role);
 int qeth_bridgeport_an_set(struct qeth_card *card, int enable);
-void qeth_bridge_host_event(struct qeth_card *card, struct qeth_ipa_cmd *cmd);
 int qeth_get_priority_queue(struct qeth_card *, struct sk_buff *, int, int);
 int qeth_get_elements_no(struct qeth_card *, struct sk_buff *, int);
 int qeth_get_elements_for_frags(struct sk_buff *);
+6 -12
drivers/s390/net/qeth_core_main.c
···
 static int qeth_init_qdio_out_buf(struct qeth_qdio_out_q *, int);

 struct workqueue_struct *qeth_wq;
+EXPORT_SYMBOL_GPL(qeth_wq);

 static void qeth_close_dev_handler(struct work_struct *work)
 {
···
 			qeth_schedule_recovery(card);
 			return NULL;
 		case IPA_CMD_SETBRIDGEPORT:
-			if (cmd->data.sbp.hdr.command_code ==
-				IPA_SBP_BRIDGE_PORT_STATE_CHANGE) {
-				qeth_bridge_state_change(card, cmd);
-				return NULL;
-			} else
-				return cmd;
 		case IPA_CMD_ADDRESS_CHANGE_NOTIF:
-			qeth_bridge_host_event(card, cmd);
-			return NULL;
+			if (card->discipline->control_event_handler
+								(card, cmd))
+				return cmd;
+			else
+				return NULL;
 		case IPA_CMD_MODCCID:
 			return cmd;
 		case IPA_CMD_REGISTER_LOCAL_ADDR:
···
 	qeth_query_setadapterparms(card);
 	if (qeth_adp_supported(card, IPA_SETADP_SET_DIAG_ASSIST))
 		qeth_query_setdiagass(card);
-	qeth_bridgeport_query_support(card);
-	if (card->options.sbp.supported_funcs)
-		dev_info(&card->gdev->dev,
-		"The device represents a HiperSockets Bridge Capable Port\n");
 	return 0;
 out:
 	dev_warn(&card->gdev->dev, "The qeth device driver failed to recover "
+35 -6
drivers/s390/net/qeth_l2_main.c
···
 			unsigned long));
 static void qeth_l2_set_multicast_list(struct net_device *);
 static int qeth_l2_recover(void *);
+static void qeth_bridgeport_query_support(struct qeth_card *card);
+static void qeth_bridge_state_change(struct qeth_card *card,
+				     struct qeth_ipa_cmd *cmd);
+static void qeth_bridge_host_event(struct qeth_card *card,
+				   struct qeth_ipa_cmd *cmd);

 static int qeth_l2_do_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
 {
···
 		rc = -ENODEV;
 		goto out_remove;
 	}
+	qeth_bridgeport_query_support(card);
+	if (card->options.sbp.supported_funcs)
+		dev_info(&card->gdev->dev,
+		"The device represents a HiperSockets Bridge Capable Port\n");
 	qeth_trace_features(card);

 	if (!card->dev && qeth_l2_setup_netdev(card)) {
···
 	return rc;
 }

+/* Returns zero if the command is successfully "consumed" */
+static int qeth_l2_control_event(struct qeth_card *card,
+				 struct qeth_ipa_cmd *cmd)
+{
+	switch (cmd->hdr.command) {
+	case IPA_CMD_SETBRIDGEPORT:
+		if (cmd->data.sbp.hdr.command_code ==
+		    IPA_SBP_BRIDGE_PORT_STATE_CHANGE) {
+			qeth_bridge_state_change(card, cmd);
+			return 0;
+		} else
+			return 1;
+	case IPA_CMD_ADDRESS_CHANGE_NOTIF:
+		qeth_bridge_host_event(card, cmd);
+		return 0;
+	default:
+		return 1;
+	}
+}
+
 struct qeth_discipline qeth_l2_discipline = {
 	.start_poll = qeth_qdio_start_poll,
 	.input_handler = (qdio_handler_t *) qeth_qdio_input_handler,
···
 	.freeze = qeth_l2_pm_suspend,
 	.thaw = qeth_l2_pm_resume,
 	.restore = qeth_l2_pm_resume,
+	.control_event_handler = qeth_l2_control_event,
 };
 EXPORT_SYMBOL_GPL(qeth_l2_discipline);
···
 	kfree(data);
 }

-void qeth_bridge_state_change(struct qeth_card *card, struct qeth_ipa_cmd *cmd)
+static void qeth_bridge_state_change(struct qeth_card *card,
+				     struct qeth_ipa_cmd *cmd)
 {
 	struct qeth_sbp_state_change *qports =
 		&cmd->data.sbp.data.state_change;
···
 		sizeof(struct qeth_sbp_state_change) + extrasize);
 	queue_work(qeth_wq, &data->worker);
 }
-EXPORT_SYMBOL(qeth_bridge_state_change);

 struct qeth_bridge_host_data {
 	struct work_struct worker;
···
 	kfree(data);
 }

-void qeth_bridge_host_event(struct qeth_card *card, struct qeth_ipa_cmd *cmd)
+static void qeth_bridge_host_event(struct qeth_card *card,
+				   struct qeth_ipa_cmd *cmd)
 {
 	struct qeth_ipacmd_addr_change *hostevs =
 		&cmd->data.addrchange;
···
 		sizeof(struct qeth_ipacmd_addr_change) + extrasize);
 	queue_work(qeth_wq, &data->worker);
 }
-EXPORT_SYMBOL(qeth_bridge_host_event);

 /* SETBRIDGEPORT support; sending commands */
···
  * Sets bitmask of supported setbridgeport subfunctions in the qeth_card
  * strucutre: card->options.sbp.supported_funcs.
  */
-void qeth_bridgeport_query_support(struct qeth_card *card)
+static void qeth_bridgeport_query_support(struct qeth_card *card)
 {
 	struct qeth_cmd_buffer *iob;
 	struct qeth_ipa_cmd *cmd;
···
 	}
 	card->options.sbp.supported_funcs = cbctl.data.supported;
 }
-EXPORT_SYMBOL_GPL(qeth_bridgeport_query_support);

 static int qeth_bridgeport_query_ports_cb(struct qeth_card *card,
 					  struct qeth_reply *reply, unsigned long data)
+8
drivers/s390/net/qeth_l3_main.c
···
 	return rc;
 }
 
+/* Returns zero if the command is successfully "consumed" */
+static int qeth_l3_control_event(struct qeth_card *card,
+					struct qeth_ipa_cmd *cmd)
+{
+	return 1;
+}
+
 struct qeth_discipline qeth_l3_discipline = {
 	.start_poll = qeth_qdio_start_poll,
 	.input_handler = (qdio_handler_t *) qeth_qdio_input_handler,
···
 	.freeze = qeth_l3_pm_suspend,
 	.thaw = qeth_l3_pm_resume,
 	.restore = qeth_l3_pm_resume,
+	.control_event_handler = qeth_l3_control_event,
 };
 EXPORT_SYMBOL_GPL(qeth_l3_discipline);
+1
include/linux/skbuff.h
···
 void skb_split(struct sk_buff *skb, struct sk_buff *skb1, const u32 len);
 int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen);
 void skb_scrub_packet(struct sk_buff *skb, bool xnet);
+unsigned int skb_gso_transport_seglen(const struct sk_buff *skb);
 struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features);
 
 struct skb_checksum_ops {
+26 -1
net/core/skbuff.c
···
 #include <linux/in.h>
 #include <linux/inet.h>
 #include <linux/slab.h>
+#include <linux/tcp.h>
+#include <linux/udp.h>
 #include <linux/netdevice.h>
 #ifdef CONFIG_NET_CLS_ACT
 #include <net/pkt_sched.h>
···
 /**
  * skb_zerocopy - Zero copy skb to skb
  * @to: destination buffer
- * @source: source buffer
+ * @from: source buffer
  * @len: number of bytes to copy from source buffer
  * @hlen: size of linear headroom in destination buffer
  *
···
 	nf_reset_trace(skb);
 }
 EXPORT_SYMBOL_GPL(skb_scrub_packet);
+
+/**
+ * skb_gso_transport_seglen - Return length of individual segments of a gso packet
+ *
+ * @skb: GSO skb
+ *
+ * skb_gso_transport_seglen is used to determine the real size of the
+ * individual segments, including Layer4 headers (TCP/UDP).
+ *
+ * The MAC/L2 or network (IP, IPv6) headers are not accounted for.
+ */
+unsigned int skb_gso_transport_seglen(const struct sk_buff *skb)
+{
+	const struct skb_shared_info *shinfo = skb_shinfo(skb);
+	unsigned int hdr_len;
+
+	if (likely(shinfo->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)))
+		hdr_len = tcp_hdrlen(skb);
+	else
+		hdr_len = sizeof(struct udphdr);
+	return hdr_len + shinfo->gso_size;
+}
+EXPORT_SYMBOL_GPL(skb_gso_transport_seglen);
+1 -1
net/ieee802154/6lowpan_iphc.c
···
 		hc06_ptr += 3;
 	} else {
 		/* compress nothing */
-		memcpy(hc06_ptr, &hdr, 4);
+		memcpy(hc06_ptr, hdr, 4);
 		/* replace the top byte with new ECN | DSCP format */
 		*hc06_ptr = tmp;
 		hc06_ptr += 4;
+1 -1
net/ipv4/ip_gre.c
···
 	else
 		itn = net_generic(net, ipgre_net_id);
 
-	iph = (const struct iphdr *)skb->data;
+	iph = (const struct iphdr *)(icmp_hdr(skb) + 1);
 	t = ip_tunnel_lookup(itn, skb->dev->ifindex, tpi->flags,
 			     iph->daddr, iph->saddr, tpi->key);
+1 -1
net/ipv4/ip_input.c
···
 	const struct iphdr *iph = ip_hdr(skb);
 	struct rtable *rt;
 
-	if (sysctl_ip_early_demux && !skb_dst(skb)) {
+	if (sysctl_ip_early_demux && !skb_dst(skb) && skb->sk == NULL) {
 		const struct net_protocol *ipprot;
 		int protocol = iph->protocol;
+2 -1
net/ipv4/ip_tunnel.c
···
 #include <linux/if_ether.h>
 #include <linux/if_vlan.h>
 #include <linux/rculist.h>
+#include <linux/err.h>
 
 #include <net/sock.h>
 #include <net/ip.h>
···
 	}
 	rtnl_unlock();
 
-	return PTR_RET(itn->fb_tunnel_dev);
+	return PTR_ERR_OR_ZERO(itn->fb_tunnel_dev);
 }
 EXPORT_SYMBOL_GPL(ip_tunnel_init_net);
+1 -1
net/ipv6/ip6_input.c
···
 
 int ip6_rcv_finish(struct sk_buff *skb)
 {
-	if (sysctl_ip_early_demux && !skb_dst(skb)) {
+	if (sysctl_ip_early_demux && !skb_dst(skb) && skb->sk == NULL) {
 		const struct inet6_protocol *ipprot;
 
 		ipprot = rcu_dereference(inet6_protos[ipv6_hdr(skb)->nexthdr]);
+1 -1
net/llc/llc_output.c
···
 		rc = 0;
 		break;
 	default:
-		WARN(1, "device type not supported: %d\n", skb->dev->type);
+		break;
 	}
 	return rc;
 }
+2
net/rxrpc/ar-connection.c
···
 
 		rxrpc_assign_connection_id(conn);
 		rx->conn = conn;
+	} else {
+		spin_lock(&trans->client_lock);
 	}
 
 	/* we've got a connection with a free channel and we can now attach the
+6 -1
net/rxrpc/ar-recvmsg.c
···
 		if (copy > len - copied)
 			copy = len - copied;
 
-		if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
+		if (skb->ip_summed == CHECKSUM_UNNECESSARY ||
+		    skb->ip_summed == CHECKSUM_PARTIAL) {
 			ret = skb_copy_datagram_iovec(skb, offset,
 						      msg->msg_iov, copy);
 		} else {
···
 	if (continue_call)
 		rxrpc_put_call(continue_call);
 	rxrpc_kill_skb(skb);
+	if (!(flags & MSG_PEEK)) {
+		if (skb_dequeue(&rx->sk.sk_receive_queue) != skb)
+			BUG();
+	}
 	skb_kill_datagram(&rx->sk, skb, flags);
 	rxrpc_put_call(call);
 	return -EAGAIN;
+3 -10
net/sched/sch_tbf.c
···
 #include <net/netlink.h>
 #include <net/sch_generic.h>
 #include <net/pkt_sched.h>
-#include <net/tcp.h>
 
 
 /* Simple Token Bucket Filter.
···
  * Return length of individual segments of a gso packet,
  * including all headers (MAC, IP, TCP/UDP)
  */
-static unsigned int skb_gso_seglen(const struct sk_buff *skb)
+static unsigned int skb_gso_mac_seglen(const struct sk_buff *skb)
 {
 	unsigned int hdr_len = skb_transport_header(skb) - skb_mac_header(skb);
-	const struct skb_shared_info *shinfo = skb_shinfo(skb);
-
-	if (likely(shinfo->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)))
-		hdr_len += tcp_hdrlen(skb);
-	else
-		hdr_len += sizeof(struct udphdr);
-	return hdr_len + shinfo->gso_size;
+	return hdr_len + skb_gso_transport_seglen(skb);
 }
 
 /* GSO packet is too big, segment it so that tbf can transmit
···
 	int ret;
 
 	if (qdisc_pkt_len(skb) > q->max_size) {
-		if (skb_is_gso(skb) && skb_gso_seglen(skb) <= q->max_size)
+		if (skb_is_gso(skb) && skb_gso_mac_seglen(skb) <= q->max_size)
 			return tbf_segment(skb, sch);
 		return qdisc_reshape_fail(skb, sch);
 	}