tcp: make urg+gso work for real this time

I should have noticed this earlier... :-) The previous solution
to URG+GSO/TSO causes SACK-triggered tcp_fragment to produce
zig-zag patterns in packet counting, or even worse, a steep
downward slope, because each skb's pcount gets truncated to 2
and the subsequent fragments of the later portion then restore
the window again.
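
To make the pcount swing concrete, here is a minimal userspace
sketch, not kernel code: the MSS value, the 64k skb length, and
the split point are assumptions for illustration, modeling how the
old urg rule in tcp_set_skb_tso_segs() first collapses the segment
count and then lets it bounce back:

#include <stdio.h>

#define MSS 1460

/* Old rule: in urgent mode even a large GSO skb counted as 1 segment. */
static int pcount(int len, int urg_mode)
{
        if (len <= MSS || urg_mode)
                return 1;
        return (len + MSS - 1) / MSS;   /* DIV_ROUND_UP(len, MSS) */
}

int main(void)
{
        int len = 64 * 1024;            /* one 64k GSO skb in flight */
        int a = len / 2, b = len - a;   /* SACK edge splits it in two */

        printf("before SACK split: %d\n", pcount(len, 0));
        /* tcp_fragment() re-runs tcp_set_skb_tso_segs() on both halves
         * while tcp_urg_mode() is true: each half collapses to 1. */
        printf("urg-mode split:    %d\n", pcount(a, 1) + pcount(b, 1));
        /* Later MSS-boundary fragmentation restores the count. */
        printf("recounted at mss:  %d\n", pcount(a, 0) + pcount(b, 0));
        return 0;
}

It prints 45, then 2, then 46: the zig-zag (and, across many skbs,
the downward slope) described above.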

Basically this reverts "tcp: Do not use TSO/GSO when there is
urgent data" (33cf71cee1). It also removes some unnecessary code
from tcp_current_mss that didn't work as intended either (it could
be that something was changed down the road, or it might have
been broken since the dawn of time), because that check only takes
effect once urg is already written, while this bug shows up
starting from ~64k before the urg point.

Retransmissions are already split into MSS-sized chunks, so only
the new-data sending paths need splitting in case they have a
segment otherwise suitable for GSO/TSO. The actual check could be
made narrower, but since this is already late -rc, I'll postpone
thinking about the more fine-grained version.
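
For clarity, a compile-and-run sketch of the guard being added to
the new-data paths; tcp_mss_split_point() here is a trivial
stand-in rather than the kernel's implementation, send_limit() is a
hypothetical wrapper name, and the quota math is an assumption for
the demo:

#include <stdbool.h>
#include <stdio.h>

#define MSS 1460

static bool urg_mode;   /* stand-in for tcp_urg_mode(tp) */

/* Trivial stand-in: pretend the split point allows the whole quota. */
static unsigned int tcp_mss_split_point(unsigned int mss_now,
                                        unsigned int cwnd_quota)
{
        return cwnd_quota * mss_now;
}

/* Hypothetical wrapper mirroring the guard this patch adds to
 * tcp_write_xmit() and tcp_push_one(): while urgent mode is active,
 * never hand a multi-segment chunk to TSO. */
static unsigned int send_limit(int tso_segs, unsigned int mss_now,
                               unsigned int cwnd_quota)
{
        unsigned int limit = mss_now;

        if (tso_segs > 1 && !urg_mode)
                limit = tcp_mss_split_point(mss_now, cwnd_quota);

        return limit;
}

int main(void)
{
        urg_mode = false;
        printf("normal: limit = %u\n", send_limit(45, MSS, 10));
        urg_mode = true;
        printf("urgent: limit = %u\n", send_limit(45, MSS, 10));
        return 0;
}

With urgent mode active the limit stays at one MSS, so
tso_fragment() chops new data into single-MSS skbs and the pcount
bookkeeping never needs the truncation that caused the zig-zag.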

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>

 net/ipv4/tcp_output.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -722,8 +722,7 @@
 static void tcp_set_skb_tso_segs(struct sock *sk, struct sk_buff *skb,
                                  unsigned int mss_now)
 {
-        if (skb->len <= mss_now || !sk_can_gso(sk) ||
-            tcp_urg_mode(tcp_sk(sk))) {
+        if (skb->len <= mss_now || !sk_can_gso(sk)) {
                 /* Avoid the costly divide in the normal
                  * non-TSO case.
                  */
@@ -1028,10 +1027,6 @@
 
 /* Compute the current effective MSS, taking SACKs and IP options,
  * and even PMTU discovery events into account.
- *
- * LARGESEND note: !tcp_urg_mode is overkill, only frames up to snd_up
- * cannot be large. However, taking into account rare use of URG, this
- * is not a big flaw.
  */
 unsigned int tcp_current_mss(struct sock *sk, int large_allowed)
 {
@@ -1042,7 +1037,7 @@
 
         mss_now = tp->mss_cache;
 
-        if (large_allowed && sk_can_gso(sk) && !tcp_urg_mode(tp))
+        if (large_allowed && sk_can_gso(sk))
                 doing_tso = 1;
 
         if (dst) {
@@ -1159,9 +1154,7 @@
 {
         int tso_segs = tcp_skb_pcount(skb);
 
-        if (!tso_segs ||
-            (tso_segs > 1 && (tcp_skb_mss(skb) != mss_now ||
-                              tcp_urg_mode(tcp_sk(sk))))) {
+        if (!tso_segs || (tso_segs > 1 && tcp_skb_mss(skb) != mss_now)) {
                 tcp_set_skb_tso_segs(sk, skb, mss_now);
                 tso_segs = tcp_skb_pcount(skb);
         }
@@ -1512,6 +1505,10 @@
  * send_head. This happens as incoming acks open up the remote
  * window for us.
  *
+ * LARGESEND note: !tcp_urg_mode is overkill, only frames between
+ * snd_up-64k-mss .. snd_up cannot be large. However, taking into
+ * account rare use of URG, this is not a big flaw.
+ *
  * Returns 1, if no segments are in flight and we have queued segments, but
  * cannot send anything now because of SWS or another problem.
  */
@@ -1567,6 +1564,6 @@
         }
 
         limit = mss_now;
-        if (tso_segs > 1)
+        if (tso_segs > 1 && !tcp_urg_mode(tp))
                 limit = tcp_mss_split_point(sk, skb, mss_now,
                                             cwnd_quota);
@@ -1616,5 +1613,6 @@
  */
 void tcp_push_one(struct sock *sk, unsigned int mss_now)
 {
+        struct tcp_sock *tp = tcp_sk(sk);
         struct sk_buff *skb = tcp_send_head(sk);
         unsigned int tso_segs, cwnd_quota;
@@ -1631,6 +1629,6 @@
         BUG_ON(!tso_segs);
 
         limit = mss_now;
-        if (tso_segs > 1)
+        if (tso_segs > 1 && !tcp_urg_mode(tp))
                 limit = tcp_mss_split_point(sk, skb, mss_now,
                                             cwnd_quota);