[TCP]: Restore 2.6.24 mark_head_lost behavior for newreno/fack

Fast retransmit can be forced locally, in the rfc3517 branch of
tcp_update_scoreboard, instead of building such fragile constructs
deeper down in tcp_mark_head_lost.

This is necessary for the next patch, which must not leave any
loophole around the cnt > packets check. As a bonus, readability
improves a bit as well :-).

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>


 net/ipv4/tcp_input.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -2134,7 +2134,7 @@
 /* Mark head of queue up as lost. With RFC3517 SACK, the packets is
  * is against sacked "cnt", otherwise it's against facked "cnt"
  */
-static void tcp_mark_head_lost(struct sock *sk, int packets, int fast_rexmit)
+static void tcp_mark_head_lost(struct sock *sk, int packets)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct sk_buff *skb;
@@ -2161,7 +2161,7 @@
 		    (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED))
 			cnt += tcp_skb_pcount(skb);
 
-		if (((!fast_rexmit || (tp->lost_out > 0)) && (cnt > packets)) ||
+		if ((cnt > packets) ||
 		    after(TCP_SKB_CB(skb)->end_seq, tp->high_seq))
 			break;
 		if (!(TCP_SKB_CB(skb)->sacked & (TCPCB_SACKED_ACKED|TCPCB_LOST))) {
@@ -2180,17 +2180,17 @@
 	struct tcp_sock *tp = tcp_sk(sk);
 
 	if (tcp_is_reno(tp)) {
-		tcp_mark_head_lost(sk, 1, fast_rexmit);
+		tcp_mark_head_lost(sk, 1);
 	} else if (tcp_is_fack(tp)) {
 		int lost = tp->fackets_out - tp->reordering;
 		if (lost <= 0)
 			lost = 1;
-		tcp_mark_head_lost(sk, lost, fast_rexmit);
+		tcp_mark_head_lost(sk, lost);
 	} else {
 		int sacked_upto = tp->sacked_out - tp->reordering;
-		if (sacked_upto < 0)
-			sacked_upto = 0;
-		tcp_mark_head_lost(sk, sacked_upto, fast_rexmit);
+		if (sacked_upto < fast_rexmit)
+			sacked_upto = fast_rexmit;
+		tcp_mark_head_lost(sk, sacked_upto);
 	}
 
 	/* New heuristics: it is possible only after we switched
@@ -2524,7 +2524,7 @@
 	    before(tp->snd_una, tp->high_seq) &&
 	    icsk->icsk_ca_state != TCP_CA_Open &&
 	    tp->fackets_out > tp->reordering) {
-		tcp_mark_head_lost(sk, tp->fackets_out - tp->reordering, 0);
+		tcp_mark_head_lost(sk, tp->fackets_out - tp->reordering);
 		NET_INC_STATS_BH(LINUX_MIB_TCPLOSS);
 	}
 
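To make the hoisting easier to follow, here is a minimal user-space sketch
(not kernel code: the function names mark_head_lost_old/new and the
simplified queue walk are made up for illustration, and the real
tcp_mark_head_lost() also honours high_seq and the SACKED/LOST bits). It
models only the non-FACK branch of tcp_update_scoreboard and compares the
old in-marker guard against the new caller-side clamp of sacked_upto to
fast_rexmit, assuming nothing is marked lost when the marker is entered.

#include <stdio.h>

/* Pre-patch: the fast_rexmit special case sits inside the marker itself. */
static int mark_head_lost_old(int queue_len, int packets, int fast_rexmit)
{
	int cnt = 0, lost_out = 0;

	for (int i = 0; i < queue_len; i++) {
		cnt++;					/* one segment scanned */
		if ((!fast_rexmit || lost_out > 0) && cnt > packets)
			break;
		lost_out++;				/* segment marked lost */
	}
	return lost_out;
}

/* Post-patch: the marker only honours "packets"; the caller clamps it. */
static int mark_head_lost_new(int queue_len, int packets)
{
	int cnt = 0, lost_out = 0;

	for (int i = 0; i < queue_len; i++) {
		cnt++;
		if (cnt > packets)
			break;
		lost_out++;
	}
	return lost_out;
}

int main(void)
{
	const int queue_len = 10, reordering = 3;

	for (int sacked_out = 0; sacked_out <= 5; sacked_out++) {
		for (int fast_rexmit = 0; fast_rexmit <= 1; fast_rexmit++) {
			int sacked_upto = sacked_out - reordering;

			/* old caller: clamp to 0, pass fast_rexmit down */
			int old_arg = sacked_upto < 0 ? 0 : sacked_upto;
			int old_marked = mark_head_lost_old(queue_len, old_arg,
							    fast_rexmit);

			/* new caller: clamp to fast_rexmit instead */
			int new_arg = sacked_upto < fast_rexmit ?
				      fast_rexmit : sacked_upto;
			int new_marked = mark_head_lost_new(queue_len, new_arg);

			printf("sacked_out=%d fast_rexmit=%d -> old=%d new=%d\n",
			       sacked_out, fast_rexmit, old_marked, new_marked);
		}
	}
	return 0;
}

Compiled with e.g. gcc -std=c99, it prints identical old/new counts for every
combination tried, including the "mark at least the head segment on a forced
fast retransmit" case that the removed guard used to cover for these starting
conditions.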