b43: Fix DMA TX bounce buffer copying

b43 allocates a bounce buffer if the supplied TX skb lies in a memory range
that is invalid for DMA.
However, this path is broken: it fails to copy some metadata over to the
new skb.

This patch fixes three problems (a condensed sketch of the corrected handoff follows the list):
* Failure to adjust the local ieee80211_tx_info pointer to the new buffer.
This results in a kmemcheck warning.
* Failure to copy the skb cb, which contains the ieee80211_tx_info, to the new skb.
This breaks various TX-status postprocessing (rate control).
* Failure to transfer the queue mapping.
This causes the wrong queue to be stopped on saturation and can lead to queue overflow.
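
Condensed sketch of the corrected handoff (illustration only: the helper name
bounce_tx_skb is made up here, bounce_skb is assumed to be freshly allocated
with room for the frame, and the b43-specific descriptor mapping and error
handling from dma_tx_fragment() are omitted):

    /* Sketch: move a TX skb into a DMA-safe bounce buffer while keeping
     * the metadata that mac80211 and the networking core rely on. */
    static struct sk_buff *bounce_tx_skb(struct sk_buff *skb,
                                         struct sk_buff *bounce_skb)
    {
            /* Copy the frame data into the bounce buffer. */
            memcpy(skb_put(bounce_skb, skb->len), skb->data, skb->len);
            /* Copy the control block; it holds the ieee80211_tx_info used
             * for TX-status postprocessing (rate control). */
            memcpy(bounce_skb->cb, skb->cb, sizeof(skb->cb));
            /* Preserve the device and queue mapping so the correct queue
             * is stopped and woken on saturation. */
            bounce_skb->dev = skb->dev;
            skb_set_queue_mapping(bounce_skb, skb_get_queue_mapping(skb));
            /* The original skb is no longer referenced; free it and
             * continue with the bounce buffer. */
            dev_kfree_skb_any(skb);
            return bounce_skb;
    }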

Signed-off-by: Michael Buesch <mb@bu3sch.de>
Tested-by: Christian Casteyde <casteyde.christian@free.fr>
Signed-off-by: John W. Linville <linville@tuxdriver.com>

Authored by Michael Buesch and committed by John W. Linville (9a3f4511, f446d10f)

+13 -2
drivers/net/wireless/b43/dma.c
···
 }
 
 static int dma_tx_fragment(struct b43_dmaring *ring,
-			   struct sk_buff *skb)
+			   struct sk_buff **in_skb)
 {
+	struct sk_buff *skb = *in_skb;
 	const struct b43_dma_ops *ops = ring->ops;
 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
 	u8 *header;
···
 		}
 
 		memcpy(skb_put(bounce_skb, skb->len), skb->data, skb->len);
+		memcpy(bounce_skb->cb, skb->cb, sizeof(skb->cb));
+		bounce_skb->dev = skb->dev;
+		skb_set_queue_mapping(bounce_skb, skb_get_queue_mapping(skb));
+		info = IEEE80211_SKB_CB(bounce_skb);
+
 		dev_kfree_skb_any(skb);
 		skb = bounce_skb;
+		*in_skb = bounce_skb;
 		meta->skb = skb;
 		meta->dmaaddr = map_descbuffer(ring, skb->data, skb->len, 1);
 		if (b43_dma_mapping_error(ring, meta->dmaaddr, skb->len, 1)) {
···
 	 * static, so we don't need to store it per frame. */
 	ring->queue_prio = skb_get_queue_mapping(skb);
 
-	err = dma_tx_fragment(ring, skb);
+	/* dma_tx_fragment might reallocate the skb, so invalidate pointers pointing
+	 * into the skb data or cb now. */
+	hdr = NULL;
+	info = NULL;
+	err = dma_tx_fragment(ring, &skb);
 	if (unlikely(err == -ENOKEY)) {
 		/* Drop this packet, as we don't have the encryption key
 		 * anymore and must not transmit it unencrypted. */
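
The switch to struct sk_buff **in_skb is what makes the reallocation visible to the
caller: dma_tx_fragment() writes the bounce buffer back through *in_skb, and
b43_dma_tx() clears its local hdr and info pointers before the call, since both may
point into the old skb's data or cb, which are freed once the bounce buffer takes over.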