Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

usbip: Implement SG support to vhci-hcd and stub driver

There is a bug in vhci with USB 3.0 storage devices. In USB, each SG
list entry buffer should be divisible by the bulk max packet size.
With native SG support this doesn't matter, because the SG buffer is
treated as one contiguous buffer. But without native SG support, the
USB storage driver breaks the SG list into several URBs, and an error
occurs because a URB buffer size cannot be divided by the bulk max
packet size. The error situation is as follows.

Suppose the USB storage driver requests 31.5 KB of data with an SG
list that, for some reason, consists of a 3584-byte buffer followed
by seven 4096-byte buffers. The USB storage driver splits this SG
list into several URBs because vhci doesn't support SG, and sends
them separately, so the first URB's buffer size is 3584 bytes. When
receiving data, the USB 3.0 device sends data packets of 1024 bytes
because the max packet size of the BULK pipe is 1024 bytes, so the
device sends 4096 bytes. But the first URB buffer is only 3584 bytes,
so the host controller terminates the transfer even though there is
more data to receive. Therefore, vhci needs to support SG transfer to
prevent this error.

In this patch, vhci supports SG regardless of whether the server's
host controller supports SG, because the stub driver splits the SG
list into several URBs if the server's host controller doesn't
support SG.

To support SG, vhci sets the URB_DMA_MAP_SG flag in
urb->transfer_flags if the URB has an SG list, and this flag tells
the stub driver to use SG. After receiving the URB back from the stub
driver, vhci clears the URB_DMA_MAP_SG flag to avoid unnecessary DMA
unmapping in the HCD.

vhci sends each SG list entry to the stub driver. The stub driver
then reads the total length of the buffer and allocates an SG table
and pages for that total length by calling sgl_alloc(). After the
stub driver receives the completed URB, it again sends each SG list
entry back to vhci.

If the server's host controller doesn't support SG, stub driver
breaks a single SG request into several URBs and submits them to
the server's host controller. When all the split URBs are completed,
stub driver reassembles the URBs into a single return command and
sends it to vhci.

Moreover, in the situation where vhci supports SG but the stub driver
does not, or vice versa, usbip works normally. Because there is no
protocol modification, there is no problem in communication between
server and client even if one side has a kernel without SG support.

In the case where vhci supports SG and the stub driver doesn't, vhci
sends only the total length of the buffer to the stub driver, as it
did before this patch, so the stub driver only needs to allocate a
buffer of the required length with kmalloc() regardless of whether
vhci supports SG. But when vhci sends an SG request, the stub driver
has to allocate with kmalloc() a buffer as large as the total length
of the SG buffer, which can be quite large, so there is
buffer-allocation overhead in this situation.

If the stub driver needs to send a data buffer to vhci because of an
IN pipe, the stub driver also sends only the total length of the
buffer as metadata and then sends the real data, as vhci does. vhci
then receives the data from the stub driver and stores it in the
corresponding buffer of each SG list entry.

And in the case where the stub driver supports SG and vhci doesn't,
the USB storage driver sees that vhci doesn't support SG and sends
the request to the stub driver with the SG list already split into
multiple URBs, so the stub driver allocates a buffer for each URB
with kmalloc() as it did before this patch.

* Test environment

The test uses two different machines and two different kernel
versions to create the mismatch situation between the client and the
server where vhci supports SG but the stub driver does not, or vice
versa. All tests are conducted both with full SG support, where both
vhci and stub support SG, and with half SG support, the mismatch
situation. The test kernel version is 5.3-rc6 with commit "usb: add a
HCD_DMA flag instead of guestimating DMA capabilities" applied to
avoid unnecessary DMA mapping and unmapping.

- Test kernel version
- 5.3-rc6 with SG support
- 5.1.20-200.fc29.x86_64 without SG support

* SG support test

- Test devices
- Super-speed storage device - SanDisk Ultra USB 3.0
- High-speed storage device - SMI corporation USB 2.0 flash drive

- Test description

Test read and write operations of mass storage devices that use BULK
transfers. In the test, the client reads and writes files larger than
1 GB, and everything works normally.

* Regression test

- Test devices
- Super-speed device - Logitech Brio webcam
- High-speed device - Logitech C920 HD Pro webcam
- Full-speed device - Logitech bluetooth mouse
- Britz BR-Orion speaker
- Low-speed device - Logitech wired mouse

- Test description

Move-and-click test for the mice. To test the webcams, use
gnome-cheese. To test the speaker, play music and video on the
client. All work normally.

* VUDC compatibility test

VUDC also works well with this patch. Tests are done with two USB
gadgets created via the CONFIGFS USB gadget interface. Both use the
BULK pipe.

1. Serial gadget
2. Mass storage gadget

- Serial gadget test

The serial gadget on the host sends and receives data using the cat
command on /dev/ttyGS<N>. The client uses minicom to communicate with
the serial gadget.

- Mass storage gadget test

After connecting the gadget with vhci, use "dd" to test read and
write operations on the client side.

Read - dd if=/dev/sd<N> iflag=direct of=/dev/null bs=1G count=1
Write - dd if=<my file path> iflag=direct of=/dev/sd<N> bs=1G count=1

Signed-off-by: Suwan Kim <suwan.kim027@gmail.com>
Acked-by: Shuah Khan <skhan@linuxfoundation.org>
Link: https://lore.kernel.org/r/20190828032741.12234-1-suwan.kim027@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Authored by Suwan Kim, committed by Greg Kroah-Hartman
ea44d190 2be1fb64

+380 -127
drivers/usb/usbip/stub.h (+6 -1)

···
 	unsigned long seqnum;
 	struct list_head list;
 	struct stub_device *sdev;
-	struct urb *urb;
+	struct urb **urbs;
+	struct scatterlist *sgl;
+	int num_urbs;
+	int completed_urbs;
+	int urb_status;
 
 	int unlinking;
 };
···
 struct bus_id_priv *get_busid_priv(const char *busid);
 void put_busid_priv(struct bus_id_priv *bid);
 int del_match_busid(char *busid);
+void stub_free_priv_and_urb(struct stub_priv *priv);
 void stub_device_cleanup_urbs(struct stub_device *sdev);
 
 /* stub_rx.c */
drivers/usb/usbip/stub_main.c (+42 -15)

···
 #include <linux/string.h>
 #include <linux/module.h>
 #include <linux/device.h>
+#include <linux/scatterlist.h>
 
 #include "usbip_common.h"
 #include "stub.h"
···
 	struct stub_priv *priv, *tmp;
 
 	list_for_each_entry_safe(priv, tmp, listhead, list) {
-		list_del(&priv->list);
+		list_del_init(&priv->list);
 		return priv;
 	}
 
 	return NULL;
+}
+
+void stub_free_priv_and_urb(struct stub_priv *priv)
+{
+	struct urb *urb;
+	int i;
+
+	for (i = 0; i < priv->num_urbs; i++) {
+		urb = priv->urbs[i];
+
+		if (!urb)
+			return;
+
+		kfree(urb->setup_packet);
+		urb->setup_packet = NULL;
+
+		if (urb->transfer_buffer && !priv->sgl) {
+			kfree(urb->transfer_buffer);
+			urb->transfer_buffer = NULL;
+		}
+
+		if (urb->num_sgs) {
+			sgl_free(urb->sg);
+			urb->sg = NULL;
+			urb->num_sgs = 0;
+		}
+
+		usb_free_urb(urb);
+	}
+	if (!list_empty(&priv->list))
+		list_del(&priv->list);
+	if (priv->sgl)
+		sgl_free(priv->sgl);
+	kfree(priv->urbs);
+	kmem_cache_free(stub_priv_cache, priv);
 }
 
 static struct stub_priv *stub_priv_pop(struct stub_device *sdev)
···
 void stub_device_cleanup_urbs(struct stub_device *sdev)
 {
 	struct stub_priv *priv;
-	struct urb *urb;
+	int i;
 
 	dev_dbg(&sdev->udev->dev, "Stub device cleaning up urbs\n");
 
 	while ((priv = stub_priv_pop(sdev))) {
-		urb = priv->urb;
-		dev_dbg(&sdev->udev->dev, "free urb seqnum %lu\n",
-			priv->seqnum);
-		usb_kill_urb(urb);
+		for (i = 0; i < priv->num_urbs; i++)
+			usb_kill_urb(priv->urbs[i]);
 
-		kmem_cache_free(stub_priv_cache, priv);
-
-		kfree(urb->transfer_buffer);
-		urb->transfer_buffer = NULL;
-
-		kfree(urb->setup_packet);
-		urb->setup_packet = NULL;
-
-		usb_free_urb(urb);
+		stub_free_priv_and_urb(priv);
 	}
 }
drivers/usb/usbip/stub_rx.c (+146 -58)

···
 #include <linux/kthread.h>
 #include <linux/usb.h>
 #include <linux/usb/hcd.h>
+#include <linux/scatterlist.h>
 
 #include "usbip_common.h"
 #include "stub.h"
···
 static int stub_recv_cmd_unlink(struct stub_device *sdev,
 				struct usbip_header *pdu)
 {
-	int ret;
+	int ret, i;
 	unsigned long flags;
 	struct stub_priv *priv;
···
 		 * so a driver in a client host will know the failure
 		 * of the unlink request ?
 		 */
-		ret = usb_unlink_urb(priv->urb);
-		if (ret != -EINPROGRESS)
-			dev_err(&priv->urb->dev->dev,
-				"failed to unlink a urb # %lu, ret %d\n",
-				priv->seqnum, ret);
-
+		for (i = priv->completed_urbs; i < priv->num_urbs; i++) {
+			ret = usb_unlink_urb(priv->urbs[i]);
+			if (ret != -EINPROGRESS)
+				dev_err(&priv->urbs[i]->dev->dev,
+					"failed to unlink %d/%d urb of seqnum %lu, ret %d\n",
+					i + 1, priv->num_urbs,
+					priv->seqnum, ret);
+		}
 		return 0;
 	}
···
 	urb->transfer_flags &= allowed;
 }
 
+static int stub_recv_xbuff(struct usbip_device *ud, struct stub_priv *priv)
+{
+	int ret;
+	int i;
+
+	for (i = 0; i < priv->num_urbs; i++) {
+		ret = usbip_recv_xbuff(ud, priv->urbs[i]);
+		if (ret < 0)
+			break;
+	}
+
+	return ret;
+}
+
 static void stub_recv_cmd_submit(struct stub_device *sdev,
 				 struct usbip_header *pdu)
 {
-	int ret;
 	struct stub_priv *priv;
 	struct usbip_device *ud = &sdev->ud;
 	struct usb_device *udev = sdev->udev;
+	struct scatterlist *sgl = NULL, *sg;
+	void *buffer = NULL;
+	unsigned long long buf_len;
+	int nents;
+	int num_urbs = 1;
 	int pipe = get_pipe(sdev, pdu);
+	int use_sg = pdu->u.cmd_submit.transfer_flags & URB_DMA_MAP_SG;
+	int support_sg = 1;
+	int np = 0;
+	int ret, i;
 
 	if (pipe == -1)
 		return;
···
 	if (!priv)
 		return;
 
-	/* setup a urb */
-	if (usb_pipeisoc(pipe))
-		priv->urb = usb_alloc_urb(pdu->u.cmd_submit.number_of_packets,
-					  GFP_KERNEL);
-	else
-		priv->urb = usb_alloc_urb(0, GFP_KERNEL);
-
-	if (!priv->urb) {
-		usbip_event_add(ud, SDEV_EVENT_ERROR_MALLOC);
-		return;
-	}
+	buf_len = (unsigned long long)pdu->u.cmd_submit.transfer_buffer_length;
 
 	/* allocate urb transfer buffer, if needed */
-	if (pdu->u.cmd_submit.transfer_buffer_length > 0) {
-		priv->urb->transfer_buffer =
-			kzalloc(pdu->u.cmd_submit.transfer_buffer_length,
-				GFP_KERNEL);
-		if (!priv->urb->transfer_buffer) {
-			usbip_event_add(ud, SDEV_EVENT_ERROR_MALLOC);
-			return;
+	if (buf_len) {
+		if (use_sg) {
+			sgl = sgl_alloc(buf_len, GFP_KERNEL, &nents);
+			if (!sgl)
+				goto err_malloc;
+		} else {
+			buffer = kzalloc(buf_len, GFP_KERNEL);
+			if (!buffer)
+				goto err_malloc;
 		}
 	}
 
-	/* copy urb setup packet */
-	priv->urb->setup_packet = kmemdup(&pdu->u.cmd_submit.setup, 8,
-					  GFP_KERNEL);
-	if (!priv->urb->setup_packet) {
-		dev_err(&udev->dev, "allocate setup_packet\n");
-		usbip_event_add(ud, SDEV_EVENT_ERROR_MALLOC);
-		return;
+	/* Check if the server's HCD supports SG */
+	if (use_sg && !udev->bus->sg_tablesize) {
+		/*
+		 * If the server's HCD doesn't support SG, break a single SG
+		 * request into several URBs and map each SG list entry to a
+		 * corresponding URB buffer. The previously allocated SG
+		 * list is stored in priv->sgl (if the server's HCD supports
+		 * SG, the SG list is stored only in urb->sg) and it is used
+		 * as an indicator that the server split a single SG request
+		 * into several URBs. Later, priv->sgl is used by
+		 * stub_complete() and stub_send_ret_submit() to reassemble
+		 * the divided URBs.
+		 */
+		support_sg = 0;
+		num_urbs = nents;
+		priv->completed_urbs = 0;
+		pdu->u.cmd_submit.transfer_flags &= ~URB_DMA_MAP_SG;
 	}
 
-	/* set other members from the base header of pdu */
-	priv->urb->context = (void *) priv;
-	priv->urb->dev = udev;
-	priv->urb->pipe = pipe;
-	priv->urb->complete = stub_complete;
+	/* allocate urb array */
+	priv->num_urbs = num_urbs;
+	priv->urbs = kmalloc_array(num_urbs, sizeof(*priv->urbs), GFP_KERNEL);
+	if (!priv->urbs)
+		goto err_urbs;
 
-	usbip_pack_pdu(pdu, priv->urb, USBIP_CMD_SUBMIT, 0);
+	/* setup a urb */
+	if (support_sg) {
+		if (usb_pipeisoc(pipe))
+			np = pdu->u.cmd_submit.number_of_packets;
 
+		priv->urbs[0] = usb_alloc_urb(np, GFP_KERNEL);
+		if (!priv->urbs[0])
+			goto err_urb;
 
-	if (usbip_recv_xbuff(ud, priv->urb) < 0)
+		if (buf_len) {
+			if (use_sg) {
+				priv->urbs[0]->sg = sgl;
+				priv->urbs[0]->num_sgs = nents;
+				priv->urbs[0]->transfer_buffer = NULL;
+			} else {
+				priv->urbs[0]->transfer_buffer = buffer;
+			}
+		}
+
+		/* copy urb setup packet */
+		priv->urbs[0]->setup_packet = kmemdup(&pdu->u.cmd_submit.setup,
+						      8, GFP_KERNEL);
+		if (!priv->urbs[0]->setup_packet) {
+			usbip_event_add(ud, SDEV_EVENT_ERROR_MALLOC);
+			return;
+		}
+
+		usbip_pack_pdu(pdu, priv->urbs[0], USBIP_CMD_SUBMIT, 0);
+	} else {
+		for_each_sg(sgl, sg, nents, i) {
+			priv->urbs[i] = usb_alloc_urb(0, GFP_KERNEL);
+			/* The URBs which were previously allocated will be
+			 * freed in stub_device_cleanup_urbs() if an error
+			 * occurs.
+			 */
+			if (!priv->urbs[i])
+				goto err_urb;
+
+			usbip_pack_pdu(pdu, priv->urbs[i], USBIP_CMD_SUBMIT, 0);
+			priv->urbs[i]->transfer_buffer = sg_virt(sg);
+			priv->urbs[i]->transfer_buffer_length = sg->length;
+		}
+		priv->sgl = sgl;
+	}
+
+	for (i = 0; i < num_urbs; i++) {
+		/* set other members from the base header of pdu */
+		priv->urbs[i]->context = (void *) priv;
+		priv->urbs[i]->dev = udev;
+		priv->urbs[i]->pipe = pipe;
+		priv->urbs[i]->complete = stub_complete;
+
+		/* no need to submit an intercepted request, but harmless? */
+		tweak_special_requests(priv->urbs[i]);
+
+		masking_bogus_flags(priv->urbs[i]);
+	}
+
+	if (stub_recv_xbuff(ud, priv) < 0)
 		return;
 
-	if (usbip_recv_iso(ud, priv->urb) < 0)
+	if (usbip_recv_iso(ud, priv->urbs[0]) < 0)
 		return;
 
-	/* no need to submit an intercepted request, but harmless? */
-	tweak_special_requests(priv->urb);
-
-	masking_bogus_flags(priv->urb);
 	/* urb is now ready to submit */
-	ret = usb_submit_urb(priv->urb, GFP_KERNEL);
+	for (i = 0; i < priv->num_urbs; i++) {
+		ret = usb_submit_urb(priv->urbs[i], GFP_KERNEL);
 
-	if (ret == 0)
-		usbip_dbg_stub_rx("submit urb ok, seqnum %u\n",
-				  pdu->base.seqnum);
-	else {
-		dev_err(&udev->dev, "submit_urb error, %d\n", ret);
-		usbip_dump_header(pdu);
-		usbip_dump_urb(priv->urb);
+		if (ret == 0)
+			usbip_dbg_stub_rx("submit urb ok, seqnum %u\n",
+					  pdu->base.seqnum);
+		else {
+			dev_err(&udev->dev, "submit_urb error, %d\n", ret);
+			usbip_dump_header(pdu);
+			usbip_dump_urb(priv->urbs[i]);
 
-		/*
-		 * Pessimistic.
-		 * This connection will be discarded.
-		 */
-		usbip_event_add(ud, SDEV_EVENT_ERROR_SUBMIT);
+			/*
+			 * Pessimistic.
+			 * This connection will be discarded.
+			 */
+			usbip_event_add(ud, SDEV_EVENT_ERROR_SUBMIT);
+			break;
+		}
 	}
 
 	usbip_dbg_stub_rx("Leave\n");
+	return;
+
+err_urb:
+	kfree(priv->urbs);
+err_urbs:
+	kfree(buffer);
+	sgl_free(sgl);
+err_malloc:
+	usbip_event_add(ud, SDEV_EVENT_ERROR_MALLOC);
 }
 
 /* recv a pdu */
drivers/usb/usbip/stub_tx.c (+77 -22)

···
 #include <linux/kthread.h>
 #include <linux/socket.h>
+#include <linux/scatterlist.h>
 
 #include "usbip_common.h"
 #include "stub.h"
-
-static void stub_free_priv_and_urb(struct stub_priv *priv)
-{
-	struct urb *urb = priv->urb;
-
-	kfree(urb->setup_packet);
-	urb->setup_packet = NULL;
-
-	kfree(urb->transfer_buffer);
-	urb->transfer_buffer = NULL;
-
-	list_del(&priv->list);
-	kmem_cache_free(stub_priv_cache, priv);
-	usb_free_urb(urb);
-}
 
 /* be in spin_lock_irqsave(&sdev->priv_lock, flags) */
 void stub_enqueue_ret_unlink(struct stub_device *sdev, __u32 seqnum,
···
 			"urb completion with non-zero status %d\n",
 			urb->status);
 		break;
 	}
 
+	/*
+	 * If the server breaks a single SG request into several URBs, the
+	 * URBs must be reassembled before sending the completed URB to vhci.
+	 * Don't wake up the tx thread until all the URBs are completed.
+	 */
+	if (priv->sgl) {
+		priv->completed_urbs++;
+
+		/* Only save the first error status */
+		if (urb->status && !priv->urb_status)
+			priv->urb_status = urb->status;
+
+		if (priv->completed_urbs < priv->num_urbs)
+			return;
+	}
+
 	/* link a urb to the queue of tx. */
···
 	size_t total_size = 0;
 
 	while ((priv = dequeue_from_priv_tx(sdev)) != NULL) {
-		int ret;
-		struct urb *urb = priv->urb;
+		struct urb *urb = priv->urbs[0];
 		struct usbip_header pdu_header;
 		struct usbip_iso_packet_descriptor *iso_buffer = NULL;
 		struct kvec *iov = NULL;
+		struct scatterlist *sg;
+		u32 actual_length = 0;
 		int iovnum = 0;
+		int ret;
+		int i;
 
 		txsize = 0;
 		memset(&pdu_header, 0, sizeof(pdu_header));
 		memset(&msg, 0, sizeof(msg));
 
-		if (urb->actual_length > 0 && !urb->transfer_buffer) {
+		if (urb->actual_length > 0 && !urb->transfer_buffer &&
+		    !urb->num_sgs) {
 			dev_err(&sdev->udev->dev,
 				"urb: actual_length %d transfer_buffer null\n",
 				urb->actual_length);
···
 		if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS)
 			iovnum = 2 + urb->number_of_packets;
+		else if (usb_pipein(urb->pipe) && urb->actual_length > 0 &&
+			urb->num_sgs)
+			iovnum = 1 + urb->num_sgs;
+		else if (usb_pipein(urb->pipe) && priv->sgl)
+			iovnum = 1 + priv->num_urbs;
 		else
 			iovnum = 2;
···
 		setup_ret_submit_pdu(&pdu_header, urb);
 		usbip_dbg_stub_tx("setup txdata seqnum: %d\n",
 				  pdu_header.base.seqnum);
+
+		if (priv->sgl) {
+			for (i = 0; i < priv->num_urbs; i++)
+				actual_length += priv->urbs[i]->actual_length;
+
+			pdu_header.u.ret_submit.status = priv->urb_status;
+			pdu_header.u.ret_submit.actual_length = actual_length;
+		}
+
 		usbip_header_correct_endian(&pdu_header, 1);
 
 		iov[iovnum].iov_base = &pdu_header;
 		iov[iovnum].iov_len = sizeof(pdu_header);
 		txsize += sizeof(pdu_header);
···
 		/* 2. setup transfer buffer */
-		if (usb_pipein(urb->pipe) &&
+		if (usb_pipein(urb->pipe) && priv->sgl) {
+			/* If the server split a single SG request into several
+			 * URBs because the server's HCD doesn't support SG,
+			 * reassemble the split URB buffers into a single
+			 * return command.
+			 */
+			for (i = 0; i < priv->num_urbs; i++) {
+				iov[iovnum].iov_base =
+					priv->urbs[i]->transfer_buffer;
+				iov[iovnum].iov_len =
+					priv->urbs[i]->actual_length;
+				iovnum++;
+			}
+			txsize += actual_length;
+		} else if (usb_pipein(urb->pipe) &&
 		    usb_pipetype(urb->pipe) != PIPE_ISOCHRONOUS &&
 		    urb->actual_length > 0) {
-			iov[iovnum].iov_base = urb->transfer_buffer;
-			iov[iovnum].iov_len = urb->actual_length;
-			iovnum++;
+			if (urb->num_sgs) {
+				unsigned int copy = urb->actual_length;
+				int size;
+
+				for_each_sg(urb->sg, sg, urb->num_sgs, i) {
+					if (copy == 0)
+						break;
+
+					if (copy < sg->length)
+						size = copy;
+					else
+						size = sg->length;
+
+					iov[iovnum].iov_base = sg_virt(sg);
+					iov[iovnum].iov_len = size;
+
+					iovnum++;
+					copy -= size;
+				}
+			} else {
+				iov[iovnum].iov_base = urb->transfer_buffer;
+				iov[iovnum].iov_len = urb->actual_length;
+				iovnum++;
+			}
 			txsize += urb->actual_length;
 		} else if (usb_pipein(urb->pipe) &&
 			   usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
drivers/usb/usbip/usbip_common.c (+41 -18)

···
 /* some members of urb must be substituted before. */
 int usbip_recv_xbuff(struct usbip_device *ud, struct urb *urb)
 {
-	int ret;
+	struct scatterlist *sg;
+	int ret = 0;
+	int recv;
 	int size;
+	int copy;
+	int i;
 
 	if (ud->side == USBIP_STUB || ud->side == USBIP_VUDC) {
 		/* the direction of urb must be OUT. */
···
 	if (!(size > 0))
 		return 0;
 
-	if (size > urb->transfer_buffer_length) {
+	if (size > urb->transfer_buffer_length)
 		/* should not happen, probably malicious packet */
-		if (ud->side == USBIP_STUB) {
-			usbip_event_add(ud, SDEV_EVENT_ERROR_TCP);
-			return 0;
-		} else {
-			usbip_event_add(ud, VDEV_EVENT_ERROR_TCP);
-			return -EPIPE;
-		}
-	}
+		goto error;
 
-	ret = usbip_recv(ud->tcp_socket, urb->transfer_buffer, size);
-	if (ret != size) {
-		dev_err(&urb->dev->dev, "recv xbuf, %d\n", ret);
-		if (ud->side == USBIP_STUB || ud->side == USBIP_VUDC) {
-			usbip_event_add(ud, SDEV_EVENT_ERROR_TCP);
-		} else {
-			usbip_event_add(ud, VDEV_EVENT_ERROR_TCP);
-			return -EPIPE;
+	if (urb->num_sgs) {
+		copy = size;
+		for_each_sg(urb->sg, sg, urb->num_sgs, i) {
+			int recv_size;
+
+			if (copy < sg->length)
+				recv_size = copy;
+			else
+				recv_size = sg->length;
+
+			recv = usbip_recv(ud->tcp_socket, sg_virt(sg),
+					  recv_size);
+
+			if (recv != recv_size)
+				goto error;
+
+			copy -= recv;
+			ret += recv;
 		}
+
+		if (ret != size)
+			goto error;
+	} else {
+		ret = usbip_recv(ud->tcp_socket, urb->transfer_buffer, size);
+		if (ret != size)
+			goto error;
 	}
 
 	return ret;
+
+error:
+	dev_err(&urb->dev->dev, "recv xbuf, %d\n", ret);
+	if (ud->side == USBIP_STUB || ud->side == USBIP_VUDC)
+		usbip_event_add(ud, SDEV_EVENT_ERROR_TCP);
+	else
+		usbip_event_add(ud, VDEV_EVENT_ERROR_TCP);
+
+	return -EPIPE;
 }
 EXPORT_SYMBOL_GPL(usbip_recv_xbuff);
drivers/usb/usbip/vhci_hcd.c (+11 -1)

···
 	}
 	vdev = &vhci_hcd->vdev[portnum-1];
 
-	if (!urb->transfer_buffer && urb->transfer_buffer_length) {
+	if (!urb->transfer_buffer && !urb->num_sgs &&
+	     urb->transfer_buffer_length) {
 		dev_dbg(dev, "Null URB transfer buffer\n");
 		return -EINVAL;
 	}
···
 		hcd->speed = HCD_USB3;
 		hcd->self.root_hub->speed = USB_SPEED_SUPER;
 	}
+
+	/*
+	 * Support SG.
+	 * sg_tablesize is an arbitrary value to alleviate memory pressure
+	 * on the host.
+	 */
+	hcd->self.sg_tablesize = 32;
+	hcd->self.no_sg_constraint = 1;
+
 	return 0;
 }
drivers/usb/usbip/vhci_rx.c (+3)

···
 	if (usbip_dbg_flag_vhci_rx)
 		usbip_dump_urb(urb);
 
+	if (urb->num_sgs)
+		urb->transfer_flags &= ~URB_DMA_MAP_SG;
+
 	usbip_dbg_vhci_rx("now giveback urb %u\n", pdu->base.seqnum);
 
 	spin_lock_irqsave(&vhci->lock, flags);
+54 -12
drivers/usb/usbip/vhci_tx.c
··· 5 5 6 6 #include <linux/kthread.h> 7 7 #include <linux/slab.h> 8 + #include <linux/scatterlist.h> 8 9 9 10 #include "usbip_common.h" 10 11 #include "vhci.h" ··· 51 50 52 51 static int vhci_send_cmd_submit(struct vhci_device *vdev) 53 52 { 53 + struct usbip_iso_packet_descriptor *iso_buffer = NULL; 54 54 struct vhci_priv *priv = NULL; 55 + struct scatterlist *sg; 55 56 56 57 struct msghdr msg; 57 - struct kvec iov[3]; 58 + struct kvec *iov; 58 59 size_t txsize; 59 60 60 61 size_t total_size = 0; 62 + int iovnum; 63 + int err = -ENOMEM; 64 + int i; 61 65 62 66 while ((priv = dequeue_from_priv_tx(vdev)) != NULL) { 63 67 int ret; 64 68 struct urb *urb = priv->urb; 65 69 struct usbip_header pdu_header; 66 - struct usbip_iso_packet_descriptor *iso_buffer = NULL; 67 70 68 71 txsize = 0; 69 72 memset(&pdu_header, 0, sizeof(pdu_header)); ··· 77 72 usbip_dbg_vhci_tx("setup txdata urb seqnum %lu\n", 78 73 priv->seqnum); 79 74 75 + if (urb->num_sgs && usb_pipeout(urb->pipe)) 76 + iovnum = 2 + urb->num_sgs; 77 + else 78 + iovnum = 3; 79 + 80 + iov = kcalloc(iovnum, sizeof(*iov), GFP_KERNEL); 81 + if (!iov) { 82 + usbip_event_add(&vdev->ud, SDEV_EVENT_ERROR_MALLOC); 83 + return -ENOMEM; 84 + } 85 + 86 + if (urb->num_sgs) 87 + urb->transfer_flags |= URB_DMA_MAP_SG; 88 + 80 89 /* 1. setup usbip_header */ 81 90 setup_cmd_submit_pdu(&pdu_header, urb); 82 91 usbip_header_correct_endian(&pdu_header, 1); 92 + iovnum = 0; 83 93 84 - iov[0].iov_base = &pdu_header; 85 - iov[0].iov_len = sizeof(pdu_header); 94 + iov[iovnum].iov_base = &pdu_header; 95 + iov[iovnum].iov_len = sizeof(pdu_header); 86 96 txsize += sizeof(pdu_header); 97 + iovnum++; 87 98 88 99 /* 2. 
setup transfer buffer */ 89 100 if (!usb_pipein(urb->pipe) && urb->transfer_buffer_length > 0) { 90 - iov[1].iov_base = urb->transfer_buffer; 91 - iov[1].iov_len = urb->transfer_buffer_length; 101 + if (urb->num_sgs && 102 + !usb_endpoint_xfer_isoc(&urb->ep->desc)) { 103 + for_each_sg(urb->sg, sg, urb->num_sgs, i) { 104 + iov[iovnum].iov_base = sg_virt(sg); 105 + iov[iovnum].iov_len = sg->length; 106 + iovnum++; 107 + } 108 + } else { 109 + iov[iovnum].iov_base = urb->transfer_buffer; 110 + iov[iovnum].iov_len = 111 + urb->transfer_buffer_length; 112 + iovnum++; 113 + } 92 114 txsize += urb->transfer_buffer_length; 93 115 } 94 116 ··· 127 95 if (!iso_buffer) { 128 96 usbip_event_add(&vdev->ud, 129 97 SDEV_EVENT_ERROR_MALLOC); 130 - return -1; 98 + goto err_iso_buffer; 131 99 } 132 100 133 - iov[2].iov_base = iso_buffer; 134 - iov[2].iov_len = len; 101 + iov[iovnum].iov_base = iso_buffer; 102 + iov[iovnum].iov_len = len; 103 + iovnum++; 135 104 txsize += len; 136 105 } 137 106 138 - ret = kernel_sendmsg(vdev->ud.tcp_socket, &msg, iov, 3, txsize); 107 + ret = kernel_sendmsg(vdev->ud.tcp_socket, &msg, iov, iovnum, 108 + txsize); 139 109 if (ret != txsize) { 140 110 pr_err("sendmsg failed!, ret=%d for %zd\n", ret, 141 111 txsize); 142 - kfree(iso_buffer); 143 112 usbip_event_add(&vdev->ud, VDEV_EVENT_ERROR_TCP); 144 - return -1; 113 + err = -EPIPE; 114 + goto err_tx; 145 115 } 146 116 117 + kfree(iov); 147 118 kfree(iso_buffer); 148 119 usbip_dbg_vhci_tx("send txdata\n"); 149 120 ··· 154 119 } 155 120 156 121 return total_size; 122 + 123 + err_tx: 124 + kfree(iso_buffer); 125 + err_iso_buffer: 126 + kfree(iov); 127 + 128 + return err; 157 129 } 158 130 159 131 static struct vhci_unlink *dequeue_from_unlink_tx(struct vhci_device *vdev)