Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'dwc3-for-v3.7' of git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb into usb-next

usb: dwc3: patches for v3.7 merge window

Some much-needed changes for our dwc3 driver. First, there's a
rework of the ep0 handling due to a silicon issue we uncovered
which affects all users of this IP core (a missing
XferNotReady(DATA) event under some conditions). The issue
shows up as SETUP transfers that would never complete, causing
us to fail TD 7.06 of the Link Layer Test from USB-IF and
LeCroy's USB3 Exerciser.

We also fix a long-standing bug in the EP0 enable sequencing
where we weren't setting a particular bit (Ignore Sequence
Number). Since we never saw any problems caused by that, it
didn't deserve being sent to the stable tree.

This pull request also fixes Burst Size initialization, which
should be done only in SuperSpeed mode; we were mistakenly
setting Burst Size to the maximum value in non-SuperSpeed
modes. Again, since we never saw any problems caused by that,
we're not sending this patch to stable.

There's also a memory-ordering fix for the usage of bitmaps in
the dwc3 driver.
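That fix adds a barrier before clearing the device-ID bit, so that all writes made while the ID was held are visible before the ID is released. As a rough userspace analogue (a sketch using plain C11 atomics instead of the kernel's bitmap helpers; `get_device_id`/`put_device_id` here are simplified stand-ins, not the driver's actual code):

```c
#include <stdatomic.h>

/* Simplified stand-in for the dwc3 device-ID bitmap. Release ordering on
 * the clear plays the role the kernel barrier does in the real driver:
 * writes made by the ID's owner become visible before the bit clears. */
static atomic_ulong dev_ids;

int get_device_id(void)
{
	for (int id = 0; id < 64; id++) {
		unsigned long mask = 1UL << id;
		unsigned long old = atomic_fetch_or_explicit(&dev_ids, mask,
				memory_order_acquire);
		if (!(old & mask))
			return id;	/* bit was clear: we now own it */
	}
	return -1;			/* no free ID */
}

void put_device_id(int id)
{
	/* release pairs with the acquire in get_device_id() */
	atomic_fetch_and_explicit(&dev_ids, ~(1UL << id),
			memory_order_release);
}
```

The acquire/release pair guarantees that a later caller of get_device_id() who reuses the ID observes all writes the previous owner made before releasing it.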

You will also find fixes for some sparse warnings, a fix for
missed isochronous packets when the endpoint is already busy,
and a fix for the synchronization delay in
dwc3_stop_active_transfer().

+170 -155
+2 -1
drivers/usb/dwc3/core.c
···
 	ret = test_bit(id, dwc3_devs);
 	WARN(!ret, "dwc3: ID %d not in use\n", id);
+	smp_mb__before_clear_bit();
 	clear_bit(id, dwc3_devs);
 }
 EXPORT_SYMBOL_GPL(dwc3_put_device_id);
···
 		return -ENOMEM;
 	}

-	regs = devm_ioremap(dev, res->start, resource_size(res));
+	regs = devm_ioremap_nocache(dev, res->start, resource_size(res));
 	if (!regs) {
 		dev_err(dev, "ioremap failed\n");
 		return -ENOMEM;
-2
drivers/usb/dwc3/core.h
···
 enum dwc3_ep0_next {
 	DWC3_EP0_UNKNOWN = 0,
 	DWC3_EP0_COMPLETE,
-	DWC3_EP0_NRDY_SETUP,
 	DWC3_EP0_NRDY_DATA,
 	DWC3_EP0_NRDY_STATUS,
 };
···
 #define DEPEVT_STREAMEVT_NOTFOUND	2

 /* Control-only Status */
-#define DEPEVT_STATUS_CONTROL_SETUP	0
 #define DEPEVT_STATUS_CONTROL_DATA	1
 #define DEPEVT_STATUS_CONTROL_STATUS	2
+114 -103
drivers/usb/dwc3/ep0.c
···
 		struct dwc3_request *req)
 {
 	struct dwc3 *dwc = dep->dwc;
-	int ret = 0;

 	req->request.actual = 0;
 	req->request.status = -EINPROGRESS;
···

 		dep->flags &= ~(DWC3_EP_PENDING_REQUEST |
 				DWC3_EP0_DIR_IN);
-	} else if (dwc->delayed_status) {
+
+		return 0;
+	}
+
+	/*
+	 * In case gadget driver asked us to delay the STATUS phase,
+	 * handle it here.
+	 */
+	if (dwc->delayed_status) {
+		unsigned direction;
+
+		direction = !dwc->ep0_expect_in;
 		dwc->delayed_status = false;

 		if (dwc->ep0state == EP0_STATUS_PHASE)
-			__dwc3_ep0_do_control_status(dwc, dwc->eps[1]);
+			__dwc3_ep0_do_control_status(dwc, dwc->eps[direction]);
 		else
 			dev_dbg(dwc->dev, "too early for delayed status\n");
+
+		return 0;
 	}

-	return ret;
+	/*
+	 * Unfortunately we have uncovered a limitation wrt the Data Phase.
+	 *
+	 * Section 9.4 says we can wait for the XferNotReady(DATA) event to
+	 * come before issueing Start Transfer command, but if we do, we will
+	 * miss situations where the host starts another SETUP phase instead of
+	 * the DATA phase. Such cases happen at least on TD.7.6 of the Link
+	 * Layer Compliance Suite.
+	 *
+	 * The problem surfaces due to the fact that in case of back-to-back
+	 * SETUP packets there will be no XferNotReady(DATA) generated and we
+	 * will be stuck waiting for XferNotReady(DATA) forever.
+	 *
+	 * By looking at tables 9-13 and 9-14 of the Databook, we can see that
+	 * it tells us to start Data Phase right away. It also mentions that if
+	 * we receive a SETUP phase instead of the DATA phase, core will issue
+	 * XferComplete for the DATA phase, before actually initiating it in
+	 * the wire, with the TRB's status set to "SETUP_PENDING". Such status
+	 * can only be used to print some debugging logs, as the core expects
+	 * us to go through to the STATUS phase and start a CONTROL_STATUS TRB,
+	 * just so it completes right away, without transferring anything and,
+	 * only then, we can go back to the SETUP phase.
+	 *
+	 * Because of this scenario, SNPS decided to change the programming
+	 * model of control transfers and support on-demand transfers only for
+	 * the STATUS phase. To fix the issue we have now, we will always wait
+	 * for gadget driver to queue the DATA phase's struct usb_request, then
+	 * start it right away.
+	 *
+	 * If we're actually in a 2-stage transfer, we will wait for
+	 * XferNotReady(STATUS).
+	 */
+	if (dwc->three_stage_setup) {
+		unsigned direction;
+
+		direction = dwc->ep0_expect_in;
+		dwc->ep0state = EP0_DATA_PHASE;
+
+		__dwc3_ep0_do_control_data(dwc, dwc->eps[direction], req);
+
+		dep->flags &= ~DWC3_EP0_DIR_IN;
+	}
+
+	return 0;
 }

 int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct usb_request *request,
···

 static void dwc3_ep0_stall_and_restart(struct dwc3 *dwc)
 {
-	struct dwc3_ep *dep = dwc->eps[0];
+	struct dwc3_ep *dep;
+
+	/* reinitialize physical ep1 */
+	dep = dwc->eps[1];
+	dep->flags = DWC3_EP_ENABLED;

 	/* stall is always issued on EP0 */
+	dep = dwc->eps[0];
 	__dwc3_gadget_ep_set_halt(dep, 1);
 	dep->flags = DWC3_EP_ENABLED;
 	dwc->delayed_status = false;
···
 	struct dwc3_trb *trb;
 	struct dwc3_ep *ep0;
 	u32 transferred;
+	u32 status;
 	u32 length;
 	u8 epnum;

···
 	ur = &r->request;

 	trb = dwc->ep0_trb;
+
+	status = DWC3_TRB_SIZE_TRBSTS(trb->size);
+	if (status == DWC3_TRBSTS_SETUP_PENDING) {
+		dev_dbg(dwc->dev, "Setup Pending received\n");
+
+		if (r)
+			dwc3_gadget_giveback(ep0, r, -ECONNRESET);
+
+		return;
+	}
+
 	length = trb->size & DWC3_TRB_SIZE_MASK;

 	if (dwc->ep0_bounced) {
···
 {
 	struct dwc3_request *r;
 	struct dwc3_ep *dep;
+	struct dwc3_trb *trb;
+	u32 status;

 	dep = dwc->eps[0];
+	trb = dwc->ep0_trb;

 	if (!list_empty(&dep->request_list)) {
 		r = next_request(&dep->request_list);
···
 			return;
 		}
 	}
+
+	status = DWC3_TRB_SIZE_TRBSTS(trb->size);
+	if (status == DWC3_TRBSTS_SETUP_PENDING)
+		dev_dbg(dwc->dev, "Setup Pending received\n");

 	dwc->ep0state = EP0_SETUP_PHASE;
 	dwc3_ep0_out_start(dwc);
···
 	default:
 		WARN(true, "UNKNOWN ep0state %d\n", dwc->ep0state);
 	}
-}
-
-static void dwc3_ep0_do_control_setup(struct dwc3 *dwc,
-		const struct dwc3_event_depevt *event)
-{
-	dwc3_ep0_out_start(dwc);
 }

 static void __dwc3_ep0_do_control_data(struct dwc3 *dwc,
···
 	WARN_ON(ret < 0);
 }

-static void dwc3_ep0_do_control_data(struct dwc3 *dwc,
-		const struct dwc3_event_depevt *event)
-{
-	struct dwc3_ep *dep;
-	struct dwc3_request *req;
-
-	dep = dwc->eps[0];
-
-	if (list_empty(&dep->request_list)) {
-		dev_vdbg(dwc->dev, "pending request for EP0 Data phase\n");
-		dep->flags |= DWC3_EP_PENDING_REQUEST;
-
-		if (event->endpoint_number)
-			dep->flags |= DWC3_EP0_DIR_IN;
-		return;
-	}
-
-	req = next_request(&dep->request_list);
-	dep = dwc->eps[event->endpoint_number];
-
-	__dwc3_ep0_do_control_data(dwc, dep, req);
-}
-
 static int dwc3_ep0_start_control_status(struct dwc3_ep *dep)
 {
 	struct dwc3 *dwc = dep->dwc;
···
 	__dwc3_ep0_do_control_status(dwc, dep);
 }

+static void dwc3_ep0_end_control_data(struct dwc3 *dwc, struct dwc3_ep *dep)
+{
+	struct dwc3_gadget_ep_cmd_params params;
+	u32 cmd;
+	int ret;
+
+	if (!dep->resource_index)
+		return;
+
+	cmd = DWC3_DEPCMD_ENDTRANSFER;
+	cmd |= DWC3_DEPCMD_CMDIOC;
+	cmd |= DWC3_DEPCMD_PARAM(dep->resource_index);
+	memset(&params, 0, sizeof(params));
+	ret = dwc3_send_gadget_ep_cmd(dwc, dep->number, cmd, &params);
+	WARN_ON_ONCE(ret);
+	dep->resource_index = 0;
+}
+
 static void dwc3_ep0_xfernotready(struct dwc3 *dwc,
 		const struct dwc3_event_depevt *event)
 {
 	dwc->setup_packet_pending = true;

-	/*
-	 * This part is very tricky: If we have just handled
-	 * XferNotReady(Setup) and we're now expecting a
-	 * XferComplete but, instead, we receive another
-	 * XferNotReady(Setup), we should STALL and restart
-	 * the state machine.
-	 *
-	 * In all other cases, we just continue waiting
-	 * for the XferComplete event.
-	 *
-	 * We are a little bit unsafe here because we're
-	 * not trying to ensure that last event was, indeed,
-	 * XferNotReady(Setup).
-	 *
-	 * Still, we don't expect any condition where that
-	 * should happen and, even if it does, it would be
-	 * another error condition.
-	 */
-	if (dwc->ep0_next_event == DWC3_EP0_COMPLETE) {
-		switch (event->status) {
-		case DEPEVT_STATUS_CONTROL_SETUP:
-			dev_vdbg(dwc->dev, "Unexpected XferNotReady(Setup)\n");
-			dwc3_ep0_stall_and_restart(dwc);
-			break;
-		case DEPEVT_STATUS_CONTROL_DATA:
-			/* FALLTHROUGH */
-		case DEPEVT_STATUS_CONTROL_STATUS:
-			/* FALLTHROUGH */
-		default:
-			dev_vdbg(dwc->dev, "waiting for XferComplete\n");
-		}
-
-		return;
-	}
-
 	switch (event->status) {
-	case DEPEVT_STATUS_CONTROL_SETUP:
-		dev_vdbg(dwc->dev, "Control Setup\n");
-
-		dwc->ep0state = EP0_SETUP_PHASE;
-
-		dwc3_ep0_do_control_setup(dwc, event);
-		break;
-
 	case DEPEVT_STATUS_CONTROL_DATA:
 		dev_vdbg(dwc->dev, "Control Data\n");

-		dwc->ep0state = EP0_DATA_PHASE;
-
-		if (dwc->ep0_next_event != DWC3_EP0_NRDY_DATA) {
-			dev_vdbg(dwc->dev, "Expected %d got %d\n",
-					dwc->ep0_next_event,
-					DWC3_EP0_NRDY_DATA);
-
-			dwc3_ep0_stall_and_restart(dwc);
-			return;
-		}
-
 		/*
-		 * One of the possible error cases is when Host _does_
-		 * request for Data Phase, but it does so on the wrong
-		 * direction.
+		 * We already have a DATA transfer in the controller's cache,
+		 * if we receive a XferNotReady(DATA) we will ignore it, unless
+		 * it's for the wrong direction.
 		 *
-		 * Here, we already know ep0_next_event is DATA (see above),
-		 * so we only need to check for direction.
+		 * In that case, we must issue END_TRANSFER command to the Data
+		 * Phase we already have started and issue SetStall on the
+		 * control endpoint.
 		 */
 		if (dwc->ep0_expect_in != event->endpoint_number) {
+			struct dwc3_ep *dep = dwc->eps[dwc->ep0_expect_in];
+
 			dev_vdbg(dwc->dev, "Wrong direction for Data phase\n");
+			dwc3_ep0_end_control_data(dwc, dep);
 			dwc3_ep0_stall_and_restart(dwc);
 			return;
 		}

-		dwc3_ep0_do_control_data(dwc, event);
 		break;

 	case DEPEVT_STATUS_CONTROL_STATUS:
+		if (dwc->ep0_next_event != DWC3_EP0_NRDY_STATUS)
+			return;
+
 		dev_vdbg(dwc->dev, "Control Status\n");

 		dwc->ep0state = EP0_STATUS_PHASE;
-
-		if (dwc->ep0_next_event != DWC3_EP0_NRDY_STATUS) {
-			dev_vdbg(dwc->dev, "Expected %d got %d\n",
-					dwc->ep0_next_event,
-					DWC3_EP0_NRDY_STATUS);
-
-			dwc3_ep0_stall_and_restart(dwc);
-			return;
-		}

 		if (dwc->delayed_status) {
 			WARN_ON_ONCE(event->endpoint_number != 1);
+54 -49
drivers/usb/dwc3/gadget.c
···

 static int dwc3_gadget_set_ep_config(struct dwc3 *dwc, struct dwc3_ep *dep,
 		const struct usb_endpoint_descriptor *desc,
-		const struct usb_ss_ep_comp_descriptor *comp_desc)
+		const struct usb_ss_ep_comp_descriptor *comp_desc,
+		bool ignore)
 {
 	struct dwc3_gadget_ep_cmd_params params;

 	memset(&params, 0x00, sizeof(params));

 	params.param0 = DWC3_DEPCFG_EP_TYPE(usb_endpoint_type(desc))
-		| DWC3_DEPCFG_MAX_PACKET_SIZE(usb_endpoint_maxp(desc))
-		| DWC3_DEPCFG_BURST_SIZE(dep->endpoint.maxburst - 1);
+		| DWC3_DEPCFG_MAX_PACKET_SIZE(usb_endpoint_maxp(desc));
+
+	/* Burst size is only needed in SuperSpeed mode */
+	if (dwc->gadget.speed == USB_SPEED_SUPER) {
+		u32 burst = dep->endpoint.maxburst - 1;
+
+		params.param0 |= DWC3_DEPCFG_BURST_SIZE(burst);
+	}
+
+	if (ignore)
+		params.param0 |= DWC3_DEPCFG_IGN_SEQ_NUM;

 	params.param1 = DWC3_DEPCFG_XFER_COMPLETE_EN
 		| DWC3_DEPCFG_XFER_NOT_READY_EN;
···
 */
 static int __dwc3_gadget_ep_enable(struct dwc3_ep *dep,
 		const struct usb_endpoint_descriptor *desc,
-		const struct usb_ss_ep_comp_descriptor *comp_desc)
+		const struct usb_ss_ep_comp_descriptor *comp_desc,
+		bool ignore)
 {
 	struct dwc3 *dwc = dep->dwc;
 	u32 reg;
···
 		return ret;
 	}

-	ret = dwc3_gadget_set_ep_config(dwc, dep, desc, comp_desc);
+	ret = dwc3_gadget_set_ep_config(dwc, dep, desc, comp_desc, ignore);
 	if (ret)
 		return ret;
···
 	if (!list_empty(&dep->req_queued)) {
 		dwc3_stop_active_transfer(dwc, dep->number);

-		/*
-		 * NOTICE: We are violating what the Databook says about the
-		 * EndTransfer command. Ideally we would _always_ wait for the
-		 * EndTransfer Command Completion IRQ, but that's causing too
-		 * much trouble synchronizing between us and gadget driver.
-		 *
-		 * We have discussed this with the IP Provider and it was
-		 * suggested to giveback all requests here, but give HW some
-		 * extra time to synchronize with the interconnect. We're using
-		 * an arbitraty 100us delay for that.
-		 *
-		 * Note also that a similar handling was tested by Synopsys
-		 * (thanks a lot Paul) and nothing bad has come out of it.
-		 * In short, what we're doing is:
-		 *
-		 * - Issue EndTransfer WITH CMDIOC bit set
-		 * - Wait 100us
-		 * - giveback all requests to gadget driver
-		 */
-		udelay(100);
-
+		/* - giveback all requests to gadget driver */
 		while (!list_empty(&dep->req_queued)) {
 			req = next_request(&dep->req_queued);
···
 	dep = to_dwc3_ep(ep);
 	dwc = dep->dwc;

+	if (dep->flags & DWC3_EP_ENABLED) {
+		dev_WARN_ONCE(dwc->dev, true, "%s is already enabled\n",
+				dep->name);
+		return 0;
+	}
+
 	switch (usb_endpoint_type(desc)) {
 	case USB_ENDPOINT_XFER_CONTROL:
 		strlcat(dep->name, "-control", sizeof(dep->name));
···
 		dev_err(dwc->dev, "invalid endpoint transfer type\n");
 	}

-	if (dep->flags & DWC3_EP_ENABLED) {
-		dev_WARN_ONCE(dwc->dev, true, "%s is already enabled\n",
-				dep->name);
-		return 0;
-	}
-
 	dev_vdbg(dwc->dev, "Enabling %s\n", dep->name);

 	spin_lock_irqsave(&dwc->lock, flags);
-	ret = __dwc3_gadget_ep_enable(dep, desc, ep->comp_desc);
+	ret = __dwc3_gadget_ep_enable(dep, desc, ep->comp_desc, false);
 	spin_unlock_irqrestore(&dwc->lock, flags);

 	return ret;
···
 	 *
 	 */
 	if (dep->flags & DWC3_EP_PENDING_REQUEST) {
-		int ret;
-
 		ret = __dwc3_gadget_kick_transfer(dep, 0, true);
-		if (ret && ret != -EBUSY) {
-			struct dwc3 *dwc = dep->dwc;
-
+		if (ret && ret != -EBUSY)
 			dev_dbg(dwc->dev, "%s: failed to kick transfers\n",
 					dep->name);
-		}
 	}

 	/*
···
 	 * core may not see the modified TRB(s).
 	 */
 	if (usb_endpoint_xfer_isoc(dep->endpoint.desc) &&
-			(dep->flags & DWC3_EP_BUSY)) {
+			(dep->flags & DWC3_EP_BUSY) &&
+			!(dep->flags & DWC3_EP_MISSED_ISOC)) {
 		WARN_ON_ONCE(!dep->resource_index);
 		ret = __dwc3_gadget_kick_transfer(dep, dep->resource_index,
 				false);
-		if (ret && ret != -EBUSY) {
-			struct dwc3 *dwc = dep->dwc;
-
+		if (ret && ret != -EBUSY)
 			dev_dbg(dwc->dev, "%s: failed to kick transfers\n",
 					dep->name);
-		}
 	}

 	/*
···
 	dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(512);

 	dep = dwc->eps[0];
-	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL);
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false);
 	if (ret) {
 		dev_err(dwc->dev, "failed to enable %s\n", dep->name);
 		goto err0;
 	}

 	dep = dwc->eps[1];
-	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL);
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false);
 	if (ret) {
 		dev_err(dwc->dev, "failed to enable %s\n", dep->name);
 		goto err1;
···
 	int i;

 	for (i = 0; i < DWC3_ENDPOINTS_NUM; i++) {
-		struct dwc3_ep *dep = dwc->eps[i];
+		dep = dwc->eps[i];

 		if (!(dep->flags & DWC3_EP_ENABLED))
 			continue;
···
 	if (!dep->resource_index)
 		return;

+	/*
+	 * NOTICE: We are violating what the Databook says about the
+	 * EndTransfer command. Ideally we would _always_ wait for the
+	 * EndTransfer Command Completion IRQ, but that's causing too
+	 * much trouble synchronizing between us and gadget driver.
+	 *
+	 * We have discussed this with the IP Provider and it was
+	 * suggested to giveback all requests here, but give HW some
+	 * extra time to synchronize with the interconnect. We're using
+	 * an arbitraty 100us delay for that.
+	 *
+	 * Note also that a similar handling was tested by Synopsys
+	 * (thanks a lot Paul) and nothing bad has come out of it.
+	 * In short, what we're doing is:
+	 *
+	 * - Issue EndTransfer WITH CMDIOC bit set
+	 * - Wait 100us
+	 */
+
 	cmd = DWC3_DEPCMD_ENDTRANSFER;
 	cmd |= DWC3_DEPCMD_HIPRI_FORCERM | DWC3_DEPCMD_CMDIOC;
 	cmd |= DWC3_DEPCMD_PARAM(dep->resource_index);
···
 	ret = dwc3_send_gadget_ep_cmd(dwc, dep->number, cmd, &params);
 	WARN_ON_ONCE(ret);
 	dep->resource_index = 0;
+
+	udelay(100);
 }

 static void dwc3_stop_active_transfers(struct dwc3 *dwc)
···
 	}

 	dep = dwc->eps[0];
-	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL);
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, true);
 	if (ret) {
 		dev_err(dwc->dev, "failed to enable %s\n", dep->name);
 		return;
 	}

 	dep = dwc->eps[1];
-	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL);
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, true);
 	if (ret) {
 		dev_err(dwc->dev, "failed to enable %s\n", dep->name);
 		return;