Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'thunderbolt-for-v5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt into usb-next

Mika writes:

thunderbolt: Changes for v5.13 merge window

This includes the following Thunderbolt/USB4 changes for the v5.13 merge window:

* Debugfs improvements

* Align the inter-domain (peer-to-peer) support with the USB4
inter-domain spec for better interoperability

* Add support for USB4 DROM and the new product descriptor

* More KUnit tests

* Detailed uevent for routers

* A few miscellaneous improvements

All these have been in linux-next without reported issues.

* tag 'thunderbolt-for-v5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt: (24 commits)
thunderbolt: Hide authorized attribute if router does not support PCIe tunnels
thunderbolt: Add details to router uevent
thunderbolt: Unlock on error path in tb_domain_add()
thunderbolt: Add support for USB4 DROM
thunderbolt: Check quirks in tb_switch_add()
thunderbolt: Add KUnit tests for DMA tunnels
thunderbolt: Add KUnit tests for XDomain properties
net: thunderbolt: Align the driver to the USB4 networking spec
thunderbolt: Allow multiple DMA tunnels over a single XDomain connection
thunderbolt: Drop unused tb_port_set_initial_credits()
thunderbolt: Use dedicated flow control for DMA tunnels
thunderbolt: Add support for maxhopid XDomain property
thunderbolt: Add tb_property_copy_dir()
thunderbolt: Align XDomain protocol timeouts with the spec
thunderbolt: Use pseudo-random number as initial property block generation
thunderbolt: Do not re-establish XDomain DMA paths automatically
thunderbolt: Add more logging to XDomain connections
Documentation / thunderbolt: Drop speed/lanes entries for XDomain
thunderbolt: Decrease control channel timeout for software connection manager
thunderbolt: Do not pass timeout for tb_cfg_reset()
...

+1297 -448
+7 -28
Documentation/ABI/testing/sysfs-bus-thunderbolt
···
-What:		/sys/bus/thunderbolt/devices/<xdomain>/rx_speed
-Date:		Feb 2021
-KernelVersion:	5.11
-Contact:	Isaac Hazan <isaac.hazan@intel.com>
-Description:	This attribute reports the XDomain RX speed per lane.
-		All RX lanes run at the same speed.
-
-What:		/sys/bus/thunderbolt/devices/<xdomain>/rx_lanes
-Date:		Feb 2021
-KernelVersion:	5.11
-Contact:	Isaac Hazan <isaac.hazan@intel.com>
-Description:	This attribute reports the number of RX lanes the XDomain
-		is using simultaneously through its upstream port.
-
-What:		/sys/bus/thunderbolt/devices/<xdomain>/tx_speed
-Date:		Feb 2021
-KernelVersion:	5.11
-Contact:	Isaac Hazan <isaac.hazan@intel.com>
-Description:	This attribute reports the XDomain TX speed per lane.
-		All TX lanes run at the same speed.
-
-What:		/sys/bus/thunderbolt/devices/<xdomain>/tx_lanes
-Date:		Feb 2021
-KernelVersion:	5.11
-Contact:	Isaac Hazan <isaac.hazan@intel.com>
-Description:	This attribute reports number of TX lanes the XDomain
-		is using simultaneously through its upstream port.
-
 What:		/sys/bus/thunderbolt/devices/.../domainX/boot_acl
 Date:		Jun 2018
 KernelVersion:	4.17
···
 Contact:	thunderbolt-software@lists.01.org
 Description:	This attribute contains name of this device extracted from
		the device DROM.
+
+What:		/sys/bus/thunderbolt/devices/.../maxhopid
+Date:		Jul 2021
+KernelVersion:	5.13
+Contact:	Mika Westerberg <mika.westerberg@linux.intel.com>
+Description:	Only set for XDomains. The maximum HopID the other host
+		supports as its input HopID.

 What:		/sys/bus/thunderbolt/devices/.../rx_speed
 Date:		Jan 2020
+42 -14
drivers/net/thunderbolt.c
···
 /* Protocol timeouts in ms */
 #define TBNET_LOGIN_DELAY	4500
 #define TBNET_LOGIN_TIMEOUT	500
-#define TBNET_LOGOUT_TIMEOUT	100
+#define TBNET_LOGOUT_TIMEOUT	1000

 #define TBNET_RING_SIZE		256
-#define TBNET_LOCAL_PATH	0xf
 #define TBNET_LOGIN_RETRIES	60
-#define TBNET_LOGOUT_RETRIES	5
+#define TBNET_LOGOUT_RETRIES	10
 #define TBNET_MATCH_FRAGS_ID	BIT(1)
+#define TBNET_64K_FRAMES	BIT(2)
 #define TBNET_MAX_MTU		SZ_64K
 #define TBNET_FRAME_SIZE	SZ_4K
 #define TBNET_MAX_PAYLOAD_SIZE	\
···
  * @login_sent: ThunderboltIP login message successfully sent
  * @login_received: ThunderboltIP login message received from the remote
  *		    host
- * @transmit_path: HopID the other end needs to use building the
- *		   opposite side path.
+ * @local_transmit_path: HopID we are using to send out packets
+ * @remote_transmit_path: HopID the other end is using to send packets to us
  * @connection_lock: Lock serializing access to @login_sent,
  *		     @login_received and @transmit_path.
  * @login_retries: Number of login retries currently done
···
 	atomic_t command_id;
 	bool login_sent;
 	bool login_received;
-	u32 transmit_path;
+	int local_transmit_path;
+	int remote_transmit_path;
 	struct mutex connection_lock;
 	int login_retries;
 	struct delayed_work login_work;
···
 			  atomic_inc_return(&net->command_id));

 	request.proto_version = TBIP_LOGIN_PROTO_VERSION;
-	request.transmit_path = TBNET_LOCAL_PATH;
+	request.transmit_path = net->local_transmit_path;

 	return tb_xdomain_request(xd, &request, sizeof(request),
 				  TB_CFG_PKG_XDOMAIN_RESP, &reply,
···
 	mutex_lock(&net->connection_lock);

 	if (net->login_sent && net->login_received) {
-		int retries = TBNET_LOGOUT_RETRIES;
+		int ret, retries = TBNET_LOGOUT_RETRIES;

 		while (send_logout && retries-- > 0) {
-			int ret = tbnet_logout_request(net);
+			ret = tbnet_logout_request(net);
 			if (ret != -ETIMEDOUT)
 				break;
 		}
···
 		tbnet_free_buffers(&net->rx_ring);
 		tbnet_free_buffers(&net->tx_ring);

-		if (tb_xdomain_disable_paths(net->xd))
+		ret = tb_xdomain_disable_paths(net->xd,
+					       net->local_transmit_path,
+					       net->rx_ring.ring->hop,
+					       net->remote_transmit_path,
+					       net->tx_ring.ring->hop);
+		if (ret)
 			netdev_warn(net->dev, "failed to disable DMA paths\n");
+
+		tb_xdomain_release_in_hopid(net->xd, net->remote_transmit_path);
+		net->remote_transmit_path = 0;
 	}

 	net->login_retries = 0;
···
 	if (!ret) {
 		mutex_lock(&net->connection_lock);
 		net->login_received = true;
-		net->transmit_path = pkg->transmit_path;
+		net->remote_transmit_path = pkg->transmit_path;

 		/* If we reached the number of max retries or
 		 * previous logout, schedule another round of
···
 	if (!connected)
 		return;

+	ret = tb_xdomain_alloc_in_hopid(net->xd, net->remote_transmit_path);
+	if (ret != net->remote_transmit_path) {
+		netdev_err(net->dev, "failed to allocate Rx HopID\n");
+		return;
+	}
+
 	/* Both logins successful so enable the high-speed DMA paths and
 	 * start the network device queue.
 	 */
-	ret = tb_xdomain_enable_paths(net->xd, TBNET_LOCAL_PATH,
+	ret = tb_xdomain_enable_paths(net->xd, net->local_transmit_path,
 				      net->rx_ring.ring->hop,
-				      net->transmit_path,
+				      net->remote_transmit_path,
 				      net->tx_ring.ring->hop);
 	if (ret) {
 		netdev_err(net->dev, "failed to enable DMA paths\n");
···
 err_stop_rings:
 	tb_ring_stop(net->rx_ring.ring);
 	tb_ring_stop(net->tx_ring.ring);
+	tb_xdomain_release_in_hopid(net->xd, net->remote_transmit_path);
 }

 static void tbnet_login_work(struct work_struct *work)
···
 	struct tb_xdomain *xd = net->xd;
 	u16 sof_mask, eof_mask;
 	struct tb_ring *ring;
+	int hopid;

 	netif_carrier_off(dev);
···
 		return -ENOMEM;
 	}
 	net->tx_ring.ring = ring;

+	hopid = tb_xdomain_alloc_out_hopid(xd, -1);
+	if (hopid < 0) {
+		netdev_err(dev, "failed to allocate Tx HopID\n");
+		tb_ring_free(net->tx_ring.ring);
+		net->tx_ring.ring = NULL;
+		return hopid;
+	}
+	net->local_transmit_path = hopid;

 	sof_mask = BIT(TBIP_PDF_FRAME_START);
 	eof_mask = BIT(TBIP_PDF_FRAME_END);
···
 	tb_ring_free(net->rx_ring.ring);
 	net->rx_ring.ring = NULL;
+
+	tb_xdomain_release_out_hopid(net->xd, net->local_transmit_path);
 	tb_ring_free(net->tx_ring.ring);
 	net->tx_ring.ring = NULL;
···
 	 * the moment.
 	 */
 	tb_property_add_immediate(tbnet_dir, "prtcstns",
-				  TBNET_MATCH_FRAGS_ID);
+				  TBNET_MATCH_FRAGS_ID | TBNET_64K_FRAMES);

 	ret = tb_register_property_dir("network", tbnet_dir);
 	if (ret) {
+12 -9
drivers/thunderbolt/ctl.c
···

 #define TB_CTL_RX_PKG_COUNT	10
-#define TB_CTL_RETRIES		4
+#define TB_CTL_RETRIES		1

 /**
  * struct tb_ctl - Thunderbolt control channel
···
  * @request_queue_lock: Lock protecting @request_queue
  * @request_queue: List of outstanding requests
  * @running: Is the control channel running at the moment
+ * @timeout_msec: Default timeout for non-raw control messages
  * @callback: Callback called when hotplug message is received
  * @callback_data: Data passed to @callback
  */
···
 	struct list_head request_queue;
 	bool running;

+	int timeout_msec;
 	event_cb callback;
 	void *callback_data;
 };
···
 /**
  * tb_ctl_alloc() - allocate a control channel
  * @nhi: Pointer to NHI
+ * @timeout_msec: Default timeout used with non-raw control messages
  * @cb: Callback called for plug events
  * @cb_data: Data passed to @cb
  *
···
  *
  * Return: Returns a pointer on success or NULL on failure.
  */
-struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data)
+struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, int timeout_msec, event_cb cb,
+			    void *cb_data)
 {
 	int i;
 	struct tb_ctl *ctl = kzalloc(sizeof(*ctl), GFP_KERNEL);
 	if (!ctl)
 		return NULL;
 	ctl->nhi = nhi;
+	ctl->timeout_msec = timeout_msec;
 	ctl->callback = cb;
 	ctl->callback_data = cb_data;
···
  * tb_cfg_reset() - send a reset packet and wait for a response
  * @ctl: Control channel pointer
  * @route: Router string for the router to send reset
- * @timeout_msec: Timeout in ms how long to wait for the response
  *
  * If the switch at route is incorrectly configured then we will not receive a
  * reply (even though the switch will reset). The caller should check for
  * -ETIMEDOUT and attempt to reconfigure the switch.
  */
-struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
-				  int timeout_msec)
+struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route)
 {
 	struct cfg_reset_pkg request = { .header = tb_cfg_make_header(route) };
 	struct tb_cfg_result res = { 0 };
···
 	req->response_size = sizeof(reply);
 	req->response_type = TB_CFG_PKG_RESET;

-	res = tb_cfg_request_sync(ctl, req, timeout_msec);
+	res = tb_cfg_request_sync(ctl, req, ctl->timeout_msec);

 	tb_cfg_request_put(req);
···
 		enum tb_cfg_space space, u32 offset, u32 length)
 {
 	struct tb_cfg_result res = tb_cfg_read_raw(ctl, buffer, route, port,
-			space, offset, length, TB_CFG_DEFAULT_TIMEOUT);
+			space, offset, length, ctl->timeout_msec);
 	switch (res.err) {
 	case 0:
 		/* Success */
···
 		enum tb_cfg_space space, u32 offset, u32 length)
 {
 	struct tb_cfg_result res = tb_cfg_write_raw(ctl, buffer, route, port,
-			space, offset, length, TB_CFG_DEFAULT_TIMEOUT);
+			space, offset, length, ctl->timeout_msec);
 	switch (res.err) {
 	case 0:
 		/* Success */
···
 	u32 dummy;
 	struct tb_cfg_result res = tb_cfg_read_raw(ctl, &dummy, route, 0,
 						   TB_CFG_SWITCH, 0, 1,
-						   TB_CFG_DEFAULT_TIMEOUT);
+						   ctl->timeout_msec);
 	if (res.err == 1)
 		return -EIO;
 	if (res.err)
+3 -5
drivers/thunderbolt/ctl.h
···
 typedef bool (*event_cb)(void *data, enum tb_cfg_pkg_type type,
			  const void *buf, size_t size);

-struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data);
+struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, int timeout_msec, event_cb cb,
+			    void *cb_data);
 void tb_ctl_start(struct tb_ctl *ctl);
 void tb_ctl_stop(struct tb_ctl *ctl);
 void tb_ctl_free(struct tb_ctl *ctl);

 /* configuration commands */
-
-#define TB_CFG_DEFAULT_TIMEOUT 5000 /* msec */

 struct tb_cfg_result {
	u64 response_route;
···
 }

 int tb_cfg_ack_plug(struct tb_ctl *ctl, u64 route, u32 port, bool unplug);
-struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
-				  int timeout_msec);
+struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route);
 struct tb_cfg_result tb_cfg_read_raw(struct tb_ctl *ctl, void *buffer,
				     u64 route, u32 port,
				     enum tb_cfg_space space, u32 offset,
+24 -13
drivers/thunderbolt/debugfs.c
···
 	return ret < 0 ? ret : count;
 }

+static void cap_show_by_dw(struct seq_file *s, struct tb_switch *sw,
+			   struct tb_port *port, unsigned int cap,
+			   unsigned int offset, u8 cap_id, u8 vsec_id,
+			   int dwords)
+{
+	int i, ret;
+	u32 data;
+
+	for (i = 0; i < dwords; i++) {
+		if (port)
+			ret = tb_port_read(port, &data, TB_CFG_PORT, cap + offset + i, 1);
+		else
+			ret = tb_sw_read(sw, &data, TB_CFG_SWITCH, cap + offset + i, 1);
+		if (ret) {
+			seq_printf(s, "0x%04x <not accessible>\n", cap + offset + i);
+			continue;
+		}
+
+		seq_printf(s, "0x%04x %4d 0x%02x 0x%02x 0x%08x\n", cap + offset + i,
+			   offset + i, cap_id, vsec_id, data);
+	}
+}
+
 static void cap_show(struct seq_file *s, struct tb_switch *sw,
		     struct tb_port *port, unsigned int cap, u8 cap_id,
		     u8 vsec_id, int length)
···
 	else
 		ret = tb_sw_read(sw, data, TB_CFG_SWITCH, cap + offset, dwords);
 	if (ret) {
-		seq_printf(s, "0x%04x <not accessible>\n",
-			   cap + offset);
-		if (dwords > 1)
-			seq_printf(s, "0x%04x ...\n", cap + offset + 1);
+		cap_show_by_dw(s, sw, port, cap, offset, cap_id, vsec_id, length);
 		return;
 	}
···
 	} else {
 		length = header.extended_short.length;
 		vsec_id = header.extended_short.vsec_id;
-		/*
-		 * Ice Lake and Tiger Lake do not implement the
-		 * full length of the capability, only first 32
-		 * dwords so hard-code it here.
-		 */
-		if (!vsec_id &&
-		    (tb_switch_is_ice_lake(port->sw) ||
-		     tb_switch_is_tiger_lake(port->sw)))
-			length = 32;
 	}
 	break;
+31 -4
drivers/thunderbolt/dma_test.c
···
 #include <linux/sizes.h>
 #include <linux/thunderbolt.h>

-#define DMA_TEST_HOPID			8
 #define DMA_TEST_TX_RING_SIZE		64
 #define DMA_TEST_RX_RING_SIZE		256
 #define DMA_TEST_FRAME_SIZE		SZ_4K
···
  * @svc: XDomain service the driver is bound to
  * @xd: XDomain the service belongs to
  * @rx_ring: Software ring holding RX frames
+ * @rx_hopid: HopID used for receiving frames
  * @tx_ring: Software ring holding TX frames
+ * @tx_hopid: HopID used for sending frames
  * @packets_to_send: Number of packets to send
  * @packets_to_receive: Number of packets to receive
  * @packets_sent: Actual number of packets sent
···
 	const struct tb_service *svc;
 	struct tb_xdomain *xd;
 	struct tb_ring *rx_ring;
+	int rx_hopid;
 	struct tb_ring *tx_ring;
+	int tx_hopid;
 	unsigned int packets_to_send;
 	unsigned int packets_to_receive;
 	unsigned int packets_sent;
···
 static void dma_test_free_rings(struct dma_test *dt)
 {
 	if (dt->rx_ring) {
+		tb_xdomain_release_in_hopid(dt->xd, dt->rx_hopid);
 		tb_ring_free(dt->rx_ring);
 		dt->rx_ring = NULL;
 	}
 	if (dt->tx_ring) {
+		tb_xdomain_release_out_hopid(dt->xd, dt->tx_hopid);
 		tb_ring_free(dt->tx_ring);
 		dt->tx_ring = NULL;
 	}
···

 		dt->tx_ring = ring;
 		e2e_tx_hop = ring->hop;
+
+		ret = tb_xdomain_alloc_out_hopid(xd, -1);
+		if (ret < 0) {
+			dma_test_free_rings(dt);
+			return ret;
+		}
+
+		dt->tx_hopid = ret;
 	}

 	if (dt->packets_to_receive) {
···
 		}

 		dt->rx_ring = ring;
+
+		ret = tb_xdomain_alloc_in_hopid(xd, -1);
+		if (ret < 0) {
+			dma_test_free_rings(dt);
+			return ret;
+		}
+
+		dt->rx_hopid = ret;
 	}

-	ret = tb_xdomain_enable_paths(dt->xd, DMA_TEST_HOPID,
+	ret = tb_xdomain_enable_paths(dt->xd, dt->tx_hopid,
 				      dt->tx_ring ? dt->tx_ring->hop : 0,
-				      DMA_TEST_HOPID,
+				      dt->rx_hopid,
 				      dt->rx_ring ? dt->rx_ring->hop : 0);
 	if (ret) {
 		dma_test_free_rings(dt);
···
 static void dma_test_stop_rings(struct dma_test *dt)
 {
+	int ret;
+
 	if (dt->rx_ring)
 		tb_ring_stop(dt->rx_ring);
 	if (dt->tx_ring)
 		tb_ring_stop(dt->tx_ring);

-	if (tb_xdomain_disable_paths(dt->xd))
+	ret = tb_xdomain_disable_paths(dt->xd, dt->tx_hopid,
+				       dt->tx_ring ? dt->tx_ring->hop : 0,
+				       dt->rx_hopid,
+				       dt->rx_ring ? dt->rx_ring->hop : 0);
+	if (ret)
 		dev_warn(&dt->svc->dev, "failed to disable DMA paths\n");

 	dma_test_free_rings(dt);
+51 -38
drivers/thunderbolt/domain.c
···
 	.release = tb_domain_release,
 };

+static bool tb_domain_event_cb(void *data, enum tb_cfg_pkg_type type,
+			       const void *buf, size_t size)
+{
+	struct tb *tb = data;
+
+	if (!tb->cm_ops->handle_event) {
+		tb_warn(tb, "domain does not have event handler\n");
+		return true;
+	}
+
+	switch (type) {
+	case TB_CFG_PKG_XDOMAIN_REQ:
+	case TB_CFG_PKG_XDOMAIN_RESP:
+		if (tb_is_xdomain_enabled())
+			return tb_xdomain_handle_request(tb, type, buf, size);
+		break;
+
+	default:
+		tb->cm_ops->handle_event(tb, type, buf, size);
+	}
+
+	return true;
+}
+
 /**
  * tb_domain_alloc() - Allocate a domain
  * @nhi: Pointer to the host controller
+ * @timeout_msec: Control channel timeout for non-raw messages
  * @privsize: Size of the connection manager private data
  *
  * Allocates and initializes a new Thunderbolt domain. Connection
···
  *
  * Return: allocated domain structure on %NULL in case of error
  */
-struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
+struct tb *tb_domain_alloc(struct tb_nhi *nhi, int timeout_msec, size_t privsize)
 {
 	struct tb *tb;
···
 	if (!tb->wq)
 		goto err_remove_ida;

+	tb->ctl = tb_ctl_alloc(nhi, timeout_msec, tb_domain_event_cb, tb);
+	if (!tb->ctl)
+		goto err_destroy_wq;
+
 	tb->dev.parent = &nhi->pdev->dev;
 	tb->dev.bus = &tb_bus_type;
 	tb->dev.type = &tb_domain_type;
···

 	return tb;

+err_destroy_wq:
+	destroy_workqueue(tb->wq);
 err_remove_ida:
 	ida_simple_remove(&tb_domain_ida, tb->index);
 err_free:
 	kfree(tb);

 	return NULL;
-}
-
-static bool tb_domain_event_cb(void *data, enum tb_cfg_pkg_type type,
-			       const void *buf, size_t size)
-{
-	struct tb *tb = data;
-
-	if (!tb->cm_ops->handle_event) {
-		tb_warn(tb, "domain does not have event handler\n");
-		return true;
-	}
-
-	switch (type) {
-	case TB_CFG_PKG_XDOMAIN_REQ:
-	case TB_CFG_PKG_XDOMAIN_RESP:
-		if (tb_is_xdomain_enabled())
-			return tb_xdomain_handle_request(tb, type, buf, size);
-		break;
-
-	default:
-		tb->cm_ops->handle_event(tb, type, buf, size);
-	}
-
-	return true;
 }

 /**
···
 		return -EINVAL;

 	mutex_lock(&tb->lock);
-
-	tb->ctl = tb_ctl_alloc(tb->nhi, tb_domain_event_cb, tb);
-	if (!tb->ctl) {
-		ret = -ENOMEM;
-		goto err_unlock;
-	}
-
 	/*
 	 * tb_schedule_hotplug_handler may be called as soon as the config
 	 * channel is started. Thats why we have to hold the lock here.
···
 	device_del(&tb->dev);
 err_ctl_stop:
 	tb_ctl_stop(tb->ctl);
-err_unlock:
 	mutex_unlock(&tb->lock);

 	return ret;
···
  * tb_domain_approve_xdomain_paths() - Enable DMA paths for XDomain
  * @tb: Domain enabling the DMA paths
  * @xd: XDomain DMA paths are created to
+ * @transmit_path: HopID we are using to send out packets
+ * @transmit_ring: DMA ring used to send out packets
+ * @receive_path: HopID the other end is using to send packets to us
+ * @receive_ring: DMA ring used to receive packets from @receive_path
  *
  * Calls connection manager specific method to enable DMA paths to the
  * XDomain in question.
···
  * particular returns %-ENOTSUPP if the connection manager
  * implementation does not support XDomains.
  */
-int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
+int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
+				    int transmit_path, int transmit_ring,
+				    int receive_path, int receive_ring)
 {
 	if (!tb->cm_ops->approve_xdomain_paths)
 		return -ENOTSUPP;

-	return tb->cm_ops->approve_xdomain_paths(tb, xd);
+	return tb->cm_ops->approve_xdomain_paths(tb, xd, transmit_path,
+			transmit_ring, receive_path, receive_ring);
 }

 /**
  * tb_domain_disconnect_xdomain_paths() - Disable DMA paths for XDomain
  * @tb: Domain disabling the DMA paths
  * @xd: XDomain whose DMA paths are disconnected
+ * @transmit_path: HopID we are using to send out packets
+ * @transmit_ring: DMA ring used to send out packets
+ * @receive_path: HopID the other end is using to send packets to us
+ * @receive_ring: DMA ring used to receive packets from @receive_path
  *
  * Calls connection manager specific method to disconnect DMA paths to
  * the XDomain in question.
···
  * particular returns %-ENOTSUPP if the connection manager
  * implementation does not support XDomains.
  */
-int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
+int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
+				       int transmit_path, int transmit_ring,
+				       int receive_path, int receive_ring)
 {
 	if (!tb->cm_ops->disconnect_xdomain_paths)
 		return -ENOTSUPP;

-	return tb->cm_ops->disconnect_xdomain_paths(tb, xd);
+	return tb->cm_ops->disconnect_xdomain_paths(tb, xd, transmit_path,
+			transmit_ring, receive_path, receive_ring);
 }

 static int disconnect_xdomain(struct device *dev, void *data)
···
 	xd = tb_to_xdomain(dev);
 	if (xd && xd->tb == tb)
-		ret = tb_xdomain_disable_paths(xd);
+		ret = tb_xdomain_disable_all_paths(xd);

 	return ret;
 }
+80 -25
drivers/thunderbolt/eeprom.c
···
 	u8 unknown4:2;
 } __packed;

+/* USB4 product descriptor */
+struct tb_drom_entry_desc {
+	struct tb_drom_entry_header header;
+	u16 bcdUSBSpec;
+	u16 idVendor;
+	u16 idProduct;
+	u16 bcdProductFWRevision;
+	u32 TID;
+	u8 productHWRevision;
+};

 /**
  * tb_drom_read_uid_only() - Read UID directly from DROM
···
 		if (!sw->device_name)
 			return -ENOMEM;
 		break;
+	case 9: {
+		const struct tb_drom_entry_desc *desc =
+			(const struct tb_drom_entry_desc *)entry;
+
+		if (!sw->vendor && !sw->device) {
+			sw->vendor = desc->idVendor;
+			sw->device = desc->idProduct;
+		}
+		break;
+	}
 	}

 	return 0;
···
 	return tb_eeprom_read_n(sw, offset, val, count);
 }

+static int tb_drom_parse(struct tb_switch *sw)
+{
+	const struct tb_drom_header *header =
+		(const struct tb_drom_header *)sw->drom;
+	u32 crc;
+
+	crc = tb_crc8((u8 *) &header->uid, 8);
+	if (crc != header->uid_crc8) {
+		tb_sw_warn(sw,
+			   "DROM UID CRC8 mismatch (expected: %#x, got: %#x), aborting\n",
+			   header->uid_crc8, crc);
+		return -EINVAL;
+	}
+	if (!sw->uid)
+		sw->uid = header->uid;
+	sw->vendor = header->vendor_id;
+	sw->device = header->model_id;
+
+	crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len);
+	if (crc != header->data_crc32) {
+		tb_sw_warn(sw,
+			   "DROM data CRC32 mismatch (expected: %#x, got: %#x), continuing\n",
+			   header->data_crc32, crc);
+	}
+
+	return tb_drom_parse_entries(sw);
+}
+
+static int usb4_drom_parse(struct tb_switch *sw)
+{
+	const struct tb_drom_header *header =
+		(const struct tb_drom_header *)sw->drom;
+	u32 crc;
+
+	crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len);
+	if (crc != header->data_crc32) {
+		tb_sw_warn(sw,
+			   "DROM data CRC32 mismatch (expected: %#x, got: %#x), aborting\n",
+			   header->data_crc32, crc);
+		return -EINVAL;
+	}
+
+	return tb_drom_parse_entries(sw);
+}
+
 /**
  * tb_drom_read() - Copy DROM to sw->drom and parse it
  * @sw: Router whose DROM to read and parse
···
 int tb_drom_read(struct tb_switch *sw)
 {
 	u16 size;
-	u32 crc;
 	struct tb_drom_header *header;
 	int res, retries = 1;
···
 		goto err;
 	}

-	crc = tb_crc8((u8 *) &header->uid, 8);
-	if (crc != header->uid_crc8) {
-		tb_sw_warn(sw,
-			"drom uid crc8 mismatch (expected: %#x, got: %#x), aborting\n",
-			header->uid_crc8, crc);
-		goto err;
-	}
-	if (!sw->uid)
-		sw->uid = header->uid;
-	sw->vendor = header->vendor_id;
-	sw->device = header->model_id;
-	tb_check_quirks(sw);
+	tb_sw_dbg(sw, "DROM version: %d\n", header->device_rom_revision);

-	crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len);
-	if (crc != header->data_crc32) {
-		tb_sw_warn(sw,
-			"drom data crc32 mismatch (expected: %#x, got: %#x), continuing\n",
-			header->data_crc32, crc);
+	switch (header->device_rom_revision) {
+	case 3:
+		res = usb4_drom_parse(sw);
+		break;
+	default:
+		tb_sw_warn(sw, "DROM device_rom_revision %#x unknown\n",
+			   header->device_rom_revision);
+		fallthrough;
+	case 1:
+		res = tb_drom_parse(sw);
+		break;
 	}

-	if (header->device_rom_revision > 2)
-		tb_sw_warn(sw, "drom device_rom_revision %#x unknown\n",
-			header->device_rom_revision);
-
-	res = tb_drom_parse_entries(sw);
 	/* If the DROM parsing fails, wait a moment and retry once */
 	if (res == -EILSEQ && retries--) {
 		tb_sw_warn(sw, "parsing DROM failed, retrying\n");
···
 		goto parse;
 	}

-	return res;
+	if (!res)
+		return 0;
+
 err:
 	kfree(sw->drom);
 	sw->drom = NULL;
 	return -EIO;
-
 }
+21 -13
drivers/thunderbolt/icm.c
···
 	return 0;
 }

-static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
+static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
+					int transmit_path, int transmit_ring,
+					int receive_path, int receive_ring)
 {
 	struct icm_fr_pkg_approve_xdomain_response reply;
 	struct icm_fr_pkg_approve_xdomain request;
···
 	request.link_info = xd->depth << ICM_LINK_INFO_DEPTH_SHIFT | xd->link;
 	memcpy(&request.remote_uuid, xd->remote_uuid, sizeof(*xd->remote_uuid));

-	request.transmit_path = xd->transmit_path;
-	request.transmit_ring = xd->transmit_ring;
-	request.receive_path = xd->receive_path;
-	request.receive_ring = xd->receive_ring;
+	request.transmit_path = transmit_path;
+	request.transmit_ring = transmit_ring;
+	request.receive_path = receive_path;
+	request.receive_ring = receive_ring;

 	memset(&reply, 0, sizeof(reply));
 	ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
···
 	return 0;
 }

-static int icm_fr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
+static int icm_fr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
+					   int transmit_path, int transmit_ring,
+					   int receive_path, int receive_ring)
 {
 	u8 phy_port;
 	u8 cmd;
···
 	return 0;
 }

-static int icm_tr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
+static int icm_tr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
+					int transmit_path, int transmit_ring,
+					int receive_path, int receive_ring)
 {
 	struct icm_tr_pkg_approve_xdomain_response reply;
 	struct icm_tr_pkg_approve_xdomain request;
···
 	request.hdr.code = ICM_APPROVE_XDOMAIN;
 	request.route_hi = upper_32_bits(xd->route);
 	request.route_lo = lower_32_bits(xd->route);
-	request.transmit_path = xd->transmit_path;
-	request.transmit_ring = xd->transmit_ring;
-	request.receive_path = xd->receive_path;
-	request.receive_ring = xd->receive_ring;
+	request.transmit_path = transmit_path;
+	request.transmit_ring = transmit_ring;
+	request.receive_path = receive_path;
+	request.receive_ring = receive_ring;
 	memcpy(&request.remote_uuid, xd->remote_uuid, sizeof(*xd->remote_uuid));

 	memset(&reply, 0, sizeof(reply));
···
 	return 0;
 }

-static int icm_tr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
+static int icm_tr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
+					   int transmit_path, int transmit_ring,
+					   int receive_path, int receive_ring)
 {
 	int ret;
···
 	struct icm *icm;
 	struct tb *tb;

-	tb = tb_domain_alloc(nhi, sizeof(struct icm));
+	tb = tb_domain_alloc(nhi, ICM_TIMEOUT, sizeof(struct icm));
 	if (!tb)
 		return NULL;
+71
drivers/thunderbolt/property.c
···
 }

 /**
+ * tb_property_copy_dir() - Take a deep copy of directory
+ * @dir: Directory to copy
+ *
+ * This function takes a deep copy of @dir and returns back the copy. In
+ * case of error returns %NULL. The resulting directory needs to be
+ * released by calling tb_property_free_dir().
+ */
+struct tb_property_dir *tb_property_copy_dir(const struct tb_property_dir *dir)
+{
+	struct tb_property *property, *p = NULL;
+	struct tb_property_dir *d;
+
+	if (!dir)
+		return NULL;
+
+	d = tb_property_create_dir(dir->uuid);
+	if (!d)
+		return NULL;
+
+	list_for_each_entry(property, &dir->properties, list) {
+		struct tb_property *p;
+
+		p = tb_property_alloc(property->key, property->type);
+		if (!p)
+			goto err_free;
+
+		p->length = property->length;
+
+		switch (property->type) {
+		case TB_PROPERTY_TYPE_DIRECTORY:
+			p->value.dir = tb_property_copy_dir(property->value.dir);
+			if (!p->value.dir)
+				goto err_free;
+			break;
+
+		case TB_PROPERTY_TYPE_DATA:
+			p->value.data = kmemdup(property->value.data,
+						property->length * 4,
+						GFP_KERNEL);
+			if (!p->value.data)
+				goto err_free;
+			break;
+
+		case TB_PROPERTY_TYPE_TEXT:
+			p->value.text = kzalloc(p->length * 4, GFP_KERNEL);
+			if (!p->value.text)
+				goto err_free;
+			strcpy(p->value.text, property->value.text);
+			break;
+
+		case TB_PROPERTY_TYPE_VALUE:
+			p->value.immediate = property->value.immediate;
+			break;
+
+		default:
+			break;
+		}
+
+		list_add_tail(&p->list, &d->properties);
+	}
+
+	return d;
+
+err_free:
+	kfree(p);
+	tb_property_free_dir(d);
+
+	return NULL;
+}
+
+/**
  * tb_property_add_immediate() - Add immediate property to directory
  * @parent: Directory to add the property
  * @key: Key for the property
+51 -24
drivers/thunderbolt/switch.c
··· 627 627 } 628 628 629 629 /** 630 - * tb_port_set_initial_credits() - Set initial port link credits allocated 631 - * @port: Port to set the initial credits 632 - * @credits: Number of credits to to allocate 633 - * 634 - * Set initial credits value to be used for ingress shared buffering. 635 - */ 636 - int tb_port_set_initial_credits(struct tb_port *port, u32 credits) 637 - { 638 - u32 data; 639 - int ret; 640 - 641 - ret = tb_port_read(port, &data, TB_CFG_PORT, ADP_CS_5, 1); 642 - if (ret) 643 - return ret; 644 - 645 - data &= ~ADP_CS_5_LCA_MASK; 646 - data |= (credits << ADP_CS_5_LCA_SHIFT) & ADP_CS_5_LCA_MASK; 647 - 648 - return tb_port_write(port, &data, TB_CFG_PORT, ADP_CS_5, 1); 649 - } 650 - 651 - /** 652 630 * tb_port_clear_counter() - clear a counter in TB_CFG_COUNTER 653 631 * @port: Port whose counters to clear 654 632 * @counter: Counter index to clear ··· 1309 1331 TB_CFG_SWITCH, 2, 2); 1310 1332 if (res.err) 1311 1333 return res.err; 1312 - res = tb_cfg_reset(sw->tb->ctl, tb_route(sw), TB_CFG_DEFAULT_TIMEOUT); 1334 + res = tb_cfg_reset(sw->tb->ctl, tb_route(sw)); 1313 1335 if (res.err > 0) 1314 1336 return -EIO; 1315 1337 return res.err; ··· 1740 1762 NULL, 1741 1763 }; 1742 1764 1765 + static bool has_port(const struct tb_switch *sw, enum tb_port_type type) 1766 + { 1767 + const struct tb_port *port; 1768 + 1769 + tb_switch_for_each_port(sw, port) { 1770 + if (!port->disabled && port->config.type == type) 1771 + return true; 1772 + } 1773 + 1774 + return false; 1775 + } 1776 + 1743 1777 static umode_t switch_attr_is_visible(struct kobject *kobj, 1744 1778 struct attribute *attr, int n) 1745 1779 { ··· 1760 1770 1761 1771 if (attr == &dev_attr_authorized.attr) { 1762 1772 if (sw->tb->security_level == TB_SECURITY_NOPCIE || 1763 - sw->tb->security_level == TB_SECURITY_DPONLY) 1773 + sw->tb->security_level == TB_SECURITY_DPONLY || 1774 + !has_port(sw, TB_TYPE_PCIE_UP)) 1764 1775 return 0; 1765 1776 } else if (attr == &dev_attr_device.attr) { 1766 
1777 if (!sw->device) ··· 1840 1849 kfree(sw); 1841 1850 } 1842 1851 1852 + static int tb_switch_uevent(struct device *dev, struct kobj_uevent_env *env) 1853 + { 1854 + struct tb_switch *sw = tb_to_switch(dev); 1855 + const char *type; 1856 + 1857 + if (sw->config.thunderbolt_version == USB4_VERSION_1_0) { 1858 + if (add_uevent_var(env, "USB4_VERSION=1.0")) 1859 + return -ENOMEM; 1860 + } 1861 + 1862 + if (!tb_route(sw)) { 1863 + type = "host"; 1864 + } else { 1865 + const struct tb_port *port; 1866 + bool hub = false; 1867 + 1868 + /* Device is hub if it has any downstream ports */ 1869 + tb_switch_for_each_port(sw, port) { 1870 + if (!port->disabled && !tb_is_upstream_port(port) && 1871 + tb_port_is_null(port)) { 1872 + hub = true; 1873 + break; 1874 + } 1875 + } 1876 + 1877 + type = hub ? "hub" : "device"; 1878 + } 1879 + 1880 + if (add_uevent_var(env, "USB4_TYPE=%s", type)) 1881 + return -ENOMEM; 1882 + return 0; 1883 + } 1884 + 1843 1885 /* 1844 1886 * Currently only need to provide the callbacks. Everything else is handled 1845 1887 * in the connection manager. ··· 1906 1882 struct device_type tb_switch_type = { 1907 1883 .name = "thunderbolt_device", 1908 1884 .release = tb_switch_release, 1885 + .uevent = tb_switch_uevent, 1909 1886 .pm = &tb_switch_pm_ops, 1910 1887 }; 1911 1888 ··· 2566 2541 return ret; 2567 2542 } 2568 2543 tb_sw_dbg(sw, "uid: %#llx\n", sw->uid); 2544 + 2545 + tb_check_quirks(sw); 2569 2546 2570 2547 ret = tb_switch_set_uuid(sw); 2571 2548 if (ret) {
+33 -19
drivers/thunderbolt/tb.c
··· 15 15 #include "tb_regs.h" 16 16 #include "tunnel.h" 17 17 18 + #define TB_TIMEOUT 100 /* ms */ 19 + 18 20 /** 19 21 * struct tb_cm - Simple Thunderbolt connection manager 20 22 * @tunnel_list: List of active tunnels ··· 1079 1077 return 0; 1080 1078 } 1081 1079 1082 - static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) 1080 + static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, 1081 + int transmit_path, int transmit_ring, 1082 + int receive_path, int receive_ring) 1083 1083 { 1084 1084 struct tb_cm *tcm = tb_priv(tb); 1085 1085 struct tb_port *nhi_port, *dst_port; ··· 1093 1089 nhi_port = tb_switch_find_port(tb->root_switch, TB_TYPE_NHI); 1094 1090 1095 1091 mutex_lock(&tb->lock); 1096 - tunnel = tb_tunnel_alloc_dma(tb, nhi_port, dst_port, xd->transmit_ring, 1097 - xd->transmit_path, xd->receive_ring, 1098 - xd->receive_path); 1092 + tunnel = tb_tunnel_alloc_dma(tb, nhi_port, dst_port, transmit_path, 1093 + transmit_ring, receive_path, receive_ring); 1099 1094 if (!tunnel) { 1100 1095 mutex_unlock(&tb->lock); 1101 1096 return -ENOMEM; ··· 1113 1110 return 0; 1114 1111 } 1115 1112 1116 - static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) 1113 + static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, 1114 + int transmit_path, int transmit_ring, 1115 + int receive_path, int receive_ring) 1117 1116 { 1118 - struct tb_port *dst_port; 1119 - struct tb_tunnel *tunnel; 1117 + struct tb_cm *tcm = tb_priv(tb); 1118 + struct tb_port *nhi_port, *dst_port; 1119 + struct tb_tunnel *tunnel, *n; 1120 1120 struct tb_switch *sw; 1121 1121 1122 1122 sw = tb_to_switch(xd->dev.parent); 1123 1123 dst_port = tb_port_at(xd->route, sw); 1124 + nhi_port = tb_switch_find_port(tb->root_switch, TB_TYPE_NHI); 1124 1125 1125 - /* 1126 - * It is possible that the tunnel was already teared down (in 1127 - * case of cable disconnect) so it is fine if we cannot find it 1128 - * here anymore. 
1129 - */ 1130 - tunnel = tb_find_tunnel(tb, TB_TUNNEL_DMA, NULL, dst_port); 1131 - tb_deactivate_and_free_tunnel(tunnel); 1126 + list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) { 1127 + if (!tb_tunnel_is_dma(tunnel)) 1128 + continue; 1129 + if (tunnel->src_port != nhi_port || tunnel->dst_port != dst_port) 1130 + continue; 1131 + 1132 + if (tb_tunnel_match_dma(tunnel, transmit_path, transmit_ring, 1133 + receive_path, receive_ring)) 1134 + tb_deactivate_and_free_tunnel(tunnel); 1135 + } 1132 1136 } 1133 1137 1134 - static int tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) 1138 + static int tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, 1139 + int transmit_path, int transmit_ring, 1140 + int receive_path, int receive_ring) 1135 1141 { 1136 1142 if (!xd->is_unplugged) { 1137 1143 mutex_lock(&tb->lock); 1138 - __tb_disconnect_xdomain_paths(tb, xd); 1144 + __tb_disconnect_xdomain_paths(tb, xd, transmit_path, 1145 + transmit_ring, receive_path, 1146 + receive_ring); 1139 1147 mutex_unlock(&tb->lock); 1140 1148 } 1141 1149 return 0; ··· 1222 1208 * tb_xdomain_remove() so setting XDomain as 1223 1209 * unplugged here prevents deadlock if they call 1224 1210 * tb_xdomain_disable_paths(). We will tear down 1225 - * the path below. 1211 + * all the tunnels below. 1226 1212 */ 1227 1213 xd->is_unplugged = true; 1228 1214 tb_xdomain_remove(xd); 1229 1215 port->xdomain = NULL; 1230 - __tb_disconnect_xdomain_paths(tb, xd); 1216 + __tb_disconnect_xdomain_paths(tb, xd, -1, -1, -1, -1); 1231 1217 tb_xdomain_put(xd); 1232 1218 tb_port_unconfigure_xdomain(port); 1233 1219 } else if (tb_port_is_dpout(port) || tb_port_is_dpin(port)) { ··· 1576 1562 struct tb_cm *tcm; 1577 1563 struct tb *tb; 1578 1564 1579 - tb = tb_domain_alloc(nhi, sizeof(*tcm)); 1565 + tb = tb_domain_alloc(nhi, TB_TIMEOUT, sizeof(*tcm)); 1580 1566 if (!tb) 1581 1567 return NULL; 1582 1568
+13 -32
drivers/thunderbolt/tb.h
··· 406 406 int (*challenge_switch_key)(struct tb *tb, struct tb_switch *sw, 407 407 const u8 *challenge, u8 *response); 408 408 int (*disconnect_pcie_paths)(struct tb *tb); 409 - int (*approve_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd); 410 - int (*disconnect_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd); 409 + int (*approve_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd, 410 + int transmit_path, int transmit_ring, 411 + int receive_path, int receive_ring); 412 + int (*disconnect_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd, 413 + int transmit_path, int transmit_ring, 414 + int receive_path, int receive_ring); 411 415 int (*usb4_switch_op)(struct tb_switch *sw, u16 opcode, u32 *metadata, 412 416 u8 *status, const void *tx_data, size_t tx_data_len, 413 417 void *rx_data, size_t rx_data_len); ··· 629 625 int tb_xdomain_init(void); 630 626 void tb_xdomain_exit(void); 631 627 632 - struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize); 628 + struct tb *tb_domain_alloc(struct tb_nhi *nhi, int timeout_msec, size_t privsize); 633 629 int tb_domain_add(struct tb *tb); 634 630 void tb_domain_remove(struct tb *tb); 635 631 int tb_domain_suspend_noirq(struct tb *tb); ··· 645 641 int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw); 646 642 int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw); 647 643 int tb_domain_disconnect_pcie_paths(struct tb *tb); 648 - int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd); 649 - int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd); 644 + int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, 645 + int transmit_path, int transmit_ring, 646 + int receive_path, int receive_ring); 647 + int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, 648 + int transmit_path, int transmit_ring, 649 + int receive_path, int receive_ring); 650 650 int tb_domain_disconnect_all_paths(struct tb *tb); 
651 651 652 652 static inline struct tb *tb_domain_get(struct tb *tb) ··· 795 787 return false; 796 788 } 797 789 798 - static inline bool tb_switch_is_ice_lake(const struct tb_switch *sw) 799 - { 800 - if (sw->config.vendor_id == PCI_VENDOR_ID_INTEL) { 801 - switch (sw->config.device_id) { 802 - case PCI_DEVICE_ID_INTEL_ICL_NHI0: 803 - case PCI_DEVICE_ID_INTEL_ICL_NHI1: 804 - return true; 805 - } 806 - } 807 - return false; 808 - } 809 - 810 - static inline bool tb_switch_is_tiger_lake(const struct tb_switch *sw) 811 - { 812 - if (sw->config.vendor_id == PCI_VENDOR_ID_INTEL) { 813 - switch (sw->config.device_id) { 814 - case PCI_DEVICE_ID_INTEL_TGL_NHI0: 815 - case PCI_DEVICE_ID_INTEL_TGL_NHI1: 816 - case PCI_DEVICE_ID_INTEL_TGL_H_NHI0: 817 - case PCI_DEVICE_ID_INTEL_TGL_H_NHI1: 818 - return true; 819 - } 820 - } 821 - return false; 822 - } 823 - 824 790 /** 825 791 * tb_switch_is_usb4() - Is the switch USB4 compliant 826 792 * @sw: Switch to check ··· 842 860 843 861 int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged); 844 862 int tb_port_add_nfc_credits(struct tb_port *port, int credits); 845 - int tb_port_set_initial_credits(struct tb_port *port, u32 credits); 846 863 int tb_port_clear_counter(struct tb_port *port, int counter); 847 864 int tb_port_unlock(struct tb_port *port); 848 865 int tb_port_enable(struct tb_port *port);
+492
drivers/thunderbolt/test.c
··· 119 119 sw->ports[7].config.type = TB_TYPE_NHI; 120 120 sw->ports[7].config.max_in_hop_id = 11; 121 121 sw->ports[7].config.max_out_hop_id = 11; 122 + sw->ports[7].config.nfc_credits = 0x41800000; 122 123 123 124 sw->ports[8].config.type = TB_TYPE_PCIE_DOWN; 124 125 sw->ports[8].config.max_in_hop_id = 8; ··· 1595 1594 tb_tunnel_free(dp_tunnel); 1596 1595 } 1597 1596 1597 + static void tb_test_tunnel_dma(struct kunit *test) 1598 + { 1599 + struct tb_port *nhi, *port; 1600 + struct tb_tunnel *tunnel; 1601 + struct tb_switch *host; 1602 + 1603 + /* 1604 + * Create DMA tunnel from NHI to port 1 and back. 1605 + * 1606 + * [Host 1] 1607 + * 1 ^ In HopID 1 -> Out HopID 8 1608 + * | 1609 + * v In HopID 8 -> Out HopID 1 1610 + * ............ Domain border 1611 + * | 1612 + * [Host 2] 1613 + */ 1614 + host = alloc_host(test); 1615 + nhi = &host->ports[7]; 1616 + port = &host->ports[1]; 1617 + 1618 + tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1); 1619 + KUNIT_ASSERT_TRUE(test, tunnel != NULL); 1620 + KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA); 1621 + KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi); 1622 + KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port); 1623 + KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2); 1624 + /* RX path */ 1625 + KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 1); 1626 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, port); 1627 + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 8); 1628 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port, nhi); 1629 + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].next_hop_index, 1); 1630 + /* TX path */ 1631 + KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 1); 1632 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, nhi); 1633 + KUNIT_EXPECT_EQ(test, tunnel->paths[1]->hops[0].in_hop_index, 1); 1634 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].out_port, port); 1635 + KUNIT_EXPECT_EQ(test, 
tunnel->paths[1]->hops[0].next_hop_index, 8); 1636 + 1637 + tb_tunnel_free(tunnel); 1638 + } 1639 + 1640 + static void tb_test_tunnel_dma_rx(struct kunit *test) 1641 + { 1642 + struct tb_port *nhi, *port; 1643 + struct tb_tunnel *tunnel; 1644 + struct tb_switch *host; 1645 + 1646 + /* 1647 + * Create DMA RX tunnel from port 1 to NHI. 1648 + * 1649 + * [Host 1] 1650 + * 1 ^ 1651 + * | 1652 + * | In HopID 15 -> Out HopID 2 1653 + * ............ Domain border 1654 + * | 1655 + * [Host 2] 1656 + */ 1657 + host = alloc_host(test); 1658 + nhi = &host->ports[7]; 1659 + port = &host->ports[1]; 1660 + 1661 + tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, -1, -1, 15, 2); 1662 + KUNIT_ASSERT_TRUE(test, tunnel != NULL); 1663 + KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA); 1664 + KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi); 1665 + KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port); 1666 + KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)1); 1667 + /* RX path */ 1668 + KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 1); 1669 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, port); 1670 + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 15); 1671 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port, nhi); 1672 + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].next_hop_index, 2); 1673 + 1674 + tb_tunnel_free(tunnel); 1675 + } 1676 + 1677 + static void tb_test_tunnel_dma_tx(struct kunit *test) 1678 + { 1679 + struct tb_port *nhi, *port; 1680 + struct tb_tunnel *tunnel; 1681 + struct tb_switch *host; 1682 + 1683 + /* 1684 + * Create DMA TX tunnel from NHI to port 1. 1685 + * 1686 + * [Host 1] 1687 + * 1 | In HopID 2 -> Out HopID 15 1688 + * | 1689 + * v 1690 + * ............ 
Domain border 1691 + * | 1692 + * [Host 2] 1693 + */ 1694 + host = alloc_host(test); 1695 + nhi = &host->ports[7]; 1696 + port = &host->ports[1]; 1697 + 1698 + tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 15, 2, -1, -1); 1699 + KUNIT_ASSERT_TRUE(test, tunnel != NULL); 1700 + KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA); 1701 + KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi); 1702 + KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port); 1703 + KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)1); 1704 + /* TX path */ 1705 + KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 1); 1706 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, nhi); 1707 + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 2); 1708 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port, port); 1709 + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].next_hop_index, 15); 1710 + 1711 + tb_tunnel_free(tunnel); 1712 + } 1713 + 1714 + static void tb_test_tunnel_dma_chain(struct kunit *test) 1715 + { 1716 + struct tb_switch *host, *dev1, *dev2; 1717 + struct tb_port *nhi, *port; 1718 + struct tb_tunnel *tunnel; 1719 + 1720 + /* 1721 + * Create DMA tunnel from NHI to Device #2 port 3 and back. 1722 + * 1723 + * [Host 1] 1724 + * 1 ^ In HopID 1 -> Out HopID x 1725 + * | 1726 + * 1 | In HopID x -> Out HopID 1 1727 + * [Device #1] 1728 + * 7 \ 1729 + * 1 \ 1730 + * [Device #2] 1731 + * 3 | In HopID x -> Out HopID 8 1732 + * | 1733 + * v In HopID 8 -> Out HopID x 1734 + * ............ 
Domain border 1735 + * | 1736 + * [Host 2] 1737 + */ 1738 + host = alloc_host(test); 1739 + dev1 = alloc_dev_default(test, host, 0x1, true); 1740 + dev2 = alloc_dev_default(test, dev1, 0x701, true); 1741 + 1742 + nhi = &host->ports[7]; 1743 + port = &dev2->ports[3]; 1744 + tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1); 1745 + KUNIT_ASSERT_TRUE(test, tunnel != NULL); 1746 + KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA); 1747 + KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi); 1748 + KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port); 1749 + KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2); 1750 + /* RX path */ 1751 + KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 3); 1752 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, port); 1753 + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 8); 1754 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port, 1755 + &dev2->ports[1]); 1756 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[1].in_port, 1757 + &dev1->ports[7]); 1758 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[1].out_port, 1759 + &dev1->ports[1]); 1760 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[2].in_port, 1761 + &host->ports[1]); 1762 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[2].out_port, nhi); 1763 + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[2].next_hop_index, 1); 1764 + /* TX path */ 1765 + KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 3); 1766 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, nhi); 1767 + KUNIT_EXPECT_EQ(test, tunnel->paths[1]->hops[0].in_hop_index, 1); 1768 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[1].in_port, 1769 + &dev1->ports[1]); 1770 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[1].out_port, 1771 + &dev1->ports[7]); 1772 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[2].in_port, 1773 + &dev2->ports[1]); 1774 + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[2].out_port, port); 1775 + 
KUNIT_EXPECT_EQ(test, tunnel->paths[1]->hops[2].next_hop_index, 8); 1776 + 1777 + tb_tunnel_free(tunnel); 1778 + } 1779 + 1780 + static void tb_test_tunnel_dma_match(struct kunit *test) 1781 + { 1782 + struct tb_port *nhi, *port; 1783 + struct tb_tunnel *tunnel; 1784 + struct tb_switch *host; 1785 + 1786 + host = alloc_host(test); 1787 + nhi = &host->ports[7]; 1788 + port = &host->ports[1]; 1789 + 1790 + tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 15, 1, 15, 1); 1791 + KUNIT_ASSERT_TRUE(test, tunnel != NULL); 1792 + 1793 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, 1, 15, 1)); 1794 + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 8, 1, 15, 1)); 1795 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 1)); 1796 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, 1, -1, -1)); 1797 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, -1, -1, -1)); 1798 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, 1, -1, -1)); 1799 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, -1)); 1800 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, 1)); 1801 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, -1)); 1802 + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 8, -1, 8, -1)); 1803 + 1804 + tb_tunnel_free(tunnel); 1805 + 1806 + tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 15, 1, -1, -1); 1807 + KUNIT_ASSERT_TRUE(test, tunnel != NULL); 1808 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, 1, -1, -1)); 1809 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, -1, -1, -1)); 1810 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, 1, -1, -1)); 1811 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, -1)); 1812 + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 15, 1, 15, 1)); 1813 + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 1)); 1814 + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 15, 11, -1, -1)); 1815 + 1816 + 
tb_tunnel_free(tunnel); 1817 + 1818 + tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, -1, -1, 15, 11); 1819 + KUNIT_ASSERT_TRUE(test, tunnel != NULL); 1820 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 11)); 1821 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, -1)); 1822 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, 11)); 1823 + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, -1)); 1824 + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 1)); 1825 + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, -1, -1, 10, 11)); 1826 + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 15, 11, -1, -1)); 1827 + 1828 + tb_tunnel_free(tunnel); 1829 + } 1830 + 1831 + static const u32 root_directory[] = { 1832 + 0x55584401, /* "UXD" v1 */ 1833 + 0x00000018, /* Root directory length */ 1834 + 0x76656e64, /* "vend" */ 1835 + 0x6f726964, /* "orid" */ 1836 + 0x76000001, /* "v" R 1 */ 1837 + 0x00000a27, /* Immediate value, ! Vendor ID */ 1838 + 0x76656e64, /* "vend" */ 1839 + 0x6f726964, /* "orid" */ 1840 + 0x74000003, /* "t" R 3 */ 1841 + 0x0000001a, /* Text leaf offset, (“Apple Inc.”) */ 1842 + 0x64657669, /* "devi" */ 1843 + 0x63656964, /* "ceid" */ 1844 + 0x76000001, /* "v" R 1 */ 1845 + 0x0000000a, /* Immediate value, ! Device ID */ 1846 + 0x64657669, /* "devi" */ 1847 + 0x63656964, /* "ceid" */ 1848 + 0x74000003, /* "t" R 3 */ 1849 + 0x0000001d, /* Text leaf offset, (“Macintosh”) */ 1850 + 0x64657669, /* "devi" */ 1851 + 0x63657276, /* "cerv" */ 1852 + 0x76000001, /* "v" R 1 */ 1853 + 0x80000100, /* Immediate value, Device Revision */ 1854 + 0x6e657477, /* "netw" */ 1855 + 0x6f726b00, /* "ork" */ 1856 + 0x44000014, /* "D" R 20 */ 1857 + 0x00000021, /* Directory data offset, (Network Directory) */ 1858 + 0x4170706c, /* "Appl" */ 1859 + 0x6520496e, /* "e In" */ 1860 + 0x632e0000, /* "c." ! 
*/ 1861 + 0x4d616369, /* "Maci" */ 1862 + 0x6e746f73, /* "ntos" */ 1863 + 0x68000000, /* "h" */ 1864 + 0x00000000, /* padding */ 1865 + 0xca8961c6, /* Directory UUID, Network Directory */ 1866 + 0x9541ce1c, /* Directory UUID, Network Directory */ 1867 + 0x5949b8bd, /* Directory UUID, Network Directory */ 1868 + 0x4f5a5f2e, /* Directory UUID, Network Directory */ 1869 + 0x70727463, /* "prtc" */ 1870 + 0x69640000, /* "id" */ 1871 + 0x76000001, /* "v" R 1 */ 1872 + 0x00000001, /* Immediate value, Network Protocol ID */ 1873 + 0x70727463, /* "prtc" */ 1874 + 0x76657273, /* "vers" */ 1875 + 0x76000001, /* "v" R 1 */ 1876 + 0x00000001, /* Immediate value, Network Protocol Version */ 1877 + 0x70727463, /* "prtc" */ 1878 + 0x72657673, /* "revs" */ 1879 + 0x76000001, /* "v" R 1 */ 1880 + 0x00000001, /* Immediate value, Network Protocol Revision */ 1881 + 0x70727463, /* "prtc" */ 1882 + 0x73746e73, /* "stns" */ 1883 + 0x76000001, /* "v" R 1 */ 1884 + 0x00000000, /* Immediate value, Network Protocol Settings */ 1885 + }; 1886 + 1887 + static const uuid_t network_dir_uuid = 1888 + UUID_INIT(0xc66189ca, 0x1cce, 0x4195, 1889 + 0xbd, 0xb8, 0x49, 0x59, 0x2e, 0x5f, 0x5a, 0x4f); 1890 + 1891 + static void tb_test_property_parse(struct kunit *test) 1892 + { 1893 + struct tb_property_dir *dir, *network_dir; 1894 + struct tb_property *p; 1895 + 1896 + dir = tb_property_parse_dir(root_directory, ARRAY_SIZE(root_directory)); 1897 + KUNIT_ASSERT_TRUE(test, dir != NULL); 1898 + 1899 + p = tb_property_find(dir, "foo", TB_PROPERTY_TYPE_TEXT); 1900 + KUNIT_ASSERT_TRUE(test, !p); 1901 + 1902 + p = tb_property_find(dir, "vendorid", TB_PROPERTY_TYPE_TEXT); 1903 + KUNIT_ASSERT_TRUE(test, p != NULL); 1904 + KUNIT_EXPECT_STREQ(test, p->value.text, "Apple Inc."); 1905 + 1906 + p = tb_property_find(dir, "vendorid", TB_PROPERTY_TYPE_VALUE); 1907 + KUNIT_ASSERT_TRUE(test, p != NULL); 1908 + KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0xa27); 1909 + 1910 + p = tb_property_find(dir, "deviceid", 
TB_PROPERTY_TYPE_TEXT); 1911 + KUNIT_ASSERT_TRUE(test, p != NULL); 1912 + KUNIT_EXPECT_STREQ(test, p->value.text, "Macintosh"); 1913 + 1914 + p = tb_property_find(dir, "deviceid", TB_PROPERTY_TYPE_VALUE); 1915 + KUNIT_ASSERT_TRUE(test, p != NULL); 1916 + KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0xa); 1917 + 1918 + p = tb_property_find(dir, "missing", TB_PROPERTY_TYPE_DIRECTORY); 1919 + KUNIT_ASSERT_TRUE(test, !p); 1920 + 1921 + p = tb_property_find(dir, "network", TB_PROPERTY_TYPE_DIRECTORY); 1922 + KUNIT_ASSERT_TRUE(test, p != NULL); 1923 + 1924 + network_dir = p->value.dir; 1925 + KUNIT_EXPECT_TRUE(test, uuid_equal(network_dir->uuid, &network_dir_uuid)); 1926 + 1927 + p = tb_property_find(network_dir, "prtcid", TB_PROPERTY_TYPE_VALUE); 1928 + KUNIT_ASSERT_TRUE(test, p != NULL); 1929 + KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x1); 1930 + 1931 + p = tb_property_find(network_dir, "prtcvers", TB_PROPERTY_TYPE_VALUE); 1932 + KUNIT_ASSERT_TRUE(test, p != NULL); 1933 + KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x1); 1934 + 1935 + p = tb_property_find(network_dir, "prtcrevs", TB_PROPERTY_TYPE_VALUE); 1936 + KUNIT_ASSERT_TRUE(test, p != NULL); 1937 + KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x1); 1938 + 1939 + p = tb_property_find(network_dir, "prtcstns", TB_PROPERTY_TYPE_VALUE); 1940 + KUNIT_ASSERT_TRUE(test, p != NULL); 1941 + KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x0); 1942 + 1943 + p = tb_property_find(network_dir, "deviceid", TB_PROPERTY_TYPE_VALUE); 1944 + KUNIT_EXPECT_TRUE(test, !p); 1945 + p = tb_property_find(network_dir, "deviceid", TB_PROPERTY_TYPE_TEXT); 1946 + KUNIT_EXPECT_TRUE(test, !p); 1947 + 1948 + tb_property_free_dir(dir); 1949 + } 1950 + 1951 + static void tb_test_property_format(struct kunit *test) 1952 + { 1953 + struct tb_property_dir *dir; 1954 + ssize_t block_len; 1955 + u32 *block; 1956 + int ret, i; 1957 + 1958 + dir = tb_property_parse_dir(root_directory, ARRAY_SIZE(root_directory)); 1959 + 
KUNIT_ASSERT_TRUE(test, dir != NULL); 1960 + 1961 + ret = tb_property_format_dir(dir, NULL, 0); 1962 + KUNIT_ASSERT_EQ(test, ret, (int)ARRAY_SIZE(root_directory)); 1963 + 1964 + block_len = ret; 1965 + 1966 + block = kunit_kzalloc(test, block_len * sizeof(u32), GFP_KERNEL); 1967 + KUNIT_ASSERT_TRUE(test, block != NULL); 1968 + 1969 + ret = tb_property_format_dir(dir, block, block_len); 1970 + KUNIT_EXPECT_EQ(test, ret, 0); 1971 + 1972 + for (i = 0; i < ARRAY_SIZE(root_directory); i++) 1973 + KUNIT_EXPECT_EQ(test, root_directory[i], block[i]); 1974 + 1975 + tb_property_free_dir(dir); 1976 + } 1977 + 1978 + static void compare_dirs(struct kunit *test, struct tb_property_dir *d1, 1979 + struct tb_property_dir *d2) 1980 + { 1981 + struct tb_property *p1, *p2, *tmp; 1982 + int n1, n2, i; 1983 + 1984 + if (d1->uuid) { 1985 + KUNIT_ASSERT_TRUE(test, d2->uuid != NULL); 1986 + KUNIT_ASSERT_TRUE(test, uuid_equal(d1->uuid, d2->uuid)); 1987 + } else { 1988 + KUNIT_ASSERT_TRUE(test, d2->uuid == NULL); 1989 + } 1990 + 1991 + n1 = 0; 1992 + tb_property_for_each(d1, tmp) 1993 + n1++; 1994 + KUNIT_ASSERT_NE(test, n1, 0); 1995 + 1996 + n2 = 0; 1997 + tb_property_for_each(d2, tmp) 1998 + n2++; 1999 + KUNIT_ASSERT_NE(test, n2, 0); 2000 + 2001 + KUNIT_ASSERT_EQ(test, n1, n2); 2002 + 2003 + p1 = NULL; 2004 + p2 = NULL; 2005 + for (i = 0; i < n1; i++) { 2006 + p1 = tb_property_get_next(d1, p1); 2007 + KUNIT_ASSERT_TRUE(test, p1 != NULL); 2008 + p2 = tb_property_get_next(d2, p2); 2009 + KUNIT_ASSERT_TRUE(test, p2 != NULL); 2010 + 2011 + KUNIT_ASSERT_STREQ(test, &p1->key[0], &p2->key[0]); 2012 + KUNIT_ASSERT_EQ(test, p1->type, p2->type); 2013 + KUNIT_ASSERT_EQ(test, p1->length, p2->length); 2014 + 2015 + switch (p1->type) { 2016 + case TB_PROPERTY_TYPE_DIRECTORY: 2017 + KUNIT_ASSERT_TRUE(test, p1->value.dir != NULL); 2018 + KUNIT_ASSERT_TRUE(test, p2->value.dir != NULL); 2019 + compare_dirs(test, p1->value.dir, p2->value.dir); 2020 + break; 2021 + 2022 + case TB_PROPERTY_TYPE_DATA: 2023 + 
KUNIT_ASSERT_TRUE(test, p1->value.data != NULL); 2024 + KUNIT_ASSERT_TRUE(test, p2->value.data != NULL); 2025 + KUNIT_ASSERT_TRUE(test, 2026 + !memcmp(p1->value.data, p2->value.data, 2027 + p1->length * 4) 2028 + ); 2029 + break; 2030 + 2031 + case TB_PROPERTY_TYPE_TEXT: 2032 + KUNIT_ASSERT_TRUE(test, p1->value.text != NULL); 2033 + KUNIT_ASSERT_TRUE(test, p2->value.text != NULL); 2034 + KUNIT_ASSERT_STREQ(test, p1->value.text, p2->value.text); 2035 + break; 2036 + 2037 + case TB_PROPERTY_TYPE_VALUE: 2038 + KUNIT_ASSERT_EQ(test, p1->value.immediate, 2039 + p2->value.immediate); 2040 + break; 2041 + default: 2042 + KUNIT_FAIL(test, "unexpected property type"); 2043 + break; 2044 + } 2045 + } 2046 + } 2047 + 2048 + static void tb_test_property_copy(struct kunit *test) 2049 + { 2050 + struct tb_property_dir *src, *dst; 2051 + u32 *block; 2052 + int ret, i; 2053 + 2054 + src = tb_property_parse_dir(root_directory, ARRAY_SIZE(root_directory)); 2055 + KUNIT_ASSERT_TRUE(test, src != NULL); 2056 + 2057 + dst = tb_property_copy_dir(src); 2058 + KUNIT_ASSERT_TRUE(test, dst != NULL); 2059 + 2060 + /* Compare the structures */ 2061 + compare_dirs(test, src, dst); 2062 + 2063 + /* Compare the resulting property block */ 2064 + ret = tb_property_format_dir(dst, NULL, 0); 2065 + KUNIT_ASSERT_EQ(test, ret, (int)ARRAY_SIZE(root_directory)); 2066 + 2067 + block = kunit_kzalloc(test, sizeof(root_directory), GFP_KERNEL); 2068 + KUNIT_ASSERT_TRUE(test, block != NULL); 2069 + 2070 + ret = tb_property_format_dir(dst, block, ARRAY_SIZE(root_directory)); 2071 + KUNIT_EXPECT_TRUE(test, !ret); 2072 + 2073 + for (i = 0; i < ARRAY_SIZE(root_directory); i++) 2074 + KUNIT_EXPECT_EQ(test, root_directory[i], block[i]); 2075 + 2076 + tb_property_free_dir(dst); 2077 + tb_property_free_dir(src); 2078 + } 2079 + 1598 2080 static struct kunit_case tb_test_cases[] = { 1599 2081 KUNIT_CASE(tb_test_path_basic), 1600 2082 KUNIT_CASE(tb_test_path_not_connected_walk), ··· 2100 1616 
KUNIT_CASE(tb_test_tunnel_dp_max_length), 2101 1617 KUNIT_CASE(tb_test_tunnel_port_on_path), 2102 1618 KUNIT_CASE(tb_test_tunnel_usb3), 1619 + KUNIT_CASE(tb_test_tunnel_dma), 1620 + KUNIT_CASE(tb_test_tunnel_dma_rx), 1621 + KUNIT_CASE(tb_test_tunnel_dma_tx), 1622 + KUNIT_CASE(tb_test_tunnel_dma_chain), 1623 + KUNIT_CASE(tb_test_tunnel_dma_match), 1624 + KUNIT_CASE(tb_test_property_parse), 1625 + KUNIT_CASE(tb_test_property_format), 1626 + KUNIT_CASE(tb_test_property_copy), 2103 1627 { } 2104 1628 }; 2105 1629
+75 -27
drivers/thunderbolt/tunnel.c
···
 	return min(max_credits, 13U);
 }
 
-static int tb_dma_activate(struct tb_tunnel *tunnel, bool active)
-{
-	struct tb_port *nhi = tunnel->src_port;
-	u32 credits;
-
-	credits = active ? tb_dma_credits(nhi) : 0;
-	return tb_port_set_initial_credits(nhi, credits);
-}
-
-static void tb_dma_init_path(struct tb_path *path, unsigned int isb,
-			     unsigned int efc, u32 credits)
+static void tb_dma_init_path(struct tb_path *path, unsigned int efc, u32 credits)
 {
 	int i;
 
 	path->egress_fc_enable = efc;
 	path->ingress_fc_enable = TB_PATH_ALL;
 	path->egress_shared_buffer = TB_PATH_NONE;
-	path->ingress_shared_buffer = isb;
+	path->ingress_shared_buffer = TB_PATH_NONE;
 	path->priority = 5;
 	path->weight = 1;
 	path->clear_fc = true;
···
  * @tb: Pointer to the domain structure
  * @nhi: Host controller port
  * @dst: Destination null port which the other domain is connected to
- * @transmit_ring: NHI ring number used to send packets towards the
- *		   other domain. Set to %0 if TX path is not needed.
  * @transmit_path: HopID used for transmitting packets
- * @receive_ring: NHI ring number used to receive packets from the
- *		  other domain. Set to %0 if RX path is not needed.
+ * @transmit_ring: NHI ring number used to send packets towards the
+ *		   other domain. Set to %-1 if TX path is not needed.
  * @receive_path: HopID used for receiving packets
+ * @receive_ring: NHI ring number used to receive packets from the
+ *		  other domain. Set to %-1 if RX path is not needed.
  *
  * Return: Returns a tb_tunnel on success or NULL on failure.
  */
 struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
-				      struct tb_port *dst, int transmit_ring,
-				      int transmit_path, int receive_ring,
-				      int receive_path)
+				      struct tb_port *dst, int transmit_path,
+				      int transmit_ring, int receive_path,
+				      int receive_ring)
 {
 	struct tb_tunnel *tunnel;
 	size_t npaths = 0, i = 0;
 	struct tb_path *path;
 	u32 credits;
 
-	if (receive_ring)
+	if (receive_ring > 0)
 		npaths++;
-	if (transmit_ring)
+	if (transmit_ring > 0)
 		npaths++;
 
 	if (WARN_ON(!npaths))
···
 	if (!tunnel)
 		return NULL;
 
-	tunnel->activate = tb_dma_activate;
 	tunnel->src_port = nhi;
 	tunnel->dst_port = dst;
 
 	credits = tb_dma_credits(nhi);
 
-	if (receive_ring) {
+	if (receive_ring > 0) {
 		path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0,
 				     "DMA RX");
 		if (!path) {
 			tb_tunnel_free(tunnel);
 			return NULL;
 		}
-		tb_dma_init_path(path, TB_PATH_NONE, TB_PATH_SOURCE | TB_PATH_INTERNAL,
-				 credits);
+		tb_dma_init_path(path, TB_PATH_SOURCE | TB_PATH_INTERNAL, credits);
 		tunnel->paths[i++] = path;
 	}
 
-	if (transmit_ring) {
+	if (transmit_ring > 0) {
 		path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0,
 				     "DMA TX");
 		if (!path) {
 			tb_tunnel_free(tunnel);
 			return NULL;
 		}
-		tb_dma_init_path(path, TB_PATH_SOURCE, TB_PATH_ALL, credits);
+		tb_dma_init_path(path, TB_PATH_ALL, credits);
 		tunnel->paths[i++] = path;
 	}
 
 	return tunnel;
+}
+
+/**
+ * tb_tunnel_match_dma() - Match DMA tunnel
+ * @tunnel: Tunnel to match
+ * @transmit_path: HopID used for transmitting packets. Pass %-1 to ignore.
+ * @transmit_ring: NHI ring number used to send packets towards the
+ *		   other domain. Pass %-1 to ignore.
+ * @receive_path: HopID used for receiving packets. Pass %-1 to ignore.
+ * @receive_ring: NHI ring number used to receive packets from the
+ *		  other domain. Pass %-1 to ignore.
+ *
+ * This function can be used to match specific DMA tunnel, if there are
+ * multiple DMA tunnels going through the same XDomain connection.
+ * Returns true if there is match and false otherwise.
+ */
+bool tb_tunnel_match_dma(const struct tb_tunnel *tunnel, int transmit_path,
+			 int transmit_ring, int receive_path, int receive_ring)
+{
+	const struct tb_path *tx_path = NULL, *rx_path = NULL;
+	int i;
+
+	if (!receive_ring || !transmit_ring)
+		return false;
+
+	for (i = 0; i < tunnel->npaths; i++) {
+		const struct tb_path *path = tunnel->paths[i];
+
+		if (!path)
+			continue;
+
+		if (tb_port_is_nhi(path->hops[0].in_port))
+			tx_path = path;
+		else if (tb_port_is_nhi(path->hops[path->path_length - 1].out_port))
+			rx_path = path;
+	}
+
+	if (transmit_ring > 0 || transmit_path > 0) {
+		if (!tx_path)
+			return false;
+		if (transmit_ring > 0 &&
+		    (tx_path->hops[0].in_hop_index != transmit_ring))
+			return false;
+		if (transmit_path > 0 &&
+		    (tx_path->hops[tx_path->path_length - 1].next_hop_index != transmit_path))
+			return false;
+	}
+
+	if (receive_ring > 0 || receive_path > 0) {
+		if (!rx_path)
+			return false;
+		if (receive_path > 0 &&
+		    (rx_path->hops[0].in_hop_index != receive_path))
+			return false;
+		if (receive_ring > 0 &&
+		    (rx_path->hops[rx_path->path_length - 1].next_hop_index != receive_ring))
+			return false;
+	}
+
+	return true;
 }
 
 static int tb_usb3_max_link_rate(struct tb_port *up, struct tb_port *down)
+5 -3
drivers/thunderbolt/tunnel.h
···
 					     struct tb_port *out, int max_up,
 					     int max_down);
 struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
-				      struct tb_port *dst, int transmit_ring,
-				      int transmit_path, int receive_ring,
-				      int receive_path);
+				      struct tb_port *dst, int transmit_path,
+				      int transmit_ring, int receive_path,
+				      int receive_ring);
+bool tb_tunnel_match_dma(const struct tb_tunnel *tunnel, int transmit_path,
+			 int transmit_ring, int receive_path, int receive_ring);
 struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down);
 struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
 				       struct tb_port *down, int max_up,
+251 -175
drivers/thunderbolt/xdomain.c
···
 #include <linux/kmod.h>
 #include <linux/module.h>
 #include <linux/pm_runtime.h>
+#include <linux/prandom.h>
 #include <linux/utsname.h>
 #include <linux/uuid.h>
 #include <linux/workqueue.h>
 
 #include "tb.h"
 
-#define XDOMAIN_DEFAULT_TIMEOUT			5000 /* ms */
+#define XDOMAIN_DEFAULT_TIMEOUT			1000 /* ms */
 #define XDOMAIN_UUID_RETRIES			10
-#define XDOMAIN_PROPERTIES_RETRIES		60
+#define XDOMAIN_PROPERTIES_RETRIES		10
 #define XDOMAIN_PROPERTIES_CHANGED_RETRIES	10
 #define XDOMAIN_BONDING_WAIT			100 /* ms */
+#define XDOMAIN_DEFAULT_MAX_HOPID		15
 
 struct xdomain_request_work {
 	struct work_struct work;
···
 module_param_named(xdomain, tb_xdomain_enabled, bool, 0444);
 MODULE_PARM_DESC(xdomain, "allow XDomain protocol (default: true)");
 
-/* Serializes access to the properties and protocol handlers below */
+/*
+ * Serializes access to the properties and protocol handlers below. If
+ * you need to take both this lock and the struct tb_xdomain lock, take
+ * this one first.
+ */
 static DEFINE_MUTEX(xdomain_lock);
 
 /* Properties exposed to the remote domains */
 static struct tb_property_dir *xdomain_property_dir;
-static u32 *xdomain_property_block;
-static u32 xdomain_property_block_len;
 static u32 xdomain_property_block_gen;
 
 /* Additional protocol handlers */
···
 }
 
 static int tb_xdp_properties_response(struct tb *tb, struct tb_ctl *ctl,
-	u64 route, u8 sequence, const uuid_t *src_uuid,
-	const struct tb_xdp_properties *req)
+	struct tb_xdomain *xd, u8 sequence, const struct tb_xdp_properties *req)
 {
 	struct tb_xdp_properties_response *res;
 	size_t total_size;
···
 	 * protocol supports forwarding, though which we might add
 	 * support later on.
 	 */
-	if (!uuid_equal(src_uuid, &req->dst_uuid)) {
-		tb_xdp_error_response(ctl, route, sequence,
+	if (!uuid_equal(xd->local_uuid, &req->dst_uuid)) {
+		tb_xdp_error_response(ctl, xd->route, sequence,
 				      ERROR_UNKNOWN_DOMAIN);
 		return 0;
 	}
 
-	mutex_lock(&xdomain_lock);
+	mutex_lock(&xd->lock);
 
-	if (req->offset >= xdomain_property_block_len) {
-		mutex_unlock(&xdomain_lock);
+	if (req->offset >= xd->local_property_block_len) {
+		mutex_unlock(&xd->lock);
 		return -EINVAL;
 	}
 
-	len = xdomain_property_block_len - req->offset;
+	len = xd->local_property_block_len - req->offset;
 	len = min_t(u16, len, TB_XDP_PROPERTIES_MAX_DATA_LENGTH);
 	total_size = sizeof(*res) + len * 4;
 
 	res = kzalloc(total_size, GFP_KERNEL);
 	if (!res) {
-		mutex_unlock(&xdomain_lock);
+		mutex_unlock(&xd->lock);
 		return -ENOMEM;
 	}
 
-	tb_xdp_fill_header(&res->hdr, route, sequence, PROPERTIES_RESPONSE,
+	tb_xdp_fill_header(&res->hdr, xd->route, sequence, PROPERTIES_RESPONSE,
 			   total_size);
-	res->generation = xdomain_property_block_gen;
-	res->data_length = xdomain_property_block_len;
+	res->generation = xd->local_property_block_gen;
+	res->data_length = xd->local_property_block_len;
 	res->offset = req->offset;
-	uuid_copy(&res->src_uuid, src_uuid);
+	uuid_copy(&res->src_uuid, xd->local_uuid);
 	uuid_copy(&res->dst_uuid, &req->src_uuid);
-	memcpy(res->data, &xdomain_property_block[req->offset], len * 4);
+	memcpy(res->data, &xd->local_property_block[req->offset], len * 4);
 
-	mutex_unlock(&xdomain_lock);
+	mutex_unlock(&xd->lock);
 
 	ret = __tb_xdomain_response(ctl, res, total_size,
 				    TB_CFG_PKG_XDOMAIN_RESP);
···
 }
 EXPORT_SYMBOL_GPL(tb_unregister_protocol_handler);
 
-static int rebuild_property_block(void)
+static void update_property_block(struct tb_xdomain *xd)
 {
-	u32 *block, len;
-	int ret;
-
-	ret = tb_property_format_dir(xdomain_property_dir, NULL, 0);
-	if (ret < 0)
-		return ret;
-
-	len = ret;
-
-	block = kcalloc(len, sizeof(u32), GFP_KERNEL);
-	if (!block)
-		return -ENOMEM;
-
-	ret = tb_property_format_dir(xdomain_property_dir, block, len);
-	if (ret) {
-		kfree(block);
-		return ret;
-	}
-
-	kfree(xdomain_property_block);
-	xdomain_property_block = block;
-	xdomain_property_block_len = len;
-	xdomain_property_block_gen++;
-
-	return 0;
-}
-
-static void finalize_property_block(void)
-{
-	const struct tb_property *nodename;
-
-	/*
-	 * On first XDomain connection we set up the the system
-	 * nodename. This delayed here because userspace may not have it
-	 * set when the driver is first probed.
-	 */
 	mutex_lock(&xdomain_lock);
-	nodename = tb_property_find(xdomain_property_dir, "deviceid",
-				    TB_PROPERTY_TYPE_TEXT);
-	if (!nodename) {
-		tb_property_add_text(xdomain_property_dir, "deviceid",
-				     utsname()->nodename);
-		rebuild_property_block();
+	mutex_lock(&xd->lock);
+	/*
+	 * If the local property block is not up-to-date, rebuild it now
+	 * based on the global property template.
+	 */
+	if (!xd->local_property_block ||
+	    xd->local_property_block_gen < xdomain_property_block_gen) {
+		struct tb_property_dir *dir;
+		int ret, block_len;
+		u32 *block;
+
+		dir = tb_property_copy_dir(xdomain_property_dir);
+		if (!dir) {
+			dev_warn(&xd->dev, "failed to copy properties\n");
+			goto out_unlock;
+		}
+
+		/* Fill in non-static properties now */
+		tb_property_add_text(dir, "deviceid", utsname()->nodename);
+		tb_property_add_immediate(dir, "maxhopid", xd->local_max_hopid);
+
+		ret = tb_property_format_dir(dir, NULL, 0);
+		if (ret < 0) {
+			dev_warn(&xd->dev, "local property block creation failed\n");
+			tb_property_free_dir(dir);
+			goto out_unlock;
+		}
+
+		block_len = ret;
+		block = kcalloc(block_len, sizeof(*block), GFP_KERNEL);
+		if (!block) {
+			tb_property_free_dir(dir);
+			goto out_unlock;
+		}
+
+		ret = tb_property_format_dir(dir, block, block_len);
+		if (ret) {
+			dev_warn(&xd->dev, "property block generation failed\n");
+			tb_property_free_dir(dir);
+			kfree(block);
+			goto out_unlock;
+		}
+
+		tb_property_free_dir(dir);
+		/* Release the previous block */
+		kfree(xd->local_property_block);
+		/* Assign new one */
+		xd->local_property_block = block;
+		xd->local_property_block_len = block_len;
+		xd->local_property_block_gen = xdomain_property_block_gen;
 	}
+
+out_unlock:
+	mutex_unlock(&xd->lock);
 	mutex_unlock(&xdomain_lock);
 }
···
 	const struct tb_xdomain_header *xhdr = &pkg->xd_hdr;
 	struct tb *tb = xw->tb;
 	struct tb_ctl *ctl = tb->ctl;
+	struct tb_xdomain *xd;
 	const uuid_t *uuid;
 	int ret = 0;
 	u32 sequence;
···
 		goto out;
 	}
 
-	finalize_property_block();
+	tb_dbg(tb, "%llx: received XDomain request %#x\n", route, pkg->type);
+
+	xd = tb_xdomain_find_by_route_locked(tb, route);
+	if (xd)
+		update_property_block(xd);
 
 	switch (pkg->type) {
 	case PROPERTIES_REQUEST:
-		ret = tb_xdp_properties_response(tb, ctl, route, sequence, uuid,
-			(const struct tb_xdp_properties *)pkg);
+		if (xd) {
+			ret = tb_xdp_properties_response(tb, ctl, xd, sequence,
+				(const struct tb_xdp_properties *)pkg);
+		}
 		break;
 
-	case PROPERTIES_CHANGED_REQUEST: {
-		struct tb_xdomain *xd;
-
+	case PROPERTIES_CHANGED_REQUEST:
 		ret = tb_xdp_properties_changed_response(ctl, route, sequence);
 
 		/*
···
 		 * the xdomain related to this connection as well in
 		 * case there is a change in services it offers.
 		 */
-		xd = tb_xdomain_find_by_route_locked(tb, route);
-		if (xd) {
-			if (device_is_registered(&xd->dev)) {
-				queue_delayed_work(tb->wq, &xd->get_properties_work,
-						   msecs_to_jiffies(50));
-			}
-			tb_xdomain_put(xd);
+		if (xd && device_is_registered(&xd->dev)) {
+			queue_delayed_work(tb->wq, &xd->get_properties_work,
+					   msecs_to_jiffies(50));
 		}
-
 		break;
-	}
 
 	case UUID_REQUEST_OLD:
 	case UUID_REQUEST:
···
 				      ERROR_NOT_SUPPORTED);
 		break;
 	}
+
+	tb_xdomain_put(xd);
 
 	if (ret) {
 		tb_warn(tb, "failed to send XDomain response for %#x\n",
···
 	if (!svc)
 		return 0;
 
-	if (!tb_property_find(xd->properties, svc->key,
+	if (!tb_property_find(xd->remote_properties, svc->key,
 			      TB_PROPERTY_TYPE_DIRECTORY))
 		device_unregister(dev);
 
···
 	device_for_each_child_reverse(&xd->dev, xd, remove_missing_service);
 
 	/* Then re-enumerate properties creating new services as we go */
-	tb_property_for_each(xd->properties, p) {
+	tb_property_for_each(xd->remote_properties, p) {
 		if (p->type != TB_PROPERTY_TYPE_DIRECTORY)
 			continue;
 
···
 		return -EINVAL;
 	xd->vendor = p->value.immediate;
 
+	p = tb_property_find(dir, "maxhopid", TB_PROPERTY_TYPE_VALUE);
+	/*
+	 * USB4 inter-domain spec suggests using 15 as HopID if the
+	 * other end does not announce it in a property. This is for
+	 * TBT3 compatibility.
+	 */
+	xd->remote_max_hopid = p ? p->value.immediate : XDOMAIN_DEFAULT_MAX_HOPID;
+
 	kfree(xd->device_name);
 	xd->device_name = NULL;
 	kfree(xd->vendor_name);
···
 		xd->vendor_name = kstrdup(p->value.text, GFP_KERNEL);
 
 	return 0;
-}
-
-/* Called with @xd->lock held */
-static void tb_xdomain_restore_paths(struct tb_xdomain *xd)
-{
-	if (!xd->resume)
-		return;
-
-	xd->resume = false;
-	if (xd->transmit_path) {
-		dev_dbg(&xd->dev, "re-establishing DMA path\n");
-		tb_domain_approve_xdomain_paths(xd->tb, xd);
-	}
 }
 
 static inline struct tb_switch *tb_xdomain_parent(struct tb_xdomain *xd)
···
 	uuid_t uuid;
 	int ret;
 
+	dev_dbg(&xd->dev, "requesting remote UUID\n");
+
 	ret = tb_xdp_uuid_request(tb->ctl, xd->route, xd->uuid_retries, &uuid);
 	if (ret < 0) {
 		if (xd->uuid_retries-- > 0) {
+			dev_dbg(&xd->dev, "failed to request UUID, retrying\n");
 			queue_delayed_work(xd->tb->wq, &xd->get_uuid_work,
 					   msecs_to_jiffies(100));
 		} else {
···
 		}
 		return;
 	}
+
+	dev_dbg(&xd->dev, "got remote UUID %pUb\n", &uuid);
 
 	if (uuid_equal(&uuid, xd->local_uuid))
 		dev_dbg(&xd->dev, "intra-domain loop detected\n");
···
 	u32 gen = 0;
 	int ret;
 
+	dev_dbg(&xd->dev, "requesting remote properties\n");
+
 	ret = tb_xdp_properties_request(tb->ctl, xd->route, xd->local_uuid,
 					xd->remote_uuid, xd->properties_retries,
 					&block, &gen);
 	if (ret < 0) {
 		if (xd->properties_retries-- > 0) {
+			dev_dbg(&xd->dev,
+				"failed to request remote properties, retrying\n");
 			queue_delayed_work(xd->tb->wq, &xd->get_properties_work,
 					   msecs_to_jiffies(1000));
 		} else {
···
 	mutex_lock(&xd->lock);
 
 	/* Only accept newer generation properties */
-	if (xd->properties && gen <= xd->property_block_gen) {
-		/*
-		 * On resume it is likely that the properties block is
-		 * not changed (unless the other end added or removed
-		 * services). However, we need to make sure the existing
-		 * DMA paths are restored properly.
-		 */
-		tb_xdomain_restore_paths(xd);
+	if (xd->remote_properties && gen <= xd->remote_property_block_gen)
 		goto err_free_block;
-	}
 
 	dir = tb_property_parse_dir(block, ret);
 	if (!dir) {
···
 	}
 
 	/* Release the existing one */
-	if (xd->properties) {
-		tb_property_free_dir(xd->properties);
+	if (xd->remote_properties) {
+		tb_property_free_dir(xd->remote_properties);
 		update = true;
 	}
 
-	xd->properties = dir;
-	xd->property_block_gen = gen;
+	xd->remote_properties = dir;
+	xd->remote_property_block_gen = gen;
 
 	tb_xdomain_update_link_attributes(xd);
-
-	tb_xdomain_restore_paths(xd);
 
 	mutex_unlock(&xd->lock);
···
 			dev_err(&xd->dev, "failed to add XDomain device\n");
 			return;
 		}
+		dev_info(&xd->dev, "new host found, vendor=%#x device=%#x\n",
+			 xd->vendor, xd->device);
+		if (xd->vendor_name && xd->device_name)
+			dev_info(&xd->dev, "%s %s\n", xd->vendor_name,
+				 xd->device_name);
 	} else {
 		kobject_uevent(&xd->dev.kobj, KOBJ_CHANGE);
 	}
···
 					     properties_changed_work.work);
 	int ret;
 
+	dev_dbg(&xd->dev, "sending properties changed notification\n");
+
 	ret = tb_xdp_properties_changed_request(xd->tb->ctl, xd->route,
 				xd->properties_changed_retries, xd->local_uuid);
 	if (ret) {
-		if (xd->properties_changed_retries-- > 0)
+		if (xd->properties_changed_retries-- > 0) {
+			dev_dbg(&xd->dev,
+				"failed to send properties changed notification, retrying\n");
 			queue_delayed_work(xd->tb->wq,
 					   &xd->properties_changed_work,
 					   msecs_to_jiffies(1000));
+		}
+		dev_err(&xd->dev, "failed to send properties changed notification\n");
 		return;
 	}
···
 	return ret;
 }
 static DEVICE_ATTR_RO(device_name);
+
+static ssize_t maxhopid_show(struct device *dev, struct device_attribute *attr,
+			     char *buf)
+{
+	struct tb_xdomain *xd = container_of(dev, struct tb_xdomain, dev);
+
+	return sprintf(buf, "%d\n", xd->remote_max_hopid);
+}
+static DEVICE_ATTR_RO(maxhopid);
 
 static ssize_t vendor_show(struct device *dev, struct device_attribute *attr,
 			   char *buf)
···
 static struct attribute *xdomain_attrs[] = {
 	&dev_attr_device.attr,
 	&dev_attr_device_name.attr,
+	&dev_attr_maxhopid.attr,
 	&dev_attr_rx_lanes.attr,
 	&dev_attr_rx_speed.attr,
 	&dev_attr_tx_lanes.attr,
···
 
 	put_device(xd->dev.parent);
 
-	tb_property_free_dir(xd->properties);
+	kfree(xd->local_property_block);
+	tb_property_free_dir(xd->remote_properties);
+	ida_destroy(&xd->out_hopids);
+	ida_destroy(&xd->in_hopids);
 	ida_destroy(&xd->service_ids);
 
 	kfree(xd->local_uuid);
···
 
 static int __maybe_unused tb_xdomain_resume(struct device *dev)
 {
-	struct tb_xdomain *xd = tb_to_xdomain(dev);
-
-	/*
-	 * Ask tb_xdomain_get_properties() restore any existing DMA
-	 * paths after properties are re-read.
-	 */
-	xd->resume = true;
-	start_handshake(xd);
-
+	start_handshake(tb_to_xdomain(dev));
 	return 0;
 }
···
 
 	xd->tb = tb;
 	xd->route = route;
+	xd->local_max_hopid = down->config.max_in_hop_id;
 	ida_init(&xd->service_ids);
+	ida_init(&xd->in_hopids);
+	ida_init(&xd->out_hopids);
 	mutex_init(&xd->lock);
 	INIT_DELAYED_WORK(&xd->get_uuid_work, tb_xdomain_get_uuid);
 	INIT_DELAYED_WORK(&xd->get_properties_work, tb_xdomain_get_properties);
···
 	xd->dev.type = &tb_xdomain_type;
 	xd->dev.groups = xdomain_attr_groups;
 	dev_set_name(&xd->dev, "%u-%llx", tb->index, route);
+
+	dev_dbg(&xd->dev, "local UUID %pUb\n", local_uuid);
+	if (remote_uuid)
+		dev_dbg(&xd->dev, "remote UUID %pUb\n", remote_uuid);
 
 	/*
 	 * This keeps the DMA powered on as long as we have active
···
 	pm_runtime_put_noidle(&xd->dev);
 	pm_runtime_set_suspended(&xd->dev);
 
-	if (!device_is_registered(&xd->dev))
+	if (!device_is_registered(&xd->dev)) {
 		put_device(&xd->dev);
-	else
+	} else {
+		dev_info(&xd->dev, "host disconnected\n");
 		device_unregister(&xd->dev);
+	}
 }
 
 /**
···
 EXPORT_SYMBOL_GPL(tb_xdomain_lane_bonding_disable);
 
 /**
+ * tb_xdomain_alloc_in_hopid() - Allocate input HopID for tunneling
+ * @xd: XDomain connection
+ * @hopid: Preferred HopID or %-1 for next available
+ *
+ * Returns allocated HopID or negative errno. Specifically returns
+ * %-ENOSPC if there are no more available HopIDs. Returned HopID is
+ * guaranteed to be within range supported by the input lane adapter.
+ * Call tb_xdomain_release_in_hopid() to release the allocated HopID.
+ */
+int tb_xdomain_alloc_in_hopid(struct tb_xdomain *xd, int hopid)
+{
+	if (hopid < 0)
+		hopid = TB_PATH_MIN_HOPID;
+	if (hopid < TB_PATH_MIN_HOPID || hopid > xd->local_max_hopid)
+		return -EINVAL;
+
+	return ida_alloc_range(&xd->in_hopids, hopid, xd->local_max_hopid,
+			       GFP_KERNEL);
+}
+EXPORT_SYMBOL_GPL(tb_xdomain_alloc_in_hopid);
+
+/**
+ * tb_xdomain_alloc_out_hopid() - Allocate output HopID for tunneling
+ * @xd: XDomain connection
+ * @hopid: Preferred HopID or %-1 for next available
+ *
+ * Returns allocated HopID or negative errno. Specifically returns
+ * %-ENOSPC if there are no more available HopIDs. Returned HopID is
+ * guaranteed to be within range supported by the output lane adapter.
+ * Call tb_xdomain_release_in_hopid() to release the allocated HopID.
+ */
+int tb_xdomain_alloc_out_hopid(struct tb_xdomain *xd, int hopid)
+{
+	if (hopid < 0)
+		hopid = TB_PATH_MIN_HOPID;
+	if (hopid < TB_PATH_MIN_HOPID || hopid > xd->remote_max_hopid)
+		return -EINVAL;
+
+	return ida_alloc_range(&xd->out_hopids, hopid, xd->remote_max_hopid,
+			       GFP_KERNEL);
+}
+EXPORT_SYMBOL_GPL(tb_xdomain_alloc_out_hopid);
+
+/**
+ * tb_xdomain_release_in_hopid() - Release input HopID
+ * @xd: XDomain connection
+ * @hopid: HopID to release
+ */
+void tb_xdomain_release_in_hopid(struct tb_xdomain *xd, int hopid)
+{
+	ida_free(&xd->in_hopids, hopid);
+}
+EXPORT_SYMBOL_GPL(tb_xdomain_release_in_hopid);
+
+/**
+ * tb_xdomain_release_out_hopid() - Release output HopID
+ * @xd: XDomain connection
+ * @hopid: HopID to release
+ */
+void tb_xdomain_release_out_hopid(struct tb_xdomain *xd, int hopid)
+{
+	ida_free(&xd->out_hopids, hopid);
+}
+EXPORT_SYMBOL_GPL(tb_xdomain_release_out_hopid);
+
+/**
  * tb_xdomain_enable_paths() - Enable DMA paths for XDomain connection
  * @xd: XDomain connection
- * @transmit_path: HopID of the transmit path the other end is using to
- *		   send packets
- * @transmit_ring: DMA ring used to receive packets from the other end
- * @receive_path: HopID of the receive path the other end is using to
- *		  receive packets
- * @receive_ring: DMA ring used to send packets to the other end
+ * @transmit_path: HopID we are using to send out packets
+ * @transmit_ring: DMA ring used to send out packets
+ * @receive_path: HopID the other end is using to send packets to us
+ * @receive_ring: DMA ring used to receive packets from @receive_path
  *
  * The function enables DMA paths accordingly so that after successful
  * return the caller can send and receive packets using high-speed DMA
- * path.
+ * path. If a transmit or receive path is not needed, pass %-1 for those
+ * parameters.
  *
  * Return: %0 in case of success and negative errno in case of error
  */
-int tb_xdomain_enable_paths(struct tb_xdomain *xd, u16 transmit_path,
-			    u16 transmit_ring, u16 receive_path,
-			    u16 receive_ring)
+int tb_xdomain_enable_paths(struct tb_xdomain *xd, int transmit_path,
+			    int transmit_ring, int receive_path,
+			    int receive_ring)
 {
-	int ret;
-
-	mutex_lock(&xd->lock);
-
-	if (xd->transmit_path) {
-		ret = xd->transmit_path == transmit_path ? 0 : -EBUSY;
-		goto exit_unlock;
-	}
-
-	xd->transmit_path = transmit_path;
-	xd->transmit_ring = transmit_ring;
-	xd->receive_path = receive_path;
-	xd->receive_ring = receive_ring;
-
-	ret = tb_domain_approve_xdomain_paths(xd->tb, xd);
-
-exit_unlock:
-	mutex_unlock(&xd->lock);
-
-	return ret;
+	return tb_domain_approve_xdomain_paths(xd->tb, xd, transmit_path,
+					       transmit_ring, receive_path,
+					       receive_ring);
 }
 EXPORT_SYMBOL_GPL(tb_xdomain_enable_paths);
 
 /**
  * tb_xdomain_disable_paths() - Disable DMA paths for XDomain connection
  * @xd: XDomain connection
+ * @transmit_path: HopID we are using to send out packets
+ * @transmit_ring: DMA ring used to send out packets
+ * @receive_path: HopID the other end is using to send packets to us
+ * @receive_ring: DMA ring used to receive packets from @receive_path
  *
  * This does the opposite of tb_xdomain_enable_paths(). After call to
- * this the caller is not expected to use the rings anymore.
+ * this the caller is not expected to use the rings anymore. Passing %-1
+ * as path/ring parameter means don't care. Normally the callers should
+ * pass the same values here as they do when paths are enabled.
 *
 * Return: %0 in case of success and negative errno in case of error
 */
-int tb_xdomain_disable_paths(struct tb_xdomain *xd)
+int tb_xdomain_disable_paths(struct tb_xdomain *xd, int transmit_path,
+			     int transmit_ring, int receive_path,
+			     int receive_ring)
 {
-	int ret = 0;
-
-	mutex_lock(&xd->lock);
-	if (xd->transmit_path) {
-		xd->transmit_path = 0;
-		xd->transmit_ring = 0;
-		xd->receive_path = 0;
-		xd->receive_ring = 0;
-
-		ret = tb_domain_disconnect_xdomain_paths(xd->tb, xd);
-	}
-	mutex_unlock(&xd->lock);
-
-	return ret;
+	return tb_domain_disconnect_xdomain_paths(xd->tb, xd, transmit_path,
+						  transmit_ring, receive_path,
+						  receive_ring);
 }
 EXPORT_SYMBOL_GPL(tb_xdomain_disable_paths);
···
 	if (ret)
 		goto err_unlock;
 
-	ret = rebuild_property_block();
-	if (ret) {
-		remove_directory(key, dir);
-		goto err_unlock;
-	}
+	xdomain_property_block_gen++;
 
 	mutex_unlock(&xdomain_lock);
 	update_all_xdomains();
···
 
 	mutex_lock(&xdomain_lock);
 	if (remove_directory(key, dir))
-		ret = rebuild_property_block();
+		xdomain_property_block_gen++;
 	mutex_unlock(&xdomain_lock);
 
 	if (!ret)
···
 	 * directories. Those will be added by service drivers
 	 * themselves when they are loaded.
 	 *
-	 * We also add node name later when first connection is made.
+	 * Rest of the properties are filled dynamically based on these
+	 * when the P2P connection is made.
 	 */
 	tb_property_add_immediate(xdomain_property_dir, "vendorid",
 				  PCI_VENDOR_ID_INTEL);
···
 	tb_property_add_immediate(xdomain_property_dir, "deviceid", 0x1);
 	tb_property_add_immediate(xdomain_property_dir, "devicerv", 0x80000100);
 
+	xdomain_property_block_gen = prandom_u32();
 	return 0;
 }
 
 void tb_xdomain_exit(void)
 {
-	kfree(xdomain_property_block);
 	tb_property_free_dir(xdomain_property_dir);
 }
+35 -19
include/linux/thunderbolt.h
···
 			       size_t block_len);
 ssize_t tb_property_format_dir(const struct tb_property_dir *dir, u32 *block,
 			       size_t block_len);
+struct tb_property_dir *tb_property_copy_dir(const struct tb_property_dir *dir);
 struct tb_property_dir *tb_property_create_dir(const uuid_t *uuid);
 void tb_property_free_dir(struct tb_property_dir *dir);
 int tb_property_add_immediate(struct tb_property_dir *parent, const char *key,
···
  * @route: Route string the other domain can be reached
  * @vendor: Vendor ID of the remote domain
  * @device: Device ID of the demote domain
+ * @local_max_hopid: Maximum input HopID of this host
+ * @remote_max_hopid: Maximum input HopID of the remote host
  * @lock: Lock to serialize access to the following fields of this structure
  * @vendor_name: Name of the vendor (or %NULL if not known)
  * @device_name: Name of the device (or %NULL if not known)
  * @link_speed: Speed of the link in Gb/s
  * @link_width: Width of the link (1 or 2)
  * @is_unplugged: The XDomain is unplugged
- * @resume: The XDomain is being resumed
  * @needs_uuid: If the XDomain does not have @remote_uuid it will be
  *		queried first
- * @transmit_path: HopID which the remote end expects us to transmit
- * @transmit_ring: Local ring (hop) where outgoing packets are pushed
- * @receive_path: HopID which we expect the remote end to transmit
- * @receive_ring: Local ring (hop) where incoming packets arrive
  * @service_ids: Used to generate IDs for the services
- * @properties: Properties exported by the remote domain
- * @property_block_gen: Generation of @properties
- * @properties_lock: Lock protecting @properties.
+ * @in_hopids: Input HopIDs for DMA tunneling
+ * @out_hopids; Output HopIDs for DMA tunneling
+ * @local_property_block: Local block of properties
+ * @local_property_block_gen: Generation of @local_property_block
+ * @local_property_block_len: Length of the @local_property_block in dwords
+ * @remote_properties: Properties exported by the remote domain
+ * @remote_property_block_gen: Generation of @remote_properties
  * @get_uuid_work: Work used to retrieve @remote_uuid
  * @uuid_retries: Number of times left @remote_uuid is requested before
  *		  giving up
···
 	u64 route;
 	u16 vendor;
 	u16 device;
+	unsigned int local_max_hopid;
+	unsigned int remote_max_hopid;
 	struct mutex lock;
 	const char *vendor_name;
 	const char *device_name;
 	unsigned int link_speed;
 	unsigned int link_width;
 	bool is_unplugged;
-	bool resume;
 	bool needs_uuid;
-	u16 transmit_path;
-	u16 transmit_ring;
-	u16 receive_path;
-	u16 receive_ring;
 	struct ida service_ids;
-	struct tb_property_dir *properties;
-	u32 property_block_gen;
+	struct ida in_hopids;
+	struct ida out_hopids;
+	u32 *local_property_block;
+	u32 local_property_block_gen;
+	u32 local_property_block_len;
+	struct tb_property_dir *remote_properties;
+	u32 remote_property_block_gen;
 	struct delayed_work get_uuid_work;
 	int uuid_retries;
 	struct delayed_work get_properties_work;
···
 
 int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd);
 void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd);
-int tb_xdomain_enable_paths(struct tb_xdomain *xd, u16 transmit_path,
-			    u16 transmit_ring, u16 receive_path,
-			    u16 receive_ring);
-int tb_xdomain_disable_paths(struct tb_xdomain *xd);
+int tb_xdomain_alloc_in_hopid(struct tb_xdomain *xd, int hopid);
+void tb_xdomain_release_in_hopid(struct tb_xdomain *xd, int hopid);
+int tb_xdomain_alloc_out_hopid(struct tb_xdomain *xd, int hopid);
+void tb_xdomain_release_out_hopid(struct tb_xdomain *xd, int hopid);
+int tb_xdomain_enable_paths(struct tb_xdomain *xd, int transmit_path,
+			    int transmit_ring, int receive_path,
+			    int receive_ring);
+int tb_xdomain_disable_paths(struct tb_xdomain *xd, int transmit_path,
+			     int transmit_ring, int receive_path,
+			     int receive_ring);
+
+static inline int tb_xdomain_disable_all_paths(struct tb_xdomain *xd)
+{
+	return tb_xdomain_disable_paths(xd, -1, -1, -1, -1);
+}
+
 struct tb_xdomain *tb_xdomain_find_by_uuid(struct tb *tb, const uuid_t *uuid);
 struct tb_xdomain *tb_xdomain_find_by_route(struct tb *tb, u64 route);