
Merge tag 'thunderbolt-for-v4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt into char-misc-next

Mika writes:

thunderbolt: Changes for v4.17 merge window

New features:

- Intel Titan Ridge Thunderbolt 3 controller support
- Preboot ACL support, allowing a more secure way to boot from
  Thunderbolt devices
- New "USB only" security level

In addition there are a couple of fixes: the timeouts for authenticating
the ICM firmware and for reading the root switch config space are
increased, and a crash is prevented on certain Lenovo systems where the
ICM firmware for some reason does not always start up properly.

Diffstat (total): +1186 -115
Documentation/ABI/testing/sysfs-bus-thunderbolt | +33

···
+What:		/sys/bus/thunderbolt/devices/.../domainX/boot_acl
+Date:		Jun 2018
+KernelVersion:	4.17
+Contact:	thunderbolt-software@lists.01.org
+Description:	Holds a comma separated list of device unique_ids that
+		are allowed to be connected automatically during system
+		startup (e.g boot devices). The list always contains
+		maximum supported number of unique_ids where unused
+		entries are empty. This allows the userspace software
+		to determine how many entries the controller supports.
+		If there are multiple controllers, each controller has
+		its own ACL list and size may be different between the
+		controllers.
+
+		System BIOS may have an option "Preboot ACL" or similar
+		that needs to be selected before this list is taken into
+		consideration.
+
+		Software always updates a full list in each write.
+
+		If a device is authorized automatically during boot its
+		boot attribute is set to 1.
+
 What:		/sys/bus/thunderbolt/devices/.../domainX/security
 Date:		Sep 2017
 KernelVersion:	4.13
···
 		minimum. User needs to authorize each device.
 		dponly: Automatically tunnel Display port (and USB). No
 		PCIe tunnels are created.
+		usbonly: Automatically tunnel USB controller of the
+		connected Thunderbolt dock (and Display Port). All
+		PCIe links downstream of the dock are removed.
 
 What:		/sys/bus/thunderbolt/devices/.../authorized
 Date:		Sep 2017
···
 		authorized. In case of failure errno will be ENOKEY if
 		the device did not contain a key at all, and
 		EKEYREJECTED if the challenge response did not match.
+
+What:		/sys/bus/thunderbolt/devices/.../boot
+Date:		Jun 2018
+KernelVersion:	4.17
+Contact:	thunderbolt-software@lists.01.org
+Description:	This attribute contains 1 if Thunderbolt device was already
+		authorized on boot and 0 otherwise.
 
 What:		/sys/bus/thunderbolt/devices/.../key
 Date:		Sep 2017
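Since the ABI requires userspace to write the full list on every update (one field per supported slot, unused slots left empty), a small helper can build that string. This is a hypothetical userspace sketch, not part of the kernel patch; `format_boot_acl` and its parameters are illustrative names.

```c
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical userspace helper: build the string to write to
 * .../domainX/boot_acl. The ABI requires the full list in every write,
 * one field per supported slot, with unused slots left empty.
 */
static int format_boot_acl(char *buf, size_t size, const char **uuids,
                           size_t nuuids, size_t nslots)
{
        size_t i, len = 0;

        if (nuuids > nslots)
                return -1;

        for (i = 0; i < nslots; i++) {
                len += snprintf(buf + len, size - len, "%s%s",
                                i < nuuids ? uuids[i] : "",
                                i + 1 < nslots ? "," : "");
                if (len >= size)
                        return -1;      /* would be truncated */
        }
        return 0;
}
```

With four slots and one populated entry this yields `"<uuid>,,,"`, matching the "unused entries are empty" rule above.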
Documentation/admin-guide/thunderbolt.rst | +10 -5

···
 Security levels and how to use them
 -----------------------------------
 Starting with Intel Falcon Ridge Thunderbolt controller there are 4
-security levels available. The reason for these is the fact that the
-connected devices can be DMA masters and thus read contents of the host
-memory without CPU and OS knowing about it. There are ways to prevent
-this by setting up an IOMMU but it is not always available for various
-reasons.
+security levels available. Intel Titan Ridge added one more security level
+(usbonly). The reason for these is the fact that the connected devices can
+be DMA masters and thus read contents of the host memory without CPU and OS
+knowing about it. There are ways to prevent this by setting up an IOMMU but
+it is not always available for various reasons.
 
 The security levels are as follows:
 
···
     The firmware automatically creates tunnels for Display Port and
     USB. No PCIe tunneling is done. In BIOS settings this is
     typically called *Display Port Only*.
+
+usbonly
+    The firmware automatically creates tunnels for the USB controller and
+    Display Port in a dock. All PCIe links downstream of the dock are
+    removed.
 
 The current security level can be read from
 ``/sys/bus/thunderbolt/devices/domainX/security`` where ``domainX`` is
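A userspace tool reading the security attribute described above has to map the returned string to a level. A minimal sketch, assuming only that a sysfs read returns the level name followed by a newline (the enum and function names here are illustrative, not kernel API):

```c
#include <string.h>

/* The five level names from the documentation above, in table order. */
enum tb_sec { SEC_NONE, SEC_USER, SEC_SECURE, SEC_DPONLY, SEC_USBONLY,
              SEC_UNKNOWN };

/*
 * Sketch: map the string read from .../domainX/security to a level.
 * A sysfs read returns the name followed by '\n', so compare only up
 * to that terminator.
 */
static enum tb_sec parse_security(const char *s)
{
        static const char * const names[] = {
                "none", "user", "secure", "dponly", "usbonly",
        };
        size_t i, len = strcspn(s, "\n");

        for (i = 0; i < sizeof(names) / sizeof(names[0]); i++)
                if (strlen(names[i]) == len && !strncmp(s, names[i], len))
                        return (enum tb_sec)i;
        return SEC_UNKNOWN;
}
```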
drivers/thunderbolt/dma_port.c | +12 -14

···
 
 static int dma_find_port(struct tb_switch *sw)
 {
-    int port, ret;
-    u32 type;
+    static const int ports[] = { 3, 5, 7 };
+    int i;
 
     /*
-     * The DMA (NHI) port is either 3 or 5 depending on the
-     * controller. Try both starting from 5 which is more common.
+     * The DMA (NHI) port is either 3, 5 or 7 depending on the
+     * controller. Try all of them.
      */
-    port = 5;
-    ret = dma_port_read(sw->tb->ctl, &type, tb_route(sw), port, 2, 1,
-                        DMA_PORT_TIMEOUT);
-    if (!ret && (type & 0xffffff) == TB_TYPE_NHI)
-        return port;
+    for (i = 0; i < ARRAY_SIZE(ports); i++) {
+        u32 type;
+        int ret;
 
-    port = 3;
-    ret = dma_port_read(sw->tb->ctl, &type, tb_route(sw), port, 2, 1,
-                        DMA_PORT_TIMEOUT);
-    if (!ret && (type & 0xffffff) == TB_TYPE_NHI)
-        return port;
+        ret = dma_port_read(sw->tb->ctl, &type, tb_route(sw), ports[i],
+                            2, 1, DMA_PORT_TIMEOUT);
+        if (!ret && (type & 0xffffff) == TB_TYPE_NHI)
+            return ports[i];
+    }
 
     return -ENODEV;
 }
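The dma_port.c change above replaces two copy-pasted probes with a table-driven loop so a third candidate port (7, for Titan Ridge) can be added by extending the table. The same pattern can be exercised standalone by injecting the read function; everything here is an illustrative sketch (the `TB_TYPE_NHI` value is an assumption, and `read_type` stands in for `dma_port_read()`):

```c
#include <stddef.h>

#define TB_TYPE_NHI 0x000002    /* assumed value, for the sketch only */

/*
 * Table-driven probe: try each candidate port and return the first
 * whose config-space type matches the NHI. The read callback is
 * injected so the logic can be exercised without hardware.
 */
static int find_nhi_port(int (*read_type)(int port, unsigned int *type))
{
        static const int ports[] = { 3, 5, 7 };
        size_t i;

        for (i = 0; i < sizeof(ports) / sizeof(ports[0]); i++) {
                unsigned int type;

                if (!read_type(ports[i], &type) &&
                    (type & 0xffffff) == TB_TYPE_NHI)
                        return ports[i];
        }
        return -1;      /* the driver returns -ENODEV here */
}

/* Fake controller whose NHI lives behind port 5 */
static int fake_read(int port, unsigned int *type)
{
        *type = (port == 5) ? TB_TYPE_NHI : 0;
        return 0;
}
```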
drivers/thunderbolt/domain.c | +129 -1

···
     [TB_SECURITY_USER] = "user",
     [TB_SECURITY_SECURE] = "secure",
     [TB_SECURITY_DPONLY] = "dponly",
+    [TB_SECURITY_USBONLY] = "usbonly",
 };
+
+static ssize_t boot_acl_show(struct device *dev, struct device_attribute *attr,
+                             char *buf)
+{
+    struct tb *tb = container_of(dev, struct tb, dev);
+    uuid_t *uuids;
+    ssize_t ret;
+    int i;
+
+    uuids = kcalloc(tb->nboot_acl, sizeof(uuid_t), GFP_KERNEL);
+    if (!uuids)
+        return -ENOMEM;
+
+    if (mutex_lock_interruptible(&tb->lock)) {
+        ret = -ERESTARTSYS;
+        goto out;
+    }
+    ret = tb->cm_ops->get_boot_acl(tb, uuids, tb->nboot_acl);
+    if (ret) {
+        mutex_unlock(&tb->lock);
+        goto out;
+    }
+    mutex_unlock(&tb->lock);
+
+    for (ret = 0, i = 0; i < tb->nboot_acl; i++) {
+        if (!uuid_is_null(&uuids[i]))
+            ret += snprintf(buf + ret, PAGE_SIZE - ret, "%pUb",
+                            &uuids[i]);
+
+        ret += snprintf(buf + ret, PAGE_SIZE - ret, "%s",
+                        i < tb->nboot_acl - 1 ? "," : "\n");
+    }
+
+out:
+    kfree(uuids);
+    return ret;
+}
+
+static ssize_t boot_acl_store(struct device *dev, struct device_attribute *attr,
+                              const char *buf, size_t count)
+{
+    struct tb *tb = container_of(dev, struct tb, dev);
+    char *str, *s, *uuid_str;
+    ssize_t ret = 0;
+    uuid_t *acl;
+    int i = 0;
+
+    /*
+     * Make sure the value is not bigger than tb->nboot_acl * UUID
+     * length + commas and optional "\n". Also the smallest allowable
+     * string is tb->nboot_acl * ",".
+     */
+    if (count > (UUID_STRING_LEN + 1) * tb->nboot_acl + 1)
+        return -EINVAL;
+    if (count < tb->nboot_acl - 1)
+        return -EINVAL;
+
+    str = kstrdup(buf, GFP_KERNEL);
+    if (!str)
+        return -ENOMEM;
+
+    acl = kcalloc(tb->nboot_acl, sizeof(uuid_t), GFP_KERNEL);
+    if (!acl) {
+        ret = -ENOMEM;
+        goto err_free_str;
+    }
+
+    uuid_str = strim(str);
+    while ((s = strsep(&uuid_str, ",")) != NULL && i < tb->nboot_acl) {
+        size_t len = strlen(s);
+
+        if (len) {
+            if (len != UUID_STRING_LEN) {
+                ret = -EINVAL;
+                goto err_free_acl;
+            }
+            ret = uuid_parse(s, &acl[i]);
+            if (ret)
+                goto err_free_acl;
+        }
+
+        i++;
+    }
+
+    if (s || i < tb->nboot_acl) {
+        ret = -EINVAL;
+        goto err_free_acl;
+    }
+
+    if (mutex_lock_interruptible(&tb->lock)) {
+        ret = -ERESTARTSYS;
+        goto err_free_acl;
+    }
+    ret = tb->cm_ops->set_boot_acl(tb, acl, tb->nboot_acl);
+    mutex_unlock(&tb->lock);
+
+err_free_acl:
+    kfree(acl);
+err_free_str:
+    kfree(str);
+
+    return ret ?: count;
+}
+static DEVICE_ATTR_RW(boot_acl);
 
 static ssize_t security_show(struct device *dev, struct device_attribute *attr,
                              char *buf)
 {
     struct tb *tb = container_of(dev, struct tb, dev);
+    const char *name = "unknown";
 
-    return sprintf(buf, "%s\n", tb_security_names[tb->security_level]);
+    if (tb->security_level < ARRAY_SIZE(tb_security_names))
+        name = tb_security_names[tb->security_level];
+
+    return sprintf(buf, "%s\n", name);
 }
 static DEVICE_ATTR_RO(security);
 
 static struct attribute *domain_attrs[] = {
+    &dev_attr_boot_acl.attr,
     &dev_attr_security.attr,
     NULL,
 };
 
+static umode_t domain_attr_is_visible(struct kobject *kobj,
+                                      struct attribute *attr, int n)
+{
+    struct device *dev = container_of(kobj, struct device, kobj);
+    struct tb *tb = container_of(dev, struct tb, dev);
+
+    if (attr == &dev_attr_boot_acl.attr) {
+        if (tb->nboot_acl &&
+            tb->cm_ops->get_boot_acl &&
+            tb->cm_ops->set_boot_acl)
+            return attr->mode;
+        return 0;
+    }
+
+    return attr->mode;
+}
+
 static struct attribute_group domain_attr_group = {
+    .is_visible = domain_attr_is_visible,
     .attrs = domain_attrs,
 };
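The validation in `boot_acl_store()` is the subtle part: `strsep()` returns empty strings for empty fields, and the field count must match the slot count exactly. A userspace re-implementation of just that loop, under the assumption that empty fields mean "unused slot" (`count_acl_fields` is an illustrative name, not kernel code):

```c
#define _DEFAULT_SOURCE         /* for strsep() on glibc */
#include <stddef.h>
#include <string.h>

/*
 * Re-implementation of the validation loop in boot_acl_store(): split
 * on ',', allow empty fields, and reject input whose field count does
 * not exactly match the number of ACL slots. Modifies str in place,
 * just as strsep() does in the kernel version.
 */
static int count_acl_fields(char *str, size_t nslots, size_t *filled)
{
        size_t i = 0, used = 0;
        char *s;

        while ((s = strsep(&str, ",")) != NULL && i < nslots) {
                if (*s)
                        used++;
                i++;
        }
        /* s != NULL means extra fields; i < nslots means too few */
        if (s || i < nslots)
                return -1;
        *filled = used;
        return 0;
}
```

Note how `"u1,,u2,"` is four fields (two of them empty) and therefore valid for four slots, while `"u1,u2"` is not.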
+685 -79
drivers/thunderbolt/icm.c
··· 41 41 #define PHY_PORT_CS1_LINK_STATE_MASK GENMASK(29, 26) 42 42 #define PHY_PORT_CS1_LINK_STATE_SHIFT 26 43 43 44 - #define ICM_TIMEOUT 5000 /* ms */ 44 + #define ICM_TIMEOUT 5000 /* ms */ 45 + #define ICM_APPROVE_TIMEOUT 10000 /* ms */ 45 46 #define ICM_MAX_LINK 4 46 47 #define ICM_MAX_DEPTH 6 47 48 ··· 56 55 * @vnd_cap: Vendor defined capability where PCIe2CIO mailbox resides 57 56 * (only set when @upstream_port is not %NULL) 58 57 * @safe_mode: ICM is in safe mode 58 + * @max_boot_acl: Maximum number of preboot ACL entries (%0 if not supported) 59 59 * @is_supported: Checks if we can support ICM on this controller 60 60 * @get_mode: Read and return the ICM firmware mode (optional) 61 61 * @get_route: Find a route string for given switch 62 + * @driver_ready: Send driver ready message to ICM 62 63 * @device_connected: Handle device connected ICM message 63 64 * @device_disconnected: Handle device disconnected ICM message 64 65 * @xdomain_connected - Handle XDomain connected ICM message ··· 70 67 struct mutex request_lock; 71 68 struct delayed_work rescan_work; 72 69 struct pci_dev *upstream_port; 70 + size_t max_boot_acl; 73 71 int vnd_cap; 74 72 bool safe_mode; 75 73 bool (*is_supported)(struct tb *tb); 76 74 int (*get_mode)(struct tb *tb); 77 75 int (*get_route)(struct tb *tb, u8 link, u8 depth, u64 *route); 76 + int (*driver_ready)(struct tb *tb, 77 + enum tb_security_level *security_level, 78 + size_t *nboot_acl); 78 79 void (*device_connected)(struct tb *tb, 79 80 const struct icm_pkg_header *hdr); 80 81 void (*device_disconnected)(struct tb *tb, ··· 116 109 static inline u64 get_route(u32 route_hi, u32 route_lo) 117 110 { 118 111 return (u64)route_hi << 32 | route_lo; 112 + } 113 + 114 + static inline u64 get_parent_route(u64 route) 115 + { 116 + int depth = tb_route_length(route); 117 + return depth ? 
route & ~(0xffULL << (depth - 1) * TB_ROUTE_SHIFT) : 0; 119 118 } 120 119 121 120 static bool icm_match(const struct tb_cfg_request *req, ··· 258 245 return ret; 259 246 } 260 247 248 + static int 249 + icm_fr_driver_ready(struct tb *tb, enum tb_security_level *security_level, 250 + size_t *nboot_acl) 251 + { 252 + struct icm_fr_pkg_driver_ready_response reply; 253 + struct icm_pkg_driver_ready request = { 254 + .hdr.code = ICM_DRIVER_READY, 255 + }; 256 + int ret; 257 + 258 + memset(&reply, 0, sizeof(reply)); 259 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 260 + 1, ICM_TIMEOUT); 261 + if (ret) 262 + return ret; 263 + 264 + if (security_level) 265 + *security_level = reply.security_level & ICM_FR_SLEVEL_MASK; 266 + 267 + return 0; 268 + } 269 + 261 270 static int icm_fr_approve_switch(struct tb *tb, struct tb_switch *sw) 262 271 { 263 272 struct icm_fr_pkg_approve_device request; ··· 295 260 memset(&reply, 0, sizeof(reply)); 296 261 /* Use larger timeout as establishing tunnels can take some time */ 297 262 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 298 - 1, 10000); 263 + 1, ICM_APPROVE_TIMEOUT); 299 264 if (ret) 300 265 return ret; 301 266 ··· 409 374 return 0; 410 375 } 411 376 377 + static void add_switch(struct tb_switch *parent_sw, u64 route, 378 + const uuid_t *uuid, u8 connection_id, u8 connection_key, 379 + u8 link, u8 depth, enum tb_security_level security_level, 380 + bool authorized, bool boot) 381 + { 382 + struct tb_switch *sw; 383 + 384 + sw = tb_switch_alloc(parent_sw->tb, &parent_sw->dev, route); 385 + if (!sw) 386 + return; 387 + 388 + sw->uuid = kmemdup(uuid, sizeof(*uuid), GFP_KERNEL); 389 + sw->connection_id = connection_id; 390 + sw->connection_key = connection_key; 391 + sw->link = link; 392 + sw->depth = depth; 393 + sw->authorized = authorized; 394 + sw->security_level = security_level; 395 + sw->boot = boot; 396 + 397 + /* Link the two switches now */ 398 + tb_port_at(route, 
parent_sw)->remote = tb_upstream_port(sw); 399 + tb_upstream_port(sw)->remote = tb_port_at(route, parent_sw); 400 + 401 + if (tb_switch_add(sw)) { 402 + tb_port_at(tb_route(sw), parent_sw)->remote = NULL; 403 + tb_switch_put(sw); 404 + return; 405 + } 406 + } 407 + 408 + static void update_switch(struct tb_switch *parent_sw, struct tb_switch *sw, 409 + u64 route, u8 connection_id, u8 connection_key, 410 + u8 link, u8 depth, bool boot) 411 + { 412 + /* Disconnect from parent */ 413 + tb_port_at(tb_route(sw), parent_sw)->remote = NULL; 414 + /* Re-connect via updated port*/ 415 + tb_port_at(route, parent_sw)->remote = tb_upstream_port(sw); 416 + 417 + /* Update with the new addressing information */ 418 + sw->config.route_hi = upper_32_bits(route); 419 + sw->config.route_lo = lower_32_bits(route); 420 + sw->connection_id = connection_id; 421 + sw->connection_key = connection_key; 422 + sw->link = link; 423 + sw->depth = depth; 424 + sw->boot = boot; 425 + 426 + /* This switch still exists */ 427 + sw->is_unplugged = false; 428 + } 429 + 412 430 static void remove_switch(struct tb_switch *sw) 413 431 { 414 432 struct tb_switch *parent_sw; ··· 471 383 tb_switch_remove(sw); 472 384 } 473 385 386 + static void add_xdomain(struct tb_switch *sw, u64 route, 387 + const uuid_t *local_uuid, const uuid_t *remote_uuid, 388 + u8 link, u8 depth) 389 + { 390 + struct tb_xdomain *xd; 391 + 392 + xd = tb_xdomain_alloc(sw->tb, &sw->dev, route, local_uuid, remote_uuid); 393 + if (!xd) 394 + return; 395 + 396 + xd->link = link; 397 + xd->depth = depth; 398 + 399 + tb_port_at(route, sw)->xdomain = xd; 400 + 401 + tb_xdomain_add(xd); 402 + } 403 + 404 + static void update_xdomain(struct tb_xdomain *xd, u64 route, u8 link) 405 + { 406 + xd->link = link; 407 + xd->route = route; 408 + xd->is_unplugged = false; 409 + } 410 + 411 + static void remove_xdomain(struct tb_xdomain *xd) 412 + { 413 + struct tb_switch *sw; 414 + 415 + sw = tb_to_switch(xd->dev.parent); 416 + tb_port_at(xd->route, 
sw)->xdomain = NULL; 417 + tb_xdomain_remove(xd); 418 + } 419 + 474 420 static void 475 421 icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr) 476 422 { 477 423 const struct icm_fr_event_device_connected *pkg = 478 424 (const struct icm_fr_event_device_connected *)hdr; 425 + enum tb_security_level security_level; 479 426 struct tb_switch *sw, *parent_sw; 480 427 struct icm *icm = tb_priv(tb); 481 428 bool authorized = false; 429 + struct tb_xdomain *xd; 482 430 u8 link, depth; 431 + bool boot; 483 432 u64 route; 484 433 int ret; 485 434 ··· 524 399 depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >> 525 400 ICM_LINK_INFO_DEPTH_SHIFT; 526 401 authorized = pkg->link_info & ICM_LINK_INFO_APPROVED; 402 + security_level = (pkg->hdr.flags & ICM_FLAGS_SLEVEL_MASK) >> 403 + ICM_FLAGS_SLEVEL_SHIFT; 404 + boot = pkg->link_info & ICM_LINK_INFO_BOOT; 405 + 406 + if (pkg->link_info & ICM_LINK_INFO_REJECTED) { 407 + tb_info(tb, "switch at %u.%u was rejected by ICM firmware because topology limit exceeded\n", 408 + link, depth); 409 + return; 410 + } 527 411 528 412 ret = icm->get_route(tb, link, depth, &route); 529 413 if (ret) { ··· 559 425 */ 560 426 if (sw->depth == depth && sw_phy_port == phy_port && 561 427 !!sw->authorized == authorized) { 562 - tb_port_at(tb_route(sw), parent_sw)->remote = NULL; 563 - tb_port_at(route, parent_sw)->remote = 564 - tb_upstream_port(sw); 565 - sw->config.route_hi = upper_32_bits(route); 566 - sw->config.route_lo = lower_32_bits(route); 567 - sw->connection_id = pkg->connection_id; 568 - sw->connection_key = pkg->connection_key; 569 - sw->link = link; 570 - sw->depth = depth; 571 - sw->is_unplugged = false; 428 + update_switch(parent_sw, sw, route, pkg->connection_id, 429 + pkg->connection_key, link, depth, boot); 572 430 tb_switch_put(sw); 573 431 return; 574 432 } ··· 593 467 tb_switch_put(sw); 594 468 } 595 469 470 + /* Remove existing XDomain connection if found */ 471 + xd = tb_xdomain_find_by_link_depth(tb, link, 
depth); 472 + if (xd) { 473 + remove_xdomain(xd); 474 + tb_xdomain_put(xd); 475 + } 476 + 596 477 parent_sw = tb_switch_find_by_link_depth(tb, link, depth - 1); 597 478 if (!parent_sw) { 598 479 tb_err(tb, "failed to find parent switch for %u.%u\n", ··· 607 474 return; 608 475 } 609 476 610 - sw = tb_switch_alloc(tb, &parent_sw->dev, route); 611 - if (!sw) { 612 - tb_switch_put(parent_sw); 613 - return; 614 - } 477 + add_switch(parent_sw, route, &pkg->ep_uuid, pkg->connection_id, 478 + pkg->connection_key, link, depth, security_level, 479 + authorized, boot); 615 480 616 - sw->uuid = kmemdup(&pkg->ep_uuid, sizeof(pkg->ep_uuid), GFP_KERNEL); 617 - sw->connection_id = pkg->connection_id; 618 - sw->connection_key = pkg->connection_key; 619 - sw->link = link; 620 - sw->depth = depth; 621 - sw->authorized = authorized; 622 - sw->security_level = (pkg->hdr.flags & ICM_FLAGS_SLEVEL_MASK) >> 623 - ICM_FLAGS_SLEVEL_SHIFT; 624 - 625 - /* Link the two switches now */ 626 - tb_port_at(route, parent_sw)->remote = tb_upstream_port(sw); 627 - tb_upstream_port(sw)->remote = tb_port_at(route, parent_sw); 628 - 629 - ret = tb_switch_add(sw); 630 - if (ret) { 631 - tb_port_at(tb_route(sw), parent_sw)->remote = NULL; 632 - tb_switch_put(sw); 633 - } 634 481 tb_switch_put(parent_sw); 635 482 } 636 483 ··· 640 527 641 528 remove_switch(sw); 642 529 tb_switch_put(sw); 643 - } 644 - 645 - static void remove_xdomain(struct tb_xdomain *xd) 646 - { 647 - struct tb_switch *sw; 648 - 649 - sw = tb_to_switch(xd->dev.parent); 650 - tb_port_at(xd->route, sw)->xdomain = NULL; 651 - tb_xdomain_remove(xd); 652 530 } 653 531 654 532 static void ··· 681 577 phy_port = phy_port_from_route(route, depth); 682 578 683 579 if (xd->depth == depth && xd_phy_port == phy_port) { 684 - xd->link = link; 685 - xd->route = route; 686 - xd->is_unplugged = false; 580 + update_xdomain(xd, route, link); 687 581 tb_xdomain_put(xd); 688 582 return; 689 583 } ··· 731 629 return; 732 630 } 733 631 734 - xd = 
tb_xdomain_alloc(sw->tb, &sw->dev, route, 735 - &pkg->local_uuid, &pkg->remote_uuid); 736 - if (!xd) { 737 - tb_switch_put(sw); 738 - return; 739 - } 740 - 741 - xd->link = link; 742 - xd->depth = depth; 743 - 744 - tb_port_at(route, sw)->xdomain = xd; 745 - 746 - tb_xdomain_add(xd); 632 + add_xdomain(sw, route, &pkg->local_uuid, &pkg->remote_uuid, link, 633 + depth); 747 634 tb_switch_put(sw); 748 635 } 749 636 ··· 749 658 * cannot find it here. 750 659 */ 751 660 xd = tb_xdomain_find_by_uuid(tb, &pkg->remote_uuid); 661 + if (xd) { 662 + remove_xdomain(xd); 663 + tb_xdomain_put(xd); 664 + } 665 + } 666 + 667 + static int 668 + icm_tr_driver_ready(struct tb *tb, enum tb_security_level *security_level, 669 + size_t *nboot_acl) 670 + { 671 + struct icm_tr_pkg_driver_ready_response reply; 672 + struct icm_pkg_driver_ready request = { 673 + .hdr.code = ICM_DRIVER_READY, 674 + }; 675 + int ret; 676 + 677 + memset(&reply, 0, sizeof(reply)); 678 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 679 + 1, 20000); 680 + if (ret) 681 + return ret; 682 + 683 + if (security_level) 684 + *security_level = reply.info & ICM_TR_INFO_SLEVEL_MASK; 685 + if (nboot_acl) 686 + *nboot_acl = (reply.info & ICM_TR_INFO_BOOT_ACL_MASK) >> 687 + ICM_TR_INFO_BOOT_ACL_SHIFT; 688 + return 0; 689 + } 690 + 691 + static int icm_tr_approve_switch(struct tb *tb, struct tb_switch *sw) 692 + { 693 + struct icm_tr_pkg_approve_device request; 694 + struct icm_tr_pkg_approve_device reply; 695 + int ret; 696 + 697 + memset(&request, 0, sizeof(request)); 698 + memcpy(&request.ep_uuid, sw->uuid, sizeof(request.ep_uuid)); 699 + request.hdr.code = ICM_APPROVE_DEVICE; 700 + request.route_lo = sw->config.route_lo; 701 + request.route_hi = sw->config.route_hi; 702 + request.connection_id = sw->connection_id; 703 + 704 + memset(&reply, 0, sizeof(reply)); 705 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 706 + 1, ICM_APPROVE_TIMEOUT); 707 + if (ret) 708 + return 
ret; 709 + 710 + if (reply.hdr.flags & ICM_FLAGS_ERROR) { 711 + tb_warn(tb, "PCIe tunnel creation failed\n"); 712 + return -EIO; 713 + } 714 + 715 + return 0; 716 + } 717 + 718 + static int icm_tr_add_switch_key(struct tb *tb, struct tb_switch *sw) 719 + { 720 + struct icm_tr_pkg_add_device_key_response reply; 721 + struct icm_tr_pkg_add_device_key request; 722 + int ret; 723 + 724 + memset(&request, 0, sizeof(request)); 725 + memcpy(&request.ep_uuid, sw->uuid, sizeof(request.ep_uuid)); 726 + request.hdr.code = ICM_ADD_DEVICE_KEY; 727 + request.route_lo = sw->config.route_lo; 728 + request.route_hi = sw->config.route_hi; 729 + request.connection_id = sw->connection_id; 730 + memcpy(request.key, sw->key, TB_SWITCH_KEY_SIZE); 731 + 732 + memset(&reply, 0, sizeof(reply)); 733 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 734 + 1, ICM_TIMEOUT); 735 + if (ret) 736 + return ret; 737 + 738 + if (reply.hdr.flags & ICM_FLAGS_ERROR) { 739 + tb_warn(tb, "Adding key to switch failed\n"); 740 + return -EIO; 741 + } 742 + 743 + return 0; 744 + } 745 + 746 + static int icm_tr_challenge_switch_key(struct tb *tb, struct tb_switch *sw, 747 + const u8 *challenge, u8 *response) 748 + { 749 + struct icm_tr_pkg_challenge_device_response reply; 750 + struct icm_tr_pkg_challenge_device request; 751 + int ret; 752 + 753 + memset(&request, 0, sizeof(request)); 754 + memcpy(&request.ep_uuid, sw->uuid, sizeof(request.ep_uuid)); 755 + request.hdr.code = ICM_CHALLENGE_DEVICE; 756 + request.route_lo = sw->config.route_lo; 757 + request.route_hi = sw->config.route_hi; 758 + request.connection_id = sw->connection_id; 759 + memcpy(request.challenge, challenge, TB_SWITCH_KEY_SIZE); 760 + 761 + memset(&reply, 0, sizeof(reply)); 762 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 763 + 1, ICM_TIMEOUT); 764 + if (ret) 765 + return ret; 766 + 767 + if (reply.hdr.flags & ICM_FLAGS_ERROR) 768 + return -EKEYREJECTED; 769 + if (reply.hdr.flags & 
ICM_FLAGS_NO_KEY) 770 + return -ENOKEY; 771 + 772 + memcpy(response, reply.response, TB_SWITCH_KEY_SIZE); 773 + 774 + return 0; 775 + } 776 + 777 + static int icm_tr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) 778 + { 779 + struct icm_tr_pkg_approve_xdomain_response reply; 780 + struct icm_tr_pkg_approve_xdomain request; 781 + int ret; 782 + 783 + memset(&request, 0, sizeof(request)); 784 + request.hdr.code = ICM_APPROVE_XDOMAIN; 785 + request.route_hi = upper_32_bits(xd->route); 786 + request.route_lo = lower_32_bits(xd->route); 787 + request.transmit_path = xd->transmit_path; 788 + request.transmit_ring = xd->transmit_ring; 789 + request.receive_path = xd->receive_path; 790 + request.receive_ring = xd->receive_ring; 791 + memcpy(&request.remote_uuid, xd->remote_uuid, sizeof(*xd->remote_uuid)); 792 + 793 + memset(&reply, 0, sizeof(reply)); 794 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 795 + 1, ICM_TIMEOUT); 796 + if (ret) 797 + return ret; 798 + 799 + if (reply.hdr.flags & ICM_FLAGS_ERROR) 800 + return -EIO; 801 + 802 + return 0; 803 + } 804 + 805 + static int icm_tr_xdomain_tear_down(struct tb *tb, struct tb_xdomain *xd, 806 + int stage) 807 + { 808 + struct icm_tr_pkg_disconnect_xdomain_response reply; 809 + struct icm_tr_pkg_disconnect_xdomain request; 810 + int ret; 811 + 812 + memset(&request, 0, sizeof(request)); 813 + request.hdr.code = ICM_DISCONNECT_XDOMAIN; 814 + request.stage = stage; 815 + request.route_hi = upper_32_bits(xd->route); 816 + request.route_lo = lower_32_bits(xd->route); 817 + memcpy(&request.remote_uuid, xd->remote_uuid, sizeof(*xd->remote_uuid)); 818 + 819 + memset(&reply, 0, sizeof(reply)); 820 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 821 + 1, ICM_TIMEOUT); 822 + if (ret) 823 + return ret; 824 + 825 + if (reply.hdr.flags & ICM_FLAGS_ERROR) 826 + return -EIO; 827 + 828 + return 0; 829 + } 830 + 831 + static int icm_tr_disconnect_xdomain_paths(struct tb *tb, 
struct tb_xdomain *xd) 832 + { 833 + int ret; 834 + 835 + ret = icm_tr_xdomain_tear_down(tb, xd, 1); 836 + if (ret) 837 + return ret; 838 + 839 + usleep_range(10, 50); 840 + return icm_tr_xdomain_tear_down(tb, xd, 2); 841 + } 842 + 843 + static void 844 + icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr) 845 + { 846 + const struct icm_tr_event_device_connected *pkg = 847 + (const struct icm_tr_event_device_connected *)hdr; 848 + enum tb_security_level security_level; 849 + struct tb_switch *sw, *parent_sw; 850 + struct tb_xdomain *xd; 851 + bool authorized, boot; 852 + u64 route; 853 + 854 + /* 855 + * Currently we don't use the QoS information coming with the 856 + * device connected message so simply just ignore that extra 857 + * packet for now. 858 + */ 859 + if (pkg->hdr.packet_id) 860 + return; 861 + 862 + /* 863 + * After NVM upgrade adding root switch device fails because we 864 + * initiated reset. During that time ICM might still send device 865 + * connected message which we ignore here. 
866 + */ 867 + if (!tb->root_switch) 868 + return; 869 + 870 + route = get_route(pkg->route_hi, pkg->route_lo); 871 + authorized = pkg->link_info & ICM_LINK_INFO_APPROVED; 872 + security_level = (pkg->hdr.flags & ICM_FLAGS_SLEVEL_MASK) >> 873 + ICM_FLAGS_SLEVEL_SHIFT; 874 + boot = pkg->link_info & ICM_LINK_INFO_BOOT; 875 + 876 + if (pkg->link_info & ICM_LINK_INFO_REJECTED) { 877 + tb_info(tb, "switch at %llx was rejected by ICM firmware because topology limit exceeded\n", 878 + route); 879 + return; 880 + } 881 + 882 + sw = tb_switch_find_by_uuid(tb, &pkg->ep_uuid); 883 + if (sw) { 884 + /* Update the switch if it is still in the same place */ 885 + if (tb_route(sw) == route && !!sw->authorized == authorized) { 886 + parent_sw = tb_to_switch(sw->dev.parent); 887 + update_switch(parent_sw, sw, route, pkg->connection_id, 888 + 0, 0, 0, boot); 889 + tb_switch_put(sw); 890 + return; 891 + } 892 + 893 + remove_switch(sw); 894 + tb_switch_put(sw); 895 + } 896 + 897 + /* Another switch with the same address */ 898 + sw = tb_switch_find_by_route(tb, route); 899 + if (sw) { 900 + remove_switch(sw); 901 + tb_switch_put(sw); 902 + } 903 + 904 + /* XDomain connection with the same address */ 905 + xd = tb_xdomain_find_by_route(tb, route); 906 + if (xd) { 907 + remove_xdomain(xd); 908 + tb_xdomain_put(xd); 909 + } 910 + 911 + parent_sw = tb_switch_find_by_route(tb, get_parent_route(route)); 912 + if (!parent_sw) { 913 + tb_err(tb, "failed to find parent switch for %llx\n", route); 914 + return; 915 + } 916 + 917 + add_switch(parent_sw, route, &pkg->ep_uuid, pkg->connection_id, 918 + 0, 0, 0, security_level, authorized, boot); 919 + 920 + tb_switch_put(parent_sw); 921 + } 922 + 923 + static void 924 + icm_tr_device_disconnected(struct tb *tb, const struct icm_pkg_header *hdr) 925 + { 926 + const struct icm_tr_event_device_disconnected *pkg = 927 + (const struct icm_tr_event_device_disconnected *)hdr; 928 + struct tb_switch *sw; 929 + u64 route; 930 + 931 + route = 
get_route(pkg->route_hi, pkg->route_lo); 932 + 933 + sw = tb_switch_find_by_route(tb, route); 934 + if (!sw) { 935 + tb_warn(tb, "no switch exists at %llx, ignoring\n", route); 936 + return; 937 + } 938 + 939 + remove_switch(sw); 940 + tb_switch_put(sw); 941 + } 942 + 943 + static void 944 + icm_tr_xdomain_connected(struct tb *tb, const struct icm_pkg_header *hdr) 945 + { 946 + const struct icm_tr_event_xdomain_connected *pkg = 947 + (const struct icm_tr_event_xdomain_connected *)hdr; 948 + struct tb_xdomain *xd; 949 + struct tb_switch *sw; 950 + u64 route; 951 + 952 + if (!tb->root_switch) 953 + return; 954 + 955 + route = get_route(pkg->local_route_hi, pkg->local_route_lo); 956 + 957 + xd = tb_xdomain_find_by_uuid(tb, &pkg->remote_uuid); 958 + if (xd) { 959 + if (xd->route == route) { 960 + update_xdomain(xd, route, 0); 961 + tb_xdomain_put(xd); 962 + return; 963 + } 964 + 965 + remove_xdomain(xd); 966 + tb_xdomain_put(xd); 967 + } 968 + 969 + /* An existing xdomain with the same address */ 970 + xd = tb_xdomain_find_by_route(tb, route); 971 + if (xd) { 972 + remove_xdomain(xd); 973 + tb_xdomain_put(xd); 974 + } 975 + 976 + /* 977 + * If the user disconnected a switch during suspend and 978 + * connected another host to the same port, remove the switch 979 + * first. 
980 + */ 981 + sw = get_switch_at_route(tb->root_switch, route); 982 + if (sw) 983 + remove_switch(sw); 984 + 985 + sw = tb_switch_find_by_route(tb, get_parent_route(route)); 986 + if (!sw) { 987 + tb_warn(tb, "no switch exists at %llx, ignoring\n", route); 988 + return; 989 + } 990 + 991 + add_xdomain(sw, route, &pkg->local_uuid, &pkg->remote_uuid, 0, 0); 992 + tb_switch_put(sw); 993 + } 994 + 995 + static void 996 + icm_tr_xdomain_disconnected(struct tb *tb, const struct icm_pkg_header *hdr) 997 + { 998 + const struct icm_tr_event_xdomain_disconnected *pkg = 999 + (const struct icm_tr_event_xdomain_disconnected *)hdr; 1000 + struct tb_xdomain *xd; 1001 + u64 route; 1002 + 1003 + route = get_route(pkg->route_hi, pkg->route_lo); 1004 + 1005 + xd = tb_xdomain_find_by_route(tb, route); 752 1006 if (xd) { 753 1007 remove_xdomain(xd); 754 1008 tb_xdomain_put(xd); ··· 1164 728 static int icm_ar_get_mode(struct tb *tb) 1165 729 { 1166 730 struct tb_nhi *nhi = tb->nhi; 1167 - int retries = 5; 731 + int retries = 60; 1168 732 u32 val; 1169 733 1170 734 do { 1171 735 val = ioread32(nhi->iobase + REG_FW_STS); 1172 736 if (val & REG_FW_STS_NVM_AUTH_DONE) 1173 737 break; 1174 - msleep(30); 738 + msleep(50); 1175 739 } while (--retries); 1176 740 1177 741 if (!retries) { ··· 1180 744 } 1181 745 1182 746 return nhi_mailbox_mode(nhi); 747 + } 748 + 749 + static int 750 + icm_ar_driver_ready(struct tb *tb, enum tb_security_level *security_level, 751 + size_t *nboot_acl) 752 + { 753 + struct icm_ar_pkg_driver_ready_response reply; 754 + struct icm_pkg_driver_ready request = { 755 + .hdr.code = ICM_DRIVER_READY, 756 + }; 757 + int ret; 758 + 759 + memset(&reply, 0, sizeof(reply)); 760 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 761 + 1, ICM_TIMEOUT); 762 + if (ret) 763 + return ret; 764 + 765 + if (security_level) 766 + *security_level = reply.info & ICM_AR_INFO_SLEVEL_MASK; 767 + if (nboot_acl && (reply.info & ICM_AR_INFO_BOOT_ACL_SUPPORTED)) 768 + 
*nboot_acl = (reply.info & ICM_AR_INFO_BOOT_ACL_MASK) >> 769 + ICM_AR_INFO_BOOT_ACL_SHIFT; 770 + return 0; 1183 771 } 1184 772 1185 773 static int icm_ar_get_route(struct tb *tb, u8 link, u8 depth, u64 *route) ··· 1225 765 return -EIO; 1226 766 1227 767 *route = get_route(reply.route_hi, reply.route_lo); 768 + return 0; 769 + } 770 + 771 + static int icm_ar_get_boot_acl(struct tb *tb, uuid_t *uuids, size_t nuuids) 772 + { 773 + struct icm_ar_pkg_preboot_acl_response reply; 774 + struct icm_ar_pkg_preboot_acl request = { 775 + .hdr = { .code = ICM_PREBOOT_ACL }, 776 + }; 777 + int ret, i; 778 + 779 + memset(&reply, 0, sizeof(reply)); 780 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 781 + 1, ICM_TIMEOUT); 782 + if (ret) 783 + return ret; 784 + 785 + if (reply.hdr.flags & ICM_FLAGS_ERROR) 786 + return -EIO; 787 + 788 + for (i = 0; i < nuuids; i++) { 789 + u32 *uuid = (u32 *)&uuids[i]; 790 + 791 + uuid[0] = reply.acl[i].uuid_lo; 792 + uuid[1] = reply.acl[i].uuid_hi; 793 + 794 + if (uuid[0] == 0xffffffff && uuid[1] == 0xffffffff) { 795 + /* Map empty entries to null UUID */ 796 + uuid[0] = 0; 797 + uuid[1] = 0; 798 + } else { 799 + /* Upper two DWs are always one's */ 800 + uuid[2] = 0xffffffff; 801 + uuid[3] = 0xffffffff; 802 + } 803 + } 804 + 805 + return ret; 806 + } 807 + 808 + static int icm_ar_set_boot_acl(struct tb *tb, const uuid_t *uuids, 809 + size_t nuuids) 810 + { 811 + struct icm_ar_pkg_preboot_acl_response reply; 812 + struct icm_ar_pkg_preboot_acl request = { 813 + .hdr = { 814 + .code = ICM_PREBOOT_ACL, 815 + .flags = ICM_FLAGS_WRITE, 816 + }, 817 + }; 818 + int ret, i; 819 + 820 + for (i = 0; i < nuuids; i++) { 821 + const u32 *uuid = (const u32 *)&uuids[i]; 822 + 823 + if (uuid_is_null(&uuids[i])) { 824 + /* 825 + * Map null UUID to the empty (all one) entries 826 + * for ICM. 
827 + */ 828 + request.acl[i].uuid_lo = 0xffffffff; 829 + request.acl[i].uuid_hi = 0xffffffff; 830 + } else { 831 + /* The two high DWs need to be set to all ones */ 832 + if (uuid[2] != 0xffffffff || uuid[3] != 0xffffffff) 833 + return -EINVAL; 834 + 835 + request.acl[i].uuid_lo = uuid[0]; 836 + request.acl[i].uuid_hi = uuid[1]; 837 + } 838 + } 839 + 840 + memset(&reply, 0, sizeof(reply)); 841 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 842 + 1, ICM_TIMEOUT); 843 + if (ret) 844 + return ret; 845 + 846 + if (reply.hdr.flags & ICM_FLAGS_ERROR) 847 + return -EIO; 848 + 1228 849 return 0; 1229 850 } 1230 851 ··· 1355 814 } 1356 815 1357 816 static int 1358 - __icm_driver_ready(struct tb *tb, enum tb_security_level *security_level) 817 + __icm_driver_ready(struct tb *tb, enum tb_security_level *security_level, 818 + size_t *nboot_acl) 1359 819 { 1360 - struct icm_pkg_driver_ready_response reply; 1361 - struct icm_pkg_driver_ready request = { 1362 - .hdr.code = ICM_DRIVER_READY, 1363 - }; 1364 - unsigned int retries = 10; 820 + struct icm *icm = tb_priv(tb); 821 + unsigned int retries = 50; 1365 822 int ret; 1366 823 1367 - memset(&reply, 0, sizeof(reply)); 1368 - ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 1369 - 1, ICM_TIMEOUT); 1370 - if (ret) 824 + ret = icm->driver_ready(tb, security_level, nboot_acl); 825 + if (ret) { 826 + tb_err(tb, "failed to send driver ready to ICM\n"); 1371 827 return ret; 1372 - 1373 - if (security_level) 1374 - *security_level = reply.security_level & 0xf; 828 + } 1375 829 1376 830 /* 1377 831 * Hold on here until the switch config space is accessible so ··· 1384 848 msleep(50); 1385 849 } while (--retries); 1386 850 851 + tb_err(tb, "failed to read root switch config space, giving up\n"); 1387 852 return -ETIMEDOUT; 1388 853 } 1389 854 ··· 1451 914 { 1452 915 struct icm *icm = tb_priv(tb); 1453 916 u32 val; 917 + 918 + if (!icm->upstream_port) 919 + return -ENODEV; 1454 920 1455 921 /* 
Make ARC wait for the CIO reset event to happen */ 1456 922 val = ioread32(nhi->iobase + REG_FW_STS); ··· 1594 1054 break; 1595 1055 1596 1056 default: 1057 + if (ret < 0) 1058 + return ret; 1059 + 1597 1060 tb_err(tb, "ICM firmware is in wrong mode: %u\n", ret); 1598 1061 return -ENODEV; 1599 1062 } ··· 1632 1089 return 0; 1633 1090 } 1634 1091 1635 - return __icm_driver_ready(tb, &tb->security_level); 1092 + ret = __icm_driver_ready(tb, &tb->security_level, &tb->nboot_acl); 1093 + if (ret) 1094 + return ret; 1095 + 1096 + /* 1097 + * Make sure the number of supported preboot ACL entries matches what we 1098 + * expect or disable the whole feature. 1099 + */ 1100 + if (tb->nboot_acl > icm->max_boot_acl) 1101 + tb->nboot_acl = 0; 1102 + 1103 + return 0; 1636 1104 } 1637 1105 1638 1106 static int icm_suspend(struct tb *tb) ··· 1739 1185 * Now all existing children should be resumed, start events 1740 1186 * from ICM to get updated status. 1741 1187 */ 1742 - __icm_driver_ready(tb, NULL); 1188 + __icm_driver_ready(tb, NULL, NULL); 1743 1189 1744 1190 /* 1745 1191 * We do not get notifications of devices that have been ··· 1792 1238 return nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DISCONNECT_PCIE_PATHS, 0); 1793 1239 } 1794 1240 1795 - /* Falcon Ridge and Alpine Ridge */ 1241 + /* Falcon Ridge */ 1796 1242 static const struct tb_cm_ops icm_fr_ops = { 1797 1243 .driver_ready = icm_driver_ready, 1798 1244 .start = icm_start, ··· 1806 1252 .disconnect_pcie_paths = icm_disconnect_pcie_paths, 1807 1253 .approve_xdomain_paths = icm_fr_approve_xdomain_paths, 1808 1254 .disconnect_xdomain_paths = icm_fr_disconnect_xdomain_paths, 1255 + }; 1256 + 1257 + /* Alpine Ridge */ 1258 + static const struct tb_cm_ops icm_ar_ops = { 1259 + .driver_ready = icm_driver_ready, 1260 + .start = icm_start, 1261 + .stop = icm_stop, 1262 + .suspend = icm_suspend, 1263 + .complete = icm_complete, 1264 + .handle_event = icm_handle_event, 1265 + .get_boot_acl = icm_ar_get_boot_acl, 1266 + .set_boot_acl = 
icm_ar_set_boot_acl, 1267 + .approve_switch = icm_fr_approve_switch, 1268 + .add_switch_key = icm_fr_add_switch_key, 1269 + .challenge_switch_key = icm_fr_challenge_switch_key, 1270 + .disconnect_pcie_paths = icm_disconnect_pcie_paths, 1271 + .approve_xdomain_paths = icm_fr_approve_xdomain_paths, 1272 + .disconnect_xdomain_paths = icm_fr_disconnect_xdomain_paths, 1273 + }; 1274 + 1275 + /* Titan Ridge */ 1276 + static const struct tb_cm_ops icm_tr_ops = { 1277 + .driver_ready = icm_driver_ready, 1278 + .start = icm_start, 1279 + .stop = icm_stop, 1280 + .suspend = icm_suspend, 1281 + .complete = icm_complete, 1282 + .handle_event = icm_handle_event, 1283 + .get_boot_acl = icm_ar_get_boot_acl, 1284 + .set_boot_acl = icm_ar_set_boot_acl, 1285 + .approve_switch = icm_tr_approve_switch, 1286 + .add_switch_key = icm_tr_add_switch_key, 1287 + .challenge_switch_key = icm_tr_challenge_switch_key, 1288 + .disconnect_pcie_paths = icm_disconnect_pcie_paths, 1289 + .approve_xdomain_paths = icm_tr_approve_xdomain_paths, 1290 + .disconnect_xdomain_paths = icm_tr_disconnect_xdomain_paths, 1809 1291 }; 1810 1292 1811 1293 struct tb *icm_probe(struct tb_nhi *nhi) ··· 1862 1272 case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI: 1863 1273 icm->is_supported = icm_fr_is_supported; 1864 1274 icm->get_route = icm_fr_get_route; 1275 + icm->driver_ready = icm_fr_driver_ready; 1865 1276 icm->device_connected = icm_fr_device_connected; 1866 1277 icm->device_disconnected = icm_fr_device_disconnected; 1867 1278 icm->xdomain_connected = icm_fr_xdomain_connected; ··· 1875 1284 case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI: 1876 1285 case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI: 1877 1286 case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI: 1287 + icm->max_boot_acl = ICM_AR_PREBOOT_ACL_ENTRIES; 1878 1288 icm->is_supported = icm_ar_is_supported; 1879 1289 icm->get_mode = icm_ar_get_mode; 1880 1290 icm->get_route = icm_ar_get_route; 1291 + icm->driver_ready = icm_ar_driver_ready; 1881 1292 
icm->device_connected = icm_fr_device_connected; 1882 1293 icm->device_disconnected = icm_fr_device_disconnected; 1883 1294 icm->xdomain_connected = icm_fr_xdomain_connected; 1884 1295 icm->xdomain_disconnected = icm_fr_xdomain_disconnected; 1885 - tb->cm_ops = &icm_fr_ops; 1296 + tb->cm_ops = &icm_ar_ops; 1297 + break; 1298 + 1299 + case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_NHI: 1300 + case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI: 1301 + icm->max_boot_acl = ICM_AR_PREBOOT_ACL_ENTRIES; 1302 + icm->is_supported = icm_ar_is_supported; 1303 + icm->get_mode = icm_ar_get_mode; 1304 + icm->driver_ready = icm_tr_driver_ready; 1305 + icm->device_connected = icm_tr_device_connected; 1306 + icm->device_disconnected = icm_tr_device_disconnected; 1307 + icm->xdomain_connected = icm_tr_xdomain_connected; 1308 + icm->xdomain_disconnected = icm_tr_xdomain_disconnected; 1309 + tb->cm_ops = &icm_tr_ops; 1886 1310 break; 1887 1311 } 1888 1312
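The preboot ACL packing in icm_ar_get_boot_acl/icm_ar_set_boot_acl above is easy to get wrong: the ICM message carries only the low two DWs of each UUID, the upper two DWs are implicitly all ones, and an all-ones lo/hi pair marks an empty slot that the driver maps to the null UUID. A standalone sketch of that mapping (the `acl_uuid` type and helper names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for the kernel's 16-byte uuid_t. */
typedef struct { uint32_t dw[4]; } acl_uuid;

/*
 * Unpack one ACL entry the way icm_ar_get_boot_acl does: an all-ones
 * lo/hi pair means "empty slot" and becomes the null UUID, otherwise
 * the upper two DWs are forced to all ones.
 */
static acl_uuid acl_entry_to_uuid(uint32_t lo, uint32_t hi)
{
	acl_uuid u;

	memset(&u, 0, sizeof(u));
	if (lo == 0xffffffff && hi == 0xffffffff)
		return u;		/* empty entry -> null UUID */

	u.dw[0] = lo;
	u.dw[1] = hi;
	u.dw[2] = 0xffffffff;
	u.dw[3] = 0xffffffff;
	return u;
}

/*
 * Pack for write, mirroring icm_ar_set_boot_acl: the null UUID maps
 * back to an all-ones entry; a UUID whose upper DWs are not all ones
 * is rejected (-1 here stands in for -EINVAL).
 */
static int uuid_to_acl_entry(const acl_uuid *u, uint32_t *lo, uint32_t *hi)
{
	acl_uuid null_uuid = { { 0, 0, 0, 0 } };

	if (!memcmp(u, &null_uuid, sizeof(*u))) {
		*lo = 0xffffffff;
		*hi = 0xffffffff;
		return 0;
	}
	if (u->dw[2] != 0xffffffff || u->dw[3] != 0xffffffff)
		return -1;

	*lo = u->dw[0];
	*hi = u->dw[1];
	return 0;
}
```

Round-tripping a UUID through the two helpers is lossless; only UUIDs whose upper DWs are not all ones are rejected, matching the -EINVAL check in the real set path.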
+4 -1
drivers/thunderbolt/nhi.c
··· 1036 1036 */ 1037 1037 tb_domain_put(tb); 1038 1038 nhi_shutdown(nhi); 1039 - return -EIO; 1039 + return res; 1040 1040 } 1041 1041 pci_set_drvdata(pdev, tb); 1042 1042 ··· 1064 1064 * we just disable hotplug, the 1065 1065 * pci-tunnels stay alive. 1066 1066 */ 1067 + .thaw_noirq = nhi_resume_noirq, 1067 1068 .restore_noirq = nhi_resume_noirq, 1068 1069 .suspend = nhi_suspend, 1069 1070 .freeze = nhi_suspend, ··· 1111 1110 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI) }, 1112 1111 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI) }, 1113 1112 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_USBONLY_NHI) }, 1113 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_NHI) }, 1114 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI) }, 1114 1115 1115 1116 { 0,} 1116 1117 };
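The nhi.c probe change above stops masking the underlying error: instead of returning a hard-coded -EIO, the failing call's own return value `res` now propagates to the caller. The difference in miniature (the helper names are hypothetical):

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical failing sub-step that reports why it failed. */
static int start_domain(void)
{
	return -ENOMEM;
}

/* Before the fix: every failure surfaced as -EIO. */
static int probe_old(void)
{
	if (start_domain() < 0)
		return -EIO;
	return 0;
}

/* After the fix: the real error code reaches the caller. */
static int probe_new(void)
{
	int res = start_domain();

	if (res < 0)
		return res;	/* e.g. -ENOMEM instead of -EIO */
	return 0;
}
```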
+5
drivers/thunderbolt/nhi.h
··· 45 45 #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_USBONLY_NHI 0x15dc 46 46 #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_USBONLY_NHI 0x15dd 47 47 #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_USBONLY_NHI 0x15de 48 + #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_BRIDGE 0x15e7 49 + #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_NHI 0x15e8 50 + #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_BRIDGE 0x15ea 51 + #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI 0x15eb 52 + #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_BRIDGE 0x15ef 48 53 49 54 #endif
+60 -1
drivers/thunderbolt/switch.c
··· 716 716 if (sw->authorized) 717 717 goto unlock; 718 718 719 + /* 720 + * Make sure there is no PCIe rescan ongoing when a new PCIe 721 + * tunnel is created. Otherwise the PCIe rescan code might find 722 + * the new tunnel too early. 723 + */ 724 + pci_lock_rescan_remove(); 725 + 719 726 switch (val) { 720 727 /* Approve switch */ 721 728 case 1: ··· 741 734 default: 742 735 break; 743 736 } 737 + 738 + pci_unlock_rescan_remove(); 744 739 745 740 if (!ret) { 746 741 sw->authorized = val; ··· 774 765 return ret ? ret : count; 775 766 } 776 767 static DEVICE_ATTR_RW(authorized); 768 + 769 + static ssize_t boot_show(struct device *dev, struct device_attribute *attr, 770 + char *buf) 771 + { 772 + struct tb_switch *sw = tb_to_switch(dev); 773 + 774 + return sprintf(buf, "%u\n", sw->boot); 775 + } 776 + static DEVICE_ATTR_RO(boot); 777 777 778 778 static ssize_t device_show(struct device *dev, struct device_attribute *attr, 779 779 char *buf) ··· 960 942 961 943 static struct attribute *switch_attrs[] = { 962 944 &dev_attr_authorized.attr, 945 + &dev_attr_boot.attr, 963 946 &dev_attr_device.attr, 964 947 &dev_attr_device_name.attr, 965 948 &dev_attr_key.attr, ··· 987 968 } else if (attr == &dev_attr_nvm_authenticate.attr || 988 969 attr == &dev_attr_nvm_version.attr) { 989 970 if (sw->dma_port) 971 + return attr->mode; 972 + return 0; 973 + } else if (attr == &dev_attr_boot.attr) { 974 + if (tb_route(sw)) 990 975 return attr->mode; 991 976 return 0; 992 977 } ··· 1051 1028 case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_BRIDGE: 1052 1029 case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_BRIDGE: 1053 1030 case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_BRIDGE: 1031 + case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_BRIDGE: 1032 + case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_BRIDGE: 1033 + case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_BRIDGE: 1054 1034 return 3; 1055 1035 1056 1036 default: ··· 1496 1470 u8 link; 1497 1471 u8 depth; 1498 1472 const uuid_t *uuid; 1473 + u64 route; 1499 1474 }; 1500 1475 
1501 1476 static int tb_switch_match(struct device *dev, void *data) ··· 1511 1484 1512 1485 if (lookup->uuid) 1513 1486 return !memcmp(sw->uuid, lookup->uuid, sizeof(*lookup->uuid)); 1487 + 1488 + if (lookup->route) { 1489 + return sw->config.route_lo == lower_32_bits(lookup->route) && 1490 + sw->config.route_hi == upper_32_bits(lookup->route); 1491 + } 1514 1492 1515 1493 /* Root switch is matched only by depth */ 1516 1494 if (!lookup->depth) ··· 1551 1519 } 1552 1520 1553 1521 /** 1554 - * tb_switch_find_by_link_depth() - Find switch by UUID 1522 + * tb_switch_find_by_uuid() - Find switch by UUID 1555 1523 * @tb: Domain the switch belongs 1556 1524 * @uuid: UUID to look for 1557 1525 * ··· 1566 1534 memset(&lookup, 0, sizeof(lookup)); 1567 1535 lookup.tb = tb; 1568 1536 lookup.uuid = uuid; 1537 + 1538 + dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match); 1539 + if (dev) 1540 + return tb_to_switch(dev); 1541 + 1542 + return NULL; 1543 + } 1544 + 1545 + /** 1546 + * tb_switch_find_by_route() - Find switch by route string 1547 + * @tb: Domain the switch belongs 1548 + * @route: Route string to look for 1549 + * 1550 + * Returned switch has reference count increased so the caller needs to 1551 + * call tb_switch_put() when done with the switch. 1552 + */ 1553 + struct tb_switch *tb_switch_find_by_route(struct tb *tb, u64 route) 1554 + { 1555 + struct tb_sw_lookup lookup; 1556 + struct device *dev; 1557 + 1558 + if (!route) 1559 + return tb_switch_get(tb->root_switch); 1560 + 1561 + memset(&lookup, 0, sizeof(lookup)); 1562 + lookup.tb = tb; 1563 + lookup.route = route; 1569 1564 1570 1565 dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match); 1571 1566 if (dev)
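The new route lookup in tb_switch_match() compares a 64-bit route string against the two 32-bit halves stored in the switch config space. A minimal sketch of the split and recombine steps (lower_32_bits/upper_32_bits and get_route are the kernel's helpers, reimplemented here for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-ins for the kernel's lower_32_bits()/upper_32_bits(). */
static uint32_t lower_32_bits(uint64_t v)
{
	return (uint32_t)v;
}

static uint32_t upper_32_bits(uint64_t v)
{
	return (uint32_t)(v >> 32);
}

/*
 * Mirrors the route comparison in tb_switch_match(): a switch matches
 * when both stored halves equal the 64-bit lookup key.
 */
static int route_matches(uint32_t route_hi, uint32_t route_lo, uint64_t route)
{
	return route_lo == lower_32_bits(route) &&
	       route_hi == upper_32_bits(route);
}

/* get_route() equivalent: recombine the halves from an ICM event. */
static uint64_t get_route(uint32_t route_hi, uint32_t route_lo)
{
	return ((uint64_t)route_hi << 32) | route_lo;
}
```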
+14
drivers/thunderbolt/tb.h
··· 66 66 * @nvm: Pointer to the NVM if the switch has one (%NULL otherwise) 67 67 * @no_nvm_upgrade: Prevent NVM upgrade of this switch 68 68 * @safe_mode: The switch is in safe-mode 69 + * @boot: Whether the switch was already authorized on boot or not 69 70 * @authorized: Whether the switch is authorized by user or policy 70 71 * @work: Work used to automatically authorize a switch 71 72 * @security_level: Switch supported security level ··· 100 99 struct tb_switch_nvm *nvm; 101 100 bool no_nvm_upgrade; 102 101 bool safe_mode; 102 + bool boot; 103 103 unsigned int authorized; 104 104 struct work_struct work; 105 105 enum tb_security_level security_level; ··· 200 198 * @suspend: Connection manager specific suspend 201 199 * @complete: Connection manager specific complete 202 200 * @handle_event: Handle thunderbolt event 201 + * @get_boot_acl: Get boot ACL list 202 + * @set_boot_acl: Set boot ACL list 203 203 * @approve_switch: Approve switch 204 204 * @add_switch_key: Add key to switch 205 205 * @challenge_switch_key: Challenge switch using key ··· 219 215 void (*complete)(struct tb *tb); 220 216 void (*handle_event)(struct tb *tb, enum tb_cfg_pkg_type, 221 217 const void *buf, size_t size); 218 + int (*get_boot_acl)(struct tb *tb, uuid_t *uuids, size_t nuuids); 219 + int (*set_boot_acl)(struct tb *tb, const uuid_t *uuids, size_t nuuids); 222 220 int (*approve_switch)(struct tb *tb, struct tb_switch *sw); 223 221 int (*add_switch_key)(struct tb *tb, struct tb_switch *sw); 224 222 int (*challenge_switch_key)(struct tb *tb, struct tb_switch *sw, ··· 392 386 struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link, 393 387 u8 depth); 394 388 struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_t *uuid); 389 + struct tb_switch *tb_switch_find_by_route(struct tb *tb, u64 route); 390 + 391 + static inline struct tb_switch *tb_switch_get(struct tb_switch *sw) 392 + { 393 + if (sw) 394 + get_device(&sw->dev); 395 + return sw; 396 + } 395 397 
396 398 static inline void tb_switch_put(struct tb_switch *sw) 397 399 {
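tb_switch_get() above is the NULL-tolerant take-a-reference counterpart to tb_switch_put(), which is what lets tb_switch_find_by_route() return `tb_switch_get(tb->root_switch)` without a NULL check. The same pattern reduced to a plain counter (illustrative; the driver really pins the switch's embedded struct device via get_device/put_device):

```c
#include <assert.h>
#include <stddef.h>

struct obj { int refcount; };

/*
 * Like tb_switch_get(): take a reference only when the pointer is
 * valid and hand the same pointer back, so lookups can simply write
 * "return obj_get(o);".
 */
static struct obj *obj_get(struct obj *o)
{
	if (o)
		o->refcount++;
	return o;
}

/*
 * Counterpart to tb_switch_put(); with real device refcounting the
 * final put would release the object.
 */
static void obj_put(struct obj *o)
{
	if (o)
		o->refcount--;
}
```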
+179 -1
drivers/thunderbolt/tb_msgs.h
··· 102 102 ICM_ADD_DEVICE_KEY = 0x6, 103 103 ICM_GET_ROUTE = 0xa, 104 104 ICM_APPROVE_XDOMAIN = 0x10, 105 + ICM_DISCONNECT_XDOMAIN = 0x11, 106 + ICM_PREBOOT_ACL = 0x18, 105 107 }; 106 108 107 109 enum icm_event_code { ··· 124 122 #define ICM_FLAGS_NO_KEY BIT(1) 125 123 #define ICM_FLAGS_SLEVEL_SHIFT 3 126 124 #define ICM_FLAGS_SLEVEL_MASK GENMASK(4, 3) 125 + #define ICM_FLAGS_WRITE BIT(7) 127 126 128 127 struct icm_pkg_driver_ready { 129 128 struct icm_pkg_header hdr; 130 129 }; 131 130 132 - struct icm_pkg_driver_ready_response { 131 + /* Falcon Ridge only messages */ 132 + 133 + struct icm_fr_pkg_driver_ready_response { 133 134 struct icm_pkg_header hdr; 134 135 u8 romver; 135 136 u8 ramver; 136 137 u16 security_level; 137 138 }; 139 + 140 + #define ICM_FR_SLEVEL_MASK 0xf 138 141 139 142 /* Falcon Ridge & Alpine Ridge common messages */ 140 143 ··· 183 176 #define ICM_LINK_INFO_DEPTH_SHIFT 4 184 177 #define ICM_LINK_INFO_DEPTH_MASK GENMASK(7, 4) 185 178 #define ICM_LINK_INFO_APPROVED BIT(8) 179 + #define ICM_LINK_INFO_REJECTED BIT(9) 180 + #define ICM_LINK_INFO_BOOT BIT(10) 186 181 187 182 struct icm_fr_pkg_approve_device { 188 183 struct icm_pkg_header hdr; ··· 279 270 280 271 /* Alpine Ridge only messages */ 281 272 273 + struct icm_ar_pkg_driver_ready_response { 274 + struct icm_pkg_header hdr; 275 + u8 romver; 276 + u8 ramver; 277 + u16 info; 278 + }; 279 + 280 + #define ICM_AR_INFO_SLEVEL_MASK GENMASK(3, 0) 281 + #define ICM_AR_INFO_BOOT_ACL_SHIFT 7 282 + #define ICM_AR_INFO_BOOT_ACL_MASK GENMASK(11, 7) 283 + #define ICM_AR_INFO_BOOT_ACL_SUPPORTED BIT(13) 284 + 282 285 struct icm_ar_pkg_get_route { 283 286 struct icm_pkg_header hdr; 284 287 u16 reserved; ··· 303 282 u16 link_info; 304 283 u32 route_hi; 305 284 u32 route_lo; 285 + }; 286 + 287 + struct icm_ar_boot_acl_entry { 288 + u32 uuid_lo; 289 + u32 uuid_hi; 290 + }; 291 + 292 + #define ICM_AR_PREBOOT_ACL_ENTRIES 16 293 + 294 + struct icm_ar_pkg_preboot_acl { 295 + struct icm_pkg_header hdr; 296 + 
struct icm_ar_boot_acl_entry acl[ICM_AR_PREBOOT_ACL_ENTRIES]; 297 + }; 298 + 299 + struct icm_ar_pkg_preboot_acl_response { 300 + struct icm_pkg_header hdr; 301 + struct icm_ar_boot_acl_entry acl[ICM_AR_PREBOOT_ACL_ENTRIES]; 302 + }; 303 + 304 + /* Titan Ridge messages */ 305 + 306 + struct icm_tr_pkg_driver_ready_response { 307 + struct icm_pkg_header hdr; 308 + u16 reserved1; 309 + u16 info; 310 + u32 nvm_version; 311 + u16 device_id; 312 + u16 reserved2; 313 + }; 314 + 315 + #define ICM_TR_INFO_SLEVEL_MASK GENMASK(2, 0) 316 + #define ICM_TR_INFO_BOOT_ACL_SHIFT 7 317 + #define ICM_TR_INFO_BOOT_ACL_MASK GENMASK(12, 7) 318 + 319 + struct icm_tr_event_device_connected { 320 + struct icm_pkg_header hdr; 321 + uuid_t ep_uuid; 322 + u32 route_hi; 323 + u32 route_lo; 324 + u8 connection_id; 325 + u8 reserved; 326 + u16 link_info; 327 + u32 ep_name[55]; 328 + }; 329 + 330 + struct icm_tr_event_device_disconnected { 331 + struct icm_pkg_header hdr; 332 + u32 route_hi; 333 + u32 route_lo; 334 + }; 335 + 336 + struct icm_tr_event_xdomain_connected { 337 + struct icm_pkg_header hdr; 338 + u16 reserved; 339 + u16 link_info; 340 + uuid_t remote_uuid; 341 + uuid_t local_uuid; 342 + u32 local_route_hi; 343 + u32 local_route_lo; 344 + u32 remote_route_hi; 345 + u32 remote_route_lo; 346 + }; 347 + 348 + struct icm_tr_event_xdomain_disconnected { 349 + struct icm_pkg_header hdr; 350 + u32 route_hi; 351 + u32 route_lo; 352 + uuid_t remote_uuid; 353 + }; 354 + 355 + struct icm_tr_pkg_approve_device { 356 + struct icm_pkg_header hdr; 357 + uuid_t ep_uuid; 358 + u32 route_hi; 359 + u32 route_lo; 360 + u8 connection_id; 361 + u8 reserved1[3]; 362 + }; 363 + 364 + struct icm_tr_pkg_add_device_key { 365 + struct icm_pkg_header hdr; 366 + uuid_t ep_uuid; 367 + u32 route_hi; 368 + u32 route_lo; 369 + u8 connection_id; 370 + u8 reserved[3]; 371 + u32 key[8]; 372 + }; 373 + 374 + struct icm_tr_pkg_challenge_device { 375 + struct icm_pkg_header hdr; 376 + uuid_t ep_uuid; 377 + u32 route_hi; 
378 + u32 route_lo; 379 + u8 connection_id; 380 + u8 reserved[3]; 381 + u32 challenge[8]; 382 + }; 383 + 384 + struct icm_tr_pkg_approve_xdomain { 385 + struct icm_pkg_header hdr; 386 + u32 route_hi; 387 + u32 route_lo; 388 + uuid_t remote_uuid; 389 + u16 transmit_path; 390 + u16 transmit_ring; 391 + u16 receive_path; 392 + u16 receive_ring; 393 + }; 394 + 395 + struct icm_tr_pkg_disconnect_xdomain { 396 + struct icm_pkg_header hdr; 397 + u8 stage; 398 + u8 reserved[3]; 399 + u32 route_hi; 400 + u32 route_lo; 401 + uuid_t remote_uuid; 402 + }; 403 + 404 + struct icm_tr_pkg_challenge_device_response { 405 + struct icm_pkg_header hdr; 406 + uuid_t ep_uuid; 407 + u32 route_hi; 408 + u32 route_lo; 409 + u8 connection_id; 410 + u8 reserved[3]; 411 + u32 challenge[8]; 412 + u32 response[8]; 413 + }; 414 + 415 + struct icm_tr_pkg_add_device_key_response { 416 + struct icm_pkg_header hdr; 417 + uuid_t ep_uuid; 418 + u32 route_hi; 419 + u32 route_lo; 420 + u8 connection_id; 421 + u8 reserved[3]; 422 + }; 423 + 424 + struct icm_tr_pkg_approve_xdomain_response { 425 + struct icm_pkg_header hdr; 426 + u32 route_hi; 427 + u32 route_lo; 428 + uuid_t remote_uuid; 429 + u16 transmit_path; 430 + u16 transmit_ring; 431 + u16 receive_path; 432 + u16 receive_ring; 433 + }; 434 + 435 + struct icm_tr_pkg_disconnect_xdomain_response { 436 + struct icm_pkg_header hdr; 437 + u8 stage; 438 + u8 reserved[3]; 439 + u32 route_hi; 440 + u32 route_lo; 441 + uuid_t remote_uuid; 306 442 }; 307 443 308 444 /* XDomain messages */
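The AR driver-ready response packs both the security level and the boot ACL size into the single `info` word, and icm_ar_driver_ready (in icm.c above) pulls them apart with the masks defined here. Extraction under those bit definitions can be sketched like this (GENMASK is reimplemented locally for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Local 32-bit equivalent of the kernel's GENMASK(). */
#define GENMASK32(h, l) ((0xffffffffu >> (31 - (h))) & (0xffffffffu << (l)))

#define ICM_AR_INFO_SLEVEL_MASK		GENMASK32(3, 0)
#define ICM_AR_INFO_BOOT_ACL_SHIFT	7
#define ICM_AR_INFO_BOOT_ACL_MASK	GENMASK32(11, 7)
#define ICM_AR_INFO_BOOT_ACL_SUPPORTED	(1u << 13)

/* Security level lives in the low nibble of info. */
static unsigned int ar_info_slevel(uint32_t info)
{
	return info & ICM_AR_INFO_SLEVEL_MASK;
}

/*
 * Boot ACL size lives in bits 11:7, but only counts when the
 * "supported" bit is set; otherwise report 0 entries.
 */
static unsigned int ar_info_nboot_acl(uint32_t info)
{
	if (!(info & ICM_AR_INFO_BOOT_ACL_SUPPORTED))
		return 0;
	return (info & ICM_AR_INFO_BOOT_ACL_MASK) >> ICM_AR_INFO_BOOT_ACL_SHIFT;
}
```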
+36 -13
drivers/thunderbolt/xdomain.c
··· 1255 1255 const uuid_t *uuid; 1256 1256 u8 link; 1257 1257 u8 depth; 1258 + u64 route; 1258 1259 }; 1259 1260 1260 1261 static struct tb_xdomain *switch_find_xdomain(struct tb_switch *sw, ··· 1276 1275 if (lookup->uuid) { 1277 1276 if (uuid_equal(xd->remote_uuid, lookup->uuid)) 1278 1277 return xd; 1279 - } else if (lookup->link == xd->link && 1278 + } else if (lookup->link && 1279 + lookup->link == xd->link && 1280 1280 lookup->depth == xd->depth) { 1281 + return xd; 1282 + } else if (lookup->route && 1283 + lookup->route == xd->route) { 1281 1284 return xd; 1282 1285 } 1283 1286 } else if (port->remote) { ··· 1318 1313 lookup.uuid = uuid; 1319 1314 1320 1315 xd = switch_find_xdomain(tb->root_switch, &lookup); 1321 - if (xd) { 1322 - get_device(&xd->dev); 1323 - return xd; 1324 - } 1325 - 1326 - return NULL; 1316 + return tb_xdomain_get(xd); 1327 1317 } 1328 1318 EXPORT_SYMBOL_GPL(tb_xdomain_find_by_uuid); 1329 1319 ··· 1349 1349 lookup.depth = depth; 1350 1350 1351 1351 xd = switch_find_xdomain(tb->root_switch, &lookup); 1352 - if (xd) { 1353 - get_device(&xd->dev); 1354 - return xd; 1355 - } 1356 - 1357 - return NULL; 1352 + return tb_xdomain_get(xd); 1358 1353 } 1354 + 1355 + /** 1356 + * tb_xdomain_find_by_route() - Find an XDomain by route string 1357 + * @tb: Domain where the XDomain belongs to 1358 + * @route: XDomain route string 1359 + * 1360 + * Finds XDomain by walking through the Thunderbolt topology below @tb. 1361 + * The returned XDomain will have its reference count increased so the 1362 + * caller needs to call tb_xdomain_put() when it is done with the 1363 + * object. 1364 + * 1365 + * This will find all XDomains including the ones that are not yet added 1366 + * to the bus (handshake is still in progress). 1367 + * 1368 + * The caller needs to hold @tb->lock. 
1369 + */ 1370 + struct tb_xdomain *tb_xdomain_find_by_route(struct tb *tb, u64 route) 1371 + { 1372 + struct tb_xdomain_lookup lookup; 1373 + struct tb_xdomain *xd; 1374 + 1375 + memset(&lookup, 0, sizeof(lookup)); 1376 + lookup.route = route; 1377 + 1378 + xd = switch_find_xdomain(tb->root_switch, &lookup); 1379 + return tb_xdomain_get(xd); 1380 + } 1381 + EXPORT_SYMBOL_GPL(tb_xdomain_find_by_route); 1359 1382 1360 1383 bool tb_xdomain_handle_request(struct tb *tb, enum tb_cfg_pkg_type type, 1361 1384 const void *buf, size_t size)
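switch_find_xdomain() now treats the lookup struct as a tagged query: UUID first, then a non-zero link/depth pair, then a non-zero route. The added `lookup->link` guard matters because a zero-initialized route lookup would otherwise accidentally match an XDomain whose link and depth happen to be zero. A reduced model of that dispatch (struct layout and names are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct xd {
	char uuid[16];
	uint8_t link, depth;
	uint64_t route;
};

struct xd_lookup {
	const char *uuid;	/* NULL unless looking up by UUID */
	uint8_t link, depth;	/* link == 0 means "not a link query" */
	uint64_t route;		/* route == 0 means "not a route query" */
};

/*
 * Mirrors the match order in switch_find_xdomain(): UUID wins if set,
 * then link/depth (only when link is non-zero), then a non-zero route.
 */
static int xd_matches(const struct xd *xd, const struct xd_lookup *l)
{
	if (l->uuid)
		return !memcmp(xd->uuid, l->uuid, sizeof(xd->uuid));
	if (l->link && l->link == xd->link && l->depth == xd->depth)
		return 1;
	if (l->route && l->route == xd->route)
		return 1;
	return 0;
}
```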
+19
include/linux/thunderbolt.h
··· 45 45 * @TB_SECURITY_USER: User approval required at minimum 46 46 * @TB_SECURITY_SECURE: One time saved key required at minimum 47 47 * @TB_SECURITY_DPONLY: Only tunnel Display port (and USB) 48 + * @TB_SECURITY_USBONLY: Only tunnel USB controller of the connected 49 + * Thunderbolt dock (and Display Port). All PCIe 50 + * links downstream of the dock are removed. 48 51 */ 49 52 enum tb_security_level { 50 53 TB_SECURITY_NONE, 51 54 TB_SECURITY_USER, 52 55 TB_SECURITY_SECURE, 53 56 TB_SECURITY_DPONLY, 57 + TB_SECURITY_USBONLY, 54 58 }; 55 59 56 60 /** ··· 69 65 * @cm_ops: Connection manager specific operations vector 70 66 * @index: Linux assigned domain number 71 67 * @security_level: Current security level 68 + * @nboot_acl: Number of boot ACLs the domain supports 72 69 * @privdata: Private connection manager specific data 73 70 */ 74 71 struct tb { ··· 82 77 const struct tb_cm_ops *cm_ops; 83 78 int index; 84 79 enum tb_security_level security_level; 80 + size_t nboot_acl; 85 81 unsigned long privdata[0]; 86 82 }; 87 83 ··· 243 237 u16 receive_ring); 244 238 int tb_xdomain_disable_paths(struct tb_xdomain *xd); 245 239 struct tb_xdomain *tb_xdomain_find_by_uuid(struct tb *tb, const uuid_t *uuid); 240 + struct tb_xdomain *tb_xdomain_find_by_route(struct tb *tb, u64 route); 246 241 247 242 static inline struct tb_xdomain * 248 243 tb_xdomain_find_by_uuid_locked(struct tb *tb, const uuid_t *uuid) ··· 252 245 253 246 mutex_lock(&tb->lock); 254 247 xd = tb_xdomain_find_by_uuid(tb, uuid); 248 + mutex_unlock(&tb->lock); 249 + 250 + return xd; 251 + } 252 + 253 + static inline struct tb_xdomain * 254 + tb_xdomain_find_by_route_locked(struct tb *tb, u64 route) 255 + { 256 + struct tb_xdomain *xd; 257 + 258 + mutex_lock(&tb->lock); 259 + xd = tb_xdomain_find_by_route(tb, route); 255 260 mutex_unlock(&tb->lock); 256 261 257 262 return xd;
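tb_xdomain_find_by_route_locked() above follows the usual locked-wrapper shape: take @tb->lock, run the unlocked lookup, drop the lock. The pattern in isolation, with a simple flag standing in for the domain lock (all names in this sketch are illustrative, not the driver's):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for tb->lock; 1 while held. */
static int domain_lock;

static void domain_lock_acquire(void)
{
	assert(!domain_lock);	/* no recursive locking in this model */
	domain_lock = 1;
}

static void domain_lock_release(void)
{
	assert(domain_lock);
	domain_lock = 0;
}

static int registry[4] = { 10, 20, 30, 40 };

/*
 * Unlocked lookup: the caller must hold the domain lock, just as
 * tb_xdomain_find_by_route() requires @tb->lock.
 */
static int *find_by_value(int v)
{
	size_t i;

	assert(domain_lock);	/* poor man's lockdep_assert_held() */
	for (i = 0; i < 4; i++)
		if (registry[i] == v)
			return &registry[i];
	return NULL;
}

/* Locked convenience wrapper, same shape as the *_locked() helper. */
static int *find_by_value_locked(int v)
{
	int *p;

	domain_lock_acquire();
	p = find_by_value(v);
	domain_lock_release();
	return p;
}
```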