
thunderbolt: Add support for host and device NVM firmware upgrade

Starting from Intel Falcon Ridge the NVM firmware can be upgraded by
using DMA configuration based mailbox commands. If we detect that the
host or device (device support starts from Intel Alpine Ridge) has the
DMA configuration based mailbox, we expose the NVM information to
userspace as two separate Linux NVMem devices: nvm_active and
nvm_non_active. The former is the read-only portion of the active NVM,
which firmware upgrade tools can use to find a suitable NVM image if
the device identification strings are not enough.

The latter is the write-only portion where the new NVM image is to be
written by userspace. It is up to userspace to find the right NVM
image (the kernel does only very minimal validation). The ICM firmware
itself authenticates the new NVM firmware and fails the operation if it
is not what is expected.
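The "minimal validation" mentioned above corresponds to the checks done in
nvm_validate_and_write() in this patch: image size bounds, a FARB pointer
(low 24 bits of the first dword) locating a 4k-aligned digital section inside
the image, a digital-section size that fits, and a matching device ID. A
simplified standalone sketch of those checks (the helper name and return
convention here are illustrative, not kernel code, and little-endian layout
is assumed):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define NVM_MIN_SIZE	(32 * 1024)
#define NVM_MAX_SIZE	(512 * 1024)
#define NVM_DEVID	0x05

/* Return 0 when the image passes the same basic sanity checks the
 * kernel applies before flashing, -1 otherwise. */
static int nvm_image_check(const uint8_t *buf, size_t image_size,
			   uint16_t expected_device_id)
{
	uint32_t hdr_size;
	uint16_t ds_size, device_id;

	if (image_size < NVM_MIN_SIZE || image_size > NVM_MAX_SIZE)
		return -1;

	/* FARB pointer: low 24 bits of the first dword point to the
	 * digital section, which must lie inside the image. */
	memcpy(&hdr_size, buf, sizeof(hdr_size));
	hdr_size &= 0xffffff;
	if (hdr_size + NVM_DEVID + 2 >= image_size)
		return -1;

	/* Digital section start must be aligned to a 4k page */
	if (hdr_size & (4096 - 1))
		return -1;

	/* Digital section size must also fit inside the image */
	memcpy(&ds_size, buf + hdr_size, sizeof(ds_size));
	if (ds_size >= image_size)
		return -1;

	/* Device ID in the image must match the switch */
	memcpy(&device_id, buf + hdr_size + NVM_DEVID, sizeof(device_id));
	if (device_id != expected_device_id)
		return -1;

	return 0;
}
```

The real kernel code additionally writes the CSS headers separately for
generation < 3 hardware and skips the headers before flashing; the ICM
firmware then does the actual cryptographic authentication.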

We also expose two new sysfs files for each switch: nvm_version and
nvm_authenticate, which can be used to read the active NVM version and
to start the upgrade process.
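The expected userspace flow is: stream the new image into the non-active
NVMem device, then write 1 to nvm_authenticate. A minimal sketch of that
flow, with the paths passed in by the caller (the concrete sysfs paths vary
per switch and kernel, so the ones in the comment are only examples):

```c
#include <stdio.h>

/* Stream a new NVM image into the non-active NVMem device, then kick
 * off authentication by writing "1" to the nvm_authenticate sysfs file.
 * Example paths (illustrative only):
 *   /sys/bus/thunderbolt/devices/0-0/.../nvm_non_active0/nvmem
 *   /sys/bus/thunderbolt/devices/0-0/nvm_authenticate
 * Returns 0 on success, -1 on I/O failure. */
static int tb_flash_nvm(const char *nvmem_path, const char *auth_path,
			const void *image, size_t size)
{
	FILE *f;

	f = fopen(nvmem_path, "wb");
	if (!f)
		return -1;
	if (fwrite(image, 1, size, f) != size) {
		fclose(f);
		return -1;
	}
	if (fclose(f))
		return -1;

	/* Writing 1 starts authentication; on success the device is
	 * restarted with the new firmware. */
	f = fopen(auth_path, "w");
	if (!f)
		return -1;
	if (fputs("1\n", f) == EOF) {
		fclose(f);
		return -1;
	}
	return fclose(f) ? -1 : 0;
}
```

If authentication fails, the ICM-reported status code can be read back from
nvm_authenticate, and writing 0 there clears it.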

We also introduce safe mode, which is the mode a switch enters when it
does not have properly authenticated firmware. In this mode the switch
only accepts a couple of commands, including flashing a new NVM
firmware image and triggering a power cycle.

This code is based on the work done by Amir Levy and Michael Jamet.

Signed-off-by: Michael Jamet <michael.jamet@intel.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Reviewed-by: Yehezkel Bernat <yehezkel.bernat@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Andreas Noever <andreas.noever@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Authored by Mika Westerberg, committed by Greg Kroah-Hartman
commit e6b245cc, parent f67cf491

+707 -24
Documentation/ABI/testing/sysfs-bus-thunderbolt (+26)
···
		This is either read from hardware registers (UUID on
		newer hardware) or based on UID from the device DROM.
		Can be used to uniquely identify particular device.
+
+What:		/sys/bus/thunderbolt/devices/.../nvm_version
+Date:		Sep 2017
+KernelVersion:	4.13
+Contact:	thunderbolt-software@lists.01.org
+Description:	If the device has upgradeable firmware the version
+		number is available here. Format: %x.%x, major.minor.
+		If the device is in safe mode reading the file returns
+		-ENODATA instead as the NVM version is not available.
+
+What:		/sys/bus/thunderbolt/devices/.../nvm_authenticate
+Date:		Sep 2017
+KernelVersion:	4.13
+Contact:	thunderbolt-software@lists.01.org
+Description:	When new NVM image is written to the non-active NVM
+		area (through non_activeX NVMem device), the
+		authentication procedure is started by writing 1 to
+		this file. If everything goes well, the device is
+		restarted with the new NVM firmware. If the image
+		verification fails an error code is returned instead.
+
+		When read holds status of the last authentication
+		operation if an error occurred during the process. This
+		is directly the status value from the DMA configuration
+		based mailbox before the device is power cycled. Writing
+		0 here clears the status.
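Note that nvm_version uses hexadecimal major.minor ("%x.%x"), which can trip
up tools that assume decimal. A minimal parser for it (the helper name is
illustrative):

```c
#include <stdio.h>

/* Parse the "%x.%x" major.minor string exposed in nvm_version.
 * Returns 0 on success, -1 if the string does not match the format. */
static int parse_nvm_version(const char *s, unsigned *major, unsigned *minor)
{
	return sscanf(s, "%x.%x", major, minor) == 2 ? 0 : -1;
}
```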
drivers/thunderbolt/Kconfig (+1)
···
	select CRC32
	select CRYPTO
	select CRYPTO_HASH
+	select NVMEM
	help
	  Thunderbolt Controller driver. This driver is required if you
	  want to hotplug Thunderbolt devices on Apple hardware or on PCs
drivers/thunderbolt/domain.c (+18)
···
	return ret;
}

+/**
+ * tb_domain_disconnect_pcie_paths() - Disconnect all PCIe paths
+ * @tb: Domain whose PCIe paths to disconnect
+ *
+ * This needs to be called in preparation for NVM upgrade of the host
+ * controller. Makes sure all PCIe paths are disconnected.
+ *
+ * Return %0 on success and negative errno in case of error.
+ */
+int tb_domain_disconnect_pcie_paths(struct tb *tb)
+{
+	if (!tb->cm_ops->disconnect_pcie_paths)
+		return -EPERM;
+
+	return tb->cm_ops->disconnect_pcie_paths(tb);
+}
+
 int tb_domain_init(void)
 {
 	return bus_register(&tb_bus_type);
···
 {
 	bus_unregister(&tb_bus_type);
 	ida_destroy(&tb_domain_ida);
+	tb_switch_exit();
 }
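tb_domain_disconnect_pcie_paths() above follows the driver's usual optional
connection-manager callback pattern: a missing implementation maps to -EPERM
instead of being dereferenced. A standalone model of that dispatch (struct
and function names here are simplified stand-ins, not the kernel types):

```c
#include <errno.h>
#include <stddef.h>

/* Simplified model of the tb_cm_ops dispatch: the callback is
 * optional, and a missing implementation maps to -EPERM. */
struct cm_ops {
	int (*disconnect_pcie_paths)(void *tb);
};

struct domain {
	const struct cm_ops *cm_ops;
};

static int domain_disconnect_pcie_paths(struct domain *tb)
{
	if (!tb->cm_ops->disconnect_pcie_paths)
		return -EPERM;

	return tb->cm_ops->disconnect_pcie_paths(tb);
}

/* Demo callback standing in for icm_disconnect_pcie_paths() */
static int demo_disconnect(void *tb)
{
	(void)tb;
	return 0;
}
```

This is why the native (non-ICM) connection manager simply never sets the
callback: host NVM upgrade is then reported as not permitted.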
drivers/thunderbolt/icm.c (+32 -1)
···
 *		  where ICM needs to be started manually
 * @vnd_cap: Vendor defined capability where PCIe2CIO mailbox resides
 *	     (only set when @upstream_port is not %NULL)
+ * @safe_mode: ICM is in safe mode
 * @is_supported: Checks if we can support ICM on this controller
 * @get_mode: Read and return the ICM firmware mode (optional)
 * @get_route: Find a route string for given switch
···
	struct delayed_work rescan_work;
	struct pci_dev *upstream_port;
	int vnd_cap;
+	bool safe_mode;
	bool (*is_supported)(struct tb *tb);
	int (*get_mode)(struct tb *tb);
	int (*get_route)(struct tb *tb, u8 link, u8 depth, u64 *route);
···
		ret = icm->get_mode(tb);

	switch (ret) {
+	case NHI_FW_SAFE_MODE:
+		icm->safe_mode = true;
+		break;
+
	case NHI_FW_CM_MODE:
		/* Ask ICM to accept all Thunderbolt devices */
		nhi_mailbox_cmd(nhi, NHI_MAILBOX_ALLOW_ALL_DEVS, 0);
···

static int icm_driver_ready(struct tb *tb)
{
+	struct icm *icm = tb_priv(tb);
	int ret;

	ret = icm_firmware_init(tb);
	if (ret)
		return ret;
+
+	if (icm->safe_mode) {
+		tb_info(tb, "Thunderbolt host controller is in safe mode.\n");
+		tb_info(tb, "You need to update NVM firmware of the controller before it can be used.\n");
+		tb_info(tb, "For latest updates check https://thunderbolttechnology.net/updates.\n");
+		return 0;
+	}

	return __icm_driver_ready(tb, &tb->security_level);
}
···

static int icm_start(struct tb *tb)
{
+	struct icm *icm = tb_priv(tb);
	int ret;

-	tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0);
+	if (icm->safe_mode)
+		tb->root_switch = tb_switch_alloc_safe_mode(tb, &tb->dev, 0);
+	else
+		tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0);
	if (!tb->root_switch)
		return -ENODEV;
+
+	/*
+	 * NVM upgrade has not been tested on Apple systems and they
+	 * don't provide images publicly either. To be on the safe side
+	 * prevent root switch NVM upgrade on Macs for now.
+	 */
+	tb->root_switch->no_nvm_upgrade = is_apple();

	ret = tb_switch_add(tb->root_switch);
	if (ret)
···
	nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DRV_UNLOADS, 0);
}

+static int icm_disconnect_pcie_paths(struct tb *tb)
+{
+	return nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DISCONNECT_PCIE_PATHS, 0);
+}
+
/* Falcon Ridge and Alpine Ridge */
static const struct tb_cm_ops icm_fr_ops = {
	.driver_ready = icm_driver_ready,
···
	.approve_switch = icm_fr_approve_switch,
	.add_switch_key = icm_fr_add_switch_key,
	.challenge_switch_key = icm_fr_challenge_switch_key,
+	.disconnect_pcie_paths = icm_disconnect_pcie_paths,
};

struct tb *icm_probe(struct tb_nhi *nhi)
drivers/thunderbolt/nhi.h (+1)
···

enum nhi_mailbox_cmd {
	NHI_MAILBOX_SAVE_DEVS = 0x05,
+	NHI_MAILBOX_DISCONNECT_PCIE_PATHS = 0x06,
	NHI_MAILBOX_DRV_UNLOADS = 0x07,
	NHI_MAILBOX_ALLOW_ALL_DEVS = 0x23,
};
drivers/thunderbolt/switch.c (+583 -22)
···
 */

#include <linux/delay.h>
+#include <linux/idr.h>
+#include <linux/nvmem-provider.h>
+#include <linux/sizes.h>
#include <linux/slab.h>
+#include <linux/vmalloc.h>

#include "tb.h"

/* Switch authorization from userspace is serialized by this lock */
static DEFINE_MUTEX(switch_lock);
+
+/* Switch NVM support */
+
+#define NVM_DEVID		0x05
+#define NVM_VERSION		0x08
+#define NVM_CSS			0x10
+#define NVM_FLASH_SIZE		0x45
+
+#define NVM_MIN_SIZE		SZ_32K
+#define NVM_MAX_SIZE		SZ_512K
+
+static DEFINE_IDA(nvm_ida);
+
+struct nvm_auth_status {
+	struct list_head list;
+	uuid_be uuid;
+	u32 status;
+};
+
+/*
+ * Hold NVM authentication failure status per switch. This information
+ * needs to stay around even when the switch gets power cycled so we
+ * keep it separately.
+ */
+static LIST_HEAD(nvm_auth_status_cache);
+static DEFINE_MUTEX(nvm_auth_status_lock);
+
+static struct nvm_auth_status *__nvm_get_auth_status(const struct tb_switch *sw)
+{
+	struct nvm_auth_status *st;
+
+	list_for_each_entry(st, &nvm_auth_status_cache, list) {
+		if (!uuid_be_cmp(st->uuid, *sw->uuid))
+			return st;
+	}
+
+	return NULL;
+}
+
+static void nvm_get_auth_status(const struct tb_switch *sw, u32 *status)
+{
+	struct nvm_auth_status *st;
+
+	mutex_lock(&nvm_auth_status_lock);
+	st = __nvm_get_auth_status(sw);
+	mutex_unlock(&nvm_auth_status_lock);
+
+	*status = st ? st->status : 0;
+}
+
+static void nvm_set_auth_status(const struct tb_switch *sw, u32 status)
+{
+	struct nvm_auth_status *st;
+
+	if (WARN_ON(!sw->uuid))
+		return;
+
+	mutex_lock(&nvm_auth_status_lock);
+	st = __nvm_get_auth_status(sw);
+
+	if (!st) {
+		st = kzalloc(sizeof(*st), GFP_KERNEL);
+		if (!st)
+			goto unlock;
+
+		memcpy(&st->uuid, sw->uuid, sizeof(st->uuid));
+		INIT_LIST_HEAD(&st->list);
+		list_add_tail(&st->list, &nvm_auth_status_cache);
+	}
+
+	st->status = status;
+unlock:
+	mutex_unlock(&nvm_auth_status_lock);
+}
+
+static void nvm_clear_auth_status(const struct tb_switch *sw)
+{
+	struct nvm_auth_status *st;
+
+	mutex_lock(&nvm_auth_status_lock);
+	st = __nvm_get_auth_status(sw);
+	if (st) {
+		list_del(&st->list);
+		kfree(st);
+	}
+	mutex_unlock(&nvm_auth_status_lock);
+}
+
+static int nvm_validate_and_write(struct tb_switch *sw)
+{
+	unsigned int image_size, hdr_size;
+	const u8 *buf = sw->nvm->buf;
+	u16 ds_size;
+	int ret;
+
+	if (!buf)
+		return -EINVAL;
+
+	image_size = sw->nvm->buf_data_size;
+	if (image_size < NVM_MIN_SIZE || image_size > NVM_MAX_SIZE)
+		return -EINVAL;
+
+	/*
+	 * FARB pointer must point inside the image and must at least
+	 * contain parts of the digital section we will be reading here.
+	 */
+	hdr_size = (*(u32 *)buf) & 0xffffff;
+	if (hdr_size + NVM_DEVID + 2 >= image_size)
+		return -EINVAL;
+
+	/* Digital section start should be aligned to 4k page */
+	if (!IS_ALIGNED(hdr_size, SZ_4K))
+		return -EINVAL;
+
+	/*
+	 * Read digital section size and check that it also fits inside
+	 * the image.
+	 */
+	ds_size = *(u16 *)(buf + hdr_size);
+	if (ds_size >= image_size)
+		return -EINVAL;
+
+	if (!sw->safe_mode) {
+		u16 device_id;
+
+		/*
+		 * Make sure the device ID in the image matches the one
+		 * we read from the switch config space.
+		 */
+		device_id = *(u16 *)(buf + hdr_size + NVM_DEVID);
+		if (device_id != sw->config.device_id)
+			return -EINVAL;
+
+		if (sw->generation < 3) {
+			/* Write CSS headers first */
+			ret = dma_port_flash_write(sw->dma_port,
+				DMA_PORT_CSS_ADDRESS, buf + NVM_CSS,
+				DMA_PORT_CSS_MAX_SIZE);
+			if (ret)
+				return ret;
+		}
+
+		/* Skip headers in the image */
+		buf += hdr_size;
+		image_size -= hdr_size;
+	}
+
+	return dma_port_flash_write(sw->dma_port, 0, buf, image_size);
+}
+
+static int nvm_authenticate_host(struct tb_switch *sw)
+{
+	int ret;
+
+	/*
+	 * Root switch NVM upgrade requires that we disconnect the
+	 * existing PCIe paths first (in case it is not in safe mode
+	 * already).
+	 */
+	if (!sw->safe_mode) {
+		ret = tb_domain_disconnect_pcie_paths(sw->tb);
+		if (ret)
+			return ret;
+		/*
+		 * The host controller goes away pretty soon after this
+		 * if everything goes well so getting timeout is
+		 * expected.
+		 */
+		ret = dma_port_flash_update_auth(sw->dma_port);
+		return ret == -ETIMEDOUT ? 0 : ret;
+	}
+
+	/*
+	 * From safe mode we can get out by just power cycling the
+	 * switch.
+	 */
+	dma_port_power_cycle(sw->dma_port);
+	return 0;
+}
+
+static int nvm_authenticate_device(struct tb_switch *sw)
+{
+	int ret, retries = 10;
+
+	ret = dma_port_flash_update_auth(sw->dma_port);
+	if (ret && ret != -ETIMEDOUT)
+		return ret;
+
+	/*
+	 * Poll here for the authentication status. It takes some time
+	 * for the device to respond (we get timeout for a while). Once
+	 * we get response the device needs to be power cycled in order
+	 * for the new NVM to be taken into use.
+	 */
+	do {
+		u32 status;
+
+		ret = dma_port_flash_update_auth_status(sw->dma_port, &status);
+		if (ret < 0 && ret != -ETIMEDOUT)
+			return ret;
+		if (ret > 0) {
+			if (status) {
+				tb_sw_warn(sw, "failed to authenticate NVM\n");
+				nvm_set_auth_status(sw, status);
+			}
+
+			tb_sw_info(sw, "power cycling the switch now\n");
+			dma_port_power_cycle(sw->dma_port);
+			return 0;
+		}
+
+		msleep(500);
+	} while (--retries);
+
+	return -ETIMEDOUT;
+}
+
+static int tb_switch_nvm_read(void *priv, unsigned int offset, void *val,
+			      size_t bytes)
+{
+	struct tb_switch *sw = priv;
+
+	return dma_port_flash_read(sw->dma_port, offset, val, bytes);
+}
+
+static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val,
+			       size_t bytes)
+{
+	struct tb_switch *sw = priv;
+	int ret = 0;
+
+	if (mutex_lock_interruptible(&switch_lock))
+		return -ERESTARTSYS;
+
+	/*
+	 * Since writing the NVM image might require some special steps,
+	 * for example when CSS headers are written, we cache the image
+	 * locally here and handle the special cases when the user asks
+	 * us to authenticate the image.
+	 */
+	if (!sw->nvm->buf) {
+		sw->nvm->buf = vmalloc(NVM_MAX_SIZE);
+		if (!sw->nvm->buf) {
+			ret = -ENOMEM;
+			goto unlock;
+		}
+	}
+
+	sw->nvm->buf_data_size = offset + bytes;
+	memcpy(sw->nvm->buf + offset, val, bytes);
+
+unlock:
+	mutex_unlock(&switch_lock);
+
+	return ret;
+}
+
+static struct nvmem_device *register_nvmem(struct tb_switch *sw, int id,
+					   size_t size, bool active)
+{
+	struct nvmem_config config;
+
+	memset(&config, 0, sizeof(config));
+
+	if (active) {
+		config.name = "nvm_active";
+		config.reg_read = tb_switch_nvm_read;
+	} else {
+		config.name = "nvm_non_active";
+		config.reg_write = tb_switch_nvm_write;
+	}
+
+	config.id = id;
+	config.stride = 4;
+	config.word_size = 4;
+	config.size = size;
+	config.dev = &sw->dev;
+	config.owner = THIS_MODULE;
+	config.root_only = true;
+	config.priv = sw;
+
+	return nvmem_register(&config);
+}
+
+static int tb_switch_nvm_add(struct tb_switch *sw)
+{
+	struct nvmem_device *nvm_dev;
+	struct tb_switch_nvm *nvm;
+	u32 val;
+	int ret;
+
+	if (!sw->dma_port)
+		return 0;
+
+	nvm = kzalloc(sizeof(*nvm), GFP_KERNEL);
+	if (!nvm)
+		return -ENOMEM;
+
+	nvm->id = ida_simple_get(&nvm_ida, 0, 0, GFP_KERNEL);
+
+	/*
+	 * If the switch is in safe-mode the only accessible portion of
+	 * the NVM is the non-active one where userspace is expected to
+	 * write new functional NVM.
+	 */
+	if (!sw->safe_mode) {
+		u32 nvm_size, hdr_size;
+
+		ret = dma_port_flash_read(sw->dma_port, NVM_FLASH_SIZE, &val,
+					  sizeof(val));
+		if (ret)
+			goto err_ida;
+
+		hdr_size = sw->generation < 3 ? SZ_8K : SZ_16K;
+		nvm_size = (SZ_1M << (val & 7)) / 8;
+		nvm_size = (nvm_size - hdr_size) / 2;
+
+		ret = dma_port_flash_read(sw->dma_port, NVM_VERSION, &val,
+					  sizeof(val));
+		if (ret)
+			goto err_ida;
+
+		nvm->major = val >> 16;
+		nvm->minor = val >> 8;
+
+		nvm_dev = register_nvmem(sw, nvm->id, nvm_size, true);
+		if (IS_ERR(nvm_dev)) {
+			ret = PTR_ERR(nvm_dev);
+			goto err_ida;
+		}
+		nvm->active = nvm_dev;
+	}
+
+	nvm_dev = register_nvmem(sw, nvm->id, NVM_MAX_SIZE, false);
+	if (IS_ERR(nvm_dev)) {
+		ret = PTR_ERR(nvm_dev);
+		goto err_nvm_active;
+	}
+	nvm->non_active = nvm_dev;
+
+	mutex_lock(&switch_lock);
+	sw->nvm = nvm;
+	mutex_unlock(&switch_lock);
+
+	return 0;
+
+err_nvm_active:
+	if (nvm->active)
+		nvmem_unregister(nvm->active);
+err_ida:
+	ida_simple_remove(&nvm_ida, nvm->id);
+	kfree(nvm);
+
+	return ret;
+}
+
+static void tb_switch_nvm_remove(struct tb_switch *sw)
+{
+	struct tb_switch_nvm *nvm;
+
+	mutex_lock(&switch_lock);
+	nvm = sw->nvm;
+	sw->nvm = NULL;
+	mutex_unlock(&switch_lock);
+
+	if (!nvm)
+		return;
+
+	/* Remove authentication status in case the switch is unplugged */
+	if (!nvm->authenticating)
+		nvm_clear_auth_status(sw);
+
+	nvmem_unregister(nvm->non_active);
+	if (nvm->active)
+		nvmem_unregister(nvm->active);
+	ida_simple_remove(&nvm_ida, nvm->id);
+	vfree(nvm->buf);
+	kfree(nvm);
+}

/* port utility functions */

···
}
static DEVICE_ATTR_RW(key);

+static ssize_t nvm_authenticate_show(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	struct tb_switch *sw = tb_to_switch(dev);
+	u32 status;
+
+	nvm_get_auth_status(sw, &status);
+	return sprintf(buf, "%#x\n", status);
+}
+
+static ssize_t nvm_authenticate_store(struct device *dev,
+	struct device_attribute *attr, const char *buf, size_t count)
+{
+	struct tb_switch *sw = tb_to_switch(dev);
+	bool val;
+	int ret;
+
+	if (mutex_lock_interruptible(&switch_lock))
+		return -ERESTARTSYS;
+
+	/* If NVMem devices are not yet added */
+	if (!sw->nvm) {
+		ret = -EAGAIN;
+		goto exit_unlock;
+	}
+
+	ret = kstrtobool(buf, &val);
+	if (ret)
+		goto exit_unlock;
+
+	/* Always clear the authentication status */
+	nvm_clear_auth_status(sw);
+
+	if (val) {
+		ret = nvm_validate_and_write(sw);
+		if (ret)
+			goto exit_unlock;
+
+		sw->nvm->authenticating = true;
+
+		if (!tb_route(sw))
+			ret = nvm_authenticate_host(sw);
+		else
+			ret = nvm_authenticate_device(sw);
+	}
+
+exit_unlock:
+	mutex_unlock(&switch_lock);
+
+	if (ret)
+		return ret;
+	return count;
+}
+static DEVICE_ATTR_RW(nvm_authenticate);
+
+static ssize_t nvm_version_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct tb_switch *sw = tb_to_switch(dev);
+	int ret;
+
+	if (mutex_lock_interruptible(&switch_lock))
+		return -ERESTARTSYS;
+
+	if (sw->safe_mode)
+		ret = -ENODATA;
+	else if (!sw->nvm)
+		ret = -EAGAIN;
+	else
+		ret = sprintf(buf, "%x.%x\n", sw->nvm->major, sw->nvm->minor);
+
+	mutex_unlock(&switch_lock);
+
+	return ret;
+}
+static DEVICE_ATTR_RO(nvm_version);
+
static ssize_t vendor_show(struct device *dev, struct device_attribute *attr,
			   char *buf)
{
···
	&dev_attr_device.attr,
	&dev_attr_device_name.attr,
	&dev_attr_key.attr,
+	&dev_attr_nvm_authenticate.attr,
+	&dev_attr_nvm_version.attr,
	&dev_attr_vendor.attr,
	&dev_attr_vendor_name.attr,
	&dev_attr_unique_id.attr,
···
	    sw->security_level == TB_SECURITY_SECURE)
			return attr->mode;
		return 0;
+	} else if (attr == &dev_attr_nvm_authenticate.attr ||
+		   attr == &dev_attr_nvm_version.attr) {
+		if (sw->dma_port)
+			return attr->mode;
+		return 0;
	}

-	return attr->mode;
+	return sw->safe_mode ? 0 : attr->mode;
}

static struct attribute_group switch_group = {
···
}

/**
+ * tb_switch_alloc_safe_mode() - allocate a switch that is in safe mode
+ * @tb: Pointer to the owning domain
+ * @parent: Parent device for this switch
+ * @route: Route string for this switch
+ *
+ * This creates a switch in safe mode. This means the switch pretty much
+ * lacks all capabilities except DMA configuration port before it is
+ * flashed with a valid NVM firmware.
+ *
+ * The returned switch must be released by calling tb_switch_put().
+ *
+ * Return: Pointer to the allocated switch or %NULL in case of failure
+ */
+struct tb_switch *
+tb_switch_alloc_safe_mode(struct tb *tb, struct device *parent, u64 route)
+{
+	struct tb_switch *sw;
+
+	sw = kzalloc(sizeof(*sw), GFP_KERNEL);
+	if (!sw)
+		return NULL;
+
+	sw->tb = tb;
+	sw->config.depth = tb_route_length(route);
+	sw->config.route_hi = upper_32_bits(route);
+	sw->config.route_lo = lower_32_bits(route);
+	sw->safe_mode = true;
+
+	device_initialize(&sw->dev);
+	sw->dev.parent = parent;
+	sw->dev.bus = &tb_bus_type;
+	sw->dev.type = &tb_switch_type;
+	sw->dev.groups = switch_groups;
+	dev_set_name(&sw->dev, "%u-%llx", tb->index, tb_route(sw));
+
+	return sw;
+}
+
+/**
 * tb_switch_configure() - Uploads configuration to the switch
 * @sw: Switch to configure
 *
···
	sw->uuid = kmemdup(uuid, sizeof(uuid), GFP_KERNEL);
}

-static void tb_switch_add_dma_port(struct tb_switch *sw)
+static int tb_switch_add_dma_port(struct tb_switch *sw)
{
+	u32 status;
+	int ret;
+
	switch (sw->generation) {
	case 3:
		break;
···
	case 2:
		/* Only root switch can be upgraded */
		if (tb_route(sw))
-			return;
+			return 0;
		break;

	default:
-		return;
+		/*
+		 * DMA port is the only thing available when the switch
+		 * is in safe mode.
+		 */
+		if (!sw->safe_mode)
+			return 0;
+		break;
	}

+	if (sw->no_nvm_upgrade)
+		return 0;
+
	sw->dma_port = dma_port_alloc(sw);
+	if (!sw->dma_port)
+		return 0;
+
+	/*
+	 * Check status of the previous flash authentication. If there
+	 * is one we need to power cycle the switch in any case to make
+	 * it functional again.
+	 */
+	ret = dma_port_flash_update_auth_status(sw->dma_port, &status);
+	if (ret <= 0)
+		return ret;
+
+	if (status) {
+		tb_sw_info(sw, "switch flash authentication failed\n");
+		tb_switch_set_uuid(sw);
+		nvm_set_auth_status(sw, status);
+	}
+
+	tb_sw_info(sw, "power cycling the switch now\n");
+	dma_port_power_cycle(sw->dma_port);
+
+	/*
+	 * We return error here which causes the switch adding failure.
+	 * It should appear back after power cycle is complete.
+	 */
+	return -ESHUTDOWN;
}

/**
···
	 * to the userspace. NVM can be accessed through DMA
	 * configuration based mailbox.
	 */
-	tb_switch_add_dma_port(sw);
-
-	/* read drom */
-	ret = tb_drom_read(sw);
-	if (ret) {
-		tb_sw_warn(sw, "tb_eeprom_read_rom failed\n");
+	ret = tb_switch_add_dma_port(sw);
+	if (ret)
		return ret;
-	}
-	tb_sw_info(sw, "uid: %#llx\n", sw->uid);

-	tb_switch_set_uuid(sw);
-
-	for (i = 0; i <= sw->config.max_port_number; i++) {
-		if (sw->ports[i].disabled) {
-			tb_port_info(&sw->ports[i], "disabled by eeprom\n");
-			continue;
-		}
-		ret = tb_init_port(&sw->ports[i]);
-		if (ret)
+	if (!sw->safe_mode) {
+		/* read drom */
+		ret = tb_drom_read(sw);
+		if (ret) {
+			tb_sw_warn(sw, "tb_eeprom_read_rom failed\n");
			return ret;
+		}
+		tb_sw_info(sw, "uid: %#llx\n", sw->uid);
+
+		tb_switch_set_uuid(sw);
+
+		for (i = 0; i <= sw->config.max_port_number; i++) {
+			if (sw->ports[i].disabled) {
+				tb_port_info(&sw->ports[i], "disabled by eeprom\n");
+				continue;
+			}
+			ret = tb_init_port(&sw->ports[i]);
+			if (ret)
+				return ret;
+		}
	}

-	return device_add(&sw->dev);
+	ret = device_add(&sw->dev);
+	if (ret)
+		return ret;
+
+	ret = tb_switch_nvm_add(sw);
+	if (ret)
+		device_del(&sw->dev);
+
+	return ret;
}

/**
···
	if (!sw->is_unplugged)
		tb_plug_events_active(sw, false);

+	tb_switch_nvm_remove(sw);
	device_unregister(&sw->dev);
}

···
		return tb_to_switch(dev);

	return NULL;
+}
+
+void tb_switch_exit(void)
+{
+	ida_destroy(&nvm_ida);
}
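tb_switch_nvm_add() derives the size of the active NVM portion from the
NVM_FLASH_SIZE register: the low three bits encode the flash size, and what
remains after the generation-dependent header is split in half between the
active and non-active portions. The same arithmetic as a standalone sketch
(the helper name is illustrative):

```c
#include <stdint.h>

#define SZ_8K	(8 * 1024)
#define SZ_16K	(16 * 1024)
#define SZ_1M	(1024 * 1024)

/* Mirror of the size arithmetic in tb_switch_nvm_add(): total flash
 * bytes are (SZ_1M << (val & 7)) / 8, the header is 8k for
 * generation < 3 hardware and 16k otherwise, and the remainder is
 * halved between the active and non-active NVM portions. */
static uint32_t active_nvm_size(uint32_t flash_size_reg, unsigned generation)
{
	uint32_t hdr_size = generation < 3 ? SZ_8K : SZ_16K;
	uint32_t nvm_size = (SZ_1M << (flash_size_reg & 7)) / 8;

	return (nvm_size - hdr_size) / 2;
}
```

For example, a register value of 1 yields 256 KiB of flash, giving a
122880-byte active portion on generation 3 hardware.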
drivers/thunderbolt/tb.c (+7)
···
	if (!tb->root_switch)
		return -ENOMEM;

+	/*
+	 * ICM firmware upgrade needs running firmware and in native
+	 * mode that is not available so disable firmware upgrade of the
+	 * root switch.
+	 */
+	tb->root_switch->no_nvm_upgrade = true;
+
	ret = tb_switch_configure(tb->root_switch);
	if (ret) {
		tb_switch_put(tb->root_switch);
drivers/thunderbolt/tb.h (+39 -1)
···
#ifndef TB_H_
#define TB_H_

+#include <linux/nvmem-provider.h>
#include <linux/pci.h>
#include <linux/uuid.h>

#include "tb_regs.h"
#include "ctl.h"
#include "dma_port.h"
+
+/**
+ * struct tb_switch_nvm - Structure holding switch NVM information
+ * @major: Major version number of the active NVM portion
+ * @minor: Minor version number of the active NVM portion
+ * @id: Identifier used with both NVM portions
+ * @active: Active portion NVMem device
+ * @non_active: Non-active portion NVMem device
+ * @buf: Buffer where the NVM image is stored before it is written to
+ *	 the actual NVM flash device
+ * @buf_data_size: Number of bytes actually consumed by the new NVM
+ *		   image
+ * @authenticating: The switch is authenticating the new NVM
+ */
+struct tb_switch_nvm {
+	u8 major;
+	u8 minor;
+	int id;
+	struct nvmem_device *active;
+	struct nvmem_device *non_active;
+	void *buf;
+	size_t buf_data_size;
+	bool authenticating;
+};

/**
 * enum tb_security_level - Thunderbolt security level
···
 * @ports: Ports in this switch
 * @dma_port: If the switch has port supporting DMA configuration based
 *	      mailbox this will hold the pointer to that (%NULL
- *	      otherwise).
+ *	      otherwise). If set it also means the switch has
+ *	      upgradeable NVM.
 * @tb: Pointer to the domain the switch belongs to
 * @uid: Unique ID of the switch
 * @uuid: UUID of the switch (or %NULL if not supported)
···
 * @cap_plug_events: Offset to the plug events capability (%0 if not found)
 * @is_unplugged: The switch is going away
 * @drom: DROM of the switch (%NULL if not found)
+ * @nvm: Pointer to the NVM if the switch has one (%NULL otherwise)
+ * @no_nvm_upgrade: Prevent NVM upgrade of this switch
+ * @safe_mode: The switch is in safe-mode
 * @authorized: Whether the switch is authorized by user or policy
 * @work: Work used to automatically authorize a switch
 * @security_level: Switch supported security level
···
	int cap_plug_events;
	bool is_unplugged;
	u8 *drom;
+	struct tb_switch_nvm *nvm;
+	bool no_nvm_upgrade;
+	bool safe_mode;
	unsigned int authorized;
	struct work_struct work;
	enum tb_security_level security_level;
···
 * @approve_switch: Approve switch
 * @add_switch_key: Add key to switch
 * @challenge_switch_key: Challenge switch using key
+ * @disconnect_pcie_paths: Disconnects PCIe paths before NVM update
 */
struct tb_cm_ops {
	int (*driver_ready)(struct tb *tb);
···
	int (*add_switch_key)(struct tb *tb, struct tb_switch *sw);
	int (*challenge_switch_key)(struct tb *tb, struct tb_switch *sw,
				    const u8 *challenge, u8 *response);
+	int (*disconnect_pcie_paths)(struct tb *tb);
};

/**
···

int tb_domain_init(void);
void tb_domain_exit(void);
+void tb_switch_exit(void);

struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize);
int tb_domain_add(struct tb *tb);
···
int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw);
int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw);
int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw);
+int tb_domain_disconnect_pcie_paths(struct tb *tb);

static inline void tb_domain_put(struct tb *tb)
{
···

struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
				  u64 route);
+struct tb_switch *tb_switch_alloc_safe_mode(struct tb *tb,
+					    struct device *parent, u64 route);
int tb_switch_configure(struct tb_switch *sw);
int tb_switch_add(struct tb_switch *sw);
void tb_switch_remove(struct tb_switch *sw);