x86/microcode: Default-disable late loading

It is dangerous and it should not be used anyway - there's a nice early
loading already.

Requested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20220525161232.14924-3-bp@alien8.de

Authored by Borislav Petkov, committed by Thomas Gleixner (commits a77a94f8, 181b6f40)

+19 -1
+11
arch/x86/Kconfig
···
1350 1350   	  If you select this option, microcode patch loading support for AMD
1351 1351   	  processors will be enabled.
1352 1352
     1353 + config MICROCODE_LATE_LOADING
     1354 + 	bool "Late microcode loading (DANGEROUS)"
     1355 + 	default n
     1356 + 	depends on MICROCODE
     1357 + 	help
     1358 + 	  Loading microcode late, when the system is up and executing instructions
     1359 + 	  is a tricky business and should be avoided if possible. Just the sequence
     1360 + 	  of synchronizing all cores and SMT threads is one fragile dance which does
     1361 + 	  not guarantee that cores might not softlock after the loading. Therefore,
     1362 + 	  use this at your own risk. Late loading taints the kernel too.
     1363 +
1353 1364   config X86_MSR
1354 1365   	tristate "/dev/cpu/*/msr - Model-specific register support"
1355 1366   	help
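Because the new option defaults to n, anyone who still needs the sysfs reload interface must opt in explicitly at build time. A minimal .config fragment might look like the following (the MICROCODE_INTEL/MICROCODE_AMD options shown are the usual companions in kernels of this era, not part of this patch):

```
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_LATE_LOADING=y
```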
+2
arch/x86/kernel/cpu/common.c
···
2222 2222   }
2223 2223   #endif
2224 2224
     2225 + #ifdef CONFIG_MICROCODE_LATE_LOADING
2225 2226   /*
2226 2227    * The microcode loader calls this upon late microcode load to recheck features,
2227 2228    * only when microcode has been updated. Caller holds microcode_mutex and CPU
···
2252 2251   	pr_warn("x86/CPU: CPU features have changed after loading microcode, but might not take effect.\n");
2253 2252   	pr_warn("x86/CPU: Please consider either early loading through initrd/built-in or a potential BIOS update.\n");
2254 2253   }
     2254 + #endif
2255 2255
2256 2256   /*
2257 2257    * Invoked from core CPU hotplug code after hotplug operations
+6 -1
arch/x86/kernel/cpu/microcode/core.c
···
376 376   /* fake device for request_firmware */
377 377   static struct platform_device *microcode_pdev;
378 378
    379 + #ifdef CONFIG_MICROCODE_LATE_LOADING
379 380   /*
380 381    * Late loading dance. Why the heavy-handed stomp_machine effort?
381 382    *
···
544 543   	return ret;
545 544   }
546 545
    546 + static DEVICE_ATTR_WO(reload);
    547 + #endif
    548 +
547 549   static ssize_t version_show(struct device *dev,
548 550   			    struct device_attribute *attr, char *buf)
549 551   {
···
563 559   	return sprintf(buf, "0x%x\n", uci->cpu_sig.pf);
564 560   }
565 561
566     - static DEVICE_ATTR_WO(reload);
567 562   static DEVICE_ATTR(version, 0444, version_show, NULL);
568 563   static DEVICE_ATTR(processor_flags, 0444, pf_show, NULL);
569 564
···
715 712   }
716 713
717 714   static struct attribute *cpu_root_microcode_attrs[] = {
    715 + #ifdef CONFIG_MICROCODE_LATE_LOADING
718 716   	&dev_attr_reload.attr,
    717 + #endif
719 718   	NULL
720 719   };