Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'next-lockdown' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security

Pull kernel lockdown mode from James Morris:
"This is the latest iteration of the kernel lockdown patchset, from
Matthew Garrett, David Howells and others.

From the original description:

This patchset introduces an optional kernel lockdown feature,
intended to strengthen the boundary between UID 0 and the kernel.
When enabled, various pieces of kernel functionality are restricted.
Applications that rely on low-level access to either hardware or the
kernel may cease working as a result - therefore this should not be
enabled without appropriate evaluation beforehand.

The majority of mainstream distributions have been carrying variants
of this patchset for many years now, so there's value in providing a
mainline implementation that, even if it doesn't meet every
distribution requirement, gets us much closer to not requiring
external patches.

There are two major changes since this was last proposed for mainline:

- Separating lockdown from EFI secure boot. Background discussion is
covered here: https://lwn.net/Articles/751061/

- Implementation as an LSM, with a default stackable lockdown LSM
module. This allows the lockdown feature to be policy-driven,
rather than encoding an implicit policy within the mechanism.

The new locked_down LSM hook is provided to allow LSMs to make a
policy decision around whether kernel functionality that would allow
tampering with or examining the runtime state of the kernel should be
permitted.

The included lockdown LSM provides an implementation with a simple
policy intended for general purpose use. This policy provides a coarse
level of granularity, controllable via the kernel command line:

lockdown={integrity|confidentiality}

Enable the kernel lockdown feature. If set to integrity, kernel features
that allow userland to modify the running kernel are disabled. If set to
confidentiality, kernel features that allow userland to extract
confidential information from the kernel are also disabled.

This may also be controlled via /sys/kernel/security/lockdown and
overridden by kernel configuration.

New or existing LSMs may implement finer-grained controls of the
lockdown features. Refer to the lockdown_reason documentation in
include/linux/security.h for details.

The lockdown feature has had significant design feedback and review
across many subsystems. This code has been in linux-next for some
weeks, with a few fixes applied along the way.

Stephen Rothwell noted that commit 9d1f8be5cf42 ("bpf: Restrict bpf
when kernel lockdown is in confidentiality mode") is missing a
Signed-off-by from its author. Matthew responded that he is providing
this under category (c) of the DCO"

* 'next-lockdown' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (31 commits)
kexec: Fix file verification on S390
security: constify some arrays in lockdown LSM
lockdown: Print current->comm in restriction messages
efi: Restrict efivar_ssdt_load when the kernel is locked down
tracefs: Restrict tracefs when the kernel is locked down
debugfs: Restrict debugfs when the kernel is locked down
kexec: Allow kexec_file() with appropriate IMA policy when locked down
lockdown: Lock down perf when in confidentiality mode
bpf: Restrict bpf when kernel lockdown is in confidentiality mode
lockdown: Lock down tracing and perf kprobes when in confidentiality mode
lockdown: Lock down /proc/kcore
x86/mmiotrace: Lock down the testmmiotrace module
lockdown: Lock down module params that specify hardware parameters (eg. ioport)
lockdown: Lock down TIOCSSERIAL
lockdown: Prohibit PCMCIA CIS storage when the kernel is locked down
acpi: Disable ACPI table override if the kernel is locked down
acpi: Ignore acpi_rsdp kernel param when the kernel has been locked down
ACPI: Limit access to custom_method when the kernel is locked down
x86/msr: Restrict MSR access when the kernel is locked down
x86: Lock down IO port access when the kernel is locked down
...

+863 -78
+9
Documentation/admin-guide/kernel-parameters.txt
··· 2276 2276 lockd.nlm_udpport=M [NFS] Assign UDP port. 2277 2277 Format: <integer> 2278 2278 2279 + lockdown= [SECURITY] 2280 + { integrity | confidentiality } 2281 + Enable the kernel lockdown feature. If set to 2282 + integrity, kernel features that allow userland to 2283 + modify the running kernel are disabled. If set to 2284 + confidentiality, kernel features that allow userland 2285 + to extract confidential information from the kernel 2286 + are also disabled. 2287 + 2279 2288 locktorture.nreaders_stress= [KNL] 2280 2289 Set the number of locking read-acquisition kthreads. 2281 2290 Defaults to being automatically set based on the
+3 -3
arch/arm64/Kconfig
··· 982 982 for kernel and initramfs as opposed to list of segments as 983 983 accepted by previous system call. 984 984 985 - config KEXEC_VERIFY_SIG 985 + config KEXEC_SIG 986 986 bool "Verify kernel signature during kexec_file_load() syscall" 987 987 depends on KEXEC_FILE 988 988 help ··· 997 997 config KEXEC_IMAGE_VERIFY_SIG 998 998 bool "Enable Image signature verification support" 999 999 default y 1000 - depends on KEXEC_VERIFY_SIG 1000 + depends on KEXEC_SIG 1001 1001 depends on EFI && SIGNED_PE_FILE_VERIFICATION 1002 1002 help 1003 1003 Enable Image signature verification support. 1004 1004 1005 1005 comment "Support for PE file signature verification disabled" 1006 - depends on KEXEC_VERIFY_SIG 1006 + depends on KEXEC_SIG 1007 1007 depends on !EFI || !SIGNED_PE_FILE_VERIFICATION 1008 1008 1009 1009 config CRASH_DUMP
+1 -1
arch/s390/Kconfig
··· 554 554 def_bool y 555 555 depends on KEXEC_FILE 556 556 557 - config KEXEC_VERIFY_SIG 557 + config KEXEC_SIG 558 558 bool "Verify kernel signature during kexec_file_load() syscall" 559 559 depends on KEXEC_FILE && MODULE_SIG_FORMAT 560 560 help
+2 -2
arch/s390/kernel/kexec_elf.c
··· 130 130 const struct kexec_file_ops s390_kexec_elf_ops = { 131 131 .probe = s390_elf_probe, 132 132 .load = s390_elf_load, 133 - #ifdef CONFIG_KEXEC_VERIFY_SIG 133 + #ifdef CONFIG_KEXEC_SIG 134 134 .verify_sig = s390_verify_sig, 135 - #endif /* CONFIG_KEXEC_VERIFY_SIG */ 135 + #endif /* CONFIG_KEXEC_SIG */ 136 136 };
+2 -2
arch/s390/kernel/kexec_image.c
··· 59 59 const struct kexec_file_ops s390_kexec_image_ops = { 60 60 .probe = s390_image_probe, 61 61 .load = s390_image_load, 62 - #ifdef CONFIG_KEXEC_VERIFY_SIG 62 + #ifdef CONFIG_KEXEC_SIG 63 63 .verify_sig = s390_verify_sig, 64 - #endif /* CONFIG_KEXEC_VERIFY_SIG */ 64 + #endif /* CONFIG_KEXEC_SIG */ 65 65 };
+2 -2
arch/s390/kernel/machine_kexec_file.c
··· 22 22 NULL, 23 23 }; 24 24 25 - #ifdef CONFIG_KEXEC_VERIFY_SIG 25 + #ifdef CONFIG_KEXEC_SIG 26 26 int s390_verify_sig(const char *kernel, unsigned long kernel_len) 27 27 { 28 28 const unsigned long marker_len = sizeof(MODULE_SIG_STRING) - 1; ··· 68 68 VERIFYING_MODULE_SIGNATURE, 69 69 NULL, NULL); 70 70 } 71 - #endif /* CONFIG_KEXEC_VERIFY_SIG */ 71 + #endif /* CONFIG_KEXEC_SIG */ 72 72 73 73 static int kexec_file_update_purgatory(struct kimage *image, 74 74 struct s390_load_data *data)
+16 -6
arch/x86/Kconfig
··· 2031 2031 config ARCH_HAS_KEXEC_PURGATORY 2032 2032 def_bool KEXEC_FILE 2033 2033 2034 - config KEXEC_VERIFY_SIG 2034 + config KEXEC_SIG 2035 2035 bool "Verify kernel signature during kexec_file_load() syscall" 2036 2036 depends on KEXEC_FILE 2037 + ---help--- 2038 + 2039 + This option makes the kexec_file_load() syscall check for a valid 2040 + signature of the kernel image. The image can still be loaded without 2041 + a valid signature unless you also enable KEXEC_SIG_FORCE, though if 2042 + there's a signature that we can check, then it must be valid. 2043 + 2044 + In addition to this option, you need to enable signature 2045 + verification for the corresponding kernel image type being 2046 + loaded in order for this to work. 2047 + 2048 + config KEXEC_SIG_FORCE 2049 + bool "Require a valid signature in kexec_file_load() syscall" 2050 + depends on KEXEC_SIG 2037 2051 ---help--- 2038 2052 This option makes kernel signature verification mandatory for 2039 2053 the kexec_file_load() syscall. 2040 2054 2041 - In addition to that option, you need to enable signature 2042 - verification for the corresponding kernel image type being 2043 - loaded in order for this to work. 2044 - 2045 2055 config KEXEC_BZIMAGE_VERIFY_SIG 2046 2056 bool "Enable bzImage signature verification support" 2047 - depends on KEXEC_VERIFY_SIG 2057 + depends on KEXEC_SIG 2048 2058 depends on SIGNED_PE_FILE_VERIFICATION 2049 2059 select SYSTEM_TRUSTED_KEYRING 2050 2060 ---help---
+13 -6
arch/x86/boot/compressed/acpi.c
··· 26 26 */ 27 27 #define MAX_ADDR_LEN 19 28 28 29 - static acpi_physical_address get_acpi_rsdp(void) 29 + static acpi_physical_address get_cmdline_acpi_rsdp(void) 30 30 { 31 31 acpi_physical_address addr = 0; 32 32 ··· 278 278 { 279 279 acpi_physical_address pa; 280 280 281 - pa = get_acpi_rsdp(); 282 - 283 - if (!pa) 284 - pa = boot_params->acpi_rsdp_addr; 281 + pa = boot_params->acpi_rsdp_addr; 285 282 286 283 /* 287 284 * Try to get EFI data from setup_data. This can happen when we're a ··· 308 311 char arg[10]; 309 312 u8 *entry; 310 313 311 - rsdp = (struct acpi_table_rsdp *)(long)boot_params->acpi_rsdp_addr; 314 + /* 315 + * Check whether we were given an RSDP on the command line. We don't 316 + * stash this in boot params because the kernel itself may have 317 + * different ideas about whether to trust a command-line parameter. 318 + */ 319 + rsdp = (struct acpi_table_rsdp *)get_cmdline_acpi_rsdp(); 320 + 321 + if (!rsdp) 322 + rsdp = (struct acpi_table_rsdp *)(long) 323 + boot_params->acpi_rsdp_addr; 324 + 312 325 if (!rsdp) 313 326 return 0; 314 327
+9
arch/x86/include/asm/acpi.h
··· 117 117 return !!acpi_lapic; 118 118 } 119 119 120 + #define ACPI_HAVE_ARCH_SET_ROOT_POINTER 121 + static inline void acpi_arch_set_root_pointer(u64 addr) 122 + { 123 + x86_init.acpi.set_root_pointer(addr); 124 + } 125 + 120 126 #define ACPI_HAVE_ARCH_GET_ROOT_POINTER 121 127 static inline u64 acpi_arch_get_root_pointer(void) 122 128 { ··· 131 125 132 126 void acpi_generic_reduced_hw_init(void); 133 127 128 + void x86_default_set_root_pointer(u64 addr); 134 129 u64 x86_default_get_root_pointer(void); 135 130 136 131 #else /* !CONFIG_ACPI */ ··· 144 137 static inline void disable_acpi(void) { } 145 138 146 139 static inline void acpi_generic_reduced_hw_init(void) { } 140 + 141 + static inline void x86_default_set_root_pointer(u64 addr) { } 147 142 148 143 static inline u64 x86_default_get_root_pointer(void) 149 144 {
+2
arch/x86/include/asm/x86_init.h
··· 134 134 135 135 /** 136 136 * struct x86_init_acpi - x86 ACPI init functions 137 + * @set_root_poitner: set RSDP address 137 138 * @get_root_pointer: get RSDP address 138 139 * @reduced_hw_early_init: hardware reduced platform early init 139 140 */ 140 141 struct x86_init_acpi { 142 + void (*set_root_pointer)(u64 addr); 141 143 u64 (*get_root_pointer)(void); 142 144 void (*reduced_hw_early_init)(void); 143 145 };
+5
arch/x86/kernel/acpi/boot.c
··· 1760 1760 e820__update_table_print(); 1761 1761 } 1762 1762 1763 + void x86_default_set_root_pointer(u64 addr) 1764 + { 1765 + boot_params.acpi_rsdp_addr = addr; 1766 + } 1767 + 1763 1768 u64 x86_default_get_root_pointer(void) 1764 1769 { 1765 1770 return boot_params.acpi_rsdp_addr;
+2 -2
arch/x86/kernel/ima_arch.c
··· 74 74 75 75 /* secureboot arch rules */ 76 76 static const char * const sb_arch_rules[] = { 77 - #if !IS_ENABLED(CONFIG_KEXEC_VERIFY_SIG) 77 + #if !IS_ENABLED(CONFIG_KEXEC_SIG) 78 78 "appraise func=KEXEC_KERNEL_CHECK appraise_type=imasig", 79 - #endif /* CONFIG_KEXEC_VERIFY_SIG */ 79 + #endif /* CONFIG_KEXEC_SIG */ 80 80 "measure func=KEXEC_KERNEL_CHECK", 81 81 #if !IS_ENABLED(CONFIG_MODULE_SIG) 82 82 "appraise func=MODULE_CHECK appraise_type=imasig",
+5 -2
arch/x86/kernel/ioport.c
··· 11 11 #include <linux/errno.h> 12 12 #include <linux/types.h> 13 13 #include <linux/ioport.h> 14 + #include <linux/security.h> 14 15 #include <linux/smp.h> 15 16 #include <linux/stddef.h> 16 17 #include <linux/slab.h> ··· 32 31 33 32 if ((from + num <= from) || (from + num > IO_BITMAP_BITS)) 34 33 return -EINVAL; 35 - if (turn_on && !capable(CAP_SYS_RAWIO)) 34 + if (turn_on && (!capable(CAP_SYS_RAWIO) || 35 + security_locked_down(LOCKDOWN_IOPORT))) 36 36 return -EPERM; 37 37 38 38 /* ··· 128 126 return -EINVAL; 129 127 /* Trying to gain more privileges? */ 130 128 if (level > old) { 131 - if (!capable(CAP_SYS_RAWIO)) 129 + if (!capable(CAP_SYS_RAWIO) || 130 + security_locked_down(LOCKDOWN_IOPORT)) 132 131 return -EPERM; 133 132 } 134 133 regs->flags = (regs->flags & ~X86_EFLAGS_IOPL) |
+1
arch/x86/kernel/kexec-bzimage64.c
··· 180 180 if (efi_enabled(EFI_OLD_MEMMAP)) 181 181 return 0; 182 182 183 + params->secure_boot = boot_params.secure_boot; 183 184 ei->efi_loader_signature = current_ei->efi_loader_signature; 184 185 ei->efi_systab = current_ei->efi_systab; 185 186 ei->efi_systab_hi = current_ei->efi_systab_hi;
+8
arch/x86/kernel/msr.c
··· 34 34 #include <linux/notifier.h> 35 35 #include <linux/uaccess.h> 36 36 #include <linux/gfp.h> 37 + #include <linux/security.h> 37 38 38 39 #include <asm/cpufeature.h> 39 40 #include <asm/msr.h> ··· 79 78 int cpu = iminor(file_inode(file)); 80 79 int err = 0; 81 80 ssize_t bytes = 0; 81 + 82 + err = security_locked_down(LOCKDOWN_MSR); 83 + if (err) 84 + return err; 82 85 83 86 if (count % 8) 84 87 return -EINVAL; /* Invalid chunk size */ ··· 135 130 err = -EFAULT; 136 131 break; 137 132 } 133 + err = security_locked_down(LOCKDOWN_MSR); 134 + if (err) 135 + break; 138 136 err = wrmsr_safe_regs_on_cpu(cpu, regs); 139 137 if (err) 140 138 break;
+1
arch/x86/kernel/x86_init.c
··· 95 95 }, 96 96 97 97 .acpi = { 98 + .set_root_pointer = x86_default_set_root_pointer, 98 99 .get_root_pointer = x86_default_get_root_pointer, 99 100 .reduced_hw_early_init = acpi_generic_reduced_hw_init, 100 101 },
+5
arch/x86/mm/testmmiotrace.c
··· 8 8 #include <linux/module.h> 9 9 #include <linux/io.h> 10 10 #include <linux/mmiotrace.h> 11 + #include <linux/security.h> 11 12 12 13 static unsigned long mmio_address; 13 14 module_param_hw(mmio_address, ulong, iomem, 0); ··· 116 115 static int __init init(void) 117 116 { 118 117 unsigned long size = (read_far) ? (8 << 20) : (16 << 10); 118 + int ret = security_locked_down(LOCKDOWN_MMIOTRACE); 119 + 120 + if (ret) 121 + return ret; 119 122 120 123 if (mmio_address == 0) { 121 124 pr_err("you have to use the module argument mmio_address.\n");
+3 -1
crypto/asymmetric_keys/verify_pefile.c
··· 96 96 97 97 if (!ddir->certs.virtual_address || !ddir->certs.size) { 98 98 pr_debug("Unsigned PE binary\n"); 99 - return -EKEYREJECTED; 99 + return -ENODATA; 100 100 } 101 101 102 102 chkaddr(ctx->header_size, ddir->certs.virtual_address, ··· 402 402 * 403 403 * (*) 0 if at least one signature chain intersects with the keys in the trust 404 404 * keyring, or: 405 + * 406 + * (*) -ENODATA if there is no signature present. 405 407 * 406 408 * (*) -ENOPKG if a suitable crypto module couldn't be found for a check on a 407 409 * chain.
+6
drivers/acpi/custom_method.c
··· 9 9 #include <linux/uaccess.h> 10 10 #include <linux/debugfs.h> 11 11 #include <linux/acpi.h> 12 + #include <linux/security.h> 12 13 13 14 #include "internal.h" 14 15 ··· 30 29 31 30 struct acpi_table_header table; 32 31 acpi_status status; 32 + int ret; 33 + 34 + ret = security_locked_down(LOCKDOWN_ACPI_TABLES); 35 + if (ret) 36 + return ret; 33 37 34 38 if (!(*ppos)) { 35 39 /* parse the table header to get the table length */
+13 -1
drivers/acpi/osl.c
··· 27 27 #include <linux/list.h> 28 28 #include <linux/jiffies.h> 29 29 #include <linux/semaphore.h> 30 + #include <linux/security.h> 30 31 31 32 #include <asm/io.h> 32 33 #include <linux/uaccess.h> ··· 183 182 acpi_physical_address pa; 184 183 185 184 #ifdef CONFIG_KEXEC 186 - if (acpi_rsdp) 185 + /* 186 + * We may have been provided with an RSDP on the command line, 187 + * but if a malicious user has done so they may be pointing us 188 + * at modified ACPI tables that could alter kernel behaviour - 189 + * so, we check the lockdown status before making use of 190 + * it. If we trust it then also stash it in an architecture 191 + * specific location (if appropriate) so it can be carried 192 + * over further kexec()s. 193 + */ 194 + if (acpi_rsdp && !security_locked_down(LOCKDOWN_ACPI_TABLES)) { 195 + acpi_arch_set_root_pointer(acpi_rsdp); 187 196 return acpi_rsdp; 197 + } 188 198 #endif 189 199 pa = acpi_arch_get_root_pointer(); 190 200 if (pa)
+6
drivers/acpi/tables.c
··· 20 20 #include <linux/memblock.h> 21 21 #include <linux/earlycpio.h> 22 22 #include <linux/initrd.h> 23 + #include <linux/security.h> 23 24 #include "internal.h" 24 25 25 26 #ifdef CONFIG_ACPI_CUSTOM_DSDT ··· 578 577 } 579 578 if (table_nr == 0) 580 579 return; 580 + 581 + if (security_locked_down(LOCKDOWN_ACPI_TABLES)) { 582 + pr_notice("kernel is locked down, ignoring table override\n"); 583 + return; 584 + } 581 585 582 586 acpi_tables_addr = 583 587 memblock_find_in_range(0, ACPI_TABLE_UPGRADE_MAX_PHYS,
+5 -2
drivers/char/mem.c
··· 29 29 #include <linux/export.h> 30 30 #include <linux/io.h> 31 31 #include <linux/uio.h> 32 - 33 32 #include <linux/uaccess.h> 33 + #include <linux/security.h> 34 34 35 35 #ifdef CONFIG_IA64 36 36 # include <linux/efi.h> ··· 807 807 808 808 static int open_port(struct inode *inode, struct file *filp) 809 809 { 810 - return capable(CAP_SYS_RAWIO) ? 0 : -EPERM; 810 + if (!capable(CAP_SYS_RAWIO)) 811 + return -EPERM; 812 + 813 + return security_locked_down(LOCKDOWN_DEV_MEM); 811 814 } 812 815 813 816 #define zero_lseek null_lseek
+6
drivers/firmware/efi/efi.c
··· 30 30 #include <linux/acpi.h> 31 31 #include <linux/ucs2_string.h> 32 32 #include <linux/memblock.h> 33 + #include <linux/security.h> 33 34 34 35 #include <asm/early_ioremap.h> 35 36 ··· 222 221 static char efivar_ssdt[EFIVAR_SSDT_NAME_MAX] __initdata; 223 222 static int __init efivar_ssdt_setup(char *str) 224 223 { 224 + int ret = security_locked_down(LOCKDOWN_ACPI_TABLES); 225 + 226 + if (ret) 227 + return ret; 228 + 225 229 if (strlen(str) < sizeof(efivar_ssdt)) 226 230 memcpy(efivar_ssdt, str, strlen(str)); 227 231 else
+16
drivers/pci/pci-sysfs.c
··· 755 755 unsigned int size = count; 756 756 loff_t init_off = off; 757 757 u8 *data = (u8 *) buf; 758 + int ret; 759 + 760 + ret = security_locked_down(LOCKDOWN_PCI_ACCESS); 761 + if (ret) 762 + return ret; 758 763 759 764 if (off > dev->cfg_size) 760 765 return 0; ··· 1021 1016 int bar = (unsigned long)attr->private; 1022 1017 enum pci_mmap_state mmap_type; 1023 1018 struct resource *res = &pdev->resource[bar]; 1019 + int ret; 1020 + 1021 + ret = security_locked_down(LOCKDOWN_PCI_ACCESS); 1022 + if (ret) 1023 + return ret; 1024 1024 1025 1025 if (res->flags & IORESOURCE_MEM && iomem_is_exclusive(res->start)) 1026 1026 return -EINVAL; ··· 1102 1092 struct bin_attribute *attr, char *buf, 1103 1093 loff_t off, size_t count) 1104 1094 { 1095 + int ret; 1096 + 1097 + ret = security_locked_down(LOCKDOWN_PCI_ACCESS); 1098 + if (ret) 1099 + return ret; 1100 + 1105 1101 return pci_resource_io(filp, kobj, attr, buf, off, count, true); 1106 1102 } 1107 1103
+12 -2
drivers/pci/proc.c
··· 13 13 #include <linux/seq_file.h> 14 14 #include <linux/capability.h> 15 15 #include <linux/uaccess.h> 16 + #include <linux/security.h> 16 17 #include <asm/byteorder.h> 17 18 #include "pci.h" 18 19 ··· 116 115 struct pci_dev *dev = PDE_DATA(ino); 117 116 int pos = *ppos; 118 117 int size = dev->cfg_size; 119 - int cnt; 118 + int cnt, ret; 119 + 120 + ret = security_locked_down(LOCKDOWN_PCI_ACCESS); 121 + if (ret) 122 + return ret; 120 123 121 124 if (pos >= size) 122 125 return 0; ··· 201 196 #endif /* HAVE_PCI_MMAP */ 202 197 int ret = 0; 203 198 199 + ret = security_locked_down(LOCKDOWN_PCI_ACCESS); 200 + if (ret) 201 + return ret; 202 + 204 203 switch (cmd) { 205 204 case PCIIOC_CONTROLLER: 206 205 ret = pci_domain_nr(dev->bus); ··· 247 238 struct pci_filp_private *fpriv = file->private_data; 248 239 int i, ret, write_combine = 0, res_bit = IORESOURCE_MEM; 249 240 250 - if (!capable(CAP_SYS_RAWIO)) 241 + if (!capable(CAP_SYS_RAWIO) || 242 + security_locked_down(LOCKDOWN_PCI_ACCESS)) 251 243 return -EPERM; 252 244 253 245 if (fpriv->mmap_state == pci_mmap_io) {
+3 -1
drivers/pci/syscall.c
··· 7 7 8 8 #include <linux/errno.h> 9 9 #include <linux/pci.h> 10 + #include <linux/security.h> 10 11 #include <linux/syscalls.h> 11 12 #include <linux/uaccess.h> 12 13 #include "pci.h" ··· 91 90 u32 dword; 92 91 int err = 0; 93 92 94 - if (!capable(CAP_SYS_ADMIN)) 93 + if (!capable(CAP_SYS_ADMIN) || 94 + security_locked_down(LOCKDOWN_PCI_ACCESS)) 95 95 return -EPERM; 96 96 97 97 dev = pci_get_domain_bus_and_slot(0, bus, dfn);
+5
drivers/pcmcia/cistpl.c
··· 21 21 #include <linux/pci.h> 22 22 #include <linux/ioport.h> 23 23 #include <linux/io.h> 24 + #include <linux/security.h> 24 25 #include <asm/byteorder.h> 25 26 #include <asm/unaligned.h> 26 27 ··· 1575 1574 { 1576 1575 struct pcmcia_socket *s; 1577 1576 int error; 1577 + 1578 + error = security_locked_down(LOCKDOWN_PCMCIA_CIS); 1579 + if (error) 1580 + return error; 1578 1581 1579 1582 s = to_socket(container_of(kobj, struct device, kobj)); 1580 1583
+5
drivers/tty/serial/serial_core.c
··· 22 22 #include <linux/serial_core.h> 23 23 #include <linux/delay.h> 24 24 #include <linux/mutex.h> 25 + #include <linux/security.h> 25 26 26 27 #include <linux/irq.h> 27 28 #include <linux/uaccess.h> ··· 862 861 uport->custom_divisor = new_info->custom_divisor; 863 862 goto check_and_exit; 864 863 } 864 + 865 + retval = security_locked_down(LOCKDOWN_TIOCSSERIAL); 866 + if (retval && (change_irq || change_port)) 867 + goto exit; 865 868 866 869 /* 867 870 * Ask the low level driver to verify the settings.
+30
fs/debugfs/file.c
··· 19 19 #include <linux/atomic.h> 20 20 #include <linux/device.h> 21 21 #include <linux/poll.h> 22 + #include <linux/security.h> 22 23 23 24 #include "internal.h" 24 25 ··· 137 136 } 138 137 EXPORT_SYMBOL_GPL(debugfs_file_put); 139 138 139 + /* 140 + * Only permit access to world-readable files when the kernel is locked down. 141 + * We also need to exclude any file that has ways to write or alter it as root 142 + * can bypass the permissions check. 143 + */ 144 + static bool debugfs_is_locked_down(struct inode *inode, 145 + struct file *filp, 146 + const struct file_operations *real_fops) 147 + { 148 + if ((inode->i_mode & 07777) == 0444 && 149 + !(filp->f_mode & FMODE_WRITE) && 150 + !real_fops->unlocked_ioctl && 151 + !real_fops->compat_ioctl && 152 + !real_fops->mmap) 153 + return false; 154 + 155 + return security_locked_down(LOCKDOWN_DEBUGFS); 156 + } 157 + 140 158 static int open_proxy_open(struct inode *inode, struct file *filp) 141 159 { 142 160 struct dentry *dentry = F_DENTRY(filp); ··· 167 147 return r == -EIO ? -ENOENT : r; 168 148 169 149 real_fops = debugfs_real_fops(filp); 150 + 151 + r = debugfs_is_locked_down(inode, filp, real_fops); 152 + if (r) 153 + goto out; 154 + 170 155 real_fops = fops_get(real_fops); 171 156 if (!real_fops) { 172 157 /* Huh? Module did not clean up after itself at exit? */ ··· 297 272 return r == -EIO ? -ENOENT : r; 298 273 299 274 real_fops = debugfs_real_fops(filp); 275 + 276 + r = debugfs_is_locked_down(inode, filp, real_fops); 277 + if (r) 278 + goto out; 279 + 300 280 real_fops = fops_get(real_fops); 301 281 if (!real_fops) { 302 282 /* Huh? Module did not cleanup after itself at exit? */
+30 -2
fs/debugfs/inode.c
··· 26 26 #include <linux/parser.h> 27 27 #include <linux/magic.h> 28 28 #include <linux/slab.h> 29 + #include <linux/security.h> 29 30 30 31 #include "internal.h" 31 32 ··· 35 34 static struct vfsmount *debugfs_mount; 36 35 static int debugfs_mount_count; 37 36 static bool debugfs_registered; 37 + 38 + /* 39 + * Don't allow access attributes to be changed whilst the kernel is locked down 40 + * so that we can use the file mode as part of a heuristic to determine whether 41 + * to lock down individual files. 42 + */ 43 + static int debugfs_setattr(struct dentry *dentry, struct iattr *ia) 44 + { 45 + int ret = security_locked_down(LOCKDOWN_DEBUGFS); 46 + 47 + if (ret && (ia->ia_valid & (ATTR_MODE | ATTR_UID | ATTR_GID))) 48 + return ret; 49 + return simple_setattr(dentry, ia); 50 + } 51 + 52 + static const struct inode_operations debugfs_file_inode_operations = { 53 + .setattr = debugfs_setattr, 54 + }; 55 + static const struct inode_operations debugfs_dir_inode_operations = { 56 + .lookup = simple_lookup, 57 + .setattr = debugfs_setattr, 58 + }; 59 + static const struct inode_operations debugfs_symlink_inode_operations = { 60 + .get_link = simple_get_link, 61 + .setattr = debugfs_setattr, 62 + }; 38 63 39 64 static struct inode *debugfs_get_inode(struct super_block *sb) 40 65 { ··· 396 369 inode->i_mode = mode; 397 370 inode->i_private = data; 398 371 372 + inode->i_op = &debugfs_file_inode_operations; 399 373 inode->i_fop = proxy_fops; 400 374 dentry->d_fsdata = (void *)((unsigned long)real_fops | 401 375 DEBUGFS_FSDATA_IS_REAL_FOPS_BIT); ··· 560 532 } 561 533 562 534 inode->i_mode = S_IFDIR | S_IRWXU | S_IRUGO | S_IXUGO; 563 - inode->i_op = &simple_dir_inode_operations; 535 + inode->i_op = &debugfs_dir_inode_operations; 564 536 inode->i_fop = &simple_dir_operations; 565 537 566 538 /* directory inodes start off with i_nlink == 2 (for "." 
entry) */ ··· 660 632 return failed_creating(dentry); 661 633 } 662 634 inode->i_mode = S_IFLNK | S_IRWXUGO; 663 - inode->i_op = &simple_symlink_inode_operations; 635 + inode->i_op = &debugfs_symlink_inode_operations; 664 636 inode->i_link = link; 665 637 d_instantiate(dentry, inode); 666 638 return end_creating(dentry);
+6
fs/proc/kcore.c
··· 31 31 #include <linux/ioport.h> 32 32 #include <linux/memory.h> 33 33 #include <linux/sched/task.h> 34 + #include <linux/security.h> 34 35 #include <asm/sections.h> 35 36 #include "internal.h" 36 37 ··· 546 545 547 546 static int open_kcore(struct inode *inode, struct file *filp) 548 547 { 548 + int ret = security_locked_down(LOCKDOWN_KCORE); 549 + 549 550 if (!capable(CAP_SYS_RAWIO)) 550 551 return -EPERM; 552 + 553 + if (ret) 554 + return ret; 551 555 552 556 filp->private_data = kmalloc(PAGE_SIZE, GFP_KERNEL); 553 557 if (!filp->private_data)
+41 -1
fs/tracefs/inode.c
··· 20 20 #include <linux/parser.h> 21 21 #include <linux/magic.h> 22 22 #include <linux/slab.h> 23 + #include <linux/security.h> 23 24 24 25 #define TRACEFS_DEFAULT_MODE 0700 25 26 26 27 static struct vfsmount *tracefs_mount; 27 28 static int tracefs_mount_count; 28 29 static bool tracefs_registered; 30 + 31 + static int default_open_file(struct inode *inode, struct file *filp) 32 + { 33 + struct dentry *dentry = filp->f_path.dentry; 34 + struct file_operations *real_fops; 35 + int ret; 36 + 37 + if (!dentry) 38 + return -EINVAL; 39 + 40 + ret = security_locked_down(LOCKDOWN_TRACEFS); 41 + if (ret) 42 + return ret; 43 + 44 + real_fops = dentry->d_fsdata; 45 + if (!real_fops->open) 46 + return 0; 47 + return real_fops->open(inode, filp); 48 + } 29 49 30 50 static ssize_t default_read_file(struct file *file, char __user *buf, 31 51 size_t count, loff_t *ppos) ··· 241 221 return 0; 242 222 } 243 223 224 + static void tracefs_destroy_inode(struct inode *inode) 225 + { 226 + if (S_ISREG(inode->i_mode)) 227 + kfree(inode->i_fop); 228 + } 229 + 244 230 static int tracefs_remount(struct super_block *sb, int *flags, char *data) 245 231 { 246 232 int err; ··· 283 257 static const struct super_operations tracefs_super_operations = { 284 258 .statfs = simple_statfs, 285 259 .remount_fs = tracefs_remount, 260 + .destroy_inode = tracefs_destroy_inode, 286 261 .show_options = tracefs_show_options, 287 262 }; 288 263 ··· 414 387 struct dentry *parent, void *data, 415 388 const struct file_operations *fops) 416 389 { 390 + struct file_operations *proxy_fops; 417 391 struct dentry *dentry; 418 392 struct inode *inode; 419 393 ··· 430 402 if (unlikely(!inode)) 431 403 return failed_creating(dentry); 432 404 405 + proxy_fops = kzalloc(sizeof(struct file_operations), GFP_KERNEL); 406 + if (unlikely(!proxy_fops)) { 407 + iput(inode); 408 + return failed_creating(dentry); 409 + } 410 + 411 + if (!fops) 412 + fops = &tracefs_file_operations; 413 + 414 + dentry->d_fsdata = (void *)fops; 
415 + memcpy(proxy_fops, fops, sizeof(*proxy_fops)); 416 + proxy_fops->open = default_open_file; 433 417 inode->i_mode = mode; 434 - inode->i_fop = fops ? fops : &tracefs_file_operations; 418 + inode->i_fop = proxy_fops; 435 419 inode->i_private = data; 436 420 d_instantiate(dentry, inode); 437 421 fsnotify_create(dentry->d_parent->d_inode, dentry);
+7 -1
include/asm-generic/vmlinux.lds.h
··· 215 215 __start_lsm_info = .; \ 216 216 KEEP(*(.lsm_info.init)) \ 217 217 __end_lsm_info = .; 218 + #define EARLY_LSM_TABLE() . = ALIGN(8); \ 219 + __start_early_lsm_info = .; \ 220 + KEEP(*(.early_lsm_info.init)) \ 221 + __end_early_lsm_info = .; 218 222 #else 219 223 #define LSM_TABLE() 224 + #define EARLY_LSM_TABLE() 220 225 #endif 221 226 222 227 #define ___OF_TABLE(cfg, name) _OF_TABLE_##cfg(name) ··· 632 627 ACPI_PROBE_TABLE(timer) \ 633 628 THERMAL_TABLE(governor) \ 634 629 EARLYCON_TABLE() \ 635 - LSM_TABLE() 630 + LSM_TABLE() \ 631 + EARLY_LSM_TABLE() 636 632 637 633 #define INIT_TEXT \ 638 634 *(.init.text .init.text.*) \
+6
include/linux/acpi.h
··· 643 643 int acpi_arch_timer_mem_init(struct arch_timer_mem *timer_mem, int *timer_count); 644 644 #endif 645 645 646 + #ifndef ACPI_HAVE_ARCH_SET_ROOT_POINTER 647 + static inline void acpi_arch_set_root_pointer(u64 addr) 648 + { 649 + } 650 + #endif 651 + 646 652 #ifndef ACPI_HAVE_ARCH_GET_ROOT_POINTER 647 653 static inline u64 acpi_arch_get_root_pointer(void) 648 654 {
+9
include/linux/ima.h
··· 131 131 return 0; 132 132 } 133 133 #endif /* CONFIG_IMA_APPRAISE */ 134 + 135 + #if defined(CONFIG_IMA_APPRAISE) && defined(CONFIG_INTEGRITY_TRUSTED_KEYRING) 136 + extern bool ima_appraise_signature(enum kernel_read_file_id func); 137 + #else 138 + static inline bool ima_appraise_signature(enum kernel_read_file_id func) 139 + { 140 + return false; 141 + } 142 + #endif /* CONFIG_IMA_APPRAISE && CONFIG_INTEGRITY_TRUSTED_KEYRING */ 134 143 #endif /* _LINUX_IMA_H */
+2 -2
include/linux/kexec.h
··· 125 125 unsigned long cmdline_len); 126 126 typedef int (kexec_cleanup_t)(void *loader_data); 127 127 128 - #ifdef CONFIG_KEXEC_VERIFY_SIG 128 + #ifdef CONFIG_KEXEC_SIG 129 129 typedef int (kexec_verify_sig_t)(const char *kernel_buf, 130 130 unsigned long kernel_len); 131 131 #endif ··· 134 134 kexec_probe_t *probe; 135 135 kexec_load_t *load; 136 136 kexec_cleanup_t *cleanup; 137 - #ifdef CONFIG_KEXEC_VERIFY_SIG 137 + #ifdef CONFIG_KEXEC_SIG 138 138 kexec_verify_sig_t *verify_sig; 139 139 #endif 140 140 };
+13
include/linux/lsm_hooks.h
··· 1449 1449 * @bpf_prog_free_security: 1450 1450 * Clean up the security information stored inside bpf prog. 1451 1451 * 1452 + * @locked_down 1453 + * Determine whether a kernel feature that potentially enables arbitrary 1454 + * code execution in kernel space should be permitted. 1455 + * 1456 + * @what: kernel feature being accessed 1452 1457 */ 1453 1458 union security_list_options { 1454 1459 int (*binder_set_context_mgr)(struct task_struct *mgr); ··· 1817 1812 int (*bpf_prog_alloc_security)(struct bpf_prog_aux *aux); 1818 1813 void (*bpf_prog_free_security)(struct bpf_prog_aux *aux); 1819 1814 #endif /* CONFIG_BPF_SYSCALL */ 1815 + int (*locked_down)(enum lockdown_reason what); 1820 1816 }; 1821 1817 1822 1818 struct security_hook_heads { ··· 2059 2053 struct hlist_head bpf_prog_alloc_security; 2060 2054 struct hlist_head bpf_prog_free_security; 2061 2055 #endif /* CONFIG_BPF_SYSCALL */ 2056 + struct hlist_head locked_down; 2062 2057 } __randomize_layout; 2063 2058 2064 2059 /* ··· 2118 2111 }; 2119 2112 2120 2113 extern struct lsm_info __start_lsm_info[], __end_lsm_info[]; 2114 + extern struct lsm_info __start_early_lsm_info[], __end_early_lsm_info[]; 2121 2115 2122 2116 #define DEFINE_LSM(lsm) \ 2123 2117 static struct lsm_info __lsm_##lsm \ 2124 2118 __used __section(.lsm_info.init) \ 2119 + __aligned(sizeof(unsigned long)) 2120 + 2121 + #define DEFINE_EARLY_LSM(lsm) \ 2122 + static struct lsm_info __early_lsm_##lsm \ 2123 + __used __section(.early_lsm_info.init) \ 2125 2124 __aligned(sizeof(unsigned long)) 2126 2125 2127 2126 #ifdef CONFIG_SECURITY_SELINUX_DISABLE
+59
include/linux/security.h
··· 77 77 LSM_POLICY_CHANGE, 78 78 }; 79 79 80 + /* 81 + * These are reasons that can be passed to the security_locked_down() 82 + * LSM hook. Lockdown reasons that protect kernel integrity (ie, the 83 + * ability for userland to modify kernel code) are placed before 84 + * LOCKDOWN_INTEGRITY_MAX. Lockdown reasons that protect kernel 85 + * confidentiality (ie, the ability for userland to extract 86 + * information from the running kernel that would otherwise be 87 + * restricted) are placed before LOCKDOWN_CONFIDENTIALITY_MAX. 88 + * 89 + * LSM authors should note that the semantics of any given lockdown 90 + * reason are not guaranteed to be stable - the same reason may block 91 + * one set of features in one kernel release, and a slightly different 92 + * set of features in a later kernel release. LSMs that seek to expose 93 + * lockdown policy at any level of granularity other than "none", 94 + * "integrity" or "confidentiality" are responsible for either 95 + * ensuring that they expose a consistent level of functionality to 96 + * userland, or ensuring that userland is aware that this is 97 + * potentially a moving target. It is easy to misuse this information 98 + * in a way that could break userspace. Please be careful not to do 99 + * so. 100 + * 101 + * If you add to this, remember to extend lockdown_reasons in 102 + * security/lockdown/lockdown.c. 
103 + */ 104 + enum lockdown_reason { 105 + LOCKDOWN_NONE, 106 + LOCKDOWN_MODULE_SIGNATURE, 107 + LOCKDOWN_DEV_MEM, 108 + LOCKDOWN_KEXEC, 109 + LOCKDOWN_HIBERNATION, 110 + LOCKDOWN_PCI_ACCESS, 111 + LOCKDOWN_IOPORT, 112 + LOCKDOWN_MSR, 113 + LOCKDOWN_ACPI_TABLES, 114 + LOCKDOWN_PCMCIA_CIS, 115 + LOCKDOWN_TIOCSSERIAL, 116 + LOCKDOWN_MODULE_PARAMETERS, 117 + LOCKDOWN_MMIOTRACE, 118 + LOCKDOWN_DEBUGFS, 119 + LOCKDOWN_INTEGRITY_MAX, 120 + LOCKDOWN_KCORE, 121 + LOCKDOWN_KPROBES, 122 + LOCKDOWN_BPF_READ, 123 + LOCKDOWN_PERF, 124 + LOCKDOWN_TRACEFS, 125 + LOCKDOWN_CONFIDENTIALITY_MAX, 126 + }; 127 + 80 128 /* These functions are in security/commoncap.c */ 81 129 extern int cap_capable(const struct cred *cred, struct user_namespace *ns, 82 130 int cap, unsigned int opts); ··· 243 195 244 196 /* prototypes */ 245 197 extern int security_init(void); 198 + extern int early_security_init(void); 246 199 247 200 /* Security operations */ 248 201 int security_binder_set_context_mgr(struct task_struct *mgr); ··· 441 392 int security_inode_notifysecctx(struct inode *inode, void *ctx, u32 ctxlen); 442 393 int security_inode_setsecctx(struct dentry *dentry, void *ctx, u32 ctxlen); 443 394 int security_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen); 395 + int security_locked_down(enum lockdown_reason what); 444 396 #else /* CONFIG_SECURITY */ 445 397 446 398 static inline int call_blocking_lsm_notifier(enum lsm_event event, void *data) ··· 469 419 */ 470 420 471 421 static inline int security_init(void) 422 + { 423 + return 0; 424 + } 425 + 426 + static inline int early_security_init(void) 472 427 { 473 428 return 0; 474 429 } ··· 1264 1209 static inline int security_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen) 1265 1210 { 1266 1211 return -EOPNOTSUPP; 1212 + } 1213 + static inline int security_locked_down(enum lockdown_reason what) 1214 + { 1215 + return 0; 1267 1216 } 1268 1217 #endif /* CONFIG_SECURITY */ 1269 1218
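The ordering of the enum above is the whole mechanism: a lockdown level is just a threshold into this list, and a reason is refused once the configured level reaches or passes it. A minimal userspace sketch of that comparison (the enum here is abridged and the helper name is illustrative, not kernel API):

```c
#include <stdbool.h>

/* Abridged mirror of the kernel's enum lockdown_reason: reasons that
 * protect integrity sort below LOCKDOWN_INTEGRITY_MAX, reasons that
 * protect confidentiality sort below LOCKDOWN_CONFIDENTIALITY_MAX. */
enum lockdown_reason {
	LOCKDOWN_NONE,
	LOCKDOWN_MODULE_SIGNATURE,
	LOCKDOWN_KEXEC,
	LOCKDOWN_INTEGRITY_MAX,
	LOCKDOWN_KCORE,
	LOCKDOWN_BPF_READ,
	LOCKDOWN_CONFIDENTIALITY_MAX,
};

/* The coarse policy reduces to one comparison: a reason is blocked once
 * the chosen level reaches or passes it (cf. kernel_locked_down >= what). */
static bool reason_is_blocked(enum lockdown_reason level,
			      enum lockdown_reason what)
{
	return level >= what;
}
```

So lockdown=integrity blocks every integrity reason but no confidentiality reason, while lockdown=confidentiality blocks both groups.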
+5
init/Kconfig
··· 2061 2061 kernel build dependency so that the signing tool can use its crypto 2062 2062 library. 2063 2063 2064 + You should enable this option if you wish to use either 2065 + CONFIG_SECURITY_LOCKDOWN_LSM or lockdown functionality imposed via 2066 + another LSM - otherwise unsigned modules will be loadable regardless 2067 + of the lockdown policy. 2068 + 2064 2069 !!!WARNING!!! If you enable this option, you MUST make sure that the 2065 2070 module DOES NOT get stripped after being signed. This includes the 2066 2071 debuginfo strip done by some packagers (such as rpmbuild) and
+1
init/main.c
··· 593 593 boot_cpu_init(); 594 594 page_address_init(); 595 595 pr_notice("%s", linux_banner); 596 + early_security_init(); 596 597 setup_arch(&command_line); 597 598 setup_command_line(command_line); 598 599 setup_nr_cpu_ids();
+7
kernel/events/core.c
··· 10917 10917 perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN)) 10918 10918 return -EACCES; 10919 10919 10920 + err = security_locked_down(LOCKDOWN_PERF); 10921 + if (err && (attr.sample_type & PERF_SAMPLE_REGS_INTR)) 10922 + /* REGS_INTR can leak data, lockdown must prevent this */ 10923 + return err; 10924 + 10925 + err = 0; 10926 + 10920 10927 /* 10921 10928 * In cgroup mode, the pid argument is used to pass the fd 10922 10929 * opened to the cgroup directory in cgroupfs. The cpu argument
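Note that the hunk above does not veto perf_event_open() outright under lockdown: the lockdown error is only honoured when the caller requested interrupt-time register sampling, which could leak kernel state. A sketch of that decision as a pure function (the flag's bit value below is an illustrative stand-in for the uapi constant):

```c
#include <stdbool.h>

/* Illustrative stand-in for the uapi PERF_SAMPLE_REGS_INTR bit;
 * only its presence in sample_type matters for this model. */
#define PERF_SAMPLE_REGS_INTR (1ULL << 18)

/* lockdown_err models security_locked_down(LOCKDOWN_PERF)'s result
 * (0 or a negative errno): the open is refused only when the request
 * could expose interrupt-time register contents. */
static int perf_lockdown_check(int lockdown_err,
			       unsigned long long sample_type)
{
	if (lockdown_err && (sample_type & PERF_SAMPLE_REGS_INTR))
		return lockdown_err;
	return 0;
}
```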
+8
kernel/kexec.c
··· 206 206 return result; 207 207 208 208 /* 209 + * kexec can be used to circumvent module loading restrictions, so 210 + * prevent loading in that case 211 + */ 212 + result = security_locked_down(LOCKDOWN_KEXEC); 213 + if (result) 214 + return result; 215 + 216 + /* 209 217 * Verify we have a legal set of flags 210 218 * This leaves us room for future extensions. 211 219 */
+59 -9
kernel/kexec_file.c
··· 88 88 return kexec_image_post_load_cleanup_default(image); 89 89 } 90 90 91 - #ifdef CONFIG_KEXEC_VERIFY_SIG 91 + #ifdef CONFIG_KEXEC_SIG 92 92 static int kexec_image_verify_sig_default(struct kimage *image, void *buf, 93 93 unsigned long buf_len) 94 94 { ··· 177 177 image->image_loader_data = NULL; 178 178 } 179 179 180 + #ifdef CONFIG_KEXEC_SIG 181 + static int 182 + kimage_validate_signature(struct kimage *image) 183 + { 184 + const char *reason; 185 + int ret; 186 + 187 + ret = arch_kexec_kernel_verify_sig(image, image->kernel_buf, 188 + image->kernel_buf_len); 189 + switch (ret) { 190 + case 0: 191 + break; 192 + 193 + /* Certain verification errors are non-fatal if we're not 194 + * checking errors, provided we aren't mandating that there 195 + * must be a valid signature. 196 + */ 197 + case -ENODATA: 198 + reason = "kexec of unsigned image"; 199 + goto decide; 200 + case -ENOPKG: 201 + reason = "kexec of image with unsupported crypto"; 202 + goto decide; 203 + case -ENOKEY: 204 + reason = "kexec of image with unavailable key"; 205 + decide: 206 + if (IS_ENABLED(CONFIG_KEXEC_SIG_FORCE)) { 207 + pr_notice("%s rejected\n", reason); 208 + return ret; 209 + } 210 + 211 + /* If IMA is guaranteed to appraise a signature on the kexec 212 + * image, permit it even if the kernel is otherwise locked 213 + * down. 214 + */ 215 + if (!ima_appraise_signature(READING_KEXEC_IMAGE) && 216 + security_locked_down(LOCKDOWN_KEXEC)) 217 + return -EPERM; 218 + 219 + return 0; 220 + 221 + /* All other errors are fatal, including nomem, unparseable 222 + * signatures and signature check failures - even if signatures 223 + * aren't required. 224 + */ 225 + default: 226 + pr_notice("kernel signature verification failed (%d).\n", ret); 227 + } 228 + 229 + return ret; 230 + } 231 + #endif 232 +
180 233 /* 181 234 * In file mode list of segments is prepared by kernel. Copy relevant 182 235 * data from user space, do error checking, prepare segment list ··· 239 186 const char __user *cmdline_ptr, 240 187 unsigned long cmdline_len, unsigned flags) 241 188 { 242 - int ret = 0; 189 + int ret; 243 190 void *ldata; 244 191 loff_t size; 245 192 ··· 255 202 if (ret) 256 203 goto out; 257 204 258 - #ifdef CONFIG_KEXEC_VERIFY_SIG 259 - ret = arch_kexec_kernel_verify_sig(image, image->kernel_buf, 260 - image->kernel_buf_len); 261 - if (ret) { 262 - pr_debug("kernel signature verification failed.\n"); 205 + #ifdef CONFIG_KEXEC_SIG 206 + ret = kimage_validate_signature(image); 207 + 208 + if (ret) 263 209 goto out; 264 - } 265 - pr_debug("kernel signature verification successful.\n"); 266 210 #endif 267 211 /* It is possible that there no initramfs is being loaded */ 268 212 if (!(flags & KEXEC_FILE_NO_INITRAMFS)) {
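The new kimage_validate_signature() above implements a three-way decision: success passes; the three "no usable signature" errors are non-fatal unless CONFIG_KEXEC_SIG_FORCE is set, and even then lockdown can still reject the image unless IMA is guaranteed to appraise it; everything else is fatal. That table can be modelled in plain userspace C (the function and parameter names below are illustrative, not kernel API):

```c
#include <errno.h>
#include <stdbool.h>

/* Pure model of the decision in kimage_validate_signature().
 * verify_ret plays arch_kexec_kernel_verify_sig()'s result; sig_force
 * mirrors CONFIG_KEXEC_SIG_FORCE; ima_appraises and locked_down stand
 * in for ima_appraise_signature() and security_locked_down(). */
static int kexec_sig_decision(int verify_ret, bool sig_force,
			      bool ima_appraises, bool locked_down)
{
	switch (verify_ret) {
	case 0:
		return 0;		/* valid signature */
	case -ENODATA:			/* unsigned image */
	case -ENOPKG:			/* unsupported crypto */
	case -ENOKEY:			/* unavailable key */
		if (sig_force)
			return verify_ret;
		if (!ima_appraises && locked_down)
			return -EPERM;
		return 0;
	default:			/* bad signature, ENOMEM, ...: fatal */
		return verify_ret;
	}
}
```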
+31 -8
kernel/module.c
··· 2839 2839 #ifdef CONFIG_MODULE_SIG 2840 2840 static int module_sig_check(struct load_info *info, int flags) 2841 2841 { 2842 - int err = -ENOKEY; 2842 + int err = -ENODATA; 2843 2843 const unsigned long markerlen = sizeof(MODULE_SIG_STRING) - 1; 2844 + const char *reason; 2844 2845 const void *mod = info->hdr; 2845 2846 2846 2847 /* ··· 2856 2855 err = mod_verify_sig(mod, info); 2857 2856 } 2858 2857 2859 - if (!err) { 2858 + switch (err) { 2859 + case 0: 2860 2860 info->sig_ok = true; 2861 2861 return 0; 2862 + 2863 + /* We don't permit modules to be loaded into trusted kernels 2864 + * without a valid signature on them, but if we're not 2865 + * enforcing, certain errors are non-fatal. 2866 + */ 2867 + case -ENODATA: 2868 + reason = "Loading of unsigned module"; 2869 + goto decide; 2870 + case -ENOPKG: 2871 + reason = "Loading of module with unsupported crypto"; 2872 + goto decide; 2873 + case -ENOKEY: 2874 + reason = "Loading of module with unavailable key"; 2875 + decide: 2876 + if (is_module_sig_enforced()) { 2877 + pr_notice("%s is rejected\n", reason); 2878 + return -EKEYREJECTED; 2879 + } 2880 + 2881 + return security_locked_down(LOCKDOWN_MODULE_SIGNATURE); 2882 + 2883 + /* All other errors are fatal, including nomem, unparseable 2884 + * signatures and signature check failures - even if signatures 2885 + * aren't required. 2886 + */ 2887 + default: 2888 + return err; 2862 2889 } 2863 - 2864 - /* Not having a signature is only an error if we're strict. */ 2865 - if (err == -ENOKEY && !is_module_sig_enforced()) 2866 - err = 0; 2867 - 2868 - return err; 2869 2890 } 2870 2891 #else /* !CONFIG_MODULE_SIG */ 2871 2892 static int module_sig_check(struct load_info *info, int flags)
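The reworked module_sig_check() follows the same pattern as the kexec path: the three "no usable signature" errors are fatal only when signatures are enforced (returning -EKEYREJECTED), and otherwise the verdict is delegated to the lockdown policy; genuinely malformed or bad signatures always fail. A userspace sketch of that tail (names are illustrative):

```c
#include <errno.h>
#include <stdbool.h>

/* Model of module_sig_check()'s tail: sig_err plays mod_verify_sig()'s
 * result (-ENODATA when no signature marker was found), enforced
 * mirrors is_module_sig_enforced(), and lockdown_err is what
 * security_locked_down(LOCKDOWN_MODULE_SIGNATURE) returned. */
static int module_sig_decision(int sig_err, bool enforced, int lockdown_err)
{
	switch (sig_err) {
	case 0:
		return 0;		/* good signature */
	case -ENODATA:			/* unsigned module */
	case -ENOPKG:			/* unsupported crypto */
	case -ENOKEY:			/* unavailable key */
		if (enforced)
			return -EKEYREJECTED;
		return lockdown_err;	/* 0 unless integrity lockdown */
	default:
		return sig_err;		/* malformed signature etc.: fatal */
	}
}
```

This is also why the Kconfig help text added to init/Kconfig matters: without CONFIG_MODULE_SIG built in, there is no signature result to feed this decision and lockdown cannot distinguish signed from unsigned modules.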
+16 -5
kernel/params.c
··· 12 12 #include <linux/err.h> 13 13 #include <linux/slab.h> 14 14 #include <linux/ctype.h> 15 + #include <linux/security.h> 15 16 16 17 #ifdef CONFIG_SYSFS 17 18 /* Protects all built-in parameters, modules use their own param_lock */ ··· 97 96 return parameqn(a, b, strlen(a)+1); 98 97 } 99 98 100 - static void param_check_unsafe(const struct kernel_param *kp) 99 + static bool param_check_unsafe(const struct kernel_param *kp) 101 100 { 101 + if (kp->flags & KERNEL_PARAM_FL_HWPARAM && 102 + security_locked_down(LOCKDOWN_MODULE_PARAMETERS)) 103 + return false; 104 + 102 105 if (kp->flags & KERNEL_PARAM_FL_UNSAFE) { 103 106 pr_notice("Setting dangerous option %s - tainting kernel\n", 104 107 kp->name); 105 108 add_taint(TAINT_USER, LOCKDEP_STILL_OK); 106 109 } 110 + 111 + return true; 107 112 } 108 113 109 114 static int parse_one(char *param, ··· 139 132 pr_debug("handling %s with %p\n", param, 140 133 params[i].ops->set); 141 134 kernel_param_lock(params[i].mod); 142 - param_check_unsafe(&params[i]); 143 - err = params[i].ops->set(val, &params[i]); 135 + if (param_check_unsafe(&params[i])) 136 + err = params[i].ops->set(val, &params[i]); 137 + else 138 + err = -EPERM; 144 139 kernel_param_unlock(params[i].mod); 145 140 return err; 146 141 } ··· 562 553 return -EPERM; 563 554 564 555 kernel_param_lock(mk->mod); 565 - param_check_unsafe(attribute->param); 566 - err = attribute->param->ops->set(buf, attribute->param); 556 + if (param_check_unsafe(attribute->param)) 557 + err = attribute->param->ops->set(buf, attribute->param); 558 + else 559 + err = -EPERM; 567 560 kernel_param_unlock(mk->mod); 568 561 if (!err) 569 562 return len;
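param_check_unsafe() changes from void to bool above: hardware-tweaking parameters (KERNEL_PARAM_FL_HWPARAM) are refused outright under lockdown and the caller returns -EPERM, while merely "unsafe" parameters remain settable but taint the kernel. A small model of the new behaviour (the flag bit values here are illustrative stand-ins for the moduleparam.h definitions):

```c
#include <stdbool.h>

/* Illustrative stand-ins for the moduleparam.h flag bits. */
#define KERNEL_PARAM_FL_UNSAFE	(1 << 0)
#define KERNEL_PARAM_FL_HWPARAM	(1 << 1)

/* Model of the reworked param_check_unsafe(): returns whether setting
 * the parameter is permitted; locked_down plays the result of
 * security_locked_down(LOCKDOWN_MODULE_PARAMETERS), and *taints
 * reports whether the set would taint the kernel. */
static bool param_allowed(unsigned int flags, bool locked_down, bool *taints)
{
	*taints = false;
	if ((flags & KERNEL_PARAM_FL_HWPARAM) && locked_down)
		return false;
	if (flags & KERNEL_PARAM_FL_UNSAFE)
		*taints = true;
	return true;
}
```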
+2 -1
kernel/power/hibernate.c
··· 30 30 #include <linux/ctype.h> 31 31 #include <linux/genhd.h> 32 32 #include <linux/ktime.h> 33 + #include <linux/security.h> 33 34 #include <trace/events/power.h> 34 35 35 36 #include "power.h" ··· 69 68 70 69 bool hibernation_available(void) 71 70 { 72 - return (nohibernate == 0); 71 + return nohibernate == 0 && !security_locked_down(LOCKDOWN_HIBERNATION); 73 72 } 74 73 75 74 /**
+10
kernel/trace/bpf_trace.c
··· 142 142 { 143 143 int ret; 144 144 145 + ret = security_locked_down(LOCKDOWN_BPF_READ); 146 + if (ret < 0) 147 + goto out; 148 + 145 149 ret = probe_kernel_read(dst, unsafe_ptr, size); 146 150 if (unlikely(ret < 0)) 151 + out: 147 152 memset(dst, 0, size); 148 153 149 154 return ret; ··· 574 569 { 575 570 int ret; 576 571 572 + ret = security_locked_down(LOCKDOWN_BPF_READ); 573 + if (ret < 0) 574 + goto out; 575 + 577 576 /* 578 577 * The strncpy_from_unsafe() call will likely not fill the entire 579 578 * buffer, but that's okay in this circumstance as we're probing ··· 589 580 */ 590 581 ret = strncpy_from_unsafe(dst, unsafe_ptr, size); 591 582 if (unlikely(ret < 0)) 583 + out: 592 584 memset(dst, 0, size); 593 585 594 586 return ret;
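The unusual `out:` label placement in the hunk above preserves an invariant: whether the read is vetoed by lockdown or simply faults, the destination buffer is always zeroed before returning, so no stale memory is exposed to the BPF program. A compilable userspace model of the same control flow (probe_kernel_read() is replaced by an illustrative copy):

```c
#include <errno.h>
#include <string.h>

/* Model of bpf_probe_read()'s error path, including the label placed
 * inside the if-body: on either a lockdown veto or a copy failure,
 * dst is zeroed so the caller never sees uninitialised bytes. */
static int probe_read(char *dst, size_t size, const char *src,
		      int lockdown_err)
{
	int ret = lockdown_err;	/* security_locked_down(LOCKDOWN_BPF_READ) */

	if (ret < 0)
		goto out;

	/* stands in for probe_kernel_read() */
	ret = src ? (memcpy(dst, src, size), 0) : -EFAULT;
	if (ret < 0)
out:
		memset(dst, 0, size);

	return ret;
}
```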
+5
kernel/trace/trace_kprobe.c
··· 11 11 #include <linux/uaccess.h> 12 12 #include <linux/rculist.h> 13 13 #include <linux/error-injection.h> 14 + #include <linux/security.h> 14 15 15 16 #include <asm/setup.h> /* for COMMAND_LINE_SIZE */ 16 17 ··· 460 459 static int __register_trace_kprobe(struct trace_kprobe *tk) 461 460 { 462 461 int i, ret; 462 + 463 + ret = security_locked_down(LOCKDOWN_KPROBES); 464 + if (ret) 465 + return ret; 463 466 464 467 if (trace_kprobe_is_registered(tk)) 465 468 return -EINVAL;
+6 -5
security/Kconfig
··· 237 237 source "security/loadpin/Kconfig" 238 238 source "security/yama/Kconfig" 239 239 source "security/safesetid/Kconfig" 240 + source "security/lockdown/Kconfig" 240 241 241 242 source "security/integrity/Kconfig" 242 243 ··· 277 276 278 277 config LSM 279 278 string "Ordered list of enabled LSMs" 280 - default "yama,loadpin,safesetid,integrity,smack,selinux,tomoyo,apparmor" if DEFAULT_SECURITY_SMACK 281 - default "yama,loadpin,safesetid,integrity,apparmor,selinux,smack,tomoyo" if DEFAULT_SECURITY_APPARMOR 282 - default "yama,loadpin,safesetid,integrity,tomoyo" if DEFAULT_SECURITY_TOMOYO 283 - default "yama,loadpin,safesetid,integrity" if DEFAULT_SECURITY_DAC 284 - default "yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor" 279 + default "lockdown,yama,loadpin,safesetid,integrity,smack,selinux,tomoyo,apparmor" if DEFAULT_SECURITY_SMACK 280 + default "lockdown,yama,loadpin,safesetid,integrity,apparmor,selinux,smack,tomoyo" if DEFAULT_SECURITY_APPARMOR 281 + default "lockdown,yama,loadpin,safesetid,integrity,tomoyo" if DEFAULT_SECURITY_TOMOYO 282 + default "lockdown,yama,loadpin,safesetid,integrity" if DEFAULT_SECURITY_DAC 283 + default "lockdown,yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor" 285 284 help 286 285 A comma-separated list of LSMs, in initialization order. 287 286 Any LSMs left off this list will be ignored. This can be
+2
security/Makefile
··· 11 11 subdir-$(CONFIG_SECURITY_YAMA) += yama 12 12 subdir-$(CONFIG_SECURITY_LOADPIN) += loadpin 13 13 subdir-$(CONFIG_SECURITY_SAFESETID) += safesetid 14 + subdir-$(CONFIG_SECURITY_LOCKDOWN_LSM) += lockdown 14 15 15 16 # always enable default capabilities 16 17 obj-y += commoncap.o ··· 28 27 obj-$(CONFIG_SECURITY_YAMA) += yama/ 29 28 obj-$(CONFIG_SECURITY_LOADPIN) += loadpin/ 30 29 obj-$(CONFIG_SECURITY_SAFESETID) += safesetid/ 30 + obj-$(CONFIG_SECURITY_LOCKDOWN_LSM) += lockdown/ 31 31 obj-$(CONFIG_CGROUP_DEVICE) += device_cgroup.o 32 32 33 33 # Object integrity file lists
+1 -1
security/integrity/ima/Kconfig
··· 160 160 161 161 config IMA_ARCH_POLICY 162 162 bool "Enable loading an IMA architecture specific policy" 163 - depends on (KEXEC_VERIFY_SIG && IMA) || IMA_APPRAISE \ 163 + depends on (KEXEC_SIG && IMA) || IMA_APPRAISE \ 164 164 && INTEGRITY_ASYMMETRIC_KEYS 165 165 default n 166 166 help
+2
security/integrity/ima/ima.h
··· 114 114 u64 count; 115 115 }; 116 116 117 + extern const int read_idmap[]; 118 + 117 119 #ifdef CONFIG_HAVE_IMA_KEXEC 118 120 void ima_load_kexec_buffer(void); 119 121 #else
+2 -2
security/integrity/ima/ima_main.c
··· 518 518 return 0; 519 519 } 520 520 521 - static const int read_idmap[READING_MAX_ID] = { 521 + const int read_idmap[READING_MAX_ID] = { 522 522 [READING_FIRMWARE] = FIRMWARE_CHECK, 523 523 [READING_FIRMWARE_PREALLOC_BUFFER] = FIRMWARE_CHECK, 524 524 [READING_MODULE] = MODULE_CHECK, ··· 590 590 591 591 switch (id) { 592 592 case LOADING_KEXEC_IMAGE: 593 - if (IS_ENABLED(CONFIG_KEXEC_VERIFY_SIG) 593 + if (IS_ENABLED(CONFIG_KEXEC_SIG) 594 594 && arch_ima_get_secureboot()) { 595 595 pr_err("impossible to appraise a kernel image without a file descriptor; try using kexec_file_load syscall.\n"); 596 596 return -EACCES;
+50
security/integrity/ima/ima_policy.c
··· 1507 1507 return 0; 1508 1508 } 1509 1509 #endif /* CONFIG_IMA_READ_POLICY */ 1510 + 1511 + #if defined(CONFIG_IMA_APPRAISE) && defined(CONFIG_INTEGRITY_TRUSTED_KEYRING) 1512 + /* 1513 + * ima_appraise_signature: whether IMA will appraise a given function using 1514 + * an IMA digital signature. This is restricted to cases where the kernel 1515 + * has a set of built-in trusted keys in order to avoid an attacker simply 1516 + * loading additional keys. 1517 + */ 1518 + bool ima_appraise_signature(enum kernel_read_file_id id) 1519 + { 1520 + struct ima_rule_entry *entry; 1521 + bool found = false; 1522 + enum ima_hooks func; 1523 + 1524 + if (id >= READING_MAX_ID) 1525 + return false; 1526 + 1527 + func = read_idmap[id] ?: FILE_CHECK; 1528 + 1529 + rcu_read_lock(); 1530 + list_for_each_entry_rcu(entry, ima_rules, list) { 1531 + if (entry->action != APPRAISE) 1532 + continue; 1533 + 1534 + /* 1535 + * A generic entry will match, but otherwise require that it 1536 + * match the func we're looking for 1537 + */ 1538 + if (entry->func && entry->func != func) 1539 + continue; 1540 + 1541 + /* 1542 + * We require this to be a digital signature, not a raw IMA 1543 + * hash. 1544 + */ 1545 + if (entry->flags & IMA_DIGSIG_REQUIRED) 1546 + found = true; 1547 + 1548 + /* 1549 + * We've found a rule that matches, so break now even if it 1550 + * didn't require a digital signature - a later rule that does 1551 + * won't override it, so would be a false positive. 1552 + */ 1553 + break; 1554 + } 1555 + 1556 + rcu_read_unlock(); 1557 + return found; 1558 + } 1559 + #endif /* CONFIG_IMA_APPRAISE && CONFIG_INTEGRITY_TRUSTED_KEYRING */
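The subtle point in ima_appraise_signature() above is the early break: the first appraise rule whose func matches (or which is generic) decides the outcome, and it only counts if it demands a digital signature; a later, stricter rule must not override it, or the function would report a false positive. A toy model of that walk (struct and names are illustrative, not the real IMA types):

```c
#include <stdbool.h>

/* Toy IMA policy entry: func == 0 plays the role of a generic rule
 * matching every hook; digsig mirrors the IMA_DIGSIG_REQUIRED flag. */
struct appraise_rule {
	int func;
	bool digsig;
};

/* Mirrors ima_appraise_signature()'s list walk: the first matching
 * appraise rule decides, and only a digital-signature rule counts. */
static bool appraises_with_signature(const struct appraise_rule *rules,
				     int n, int func)
{
	int i;

	for (i = 0; i < n; i++) {
		if (rules[i].func && rules[i].func != func)
			continue;
		return rules[i].digsig;
	}
	return false;
}
```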
+47
security/lockdown/Kconfig
··· 1 + config SECURITY_LOCKDOWN_LSM 2 + bool "Basic module for enforcing kernel lockdown" 3 + depends on SECURITY 4 + select MODULE_SIG if MODULES 5 + help 6 + Build support for an LSM that enforces a coarse kernel lockdown 7 + behaviour. 8 + 9 + config SECURITY_LOCKDOWN_LSM_EARLY 10 + bool "Enable lockdown LSM early in init" 11 + depends on SECURITY_LOCKDOWN_LSM 12 + help 13 + Enable the lockdown LSM early in boot. This is necessary in order 14 + to ensure that lockdown enforcement can be carried out on kernel 15 + boot parameters that are otherwise parsed before the security 16 + subsystem is fully initialised. If enabled, lockdown will 17 + unconditionally be called before any other LSMs. 18 + 19 + choice 20 + prompt "Kernel default lockdown mode" 21 + default LOCK_DOWN_KERNEL_FORCE_NONE 22 + depends on SECURITY_LOCKDOWN_LSM 23 + help 24 + The kernel can be configured to default to differing levels of 25 + lockdown. 26 + 27 + config LOCK_DOWN_KERNEL_FORCE_NONE 28 + bool "None" 29 + help 30 + No lockdown functionality is enabled by default. Lockdown may be 31 + enabled via the kernel commandline or /sys/kernel/security/lockdown. 32 + 33 + config LOCK_DOWN_KERNEL_FORCE_INTEGRITY 34 + bool "Integrity" 35 + help 36 + The kernel runs in integrity mode by default. Features that allow 37 + the kernel to be modified at runtime are disabled. 38 + 39 + config LOCK_DOWN_KERNEL_FORCE_CONFIDENTIALITY 40 + bool "Confidentiality" 41 + help 42 + The kernel runs in confidentiality mode by default. Features that 43 + allow the kernel to be modified at runtime or that permit userland 44 + code to read confidential material held inside the kernel are 45 + disabled. 46 + 47 + endchoice
+1
security/lockdown/Makefile
··· 1 + obj-$(CONFIG_SECURITY_LOCKDOWN_LSM) += lockdown.o
+191
security/lockdown/lockdown.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Lock down the kernel 3 + * 4 + * Copyright (C) 2016 Red Hat, Inc. All Rights Reserved. 5 + * Written by David Howells (dhowells@redhat.com) 6 + * 7 + * This program is free software; you can redistribute it and/or 8 + * modify it under the terms of the GNU General Public Licence 9 + * as published by the Free Software Foundation; either version 10 + * 2 of the Licence, or (at your option) any later version. 11 + */ 12 + 13 + #include <linux/security.h> 14 + #include <linux/export.h> 15 + #include <linux/lsm_hooks.h> 16 + 17 + static enum lockdown_reason kernel_locked_down; 18 + 19 + static const char *const lockdown_reasons[LOCKDOWN_CONFIDENTIALITY_MAX+1] = { 20 + [LOCKDOWN_NONE] = "none", 21 + [LOCKDOWN_MODULE_SIGNATURE] = "unsigned module loading", 22 + [LOCKDOWN_DEV_MEM] = "/dev/mem,kmem,port", 23 + [LOCKDOWN_KEXEC] = "kexec of unsigned images", 24 + [LOCKDOWN_HIBERNATION] = "hibernation", 25 + [LOCKDOWN_PCI_ACCESS] = "direct PCI access", 26 + [LOCKDOWN_IOPORT] = "raw io port access", 27 + [LOCKDOWN_MSR] = "raw MSR access", 28 + [LOCKDOWN_ACPI_TABLES] = "modifying ACPI tables", 29 + [LOCKDOWN_PCMCIA_CIS] = "direct PCMCIA CIS storage", 30 + [LOCKDOWN_TIOCSSERIAL] = "reconfiguration of serial port IO", 31 + [LOCKDOWN_MODULE_PARAMETERS] = "unsafe module parameters", 32 + [LOCKDOWN_MMIOTRACE] = "unsafe mmio", 33 + [LOCKDOWN_DEBUGFS] = "debugfs access", 34 + [LOCKDOWN_INTEGRITY_MAX] = "integrity", 35 + [LOCKDOWN_KCORE] = "/proc/kcore access", 36 + [LOCKDOWN_KPROBES] = "use of kprobes", 37 + [LOCKDOWN_BPF_READ] = "use of bpf to read kernel RAM", 38 + [LOCKDOWN_PERF] = "unsafe use of perf", 39 + [LOCKDOWN_TRACEFS] = "use of tracefs", 40 + [LOCKDOWN_CONFIDENTIALITY_MAX] = "confidentiality", 41 + }; 42 + 43 + static const enum lockdown_reason lockdown_levels[] = {LOCKDOWN_NONE, 44 + LOCKDOWN_INTEGRITY_MAX, 45 + LOCKDOWN_CONFIDENTIALITY_MAX}; 46 + 47 + /* 48 + * Put the kernel into lock-down mode. 
49 + */ 50 + static int lock_kernel_down(const char *where, enum lockdown_reason level) 51 + { 52 + if (kernel_locked_down >= level) 53 + return -EPERM; 54 + 55 + kernel_locked_down = level; 56 + pr_notice("Kernel is locked down from %s; see man kernel_lockdown.7\n", 57 + where); 58 + return 0; 59 + } 60 + 61 + static int __init lockdown_param(char *level) 62 + { 63 + if (!level) 64 + return -EINVAL; 65 + 66 + if (strcmp(level, "integrity") == 0) 67 + lock_kernel_down("command line", LOCKDOWN_INTEGRITY_MAX); 68 + else if (strcmp(level, "confidentiality") == 0) 69 + lock_kernel_down("command line", LOCKDOWN_CONFIDENTIALITY_MAX); 70 + else 71 + return -EINVAL; 72 + 73 + return 0; 74 + } 75 + 76 + early_param("lockdown", lockdown_param); 77 + 78 + /** 79 + * lockdown_is_locked_down - Find out if the kernel is locked down 80 + * @what: Tag to use in notice generated if lockdown is in effect 81 + */ 82 + static int lockdown_is_locked_down(enum lockdown_reason what) 83 + { 84 + if (WARN(what >= LOCKDOWN_CONFIDENTIALITY_MAX, 85 + "Invalid lockdown reason")) 86 + return -EPERM; 87 + 88 + if (kernel_locked_down >= what) { 89 + if (lockdown_reasons[what]) 90 + pr_notice("Lockdown: %s: %s is restricted; see man kernel_lockdown.7\n", 91 + current->comm, lockdown_reasons[what]); 92 + return -EPERM; 93 + } 94 + 95 + return 0; 96 + } 97 + 98 + static struct security_hook_list lockdown_hooks[] __lsm_ro_after_init = { 99 + LSM_HOOK_INIT(locked_down, lockdown_is_locked_down), 100 + }; 101 + 102 + static int __init lockdown_lsm_init(void) 103 + { 104 + #if defined(CONFIG_LOCK_DOWN_KERNEL_FORCE_INTEGRITY) 105 + lock_kernel_down("Kernel configuration", LOCKDOWN_INTEGRITY_MAX); 106 + #elif defined(CONFIG_LOCK_DOWN_KERNEL_FORCE_CONFIDENTIALITY) 107 + lock_kernel_down("Kernel configuration", LOCKDOWN_CONFIDENTIALITY_MAX); 108 + #endif 109 + security_add_hooks(lockdown_hooks, ARRAY_SIZE(lockdown_hooks), 110 + "lockdown"); 111 + return 0; 112 + }
113 + 114 + static ssize_t lockdown_read(struct file *filp, char __user *buf, size_t count, 115 + loff_t *ppos) 116 + { 117 + char temp[80]; 118 + int i, offset = 0; 119 + 120 + for (i = 0; i < ARRAY_SIZE(lockdown_levels); i++) { 121 + enum lockdown_reason level = lockdown_levels[i]; 122 + 123 + if (lockdown_reasons[level]) { 124 + const char *label = lockdown_reasons[level]; 125 + 126 + if (kernel_locked_down == level) 127 + offset += sprintf(temp+offset, "[%s] ", label); 128 + else 129 + offset += sprintf(temp+offset, "%s ", label); 130 + } 131 + } 132 + 133 + /* Convert the last space to a newline if needed. */ 134 + if (offset > 0) 135 + temp[offset-1] = '\n'; 136 + 137 + return simple_read_from_buffer(buf, count, ppos, temp, strlen(temp)); 138 + } 139 + 140 + static ssize_t lockdown_write(struct file *file, const char __user *buf, 141 + size_t n, loff_t *ppos) 142 + { 143 + char *state; 144 + int i, len, err = -EINVAL; 145 + 146 + state = memdup_user_nul(buf, n); 147 + if (IS_ERR(state)) 148 + return PTR_ERR(state); 149 + 150 + len = strlen(state); 151 + if (len && state[len-1] == '\n') { 152 + state[len-1] = '\0'; 153 + len--; 154 + } 155 + 156 + for (i = 0; i < ARRAY_SIZE(lockdown_levels); i++) { 157 + enum lockdown_reason level = lockdown_levels[i]; 158 + const char *label = lockdown_reasons[level]; 159 + 160 + if (label && !strcmp(state, label)) 161 + err = lock_kernel_down("securityfs", level); 162 + } 163 + 164 + kfree(state); 165 + return err ? err : n;
166 + } 167 + 168 + static const struct file_operations lockdown_ops = { 169 + .read = lockdown_read, 170 + .write = lockdown_write, 171 + }; 172 + 173 + static int __init lockdown_secfs_init(void) 174 + { 175 + struct dentry *dentry; 176 + 177 + dentry = securityfs_create_file("lockdown", 0600, NULL, NULL, 178 + &lockdown_ops); 179 + return PTR_ERR_OR_ZERO(dentry); 180 + } 181 + 182 + core_initcall(lockdown_secfs_init); 183 + 184 + #ifdef CONFIG_SECURITY_LOCKDOWN_LSM_EARLY 185 + DEFINE_EARLY_LSM(lockdown) = { 186 + #else 187 + DEFINE_LSM(lockdown) = { 188 + #endif 189 + .name = "lockdown", 190 + .init = lockdown_lsm_init, 191 + };
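lockdown_read() above is what produces the contents of /sys/kernel/security/lockdown: every level is listed, the active one in brackets, with the trailing space converted to a newline. A userspace reconstruction of that formatting (using the level labels from the diff; the helper name is illustrative):

```c
#include <stdio.h>
#include <string.h>

/* The three level labels exposed by the lockdown LSM's securityfs file. */
static const char *const lockdown_level_names[] = {
	"none", "integrity", "confidentiality"
};

/* Rebuilds what reading /sys/kernel/security/lockdown shows: all levels
 * listed, the active one bracketed, last space turned into '\n'. */
static void format_lockdown(char *buf, const char *active)
{
	int i, offset = 0;

	for (i = 0; i < 3; i++) {
		if (strcmp(lockdown_level_names[i], active) == 0)
			offset += sprintf(buf + offset, "[%s] ",
					  lockdown_level_names[i]);
		else
			offset += sprintf(buf + offset, "%s ",
					  lockdown_level_names[i]);
	}
	buf[offset - 1] = '\n';	/* convert the trailing space */
}
```

Writing one of these labels back (e.g. `echo integrity > /sys/kernel/security/lockdown`) raises the level via lock_kernel_down(), which refuses any transition that does not increase it, so lockdown can never be relaxed at runtime.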
+48 -8
security/security.c
··· 33 33 34 34 /* How many LSMs were built into the kernel? */ 35 35 #define LSM_COUNT (__end_lsm_info - __start_lsm_info) 36 + #define EARLY_LSM_COUNT (__end_early_lsm_info - __start_early_lsm_info) 36 37 37 38 struct security_hook_heads security_hook_heads __lsm_ro_after_init; 38 39 static BLOCKING_NOTIFIER_HEAD(blocking_lsm_notifier_chain); ··· 278 277 static void __init lsm_early_cred(struct cred *cred); 279 278 static void __init lsm_early_task(struct task_struct *task); 280 279 280 + static int lsm_append(const char *new, char **result); 281 + 281 282 static void __init ordered_lsm_init(void) 282 283 { 283 284 struct lsm_info **lsm; ··· 326 323 kfree(ordered_lsms); 327 324 } 328 325 326 + int __init early_security_init(void) 327 + { 328 + int i; 329 + struct hlist_head *list = (struct hlist_head *) &security_hook_heads; 330 + struct lsm_info *lsm; 331 + 332 + for (i = 0; i < sizeof(security_hook_heads) / sizeof(struct hlist_head); 333 + i++) 334 + INIT_HLIST_HEAD(&list[i]); 335 + 336 + for (lsm = __start_early_lsm_info; lsm < __end_early_lsm_info; lsm++) { 337 + if (!lsm->enabled) 338 + lsm->enabled = &lsm_enabled_true; 339 + prepare_lsm(lsm); 340 + initialize_lsm(lsm); 341 + } 342 + 343 + return 0; 344 + } 345 + 329 346 /** 330 347 * security_init - initializes the security framework 331 348 * ··· 353 330 */ 354 331 int __init security_init(void) 355 332 { 356 - int i; 357 - struct hlist_head *list = (struct hlist_head *) &security_hook_heads; 333 + struct lsm_info *lsm; 358 334 359 335 pr_info("Security Framework initializing\n"); 360 336 361 - for (i = 0; i < sizeof(security_hook_heads) / sizeof(struct hlist_head); 362 - i++) 363 - INIT_HLIST_HEAD(&list[i]); 337 + /* 338 + * Append the names of the early LSM modules now that kmalloc() is 339 + * available 340 + */ 341 + for (lsm = __start_early_lsm_info; lsm < __end_early_lsm_info; lsm++) { 342 + if (lsm->enabled) 343 + lsm_append(lsm->name, &lsm_names); 344 + } 364 345
365 346 /* Load LSMs in specified order. */ 366 347 ordered_lsm_init(); ··· 411 384 return !strcmp(last, lsm); 412 385 } 413 386 414 - static int lsm_append(char *new, char **result) 387 + static int lsm_append(const char *new, char **result) 415 388 { 416 389 char *cp; 417 390 ··· 449 422 hooks[i].lsm = lsm; 450 423 hlist_add_tail_rcu(&hooks[i].list, hooks[i].head); 451 424 } 452 - if (lsm_append(lsm, &lsm_names) < 0) 453 - panic("%s - Cannot get early memory.\n", __func__); 425 + 426 + /* 427 + * Don't try to append during early_security_init(), we'll come back 428 + * and fix this up afterwards. 429 + */ 430 + if (slab_is_available()) { 431 + if (lsm_append(lsm, &lsm_names) < 0) 432 + panic("%s - Cannot get early memory.\n", __func__); 433 + } 454 434 } 455 435 456 436 int call_blocking_lsm_notifier(enum lsm_event event, void *data) ··· 2398 2364 call_void_hook(bpf_prog_free_security, aux); 2399 2365 } 2400 2366 #endif /* CONFIG_BPF_SYSCALL */ 2367 + 2368 + int security_locked_down(enum lockdown_reason what) 2369 + { 2370 + return call_int_hook(locked_down, 0, what); 2371 + } 2372 + EXPORT_SYMBOL(security_locked_down);