Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security

Pull security subsystem updates from James Morris:
"Highlights:

IMA:
- provide ">" and "<" operators for fowner/uid/euid rules

KEYS:
- add a system blacklist keyring

- add KEYCTL_RESTRICT_KEYRING, exposes keyring link restriction
functionality to userland via keyctl()

LSM:
- harden LSM API with __ro_after_init

- add prlimit security hook, implement for SELinux

- revive security_task_alloc hook

TPM:
- implement contextual TPM command 'spaces'"

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (98 commits)
tpm: Fix reference count to main device
tpm_tis: convert to using locality callbacks
tpm: fix handling of the TPM 2.0 event logs
tpm_crb: remove a cruft constant
keys: select CONFIG_CRYPTO when selecting DH / KDF
apparmor: Make path_max parameter readonly
apparmor: fix parameters so that the permission test is bypassed at boot
apparmor: fix invalid reference to index variable of iterator line 836
apparmor: use SHASH_DESC_ON_STACK
security/apparmor/lsm.c: set debug messages
apparmor: fix boolreturn.cocci warnings
Smack: Use GFP_KERNEL for smk_netlbl_mls().
smack: fix double free in smack_parse_opts_str()
KEYS: add SP800-56A KDF support for DH
KEYS: Keyring asymmetric key restrict method with chaining
KEYS: Restrict asymmetric key linkage using a specific keychain
KEYS: Add a lookup_restriction function for the asymmetric key type
KEYS: Add KEYCTL_RESTRICT_KEYRING
KEYS: Consistent ordering for __key_link_begin and restrict check
KEYS: Add an optional lookup_restriction hook to key_type
...

+3243 -1123
+51
Documentation/crypto/asymmetric-keys.txt
··· 311 311 312 312 Parsers may not have the same name. The names are otherwise only used for 313 313 displaying in debugging messages. 314 + 315 + 316 + ========================= 317 + KEYRING LINK RESTRICTIONS 318 + ========================= 319 + 320 + Keyrings created from userspace using add_key can be configured to check the 321 + signature of the key being linked. 322 + 323 + Several restriction methods are available: 324 + 325 + (1) Restrict using the kernel builtin trusted keyring 326 + 327 + - Option string used with KEYCTL_RESTRICT_KEYRING: 328 + - "builtin_trusted" 329 + 330 + The kernel builtin trusted keyring will be searched for the signing 331 + key. The ca_keys kernel parameter also affects which keys are used for 332 + signature verification. 333 + 334 + (2) Restrict using the kernel builtin and secondary trusted keyrings 335 + 336 + - Option string used with KEYCTL_RESTRICT_KEYRING: 337 + - "builtin_and_secondary_trusted" 338 + 339 + The kernel builtin and secondary trusted keyrings will be searched for the 340 + signing key. The ca_keys kernel parameter also affects which keys are used 341 + for signature verification. 342 + 343 + (3) Restrict using a separate key or keyring 344 + 345 + - Option string used with KEYCTL_RESTRICT_KEYRING: 346 + - "key_or_keyring:<key or keyring serial number>[:chain]" 347 + 348 + Whenever a key link is requested, the link will only succeed if the key 349 + being linked is signed by one of the designated keys. This key may be 350 + specified directly by providing a serial number for one asymmetric key, or 351 + a group of keys may be searched for the signing key by providing the 352 + serial number for a keyring. 353 + 354 + When the "chain" option is provided at the end of the string, the keys 355 + within the destination keyring will also be searched for signing keys. 356 + This allows for verification of certificate chains by adding each 357 + cert in order (starting closest to the root) to one keyring. 
358 + 359 + In all of these cases, if the signing key is found the signature of the key to 360 + be linked will be verified using the signing key. The requested key is added 361 + to the keyring only if the signature is successfully verified. -ENOKEY is 362 + returned if the parent certificate could not be found, or -EKEYREJECTED is 363 + returned if the signature check fails or the key is blacklisted. Other errors 364 + may be returned if the signature check could not be performed.
+76 -24
Documentation/security/keys.txt
··· 827 827 828 828 long keyctl(KEYCTL_DH_COMPUTE, struct keyctl_dh_params *params, 829 829 char *buffer, size_t buflen, 830 - void *reserved); 830 + struct keyctl_kdf_params *kdf); 831 831 832 832 The params struct contains serial numbers for three keys: 833 833 ··· 844 844 public key. If the base is the remote public key, the result is 845 845 the shared secret. 846 846 847 - The reserved argument must be set to NULL. 847 + If the parameter kdf is NULL, the following applies: 848 848 849 - The buffer length must be at least the length of the prime, or zero. 849 + - The buffer length must be at least the length of the prime, or zero. 850 850 851 - If the buffer length is nonzero, the length of the result is 852 - returned when it is successfully calculated and copied in to the 853 - buffer. When the buffer length is zero, the minimum required 854 - buffer length is returned. 851 + - If the buffer length is nonzero, the length of the result is 852 + returned when it is successfully calculated and copied in to the 853 + buffer. When the buffer length is zero, the minimum required 854 + buffer length is returned. 855 + 856 + The kdf parameter allows the caller to apply a key derivation function 857 + (KDF) on the Diffie-Hellman computation where only the result 858 + of the KDF is returned to the caller. The KDF is characterized with 859 + struct keyctl_kdf_params as follows: 860 + 861 + - char *hashname specifies the NUL terminated string identifying 862 + the hash used from the kernel crypto API and applied for the KDF 863 + operation. The KDF implementation complies with SP800-56A as well 864 + as with SP800-108 (the counter KDF). 865 + 866 + - char *otherinfo specifies the OtherInfo data as documented in 867 + SP800-56A section 5.8.1.2. The length of the buffer is given with 868 + otherinfolen. The format of OtherInfo is defined by the caller. 869 + The otherinfo pointer may be NULL if no OtherInfo shall be used. 
855 870 856 871 This function will return error EOPNOTSUPP if the key type is not 857 872 supported, error ENOKEY if the key could not be found, or error 858 - EACCES if the key is not readable by the caller. 873 + EACCES if the key is not readable by the caller. In addition, the 874 + function will return EMSGSIZE when the parameter kdf is non-NULL 875 + and either the buffer length or the OtherInfo length exceeds the 876 + allowed length. 877 + 878 + (*) Restrict keyring linkage 879 + 880 + long keyctl(KEYCTL_RESTRICT_KEYRING, key_serial_t keyring, 881 + const char *type, const char *restriction); 882 + 883 + An existing keyring can restrict linkage of additional keys by evaluating 884 + the contents of the key according to a restriction scheme. 885 + 886 + "keyring" is the key ID for an existing keyring to apply a restriction 887 + to. It may be empty or may already have keys linked. Existing linked keys 888 + will remain in the keyring even if the new restriction would reject them. 889 + 890 + "type" is a registered key type. 891 + 892 + "restriction" is a string describing how key linkage is to be restricted. 893 + The format varies depending on the key type, and the string is passed to 894 + the lookup_restriction() function for the requested type. It may specify 895 + a method and relevant data for the restriction such as signature 896 + verification or constraints on key payload. If the requested key type is 897 + later unregistered, no keys may be added to the keyring after the key type 898 + is removed. 899 + 900 + To apply a keyring restriction the process must have Set Attribute 901 + permission and the keyring must not be previously restricted. 
859 902 860 903 =============== 861 904 KERNEL SERVICES ··· 1075 1032 struct key *keyring_alloc(const char *description, uid_t uid, gid_t gid, 1076 1033 const struct cred *cred, 1077 1034 key_perm_t perm, 1078 - int (*restrict_link)(struct key *, 1079 - const struct key_type *, 1080 - unsigned long, 1081 - const union key_payload *), 1035 + struct key_restriction *restrict_link, 1082 1036 unsigned long flags, 1083 1037 struct key *dest); 1084 1038 ··· 1087 1047 KEY_ALLOC_NOT_IN_QUOTA in flags if the keyring shouldn't be accounted 1088 1048 towards the user's quota). Error ENOMEM can also be returned. 1089 1049 1090 - If restrict_link not NULL, it should point to a function that will be 1091 - called each time an attempt is made to link a key into the new keyring. 1092 - This function is called to check whether a key may be added into the keying 1093 - or not. Callers of key_create_or_update() within the kernel can pass 1094 - KEY_ALLOC_BYPASS_RESTRICTION to suppress the check. An example of using 1095 - this is to manage rings of cryptographic keys that are set up when the 1096 - kernel boots where userspace is also permitted to add keys - provided they 1097 - can be verified by a key the kernel already has. 1050 + If restrict_link is not NULL, it should point to a structure that contains 1051 + the function that will be called each time an attempt is made to link a 1052 + key into the new keyring. The structure may also contain a key pointer 1053 + and an associated key type. The function is called to check whether a key 1054 + may be added into the keyring or not. The key type is used by the garbage 1055 + collector to clean up function or data pointers in this structure if the 1056 + given key type is unregistered. Callers of key_create_or_update() within 1057 + the kernel can pass KEY_ALLOC_BYPASS_RESTRICTION to suppress the check. 
1058 + An example of using this is to manage rings of cryptographic keys that are 1059 + set up when the kernel boots where userspace is also permitted to add keys 1060 + - provided they can be verified by a key the kernel already has. 1098 1061 1099 1062 When called, the restriction function will be passed the keyring being 1100 - added to, the key flags value and the type and payload of the key being 1101 - added. Note that when a new key is being created, this is called between 1102 - payload preparsing and actual key creation. The function should return 0 1103 - to allow the link or an error to reject it. 1063 + added to, the key type, the payload of the key being added, and data to be 1064 + used in the restriction check. Note that when a new key is being created, 1065 + this is called between payload preparsing and actual key creation. The 1066 + function should return 0 to allow the link or an error to reject it. 1104 1067 1105 1068 A convenience function, restrict_link_reject, exists to always return 1106 1069 -EPERM in this case. ··· 1486 1443 (*) struct key *authkey; 1487 1444 1488 1445 The authorisation key. 1446 + 1447 + 1448 + (*) struct key_restriction *(*lookup_restriction)(const char *params); 1449 + 1450 + This optional method is used to enable userspace configuration of keyring 1451 + restrictions. The restriction parameter string (not including the key type 1452 + name) is passed in, and this method returns a pointer to a key_restriction 1453 + structure containing the relevant functions and data to evaluate each 1454 + attempted key link operation. If there is no match, -EINVAL is returned. 1489 1455 1490 1456 1491 1457 ============================
+18
certs/Kconfig
··· 64 64 those keys are not blacklisted and are vouched for by a key built 65 65 into the kernel or already in the secondary trusted keyring. 66 66 67 + config SYSTEM_BLACKLIST_KEYRING 68 + bool "Provide system-wide ring of blacklisted keys" 69 + depends on KEYS 70 + help 71 + Provide a system keyring to which blacklisted keys can be added. 72 + Keys in the keyring are considered entirely untrusted. Keys in this 73 + keyring are used by the module signature checking to reject loading 74 + of modules signed with a blacklisted key. 75 + 76 + config SYSTEM_BLACKLIST_HASH_LIST 77 + string "Hashes to be preloaded into the system blacklist keyring" 78 + depends on SYSTEM_BLACKLIST_KEYRING 79 + help 80 + If set, this option should be the filename of a list of hashes in the 81 + form "<hash>", "<hash>", ... . This will be included into a C 82 + wrapper to incorporate the list into the kernel. Each <hash> should 83 + be a string of hex digits. 84 + 67 85 endmenu
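For illustration, the file named by CONFIG_SYSTEM_BLACKLIST_HASH_LIST is pulled into blacklist_hashes.c via #include, so it must be a comma-separated list of C string literals in the "<prefix>:<hex>" form described above. A minimal sketch (the hashes below are fabricated):

```c
/* Example contents of the file referenced by
 * CONFIG_SYSTEM_BLACKLIST_HASH_LIST; each entry is "<prefix>:<hex>".
 * These hash values are made up for illustration only. */
"tbs:2a6e619a4f3bfcd2cd0d0ec9c9b7c7a1",
"bin:89f0eb5d4c6a1b2c3d4e5f60718293a4"
```

The wrapper in blacklist_hashes.c appends the NULL terminator after the include, so the file itself must not end with one.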
+6
certs/Makefile
··· 3 3 # 4 4 5 5 obj-$(CONFIG_SYSTEM_TRUSTED_KEYRING) += system_keyring.o system_certificates.o 6 + obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist.o 7 + ifneq ($(CONFIG_SYSTEM_BLACKLIST_HASH_LIST),"") 8 + obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist_hashes.o 9 + else 10 + obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist_nohashes.o 11 + endif 6 12 7 13 ifeq ($(CONFIG_SYSTEM_TRUSTED_KEYRING),y) 8 14
+174
certs/blacklist.c
··· 1 + /* System hash blacklist. 2 + * 3 + * Copyright (C) 2016 Red Hat, Inc. All Rights Reserved. 4 + * Written by David Howells (dhowells@redhat.com) 5 + * 6 + * This program is free software; you can redistribute it and/or 7 + * modify it under the terms of the GNU General Public Licence 8 + * as published by the Free Software Foundation; either version 9 + * 2 of the Licence, or (at your option) any later version. 10 + */ 11 + 12 + #define pr_fmt(fmt) "blacklist: "fmt 13 + #include <linux/module.h> 14 + #include <linux/slab.h> 15 + #include <linux/key.h> 16 + #include <linux/key-type.h> 17 + #include <linux/sched.h> 18 + #include <linux/ctype.h> 19 + #include <linux/err.h> 20 + #include <linux/seq_file.h> 21 + #include <keys/system_keyring.h> 22 + #include "blacklist.h" 23 + 24 + static struct key *blacklist_keyring; 25 + 26 + /* 27 + * The description must be a type prefix, a colon and then an even number of 28 + * hex digits. The hash is kept in the description. 29 + */ 30 + static int blacklist_vet_description(const char *desc) 31 + { 32 + int n = 0; 33 + 34 + if (*desc == ':') 35 + return -EINVAL; 36 + for (; *desc; desc++) 37 + if (*desc == ':') 38 + goto found_colon; 39 + return -EINVAL; 40 + 41 + found_colon: 42 + desc++; 43 + for (; *desc; desc++) { 44 + if (!isxdigit(*desc)) 45 + return -EINVAL; 46 + n++; 47 + } 48 + 49 + if (n == 0 || n & 1) 50 + return -EINVAL; 51 + return 0; 52 + } 53 + 54 + /* 55 + * The hash to be blacklisted is expected to be in the description. There will 56 + * be no payload. 
57 + */ 58 + static int blacklist_preparse(struct key_preparsed_payload *prep) 59 + { 60 + if (prep->datalen > 0) 61 + return -EINVAL; 62 + return 0; 63 + } 64 + 65 + static void blacklist_free_preparse(struct key_preparsed_payload *prep) 66 + { 67 + } 68 + 69 + static void blacklist_describe(const struct key *key, struct seq_file *m) 70 + { 71 + seq_puts(m, key->description); 72 + } 73 + 74 + static struct key_type key_type_blacklist = { 75 + .name = "blacklist", 76 + .vet_description = blacklist_vet_description, 77 + .preparse = blacklist_preparse, 78 + .free_preparse = blacklist_free_preparse, 79 + .instantiate = generic_key_instantiate, 80 + .describe = blacklist_describe, 81 + }; 82 + 83 + /** 84 + * mark_hash_blacklisted - Add a hash to the system blacklist 85 + * @hash - The hash as a hex string with a type prefix (eg. "tbs:23aa429783") 86 + */ 87 + int mark_hash_blacklisted(const char *hash) 88 + { 89 + key_ref_t key; 90 + 91 + key = key_create_or_update(make_key_ref(blacklist_keyring, true), 92 + "blacklist", 93 + hash, 94 + NULL, 95 + 0, 96 + ((KEY_POS_ALL & ~KEY_POS_SETATTR) | 97 + KEY_USR_VIEW), 98 + KEY_ALLOC_NOT_IN_QUOTA | 99 + KEY_ALLOC_BUILT_IN); 100 + if (IS_ERR(key)) { 101 + pr_err("Problem blacklisting hash (%ld)\n", PTR_ERR(key)); 102 + return PTR_ERR(key); 103 + } 104 + return 0; 105 + } 106 + 107 + /** 108 + * is_hash_blacklisted - Determine if a hash is blacklisted 109 + * @hash: The hash to be checked as a binary blob 110 + * @hash_len: The length of the binary hash 111 + * @type: Type of hash 112 + */ 113 + int is_hash_blacklisted(const u8 *hash, size_t hash_len, const char *type) 114 + { 115 + key_ref_t kref; 116 + size_t type_len = strlen(type); 117 + char *buffer, *p; 118 + int ret = 0; 119 + 120 + buffer = kmalloc(type_len + 1 + hash_len * 2 + 1, GFP_KERNEL); 121 + if (!buffer) 122 + return -ENOMEM; 123 + p = memcpy(buffer, type, type_len); 124 + p += type_len; 125 + *p++ = ':'; 126 + bin2hex(p, hash, hash_len); 127 + p += hash_len * 2; 
128 + *p = 0; 129 + 130 + kref = keyring_search(make_key_ref(blacklist_keyring, true), 131 + &key_type_blacklist, buffer); 132 + if (!IS_ERR(kref)) { 133 + key_ref_put(kref); 134 + ret = -EKEYREJECTED; 135 + } 136 + 137 + kfree(buffer); 138 + return ret; 139 + } 140 + EXPORT_SYMBOL_GPL(is_hash_blacklisted); 141 + 142 + /* 143 + * Initialise the blacklist 144 + */ 145 + static int __init blacklist_init(void) 146 + { 147 + const char *const *bl; 148 + 149 + if (register_key_type(&key_type_blacklist) < 0) 150 + panic("Can't allocate system blacklist key type\n"); 151 + 152 + blacklist_keyring = 153 + keyring_alloc(".blacklist", 154 + KUIDT_INIT(0), KGIDT_INIT(0), 155 + current_cred(), 156 + (KEY_POS_ALL & ~KEY_POS_SETATTR) | 157 + KEY_USR_VIEW | KEY_USR_READ | 158 + KEY_USR_SEARCH, 159 + KEY_ALLOC_NOT_IN_QUOTA | 160 + KEY_FLAG_KEEP, 161 + NULL, NULL); 162 + if (IS_ERR(blacklist_keyring)) 163 + panic("Can't allocate system blacklist keyring\n"); 164 + 165 + for (bl = blacklist_hashes; *bl; bl++) 166 + if (mark_hash_blacklisted(*bl) < 0) 167 + pr_err("- blacklisting failed\n"); 168 + return 0; 169 + } 170 + 171 + /* 172 + * Must be initialised before we try and load the keys into the keyring. 173 + */ 174 + device_initcall(blacklist_init);
+3
certs/blacklist.h
··· 1 + #include <linux/kernel.h> 2 + 3 + extern const char __initdata *const blacklist_hashes[];
+6
certs/blacklist_hashes.c
··· 1 + #include "blacklist.h" 2 + 3 + const char __initdata *const blacklist_hashes[] = { 4 + #include CONFIG_SYSTEM_BLACKLIST_HASH_LIST 5 + , NULL 6 + };
+5
certs/blacklist_nohashes.c
··· 1 + #include "blacklist.h" 2 + 3 + const char __initdata *const blacklist_hashes[] = { 4 + NULL 5 + };
+31 -8
certs/system_keyring.c
··· 14 14 #include <linux/sched.h> 15 15 #include <linux/cred.h> 16 16 #include <linux/err.h> 17 + #include <linux/slab.h> 17 18 #include <keys/asymmetric-type.h> 18 19 #include <keys/system_keyring.h> 19 20 #include <crypto/pkcs7.h> ··· 33 32 * Restrict the addition of keys into a keyring based on the key-to-be-added 34 33 * being vouched for by a key in the built in system keyring. 35 34 */ 36 - int restrict_link_by_builtin_trusted(struct key *keyring, 35 + int restrict_link_by_builtin_trusted(struct key *dest_keyring, 37 36 const struct key_type *type, 38 - const union key_payload *payload) 37 + const union key_payload *payload, 38 + struct key *restriction_key) 39 39 { 40 - return restrict_link_by_signature(builtin_trusted_keys, type, payload); 40 + return restrict_link_by_signature(dest_keyring, type, payload, 41 + builtin_trusted_keys); 41 42 } 42 43 43 44 #ifdef CONFIG_SECONDARY_TRUSTED_KEYRING ··· 52 49 * keyrings. 53 50 */ 54 51 int restrict_link_by_builtin_and_secondary_trusted( 55 - struct key *keyring, 52 + struct key *dest_keyring, 56 53 const struct key_type *type, 57 - const union key_payload *payload) 54 + const union key_payload *payload, 55 + struct key *restrict_key) 58 56 { 59 57 /* If we have a secondary trusted keyring, then that contains a link 60 58 * through to the builtin keyring and the search will follow that link. 61 59 */ 62 60 if (type == &key_type_keyring && 63 - keyring == secondary_trusted_keys && 61 + dest_keyring == secondary_trusted_keys && 64 62 payload == &builtin_trusted_keys->payload) 65 63 /* Allow the builtin keyring to be added to the secondary */ 66 64 return 0; 67 65 68 - return restrict_link_by_signature(secondary_trusted_keys, type, payload); 66 + return restrict_link_by_signature(dest_keyring, type, payload, 67 + secondary_trusted_keys); 68 + } 69 + 70 + /** 71 + * Allocate a struct key_restriction for the "builtin and secondary trust" 72 + * keyring. Only for use in system_trusted_keyring_init(). 
73 + */ 74 + static __init struct key_restriction *get_builtin_and_secondary_restriction(void) 75 + { 76 + struct key_restriction *restriction; 77 + 78 + restriction = kzalloc(sizeof(struct key_restriction), GFP_KERNEL); 79 + 80 + if (!restriction) 81 + panic("Can't allocate secondary trusted keyring restriction\n"); 82 + 83 + restriction->check = restrict_link_by_builtin_and_secondary_trusted; 84 + 85 + return restriction; 69 86 } 70 87 #endif 71 88 ··· 114 91 KEY_USR_VIEW | KEY_USR_READ | KEY_USR_SEARCH | 115 92 KEY_USR_WRITE), 116 93 KEY_ALLOC_NOT_IN_QUOTA, 117 - restrict_link_by_builtin_and_secondary_trusted, 94 + get_builtin_and_secondary_restriction(), 118 95 NULL); 119 96 if (IS_ERR(secondary_trusted_keys)) 120 97 panic("Can't allocate secondary trusted keyring\n");
+94 -8
crypto/asymmetric_keys/asymmetric_type.c
··· 17 17 #include <linux/module.h> 18 18 #include <linux/slab.h> 19 19 #include <linux/ctype.h> 20 + #include <keys/system_keyring.h> 20 21 #include "asymmetric_keys.h" 21 22 22 23 MODULE_LICENSE("GPL"); ··· 452 451 asymmetric_key_free_kids(kids); 453 452 } 454 453 454 + static struct key_restriction *asymmetric_restriction_alloc( 455 + key_restrict_link_func_t check, 456 + struct key *key) 457 + { 458 + struct key_restriction *keyres = 459 + kzalloc(sizeof(struct key_restriction), GFP_KERNEL); 460 + 461 + if (!keyres) 462 + return ERR_PTR(-ENOMEM); 463 + 464 + keyres->check = check; 465 + keyres->key = key; 466 + keyres->keytype = &key_type_asymmetric; 467 + 468 + return keyres; 469 + } 470 + 471 + /* 472 + * look up keyring restrict functions for asymmetric keys 473 + */ 474 + static struct key_restriction *asymmetric_lookup_restriction( 475 + const char *restriction) 476 + { 477 + char *restrict_method; 478 + char *parse_buf; 479 + char *next; 480 + struct key_restriction *ret = ERR_PTR(-EINVAL); 481 + 482 + if (strcmp("builtin_trusted", restriction) == 0) 483 + return asymmetric_restriction_alloc( 484 + restrict_link_by_builtin_trusted, NULL); 485 + 486 + if (strcmp("builtin_and_secondary_trusted", restriction) == 0) 487 + return asymmetric_restriction_alloc( 488 + restrict_link_by_builtin_and_secondary_trusted, NULL); 489 + 490 + parse_buf = kstrndup(restriction, PAGE_SIZE, GFP_KERNEL); 491 + if (!parse_buf) 492 + return ERR_PTR(-ENOMEM); 493 + 494 + next = parse_buf; 495 + restrict_method = strsep(&next, ":"); 496 + 497 + if ((strcmp(restrict_method, "key_or_keyring") == 0) && next) { 498 + char *key_text; 499 + key_serial_t serial; 500 + struct key *key; 501 + key_restrict_link_func_t link_fn = 502 + restrict_link_by_key_or_keyring; 503 + bool allow_null_key = false; 504 + 505 + key_text = strsep(&next, ":"); 506 + 507 + if (next) { 508 + if (strcmp(next, "chain") != 0) 509 + goto out; 510 + 511 + link_fn = restrict_link_by_key_or_keyring_chain; 512 + 
allow_null_key = true; 513 + } 514 + 515 + if (kstrtos32(key_text, 0, &serial) < 0) 516 + goto out; 517 + 518 + if ((serial == 0) && allow_null_key) { 519 + key = NULL; 520 + } else { 521 + key = key_lookup(serial); 522 + if (IS_ERR(key)) { 523 + ret = ERR_CAST(key); 524 + goto out; 525 + } 526 + } 527 + 528 + ret = asymmetric_restriction_alloc(link_fn, key); 529 + if (IS_ERR(ret)) 530 + key_put(key); 531 + } 532 + 533 + out: 534 + kfree(parse_buf); 535 + return ret; 536 + } 537 + 455 538 struct key_type key_type_asymmetric = { 456 - .name = "asymmetric", 457 - .preparse = asymmetric_key_preparse, 458 - .free_preparse = asymmetric_key_free_preparse, 459 - .instantiate = generic_key_instantiate, 460 - .match_preparse = asymmetric_key_match_preparse, 461 - .match_free = asymmetric_key_match_free, 462 - .destroy = asymmetric_key_destroy, 463 - .describe = asymmetric_key_describe, 539 + .name = "asymmetric", 540 + .preparse = asymmetric_key_preparse, 541 + .free_preparse = asymmetric_key_free_preparse, 542 + .instantiate = generic_key_instantiate, 543 + .match_preparse = asymmetric_key_match_preparse, 544 + .match_free = asymmetric_key_match_free, 545 + .destroy = asymmetric_key_destroy, 546 + .describe = asymmetric_key_describe, 547 + .lookup_restriction = asymmetric_lookup_restriction, 464 548 }; 465 549 EXPORT_SYMBOL_GPL(key_type_asymmetric); 466 550
+1
crypto/asymmetric_keys/pkcs7_parser.h
··· 23 23 struct x509_certificate *signer; /* Signing certificate (in msg->certs) */ 24 24 unsigned index; 25 25 bool unsupported_crypto; /* T if not usable due to missing crypto */ 26 + bool blacklisted; 26 27 27 28 /* Message digest - the digest of the Content Data (or NULL) */ 28 29 const void *msgdigest;
+24 -8
crypto/asymmetric_keys/pkcs7_verify.c
··· 190 190 x509->subject, 191 191 x509->raw_serial_size, x509->raw_serial); 192 192 x509->seen = true; 193 + 194 + if (x509->blacklisted) { 195 + /* If this cert is blacklisted, then mark everything 196 + * that depends on this as blacklisted too. 197 + */ 198 + sinfo->blacklisted = true; 199 + for (p = sinfo->signer; p != x509; p = p->signer) 200 + p->blacklisted = true; 201 + pr_debug("- blacklisted\n"); 202 + return 0; 203 + } 204 + 193 205 if (x509->unsupported_key) 194 206 goto unsupported_crypto_in_x509; 195 207 ··· 369 357 * 370 358 * (*) -EBADMSG if some part of the message was invalid, or: 371 359 * 372 - * (*) -ENOPKG if none of the signature chains are verifiable because suitable 373 - * crypto modules couldn't be found, or: 360 + * (*) 0 if no signature chains were found to be blacklisted or to contain 361 + * unsupported crypto, or: 374 362 * 375 - * (*) 0 if all the signature chains that don't incur -ENOPKG can be verified 376 - * (note that a signature chain may be of zero length), or: 363 + * (*) -EKEYREJECTED if a blacklisted key was encountered, or: 364 + * 365 + * (*) -ENOPKG if none of the signature chains are verifiable because suitable 366 + * crypto modules couldn't be found. 
377 367 */ 378 368 int pkcs7_verify(struct pkcs7_message *pkcs7, 379 369 enum key_being_used_for usage) 380 370 { 381 371 struct pkcs7_signed_info *sinfo; 382 - int enopkg = -ENOPKG; 372 + int actual_ret = -ENOPKG; 383 373 int ret; 384 374 385 375 kenter(""); ··· 426 412 427 413 for (sinfo = pkcs7->signed_infos; sinfo; sinfo = sinfo->next) { 428 414 ret = pkcs7_verify_one(pkcs7, sinfo); 415 + if (sinfo->blacklisted && actual_ret == -ENOPKG) 416 + actual_ret = -EKEYREJECTED; 429 417 if (ret < 0) { 430 418 if (ret == -ENOPKG) { 431 419 sinfo->unsupported_crypto = true; ··· 436 420 kleave(" = %d", ret); 437 421 return ret; 438 422 } 439 - enopkg = 0; 423 + actual_ret = 0; 440 424 } 441 425 442 - kleave(" = %d", enopkg); 443 - return enopkg; 426 + kleave(" = %d", actual_ret); 427 + return actual_ret; 444 428 } 445 429 EXPORT_SYMBOL_GPL(pkcs7_verify); 446 430
+158 -3
crypto/asymmetric_keys/restrict.c
··· 56 56 57 57 /** 58 58 * restrict_link_by_signature - Restrict additions to a ring of public keys 59 - * @trust_keyring: A ring of keys that can be used to vouch for the new cert. 59 + * @dest_keyring: Keyring being linked to. 60 60 * @type: The type of key being added. 61 61 * @payload: The payload of the new key. 62 + * @trust_keyring: A ring of keys that can be used to vouch for the new cert. 62 63 * 63 64 * Check the new certificate against the ones in the trust keyring. If one of 64 65 * those is the signing key and validates the new certificate, then mark the ··· 70 69 * signature check fails or the key is blacklisted and some other error if 71 70 * there is a matching certificate but the signature check cannot be performed. 72 71 */ 73 - int restrict_link_by_signature(struct key *trust_keyring, 72 + int restrict_link_by_signature(struct key *dest_keyring, 74 73 const struct key_type *type, 75 - const union key_payload *payload) 74 + const union key_payload *payload, 75 + struct key *trust_keyring) 76 76 { 77 77 const struct public_key_signature *sig; 78 78 struct key *key; ··· 107 105 ret = verify_signature(key, sig); 108 106 key_put(key); 109 107 return ret; 108 + } 109 + 110 + static bool match_either_id(const struct asymmetric_key_ids *pair, 111 + const struct asymmetric_key_id *single) 112 + { 113 + return (asymmetric_key_id_same(pair->id[0], single) || 114 + asymmetric_key_id_same(pair->id[1], single)); 115 + } 116 + 117 + static int key_or_keyring_common(struct key *dest_keyring, 118 + const struct key_type *type, 119 + const union key_payload *payload, 120 + struct key *trusted, bool check_dest) 121 + { 122 + const struct public_key_signature *sig; 123 + struct key *key = NULL; 124 + int ret; 125 + 126 + pr_devel("==>%s()\n", __func__); 127 + 128 + if (!dest_keyring) 129 + return -ENOKEY; 130 + else if (dest_keyring->type != &key_type_keyring) 131 + return -EOPNOTSUPP; 132 + 133 + if (!trusted && !check_dest) 134 + return -ENOKEY; 135 + 136 + if 
(type != &key_type_asymmetric) 137 + return -EOPNOTSUPP; 138 + 139 + sig = payload->data[asym_auth]; 140 + if (!sig->auth_ids[0] && !sig->auth_ids[1]) 141 + return -ENOKEY; 142 + 143 + if (trusted) { 144 + if (trusted->type == &key_type_keyring) { 145 + /* See if we have a key that signed this one. */ 146 + key = find_asymmetric_key(trusted, sig->auth_ids[0], 147 + sig->auth_ids[1], false); 148 + if (IS_ERR(key)) 149 + key = NULL; 150 + } else if (trusted->type == &key_type_asymmetric) { 151 + const struct asymmetric_key_ids *signer_ids; 152 + 153 + signer_ids = asymmetric_key_ids(trusted); 154 + 155 + /* 156 + * The auth_ids come from the candidate key (the 157 + * one that is being considered for addition to 158 + * dest_keyring) and identify the key that was 159 + * used to sign. 160 + * 161 + * The signer_ids are identifiers for the 162 + * signing key specified for dest_keyring. 163 + * 164 + * The first auth_id is the preferred id, and 165 + * the second is the fallback. If only one 166 + * auth_id is present, it may match against 167 + * either signer_id. If two auth_ids are 168 + * present, the first auth_id must match one 169 + * signer_id and the second auth_id must match 170 + * the second signer_id. 171 + */ 172 + if (!sig->auth_ids[0] || !sig->auth_ids[1]) { 173 + const struct asymmetric_key_id *auth_id; 174 + 175 + auth_id = sig->auth_ids[0] ?: sig->auth_ids[1]; 176 + if (match_either_id(signer_ids, auth_id)) 177 + key = __key_get(trusted); 178 + 179 + } else if (asymmetric_key_id_same(signer_ids->id[1], 180 + sig->auth_ids[1]) && 181 + match_either_id(signer_ids, 182 + sig->auth_ids[0])) { 183 + key = __key_get(trusted); 184 + } 185 + } else { 186 + return -EOPNOTSUPP; 187 + } 188 + } 189 + 190 + if (check_dest && !key) { 191 + /* See if the destination has a key that signed this one. 
*/ 192 + key = find_asymmetric_key(dest_keyring, sig->auth_ids[0], 193 + sig->auth_ids[1], false); 194 + if (IS_ERR(key)) 195 + key = NULL; 196 + } 197 + 198 + if (!key) 199 + return -ENOKEY; 200 + 201 + ret = key_validate(key); 202 + if (ret == 0) 203 + ret = verify_signature(key, sig); 204 + 205 + key_put(key); 206 + return ret; 207 + } 208 + 209 + /** 210 + * restrict_link_by_key_or_keyring - Restrict additions to a ring of public 211 + * keys using the restrict_key information stored in the ring. 212 + * @dest_keyring: Keyring being linked to. 213 + * @type: The type of key being added. 214 + * @payload: The payload of the new key. 215 + * @trusted: A key or ring of keys that can be used to vouch for the new cert. 216 + * 217 + * Check the new certificate only against the key or keys passed in the data 218 + * parameter. If one of those is the signing key and validates the new 219 + * certificate, then mark the new certificate as being ok to link. 220 + * 221 + * Returns 0 if the new certificate was accepted, -ENOKEY if we 222 + * couldn't find a matching parent certificate in the trusted list, 223 + * -EKEYREJECTED if the signature check fails, and some other error if 224 + * there is a matching certificate but the signature check cannot be 225 + * performed. 226 + */ 227 + int restrict_link_by_key_or_keyring(struct key *dest_keyring, 228 + const struct key_type *type, 229 + const union key_payload *payload, 230 + struct key *trusted) 231 + { 232 + return key_or_keyring_common(dest_keyring, type, payload, trusted, 233 + false); 234 + } 235 + 236 + /** 237 + * restrict_link_by_key_or_keyring_chain - Restrict additions to a ring of 238 + * public keys using the restrict_key information stored in the ring. 239 + * @dest_keyring: Keyring being linked to. 240 + * @type: The type of key being added. 241 + * @payload: The payload of the new key. 242 + * @trusted: A key or ring of keys that can be used to vouch for the new cert. 
243 + * 244 + * Check the new certificate only against the key or keys passed in the data 245 + * parameter. If one of those is the signing key and validates the new 246 + * certificate, then mark the new certificate as being ok to link. 247 + * 248 + * Returns 0 if the new certificate was accepted, -ENOKEY if we 249 + * couldn't find a matching parent certificate in the trusted list, 250 + * -EKEYREJECTED if the signature check fails, and some other error if 251 + * there is a matching certificate but the signature check cannot be 252 + * performed. 253 + */ 254 + int restrict_link_by_key_or_keyring_chain(struct key *dest_keyring, 255 + const struct key_type *type, 256 + const union key_payload *payload, 257 + struct key *trusted) 258 + { 259 + return key_or_keyring_common(dest_keyring, type, payload, trusted, 260 + true); 110 261 }
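The block comment in key_or_keyring_common() above spells out the signer-ID matching rules in prose. As a sanity check, the same rules can be restated as a small userspace model — key IDs are shown as strings purely for illustration (the kernel compares binary struct asymmetric_key_id blobs with asymmetric_key_id_same()), and the struct and function names below are invented for the sketch, not kernel API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* IDs of the restriction (trusted) key configured on the keyring. */
struct signer_ids { const char *id[2]; };
/* IDs from the candidate key's signature, naming the key that signed it. */
struct auth_ids { const char *id[2]; };

static bool id_same(const char *a, const char *b)
{
	return a && b && strcmp(a, b) == 0;
}

static bool match_either_id(const struct signer_ids *s, const char *auth)
{
	return id_same(s->id[0], auth) || id_same(s->id[1], auth);
}

/* Returns true if the trusted key may have signed the candidate key. */
static bool signer_matches(const struct signer_ids *s, const struct auth_ids *a)
{
	if (!a->id[0] && !a->id[1])
		return false;	/* no auth IDs at all: -ENOKEY in the kernel */

	if (!a->id[0] || !a->id[1]) {
		/* Only one auth ID: it may match either signer ID. */
		const char *auth = a->id[0] ? a->id[0] : a->id[1];

		return match_either_id(s, auth);
	}

	/*
	 * Two auth IDs: the second must match the second signer ID, and
	 * the first must match one of the two signer IDs.
	 */
	return id_same(s->id[1], a->id[1]) && match_either_id(s, a->id[0]);
}
```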
+1
crypto/asymmetric_keys/x509_parser.h
··· 42 42 bool self_signed; /* T if self-signed (check unsupported_sig too) */ 43 43 bool unsupported_key; /* T if key uses unsupported crypto */ 44 44 bool unsupported_sig; /* T if signature uses unsupported crypto */ 45 + bool blacklisted; 45 46 }; 46 47 47 48 /*
+15
crypto/asymmetric_keys/x509_public_key.c
··· 84 84 goto error_2; 85 85 might_sleep(); 86 86 ret = crypto_shash_finup(desc, cert->tbs, cert->tbs_size, sig->digest); 87 + if (ret < 0) 88 + goto error_2; 89 + 90 + ret = is_hash_blacklisted(sig->digest, sig->digest_size, "tbs"); 91 + if (ret == -EKEYREJECTED) { 92 + pr_err("Cert %*phN is blacklisted\n", 93 + sig->digest_size, sig->digest); 94 + cert->blacklisted = true; 95 + ret = 0; 96 + } 87 97 88 98 error_2: 89 99 kfree(desc); ··· 195 185 pr_devel("Cert Signature: %s + %s\n", 196 186 cert->sig->pkey_algo, cert->sig->hash_algo); 197 187 } 188 + 189 + /* Don't permit addition of blacklisted keys */ 190 + ret = -EKEYREJECTED; 191 + if (cert->blacklisted) 192 + goto error_free_cert; 198 193 199 194 /* Propose a description */ 200 195 sulen = strlen(cert->subject);
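The hunk above checks the certificate's TBS digest against the system blacklist keyring via is_hash_blacklisted(hash, hash_len, "tbs"). That lookup works by description: the hash type and the digest rendered as lowercase hex form the key description searched for. The helper below is an illustrative userspace formatter only — it approximates the shape of that description and is not a copy of the kernel's %*phN printing:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/*
 * Build a blacklist-style lookup description: "<type>:<lowercase hex digest>".
 * Illustrative sketch; the kernel formats this internally before searching
 * the blacklist keyring.
 */
static void blacklist_desc(char *out, size_t outlen, const char *type,
			   const unsigned char *digest, size_t digest_len)
{
	size_t n = snprintf(out, outlen, "%s:", type);

	for (size_t i = 0; i < digest_len && n + 2 < outlen; i++)
		n += snprintf(out + n, outlen - n, "%02x", digest[i]);
}
```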
+2 -1
drivers/char/tpm/Kconfig
··· 6 6 tristate "TPM Hardware Support" 7 7 depends on HAS_IOMEM 8 8 select SECURITYFS 9 + select CRYPTO 9 10 select CRYPTO_HASH_INFO 10 11 ---help--- 11 12 If you have a TPM security chip in your system, which ··· 136 135 137 136 config TCG_CRB 138 137 tristate "TPM 2.0 CRB Interface" 139 - depends on X86 && ACPI 138 + depends on ACPI 140 139 ---help--- 141 140 If you have a TPM security chip that is compliant with the 142 141 TCG CRB 2.0 TPM specification say Yes and it will be accessible
+2 -1
drivers/char/tpm/Makefile
··· 3 3 # 4 4 obj-$(CONFIG_TCG_TPM) += tpm.o 5 5 tpm-y := tpm-interface.o tpm-dev.o tpm-sysfs.o tpm-chip.o tpm2-cmd.o \ 6 - tpm1_eventlog.o tpm2_eventlog.o 6 + tpm-dev-common.o tpmrm-dev.o tpm1_eventlog.o tpm2_eventlog.o \ 7 + tpm2-space.o 7 8 tpm-$(CONFIG_ACPI) += tpm_ppi.o tpm_acpi.o 8 9 tpm-$(CONFIG_OF) += tpm_of.o 9 10 obj-$(CONFIG_TCG_TIS_CORE) += tpm_tis_core.o
+20 -3
drivers/char/tpm/st33zp24/i2c.c
··· 111 111 .recv = st33zp24_i2c_recv, 112 112 }; 113 113 114 + static const struct acpi_gpio_params lpcpd_gpios = { 1, 0, false }; 115 + 116 + static const struct acpi_gpio_mapping acpi_st33zp24_gpios[] = { 117 + { "lpcpd-gpios", &lpcpd_gpios, 1 }, 118 + {}, 119 + }; 120 + 114 121 static int st33zp24_i2c_acpi_request_resources(struct i2c_client *client) 115 122 { 116 123 struct tpm_chip *chip = i2c_get_clientdata(client); ··· 125 118 struct st33zp24_i2c_phy *phy = tpm_dev->phy_id; 126 119 struct gpio_desc *gpiod_lpcpd; 127 120 struct device *dev = &client->dev; 121 + int ret; 122 + 123 + ret = acpi_dev_add_driver_gpios(ACPI_COMPANION(dev), acpi_st33zp24_gpios); 124 + if (ret) 125 + return ret; 128 126 129 127 /* Get LPCPD GPIO from ACPI */ 130 - gpiod_lpcpd = devm_gpiod_get_index(dev, "TPM IO LPCPD", 1, 131 - GPIOD_OUT_HIGH); 128 + gpiod_lpcpd = devm_gpiod_get(dev, "lpcpd", GPIOD_OUT_HIGH); 132 129 if (IS_ERR(gpiod_lpcpd)) { 133 130 dev_err(&client->dev, 134 131 "Failed to retrieve lpcpd-gpios from acpi.\n"); ··· 279 268 static int st33zp24_i2c_remove(struct i2c_client *client) 280 269 { 281 270 struct tpm_chip *chip = i2c_get_clientdata(client); 271 + int ret; 282 272 283 - return st33zp24_remove(chip); 273 + ret = st33zp24_remove(chip); 274 + if (ret) 275 + return ret; 276 + 277 + acpi_dev_remove_driver_gpios(ACPI_COMPANION(&client->dev)); 278 + return 0; 284 279 } 285 280 286 281 static const struct i2c_device_id st33zp24_i2c_id[] = {
+20 -3
drivers/char/tpm/st33zp24/spi.c
··· 230 230 .recv = st33zp24_spi_recv, 231 231 }; 232 232 233 + static const struct acpi_gpio_params lpcpd_gpios = { 1, 0, false }; 234 + 235 + static const struct acpi_gpio_mapping acpi_st33zp24_gpios[] = { 236 + { "lpcpd-gpios", &lpcpd_gpios, 1 }, 237 + {}, 238 + }; 239 + 233 240 static int st33zp24_spi_acpi_request_resources(struct spi_device *spi_dev) 234 241 { 235 242 struct tpm_chip *chip = spi_get_drvdata(spi_dev); ··· 244 237 struct st33zp24_spi_phy *phy = tpm_dev->phy_id; 245 238 struct gpio_desc *gpiod_lpcpd; 246 239 struct device *dev = &spi_dev->dev; 240 + int ret; 241 + 242 + ret = acpi_dev_add_driver_gpios(ACPI_COMPANION(dev), acpi_st33zp24_gpios); 243 + if (ret) 244 + return ret; 247 245 248 246 /* Get LPCPD GPIO from ACPI */ 249 - gpiod_lpcpd = devm_gpiod_get_index(dev, "TPM IO LPCPD", 1, 250 - GPIOD_OUT_HIGH); 247 + gpiod_lpcpd = devm_gpiod_get(dev, "lpcpd", GPIOD_OUT_HIGH); 251 248 if (IS_ERR(gpiod_lpcpd)) { 252 249 dev_err(dev, "Failed to retrieve lpcpd-gpios from acpi.\n"); 253 250 phy->io_lpcpd = -1; ··· 396 385 static int st33zp24_spi_remove(struct spi_device *dev) 397 386 { 398 387 struct tpm_chip *chip = spi_get_drvdata(dev); 388 + int ret; 399 389 400 - return st33zp24_remove(chip); 390 + ret = st33zp24_remove(chip); 391 + if (ret) 392 + return ret; 393 + 394 + acpi_dev_remove_driver_gpios(ACPI_COMPANION(&dev->dev)); 395 + return 0; 401 396 } 402 397 403 398 static const struct spi_device_id st33zp24_spi_id[] = {
+6 -6
drivers/char/tpm/st33zp24/st33zp24.c
··· 117 117 /* 118 118 * check_locality if the locality is active 119 119 * @param: chip, the tpm chip description 120 - * @return: the active locality or -EACCESS. 120 + * @return: true if LOCALITY0 is active, otherwise false 121 121 */ 122 - static int check_locality(struct tpm_chip *chip) 122 + static bool check_locality(struct tpm_chip *chip) 123 123 { 124 124 struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 125 125 u8 data; ··· 129 129 if (status && (data & 130 130 (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) == 131 131 (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) 132 - return tpm_dev->locality; 132 + return true; 133 133 134 - return -EACCES; 134 + return false; 135 135 } /* check_locality() */ 136 136 137 137 /* ··· 146 146 long ret; 147 147 u8 data; 148 148 149 - if (check_locality(chip) == tpm_dev->locality) 149 + if (check_locality(chip)) 150 150 return tpm_dev->locality; 151 151 152 152 data = TPM_ACCESS_REQUEST_USE; ··· 158 158 159 159 /* Request locality is usually effective after the request */ 160 160 do { 161 - if (check_locality(chip) >= 0) 161 + if (check_locality(chip)) 162 162 return tpm_dev->locality; 163 163 msleep(TPM_TIMEOUT); 164 164 } while (time_before(jiffies, stop));
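The conversion above makes check_locality() return a plain bool: the locality counts as active only when the TPM_ACCESS register reads back with both the activeLocality and tpmRegValidSts bits set — either bit alone is not enough. A standalone restatement of that bit test (register bit values as defined in the TCG TIS specification):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* TPM_ACCESS register bits, per the TCG TIS specification. */
#define TPM_ACCESS_VALID		0x80	/* tpmRegValidSts */
#define TPM_ACCESS_ACTIVE_LOCALITY	0x20	/* activeLocality */

/*
 * The locality is active only when the register value is both valid and
 * marked active; this mirrors the check in check_locality().
 */
static bool locality_active(uint8_t access)
{
	uint8_t mask = TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID;

	return (access & mask) == mask;
}
```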
+70 -1
drivers/char/tpm/tpm-chip.c
··· 33 33 static DEFINE_MUTEX(idr_lock); 34 34 35 35 struct class *tpm_class; 36 + struct class *tpmrm_class; 36 37 dev_t tpm_devt; 37 38 38 39 /** ··· 129 128 mutex_unlock(&idr_lock); 130 129 131 130 kfree(chip->log.bios_event_log); 131 + kfree(chip->work_space.context_buf); 132 + kfree(chip->work_space.session_buf); 132 133 kfree(chip); 134 + } 135 + 136 + static void tpm_devs_release(struct device *dev) 137 + { 138 + struct tpm_chip *chip = container_of(dev, struct tpm_chip, devs); 139 + 140 + /* release the master device reference */ 141 + put_device(&chip->dev); 133 142 } 134 143 135 144 /** ··· 178 167 chip->dev_num = rc; 179 168 180 169 device_initialize(&chip->dev); 170 + device_initialize(&chip->devs); 181 171 182 172 chip->dev.class = tpm_class; 183 173 chip->dev.release = tpm_dev_release; 184 174 chip->dev.parent = pdev; 185 175 chip->dev.groups = chip->groups; 186 176 177 + chip->devs.parent = pdev; 178 + chip->devs.class = tpmrm_class; 179 + chip->devs.release = tpm_devs_release; 180 + /* get extra reference on main device to hold on 181 + * behalf of devs. This holds the chip structure 182 + * while cdevs is in use. 
The corresponding put 183 + * is in the tpm_devs_release (TPM2 only) 184 + */ 185 + if (chip->flags & TPM_CHIP_FLAG_TPM2) 186 + get_device(&chip->dev); 187 + 187 188 if (chip->dev_num == 0) 188 189 chip->dev.devt = MKDEV(MISC_MAJOR, TPM_MINOR); 189 190 else 190 191 chip->dev.devt = MKDEV(MAJOR(tpm_devt), chip->dev_num); 191 192 193 + chip->devs.devt = 194 + MKDEV(MAJOR(tpm_devt), chip->dev_num + TPM_NUM_DEVICES); 195 + 192 196 rc = dev_set_name(&chip->dev, "tpm%d", chip->dev_num); 197 + if (rc) 198 + goto out; 199 + rc = dev_set_name(&chip->devs, "tpmrm%d", chip->dev_num); 193 200 if (rc) 194 201 goto out; 195 202 ··· 215 186 chip->flags |= TPM_CHIP_FLAG_VIRTUAL; 216 187 217 188 cdev_init(&chip->cdev, &tpm_fops); 189 + cdev_init(&chip->cdevs, &tpmrm_fops); 218 190 chip->cdev.owner = THIS_MODULE; 191 + chip->cdevs.owner = THIS_MODULE; 219 192 chip->cdev.kobj.parent = &chip->dev.kobj; 193 + chip->cdevs.kobj.parent = &chip->devs.kobj; 220 194 195 + chip->work_space.context_buf = kzalloc(PAGE_SIZE, GFP_KERNEL); 196 + if (!chip->work_space.context_buf) { 197 + rc = -ENOMEM; 198 + goto out; 199 + } 200 + chip->work_space.session_buf = kzalloc(PAGE_SIZE, GFP_KERNEL); 201 + if (!chip->work_space.session_buf) { 202 + rc = -ENOMEM; 203 + goto out; 204 + } 205 + 206 + chip->locality = -1; 221 207 return chip; 222 208 223 209 out: 210 + put_device(&chip->devs); 224 211 put_device(&chip->dev); 225 212 return ERR_PTR(rc); 226 213 } ··· 281 236 "unable to cdev_add() %s, major %d, minor %d, err=%d\n", 282 237 dev_name(&chip->dev), MAJOR(chip->dev.devt), 283 238 MINOR(chip->dev.devt), rc); 284 - 285 239 return rc; 286 240 } 287 241 ··· 292 248 MINOR(chip->dev.devt), rc); 293 249 294 250 cdev_del(&chip->cdev); 251 + return rc; 252 + } 253 + 254 + if (chip->flags & TPM_CHIP_FLAG_TPM2) 255 + rc = cdev_add(&chip->cdevs, chip->devs.devt, 1); 256 + if (rc) { 257 + dev_err(&chip->dev, 258 + "unable to cdev_add() %s, major %d, minor %d, err=%d\n", 259 + dev_name(&chip->devs), 
MAJOR(chip->devs.devt), 260 + MINOR(chip->devs.devt), rc); 261 + return rc; 262 + } 263 + 264 + if (chip->flags & TPM_CHIP_FLAG_TPM2) 265 + rc = device_add(&chip->devs); 266 + if (rc) { 267 + dev_err(&chip->dev, 268 + "unable to device_register() %s, major %d, minor %d, err=%d\n", 269 + dev_name(&chip->devs), MAJOR(chip->devs.devt), 270 + MINOR(chip->devs.devt), rc); 271 + cdev_del(&chip->cdevs); 295 272 return rc; 296 273 } 297 274 ··· 449 384 { 450 385 tpm_del_legacy_sysfs(chip); 451 386 tpm_bios_log_teardown(chip); 387 + if (chip->flags & TPM_CHIP_FLAG_TPM2) { 388 + cdev_del(&chip->cdevs); 389 + device_del(&chip->devs); 390 + } 452 391 tpm_del_char_device(chip); 453 392 } 454 393 EXPORT_SYMBOL_GPL(tpm_chip_unregister);
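The tpm-chip.c changes above register a second character device per TPM 2.0 chip (/dev/tpmrm<n>) out of a doubled minor range, with the resource-manager node for chip N taking minor N + TPM_NUM_DEVICES. A minimal arithmetic sketch of that numbering, reproducing the kernel's 20-bit minor layout (the TPM_NUM_DEVICES value here is taken from tpm.h at the time; treat it as an assumption):

```c
#include <assert.h>
#include <stdint.h>

#define MINORBITS	20			/* kernel dev_t minor width */
#define MKDEV(ma, mi)	(((uint32_t)(ma) << MINORBITS) | (mi))
#define TPM_NUM_DEVICES	65536			/* as in tpm.h (assumed) */

/* devt for /dev/tpmrm<dev_num>: same major, minor offset by TPM_NUM_DEVICES. */
static uint32_t tpmrm_devt(uint32_t major, uint32_t dev_num)
{
	return MKDEV(major, dev_num + TPM_NUM_DEVICES);
}
```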
+148
drivers/char/tpm/tpm-dev-common.c
··· 1 + /* 2 + * Copyright (C) 2004 IBM Corporation 3 + * Authors: 4 + * Leendert van Doorn <leendert@watson.ibm.com> 5 + * Dave Safford <safford@watson.ibm.com> 6 + * Reiner Sailer <sailer@watson.ibm.com> 7 + * Kylene Hall <kjhall@us.ibm.com> 8 + * 9 + * Copyright (C) 2013 Obsidian Research Corp 10 + * Jason Gunthorpe <jgunthorpe@obsidianresearch.com> 11 + * 12 + * Device file system interface to the TPM 13 + * 14 + * This program is free software; you can redistribute it and/or 15 + * modify it under the terms of the GNU General Public License as 16 + * published by the Free Software Foundation, version 2 of the 17 + * License. 18 + * 19 + */ 20 + #include <linux/slab.h> 21 + #include <linux/uaccess.h> 22 + #include "tpm.h" 23 + #include "tpm-dev.h" 24 + 25 + static void user_reader_timeout(unsigned long ptr) 26 + { 27 + struct file_priv *priv = (struct file_priv *)ptr; 28 + 29 + pr_warn("TPM user space timeout is deprecated (pid=%d)\n", 30 + task_tgid_nr(current)); 31 + 32 + schedule_work(&priv->work); 33 + } 34 + 35 + static void timeout_work(struct work_struct *work) 36 + { 37 + struct file_priv *priv = container_of(work, struct file_priv, work); 38 + 39 + mutex_lock(&priv->buffer_mutex); 40 + atomic_set(&priv->data_pending, 0); 41 + memset(priv->data_buffer, 0, sizeof(priv->data_buffer)); 42 + mutex_unlock(&priv->buffer_mutex); 43 + } 44 + 45 + void tpm_common_open(struct file *file, struct tpm_chip *chip, 46 + struct file_priv *priv) 47 + { 48 + priv->chip = chip; 49 + atomic_set(&priv->data_pending, 0); 50 + mutex_init(&priv->buffer_mutex); 51 + setup_timer(&priv->user_read_timer, user_reader_timeout, 52 + (unsigned long)priv); 53 + INIT_WORK(&priv->work, timeout_work); 54 + 55 + file->private_data = priv; 56 + } 57 + 58 + ssize_t tpm_common_read(struct file *file, char __user *buf, 59 + size_t size, loff_t *off) 60 + { 61 + struct file_priv *priv = file->private_data; 62 + ssize_t ret_size; 63 + ssize_t orig_ret_size; 64 + int rc; 65 + 66 + 
del_singleshot_timer_sync(&priv->user_read_timer); 67 + flush_work(&priv->work); 68 + ret_size = atomic_read(&priv->data_pending); 69 + if (ret_size > 0) { /* relay data */ 70 + orig_ret_size = ret_size; 71 + if (size < ret_size) 72 + ret_size = size; 73 + 74 + mutex_lock(&priv->buffer_mutex); 75 + rc = copy_to_user(buf, priv->data_buffer, ret_size); 76 + memset(priv->data_buffer, 0, orig_ret_size); 77 + if (rc) 78 + ret_size = -EFAULT; 79 + 80 + mutex_unlock(&priv->buffer_mutex); 81 + } 82 + 83 + atomic_set(&priv->data_pending, 0); 84 + 85 + return ret_size; 86 + } 87 + 88 + ssize_t tpm_common_write(struct file *file, const char __user *buf, 89 + size_t size, loff_t *off, struct tpm_space *space) 90 + { 91 + struct file_priv *priv = file->private_data; 92 + size_t in_size = size; 93 + ssize_t out_size; 94 + 95 + /* Cannot perform a write until the read has cleared either via 96 + * tpm_read or a user_read_timer timeout. This also prevents split 97 + * buffered writes from blocking here. 98 + */ 99 + if (atomic_read(&priv->data_pending) != 0) 100 + return -EBUSY; 101 + 102 + if (in_size > TPM_BUFSIZE) 103 + return -E2BIG; 104 + 105 + mutex_lock(&priv->buffer_mutex); 106 + 107 + if (copy_from_user 108 + (priv->data_buffer, (void __user *) buf, in_size)) { 109 + mutex_unlock(&priv->buffer_mutex); 110 + return -EFAULT; 111 + } 112 + 113 + /* atomic tpm command send and result receive. We only hold the ops 114 + * lock during this period so that the tpm can be unregistered even if 115 + * the char dev is held open. 
116 + */ 117 + if (tpm_try_get_ops(priv->chip)) { 118 + mutex_unlock(&priv->buffer_mutex); 119 + return -EPIPE; 120 + } 121 + out_size = tpm_transmit(priv->chip, space, priv->data_buffer, 122 + sizeof(priv->data_buffer), 0); 123 + 124 + tpm_put_ops(priv->chip); 125 + if (out_size < 0) { 126 + mutex_unlock(&priv->buffer_mutex); 127 + return out_size; 128 + } 129 + 130 + atomic_set(&priv->data_pending, out_size); 131 + mutex_unlock(&priv->buffer_mutex); 132 + 133 + /* Set a timeout by which the reader must come claim the result */ 134 + mod_timer(&priv->user_read_timer, jiffies + (120 * HZ)); 135 + 136 + return in_size; 137 + } 138 + 139 + /* 140 + * Called on file close 141 + */ 142 + void tpm_common_release(struct file *file, struct file_priv *priv) 143 + { 144 + del_singleshot_timer_sync(&priv->user_read_timer); 145 + flush_work(&priv->work); 146 + file->private_data = NULL; 147 + atomic_set(&priv->data_pending, 0); 148 + }
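tpm_common_write() above accepts a raw command buffer from userspace and passes it straight to tpm_transmit(); every such command begins with the fixed 10-byte header (16-bit tag, 32-bit total length, 32-bit ordinal), all big-endian on the wire. A sketch of serializing that header from userspace — the constants in the test (TPM2_ST_NO_SESSIONS = 0x8001, TPM2_CC_GET_RANDOM = 0x017B) come from the TPM 2.0 spec and this diff:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TPM_HEADER_SIZE 10

static void put_be16(uint8_t *p, uint16_t v)
{
	p[0] = v >> 8;
	p[1] = v;
}

static void put_be32(uint8_t *p, uint32_t v)
{
	p[0] = v >> 24;
	p[1] = v >> 16;
	p[2] = v >> 8;
	p[3] = v;
}

/* Fill the 10-byte TPM command header; body bytes (if any) follow it. */
static size_t tpm_build_header(uint8_t *buf, uint16_t tag, uint32_t total_len,
			       uint32_t ordinal)
{
	put_be16(buf, tag);
	put_be32(buf + 2, total_len);
	put_be32(buf + 6, ordinal);
	return TPM_HEADER_SIZE;
}
```

A buffer built this way (plus command-specific parameters) is what a write() to /dev/tpm0 or /dev/tpmrm0 carries into tpm_common_write().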
+14 -129
drivers/char/tpm/tpm-dev.c
··· 18 18 * 19 19 */ 20 20 #include <linux/slab.h> 21 - #include <linux/uaccess.h> 22 - #include "tpm.h" 23 - 24 - struct file_priv { 25 - struct tpm_chip *chip; 26 - 27 - /* Data passed to and from the tpm via the read/write calls */ 28 - atomic_t data_pending; 29 - struct mutex buffer_mutex; 30 - 31 - struct timer_list user_read_timer; /* user needs to claim result */ 32 - struct work_struct work; 33 - 34 - u8 data_buffer[TPM_BUFSIZE]; 35 - }; 36 - 37 - static void user_reader_timeout(unsigned long ptr) 38 - { 39 - struct file_priv *priv = (struct file_priv *)ptr; 40 - 41 - pr_warn("TPM user space timeout is deprecated (pid=%d)\n", 42 - task_tgid_nr(current)); 43 - 44 - schedule_work(&priv->work); 45 - } 46 - 47 - static void timeout_work(struct work_struct *work) 48 - { 49 - struct file_priv *priv = container_of(work, struct file_priv, work); 50 - 51 - mutex_lock(&priv->buffer_mutex); 52 - atomic_set(&priv->data_pending, 0); 53 - memset(priv->data_buffer, 0, sizeof(priv->data_buffer)); 54 - mutex_unlock(&priv->buffer_mutex); 55 - } 21 + #include "tpm-dev.h" 56 22 57 23 static int tpm_open(struct inode *inode, struct file *file) 58 24 { 59 - struct tpm_chip *chip = 60 - container_of(inode->i_cdev, struct tpm_chip, cdev); 25 + struct tpm_chip *chip; 61 26 struct file_priv *priv; 27 + 28 + chip = container_of(inode->i_cdev, struct tpm_chip, cdev); 62 29 63 30 /* It's assured that the chip will be opened just once, 64 31 * by the check of is_open variable, which is protected ··· 36 69 } 37 70 38 71 priv = kzalloc(sizeof(*priv), GFP_KERNEL); 39 - if (priv == NULL) { 40 - clear_bit(0, &chip->is_open); 41 - return -ENOMEM; 42 - } 72 + if (priv == NULL) 73 + goto out; 43 74 44 - priv->chip = chip; 45 - atomic_set(&priv->data_pending, 0); 46 - mutex_init(&priv->buffer_mutex); 47 - setup_timer(&priv->user_read_timer, user_reader_timeout, 48 - (unsigned long)priv); 49 - INIT_WORK(&priv->work, timeout_work); 75 + tpm_common_open(file, chip, priv); 50 76 51 - 
file->private_data = priv; 52 77 return 0; 53 - } 54 78 55 - static ssize_t tpm_read(struct file *file, char __user *buf, 56 - size_t size, loff_t *off) 57 - { 58 - struct file_priv *priv = file->private_data; 59 - ssize_t ret_size; 60 - int rc; 61 - 62 - del_singleshot_timer_sync(&priv->user_read_timer); 63 - flush_work(&priv->work); 64 - ret_size = atomic_read(&priv->data_pending); 65 - if (ret_size > 0) { /* relay data */ 66 - ssize_t orig_ret_size = ret_size; 67 - if (size < ret_size) 68 - ret_size = size; 69 - 70 - mutex_lock(&priv->buffer_mutex); 71 - rc = copy_to_user(buf, priv->data_buffer, ret_size); 72 - memset(priv->data_buffer, 0, orig_ret_size); 73 - if (rc) 74 - ret_size = -EFAULT; 75 - 76 - mutex_unlock(&priv->buffer_mutex); 77 - } 78 - 79 - atomic_set(&priv->data_pending, 0); 80 - 81 - return ret_size; 79 + out: 80 + clear_bit(0, &chip->is_open); 81 + return -ENOMEM; 82 82 } 83 83 84 84 static ssize_t tpm_write(struct file *file, const char __user *buf, 85 85 size_t size, loff_t *off) 86 86 { 87 - struct file_priv *priv = file->private_data; 88 - size_t in_size = size; 89 - ssize_t out_size; 90 - 91 - /* cannot perform a write until the read has cleared 92 - either via tpm_read or a user_read_timer timeout. 93 - This also prevents splitted buffered writes from blocking here. 94 - */ 95 - if (atomic_read(&priv->data_pending) != 0) 96 - return -EBUSY; 97 - 98 - if (in_size > TPM_BUFSIZE) 99 - return -E2BIG; 100 - 101 - mutex_lock(&priv->buffer_mutex); 102 - 103 - if (copy_from_user 104 - (priv->data_buffer, (void __user *) buf, in_size)) { 105 - mutex_unlock(&priv->buffer_mutex); 106 - return -EFAULT; 107 - } 108 - 109 - /* atomic tpm command send and result receive. We only hold the ops 110 - * lock during this period so that the tpm can be unregistered even if 111 - * the char dev is held open. 
112 - */ 113 - if (tpm_try_get_ops(priv->chip)) { 114 - mutex_unlock(&priv->buffer_mutex); 115 - return -EPIPE; 116 - } 117 - out_size = tpm_transmit(priv->chip, priv->data_buffer, 118 - sizeof(priv->data_buffer), 0); 119 - 120 - tpm_put_ops(priv->chip); 121 - if (out_size < 0) { 122 - mutex_unlock(&priv->buffer_mutex); 123 - return out_size; 124 - } 125 - 126 - atomic_set(&priv->data_pending, out_size); 127 - mutex_unlock(&priv->buffer_mutex); 128 - 129 - /* Set a timeout by which the reader must come claim the result */ 130 - mod_timer(&priv->user_read_timer, jiffies + (120 * HZ)); 131 - 132 - return in_size; 87 + return tpm_common_write(file, buf, size, off, NULL); 133 88 } 134 89 135 90 /* ··· 61 172 { 62 173 struct file_priv *priv = file->private_data; 63 174 64 - del_singleshot_timer_sync(&priv->user_read_timer); 65 - flush_work(&priv->work); 66 - file->private_data = NULL; 67 - atomic_set(&priv->data_pending, 0); 175 + tpm_common_release(file, priv); 68 176 clear_bit(0, &priv->chip->is_open); 69 177 kfree(priv); 178 + 70 179 return 0; 71 180 } 72 181 ··· 72 185 .owner = THIS_MODULE, 73 186 .llseek = no_llseek, 74 187 .open = tpm_open, 75 - .read = tpm_read, 188 + .read = tpm_common_read, 76 189 .write = tpm_write, 77 190 .release = tpm_release, 78 191 }; 79 - 80 -
+27
drivers/char/tpm/tpm-dev.h
··· 1 + #ifndef _TPM_DEV_H 2 + #define _TPM_DEV_H 3 + 4 + #include "tpm.h" 5 + 6 + struct file_priv { 7 + struct tpm_chip *chip; 8 + 9 + /* Data passed to and from the tpm via the read/write calls */ 10 + atomic_t data_pending; 11 + struct mutex buffer_mutex; 12 + 13 + struct timer_list user_read_timer; /* user needs to claim result */ 14 + struct work_struct work; 15 + 16 + u8 data_buffer[TPM_BUFSIZE]; 17 + }; 18 + 19 + void tpm_common_open(struct file *file, struct tpm_chip *chip, 20 + struct file_priv *priv); 21 + ssize_t tpm_common_read(struct file *file, char __user *buf, 22 + size_t size, loff_t *off); 23 + ssize_t tpm_common_write(struct file *file, const char __user *buf, 24 + size_t size, loff_t *off, struct tpm_space *space); 25 + void tpm_common_release(struct file *file, struct file_priv *priv); 26 + 27 + #endif
+117 -35
drivers/char/tpm/tpm-interface.c
··· 328 328 } 329 329 EXPORT_SYMBOL_GPL(tpm_calc_ordinal_duration); 330 330 331 + static bool tpm_validate_command(struct tpm_chip *chip, 332 + struct tpm_space *space, 333 + const u8 *cmd, 334 + size_t len) 335 + { 336 + const struct tpm_input_header *header = (const void *)cmd; 337 + int i; 338 + u32 cc; 339 + u32 attrs; 340 + unsigned int nr_handles; 341 + 342 + if (len < TPM_HEADER_SIZE) 343 + return false; 344 + 345 + if (!space) 346 + return true; 347 + 348 + if (chip->flags & TPM_CHIP_FLAG_TPM2 && chip->nr_commands) { 349 + cc = be32_to_cpu(header->ordinal); 350 + 351 + i = tpm2_find_cc(chip, cc); 352 + if (i < 0) { 353 + dev_dbg(&chip->dev, "0x%04X is an invalid command\n", 354 + cc); 355 + return false; 356 + } 357 + 358 + attrs = chip->cc_attrs_tbl[i]; 359 + nr_handles = 360 + 4 * ((attrs >> TPM2_CC_ATTR_CHANDLES) & GENMASK(2, 0)); 361 + if (len < TPM_HEADER_SIZE + 4 * nr_handles) 362 + goto err_len; 363 + } 364 + 365 + return true; 366 + err_len: 367 + dev_dbg(&chip->dev, 368 + "%s: insufficient command length %zu", __func__, len); 369 + return false; 370 + } 371 + 331 372 /** 332 373 * tmp_transmit - Internal kernel interface to transmit TPM commands. 333 374 * ··· 381 340 * 0 when the operation is successful. 382 341 * A negative number for system errors (errno). 
383 342 */ 384 - ssize_t tpm_transmit(struct tpm_chip *chip, const u8 *buf, size_t bufsiz, 385 - unsigned int flags) 343 + ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space, 344 + u8 *buf, size_t bufsiz, unsigned int flags) 386 345 { 387 - ssize_t rc; 346 + struct tpm_output_header *header = (void *)buf; 347 + int rc; 348 + ssize_t len = 0; 388 349 u32 count, ordinal; 389 350 unsigned long stop; 351 + bool need_locality; 390 352 391 - if (bufsiz < TPM_HEADER_SIZE) 353 + if (!tpm_validate_command(chip, space, buf, bufsiz)) 392 354 return -EINVAL; 393 355 394 356 if (bufsiz > TPM_BUFSIZE) ··· 413 369 if (chip->dev.parent) 414 370 pm_runtime_get_sync(chip->dev.parent); 415 371 372 + /* Store the decision as chip->locality will be changed. */ 373 + need_locality = chip->locality == -1; 374 + 375 + if (need_locality && chip->ops->request_locality) { 376 + rc = chip->ops->request_locality(chip, 0); 377 + if (rc < 0) 378 + goto out_no_locality; 379 + chip->locality = rc; 380 + } 381 + 382 + rc = tpm2_prepare_space(chip, space, ordinal, buf); 383 + if (rc) 384 + goto out; 385 + 416 386 rc = chip->ops->send(chip, (u8 *) buf, count); 417 387 if (rc < 0) { 418 388 dev_err(&chip->dev, 419 - "tpm_transmit: tpm_send: error %zd\n", rc); 389 + "tpm_transmit: tpm_send: error %d\n", rc); 420 390 goto out; 421 391 } 422 392 ··· 463 405 goto out; 464 406 465 407 out_recv: 466 - rc = chip->ops->recv(chip, (u8 *) buf, bufsiz); 467 - if (rc < 0) 408 + len = chip->ops->recv(chip, (u8 *) buf, bufsiz); 409 + if (len < 0) { 410 + rc = len; 468 411 dev_err(&chip->dev, 469 - "tpm_transmit: tpm_recv: error %zd\n", rc); 412 + "tpm_transmit: tpm_recv: error %d\n", rc); 413 + goto out; 414 + } else if (len < TPM_HEADER_SIZE) { 415 + rc = -EFAULT; 416 + goto out; 417 + } 418 + 419 + if (len != be32_to_cpu(header->length)) { 420 + rc = -EFAULT; 421 + goto out; 422 + } 423 + 424 + rc = tpm2_commit_space(chip, space, ordinal, buf, &len); 425 + 470 426 out: 427 + if (need_locality && 
chip->ops->relinquish_locality) { 428 + chip->ops->relinquish_locality(chip, chip->locality); 429 + chip->locality = -1; 430 + } 431 + out_no_locality: 471 432 if (chip->dev.parent) 472 433 pm_runtime_put_sync(chip->dev.parent); 473 434 474 435 if (!(flags & TPM_TRANSMIT_UNLOCKED)) 475 436 mutex_unlock(&chip->tpm_mutex); 476 - return rc; 437 + return rc ? rc : len; 477 438 } 478 439 479 440 /** ··· 511 434 * A negative number for system errors (errno). 512 435 * A positive number for a TPM error. 513 436 */ 514 - ssize_t tpm_transmit_cmd(struct tpm_chip *chip, const void *buf, 515 - size_t bufsiz, size_t min_rsp_body_length, 516 - unsigned int flags, const char *desc) 437 + ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_space *space, 438 + const void *buf, size_t bufsiz, 439 + size_t min_rsp_body_length, unsigned int flags, 440 + const char *desc) 517 441 { 518 - const struct tpm_output_header *header; 442 + const struct tpm_output_header *header = buf; 519 443 int err; 520 444 ssize_t len; 521 445 522 - len = tpm_transmit(chip, (const u8 *)buf, bufsiz, flags); 446 + len = tpm_transmit(chip, space, (u8 *)buf, bufsiz, flags); 523 447 if (len < 0) 524 448 return len; 525 - else if (len < TPM_HEADER_SIZE) 526 - return -EFAULT; 527 - 528 - header = buf; 529 - if (len != be32_to_cpu(header->length)) 530 - return -EFAULT; 531 449 532 450 err = be32_to_cpu(header->return_code); 533 451 if (err != 0 && desc) ··· 573 501 tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4); 574 502 tpm_cmd.params.getcap_in.subcap = cpu_to_be32(subcap_id); 575 503 } 576 - rc = tpm_transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE, 504 + rc = tpm_transmit_cmd(chip, NULL, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE, 577 505 min_cap_length, 0, desc); 578 506 if (!rc) 579 507 *cap = tpm_cmd.params.getcap_out.cap; ··· 597 525 start_cmd.header.in = tpm_startup_header; 598 526 599 527 start_cmd.params.startup_in.startup_type = startup_type; 600 - return tpm_transmit_cmd(chip, &start_cmd, 
TPM_INTERNAL_RESULT_SIZE, 0, 528 + return tpm_transmit_cmd(chip, NULL, &start_cmd, 529 + TPM_INTERNAL_RESULT_SIZE, 0, 601 530 0, "attempting to start the TPM"); 602 531 } 603 532 ··· 755 682 struct tpm_cmd_t cmd; 756 683 757 684 cmd.header.in = continue_selftest_header; 758 - rc = tpm_transmit_cmd(chip, &cmd, CONTINUE_SELFTEST_RESULT_SIZE, 0, 0, 759 - "continue selftest"); 685 + rc = tpm_transmit_cmd(chip, NULL, &cmd, CONTINUE_SELFTEST_RESULT_SIZE, 686 + 0, 0, "continue selftest"); 760 687 return rc; 761 688 } 762 689 ··· 776 703 777 704 cmd.header.in = pcrread_header; 778 705 cmd.params.pcrread_in.pcr_idx = cpu_to_be32(pcr_idx); 779 - rc = tpm_transmit_cmd(chip, &cmd, READ_PCR_RESULT_SIZE, 706 + rc = tpm_transmit_cmd(chip, NULL, &cmd, READ_PCR_RESULT_SIZE, 780 707 READ_PCR_RESULT_BODY_SIZE, 0, 781 708 "attempting to read a pcr value"); 782 709 ··· 888 815 cmd.header.in = pcrextend_header; 889 816 cmd.params.pcrextend_in.pcr_idx = cpu_to_be32(pcr_idx); 890 817 memcpy(cmd.params.pcrextend_in.hash, hash, TPM_DIGEST_SIZE); 891 - rc = tpm_transmit_cmd(chip, &cmd, EXTEND_PCR_RESULT_SIZE, 818 + rc = tpm_transmit_cmd(chip, NULL, &cmd, EXTEND_PCR_RESULT_SIZE, 892 819 EXTEND_PCR_RESULT_BODY_SIZE, 0, 893 820 "attempting extend a PCR value"); 894 821 ··· 993 920 if (chip == NULL) 994 921 return -ENODEV; 995 922 996 - rc = tpm_transmit_cmd(chip, cmd, buflen, 0, 0, "attempting tpm_cmd"); 997 - 923 + rc = tpm_transmit_cmd(chip, NULL, cmd, buflen, 0, 0, 924 + "attempting tpm_cmd"); 998 925 tpm_put_ops(chip); 999 926 return rc; 1000 927 } ··· 1095 1022 cmd.params.pcrextend_in.pcr_idx = cpu_to_be32(tpm_suspend_pcr); 1096 1023 memcpy(cmd.params.pcrextend_in.hash, dummy_hash, 1097 1024 TPM_DIGEST_SIZE); 1098 - rc = tpm_transmit_cmd(chip, &cmd, EXTEND_PCR_RESULT_SIZE, 1099 - EXTEND_PCR_RESULT_BODY_SIZE, 0, 1025 + rc = tpm_transmit_cmd(chip, NULL, &cmd, EXTEND_PCR_RESULT_SIZE, 1026 + EXTEND_PCR_RESULT_BODY_SIZE, 0, 1100 1027 "extending dummy pcr before suspend"); 1101 1028 } 1102 1029 
1103 1030 /* now do the actual savestate */ 1104 1031 for (try = 0; try < TPM_RETRY; try++) { 1105 1032 cmd.header.in = savestate_header; 1106 - rc = tpm_transmit_cmd(chip, &cmd, SAVESTATE_RESULT_SIZE, 0, 1107 - 0, NULL); 1033 + rc = tpm_transmit_cmd(chip, NULL, &cmd, SAVESTATE_RESULT_SIZE, 1034 + 0, 0, NULL); 1108 1035 1109 1036 /* 1110 1037 * If the TPM indicates that it is too busy to respond to ··· 1187 1114 tpm_cmd.header.in = tpm_getrandom_header; 1188 1115 tpm_cmd.params.getrandom_in.num_bytes = cpu_to_be32(num_bytes); 1189 1116 1190 - err = tpm_transmit_cmd(chip, &tpm_cmd, 1117 + err = tpm_transmit_cmd(chip, NULL, &tpm_cmd, 1191 1118 TPM_GETRANDOM_RESULT_SIZE + num_bytes, 1192 1119 offsetof(struct tpm_getrandom_out, 1193 1120 rng_data), ··· 1278 1205 return PTR_ERR(tpm_class); 1279 1206 } 1280 1207 1281 - rc = alloc_chrdev_region(&tpm_devt, 0, TPM_NUM_DEVICES, "tpm"); 1208 + tpmrm_class = class_create(THIS_MODULE, "tpmrm"); 1209 + if (IS_ERR(tpmrm_class)) { 1210 + pr_err("couldn't create tpmrm class\n"); 1211 + class_destroy(tpm_class); 1212 + return PTR_ERR(tpmrm_class); 1213 + } 1214 + 1215 + rc = alloc_chrdev_region(&tpm_devt, 0, 2*TPM_NUM_DEVICES, "tpm"); 1282 1216 if (rc < 0) { 1283 1217 pr_err("tpm: failed to allocate char dev region\n"); 1218 + class_destroy(tpmrm_class); 1284 1219 class_destroy(tpm_class); 1285 1220 return rc; 1286 1221 } ··· 1300 1219 { 1301 1220 idr_destroy(&dev_nums_idr); 1302 1221 class_destroy(tpm_class); 1303 - unregister_chrdev_region(tpm_devt, TPM_NUM_DEVICES); 1222 + class_destroy(tpmrm_class); 1223 + unregister_chrdev_region(tpm_devt, 2*TPM_NUM_DEVICES); 1304 1224 } 1305 1225 1306 1226 subsys_initcall(tpm_init);
+1 -1
drivers/char/tpm/tpm-sysfs.c
··· 40 40 struct tpm_chip *chip = to_tpm_chip(dev); 41 41 42 42 tpm_cmd.header.in = tpm_readpubek_header; 43 - err = tpm_transmit_cmd(chip, &tpm_cmd, READ_PUBEK_RESULT_SIZE, 43 + err = tpm_transmit_cmd(chip, NULL, &tpm_cmd, READ_PUBEK_RESULT_SIZE, 44 44 READ_PUBEK_RESULT_MIN_BODY_SIZE, 0, 45 45 "attempting to read the PUBEK"); 46 46 if (err)
+48 -4
drivers/char/tpm/tpm.h
··· 89 89 }; 90 90 91 91 enum tpm2_return_codes { 92 + TPM2_RC_SUCCESS = 0x0000, 92 93 TPM2_RC_HASH = 0x0083, /* RC_FMT1 */ 94 + TPM2_RC_HANDLE = 0x008B, 93 95 TPM2_RC_INITIALIZE = 0x0100, /* RC_VER1 */ 94 96 TPM2_RC_DISABLED = 0x0120, 95 97 TPM2_RC_TESTING = 0x090A, /* RC_WARN */ 98 + TPM2_RC_REFERENCE_H0 = 0x0910, 96 99 }; 97 100 98 101 enum tpm2_algorithms { ··· 117 114 TPM2_CC_CREATE = 0x0153, 118 115 TPM2_CC_LOAD = 0x0157, 119 116 TPM2_CC_UNSEAL = 0x015E, 117 + TPM2_CC_CONTEXT_LOAD = 0x0161, 118 + TPM2_CC_CONTEXT_SAVE = 0x0162, 120 119 TPM2_CC_FLUSH_CONTEXT = 0x0165, 121 120 TPM2_CC_GET_CAPABILITY = 0x017A, 122 121 TPM2_CC_GET_RANDOM = 0x017B, ··· 132 127 }; 133 128 134 129 enum tpm2_capabilities { 130 + TPM2_CAP_HANDLES = 1, 131 + TPM2_CAP_COMMANDS = 2, 135 132 TPM2_CAP_PCRS = 5, 136 133 TPM2_CAP_TPM_PROPERTIES = 6, 134 + }; 135 + 136 + enum tpm2_properties { 137 + TPM_PT_TOTAL_COMMANDS = 0x0129, 137 138 }; 138 139 139 140 enum tpm2_startup_types { ··· 147 136 TPM2_SU_STATE = 0x0001, 148 137 }; 149 138 139 + enum tpm2_cc_attrs { 140 + TPM2_CC_ATTR_CHANDLES = 25, 141 + TPM2_CC_ATTR_RHANDLE = 28, 142 + }; 143 + 150 144 #define TPM_VID_INTEL 0x8086 151 145 #define TPM_VID_WINBOND 0x1050 152 146 #define TPM_VID_STM 0x104A 153 147 154 148 #define TPM_PPI_VERSION_LEN 3 149 + 150 + struct tpm_space { 151 + u32 context_tbl[3]; 152 + u8 *context_buf; 153 + u32 session_tbl[3]; 154 + u8 *session_buf; 155 + }; 155 156 156 157 enum tpm_chip_flags { 157 158 TPM_CHIP_FLAG_TPM2 = BIT(1), ··· 184 161 185 162 struct tpm_chip { 186 163 struct device dev; 164 + struct device devs; 187 165 struct cdev cdev; 166 + struct cdev cdevs; 188 167 189 168 /* A driver callback under ops cannot be run unless ops_sem is held 190 169 * (sometimes implicitly, eg for the sysfs code). 
ops becomes null ··· 224 199 acpi_handle acpi_dev_handle; 225 200 char ppi_version[TPM_PPI_VERSION_LEN + 1]; 226 201 #endif /* CONFIG_ACPI */ 202 + 203 + struct tpm_space work_space; 204 + u32 nr_commands; 205 + u32 *cc_attrs_tbl; 206 + 207 + /* active locality */ 208 + int locality; 227 209 }; 228 210 229 211 #define to_tpm_chip(d) container_of(d, struct tpm_chip, dev) ··· 517 485 } 518 486 519 487 extern struct class *tpm_class; 488 + extern struct class *tpmrm_class; 520 489 extern dev_t tpm_devt; 521 490 extern const struct file_operations tpm_fops; 491 + extern const struct file_operations tpmrm_fops; 522 492 extern struct idr dev_nums_idr; 523 493 524 494 enum tpm_transmit_flags { 525 495 TPM_TRANSMIT_UNLOCKED = BIT(0), 526 496 }; 527 497 528 - ssize_t tpm_transmit(struct tpm_chip *chip, const u8 *buf, size_t bufsiz, 529 - unsigned int flags); 530 - ssize_t tpm_transmit_cmd(struct tpm_chip *chip, const void *buf, size_t bufsiz, 531 - size_t min_rsp_body_len, unsigned int flags, 498 + ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space, 499 + u8 *buf, size_t bufsiz, unsigned int flags); 500 + ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_space *space, 501 + const void *buf, size_t bufsiz, 502 + size_t min_rsp_body_length, unsigned int flags, 532 503 const char *desc); 533 504 ssize_t tpm_getcap(struct tpm_chip *chip, u32 subcap_id, cap_t *cap, 534 505 const char *desc, size_t min_cap_length); ··· 576 541 int tpm2_pcr_extend(struct tpm_chip *chip, int pcr_idx, u32 count, 577 542 struct tpm2_digest *digests); 578 543 int tpm2_get_random(struct tpm_chip *chip, u8 *out, size_t max); 544 + void tpm2_flush_context_cmd(struct tpm_chip *chip, u32 handle, 545 + unsigned int flags); 579 546 int tpm2_seal_trusted(struct tpm_chip *chip, 580 547 struct trusted_key_payload *payload, 581 548 struct trusted_key_options *options); ··· 591 554 void tpm2_shutdown(struct tpm_chip *chip, u16 shutdown_type); 592 555 unsigned long 
tpm2_calc_ordinal_duration(struct tpm_chip *chip, u32 ordinal); 593 556 int tpm2_probe(struct tpm_chip *chip); 557 + int tpm2_find_cc(struct tpm_chip *chip, u32 cc); 558 + int tpm2_init_space(struct tpm_space *space); 559 + void tpm2_del_space(struct tpm_chip *chip, struct tpm_space *space); 560 + int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u32 cc, 561 + u8 *cmd); 562 + int tpm2_commit_space(struct tpm_chip *chip, struct tpm_space *space, 563 + u32 cc, u8 *buf, size_t *bufsiz); 594 564 #endif
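The reworked tpm_transmit()/tpm_transmit_cmd() prototypes above still operate on raw big-endian TPM2 frames. A rough userspace sketch of the 10-byte response header those paths validate (struct and helper names here are illustrative, not kernel API):

```c
#include <stdint.h>
#include <stddef.h>

/* Userspace model of the TPM2 response header: a big-endian tag
 * (u16), total frame length (u32) and return code (u32). */
struct tpm2_rsp_header {
	uint16_t tag;
	uint32_t length;
	uint32_t return_code;
};

static uint32_t get_be32(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8) | p[3];
}

static uint16_t get_be16(const uint8_t *p)
{
	return (uint16_t)((p[0] << 8) | p[1]);
}

/* Returns 0 on success, -1 when the buffer is shorter than a header
 * or the declared length disagrees with the received byte count. */
static int tpm2_parse_rsp_header(const uint8_t *buf, size_t len,
				 struct tpm2_rsp_header *out)
{
	if (len < 10)
		return -1;
	out->tag = get_be16(buf);
	out->length = get_be32(buf + 2);
	out->return_code = get_be32(buf + 6);
	if (out->length < 10 || out->length > len)
		return -1;
	return 0;
}
```

The length/return-code fields decoded here are the same ones tpm2_commit_space() reads back via be32_to_cpu(header->length) and header->return_code in the space code below.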
+123 -50
drivers/char/tpm/tpm2-cmd.c
··· 266 266 sizeof(cmd.params.pcrread_in.pcr_select)); 267 267 cmd.params.pcrread_in.pcr_select[pcr_idx >> 3] = 1 << (pcr_idx & 0x7); 268 268 269 - rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 269 + rc = tpm_transmit_cmd(chip, NULL, &cmd, sizeof(cmd), 270 270 TPM2_PCR_READ_RESP_BODY_SIZE, 271 271 0, "attempting to read a pcr value"); 272 272 if (rc == 0) { ··· 333 333 } 334 334 } 335 335 336 - rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, 0, 0, 336 + rc = tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE, 0, 0, 337 337 "attempting extend a PCR value"); 338 338 339 339 tpm_buf_destroy(&buf); ··· 382 382 cmd.header.in = tpm2_getrandom_header; 383 383 cmd.params.getrandom_in.size = cpu_to_be16(num_bytes); 384 384 385 - err = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 385 + err = tpm_transmit_cmd(chip, NULL, &cmd, sizeof(cmd), 386 386 offsetof(struct tpm2_get_random_out, 387 387 buffer), 388 388 0, "attempting get random"); ··· 417 417 .length = cpu_to_be32(TPM2_GET_TPM_PT_IN_SIZE), 418 418 .ordinal = cpu_to_be32(TPM2_CC_GET_CAPABILITY) 419 419 }; 420 + 421 + /** 422 + * tpm2_flush_context_cmd() - execute a TPM2_FlushContext command 423 + * @chip: TPM chip to use 424 + * @handle: context handle of the object or session to flush 425 + * @flags: tpm transmit flags 426 + * 427 + * Return: nothing 428 + */ 429 + void tpm2_flush_context_cmd(struct tpm_chip *chip, u32 handle, 430 + unsigned int flags) 431 + { 432 + struct tpm_buf buf; 433 + int rc; 434 + 435 + rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_FLUSH_CONTEXT); 436 + if (rc) { 437 + dev_warn(&chip->dev, "0x%08x was not flushed, out of memory\n", 438 + handle); 439 + return; 440 + } 441 + 442 + tpm_buf_append_u32(&buf, handle); 443 + 444 + (void) tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE, 0, flags, 445 + "flushing context"); 446 + 447 + tpm_buf_destroy(&buf); 448 + } 420 449 421 450 /** 422 451 * tpm_buf_append_auth() - append TPMS_AUTH_COMMAND to the buffer. 
··· 557 528 goto out; 558 529 } 559 530 560 - rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, 4, 0, 531 + rc = tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE, 4, 0, 561 532 "sealing data"); 562 533 if (rc) 563 534 goto out; ··· 641 612 goto out; 642 613 } 643 614 644 - rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, 4, flags, 615 + rc = tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE, 4, flags, 645 616 "loading blob"); 646 617 if (!rc) 647 618 *blob_handle = be32_to_cpup( ··· 654 625 rc = -EPERM; 655 626 656 627 return rc; 657 - } 658 - 659 - /** 660 - * tpm2_flush_context_cmd() - execute a TPM2_FlushContext command 661 - * 662 - * @chip: TPM chip to use 663 - * @handle: the key data in clear and encrypted form 664 - * @flags: tpm transmit flags 665 - * 666 - * Return: Same as with tpm_transmit_cmd. 667 - */ 668 - static void tpm2_flush_context_cmd(struct tpm_chip *chip, u32 handle, 669 - unsigned int flags) 670 - { 671 - struct tpm_buf buf; 672 - int rc; 673 - 674 - rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_FLUSH_CONTEXT); 675 - if (rc) { 676 - dev_warn(&chip->dev, "0x%08x was not flushed, out of memory\n", 677 - handle); 678 - return; 679 - } 680 - 681 - tpm_buf_append_u32(&buf, handle); 682 - 683 - rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, 0, flags, 684 - "flushing context"); 685 - if (rc) 686 - dev_warn(&chip->dev, "0x%08x was not flushed, rc=%d\n", handle, 687 - rc); 688 - 689 - tpm_buf_destroy(&buf); 690 628 } 691 629 692 630 /** ··· 693 697 options->blobauth /* hmac */, 694 698 TPM_DIGEST_SIZE); 695 699 696 - rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, 6, flags, 700 + rc = tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE, 6, flags, 697 701 "unsealing"); 698 702 if (rc > 0) 699 703 rc = -EPERM; ··· 770 774 cmd.params.get_tpm_pt_in.property_id = cpu_to_be32(property_id); 771 775 cmd.params.get_tpm_pt_in.property_cnt = cpu_to_be32(1); 772 776 773 - rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 777 + rc = tpm_transmit_cmd(chip, NULL, 
&cmd, sizeof(cmd), 774 778 TPM2_GET_TPM_PT_OUT_BODY_SIZE, 0, desc); 775 779 if (!rc) 776 780 *value = be32_to_cpu(cmd.params.get_tpm_pt_out.value); ··· 805 809 cmd.header.in = tpm2_startup_header; 806 810 807 811 cmd.params.startup_in.startup_type = cpu_to_be16(startup_type); 808 - return tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 0, 0, 812 + return tpm_transmit_cmd(chip, NULL, &cmd, sizeof(cmd), 0, 0, 809 813 "attempting to start the TPM"); 810 814 } 811 815 ··· 834 838 cmd.header.in = tpm2_shutdown_header; 835 839 cmd.params.startup_in.startup_type = cpu_to_be16(shutdown_type); 836 840 837 - rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 0, 0, 841 + rc = tpm_transmit_cmd(chip, NULL, &cmd, sizeof(cmd), 0, 0, 838 842 "stopping the TPM"); 839 843 840 844 /* In places where shutdown command is sent there's no much we can do ··· 898 902 cmd.header.in = tpm2_selftest_header; 899 903 cmd.params.selftest_in.full_test = full; 900 904 901 - rc = tpm_transmit_cmd(chip, &cmd, TPM2_SELF_TEST_IN_SIZE, 0, 0, 905 + rc = tpm_transmit_cmd(chip, NULL, &cmd, TPM2_SELF_TEST_IN_SIZE, 0, 0, 902 906 "continue selftest"); 903 907 904 908 /* At least some prototype chips seem to give RC_TESTING error ··· 949 953 cmd.params.pcrread_in.pcr_select[1] = 0x00; 950 954 cmd.params.pcrread_in.pcr_select[2] = 0x00; 951 955 952 - rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 0, 0, NULL); 956 + rc = tpm_transmit_cmd(chip, NULL, &cmd, sizeof(cmd), 0, 0, 957 + NULL); 953 958 if (rc < 0) 954 959 break; 955 960 ··· 983 986 cmd.params.get_tpm_pt_in.property_id = cpu_to_be32(0x100); 984 987 cmd.params.get_tpm_pt_in.property_cnt = cpu_to_be32(1); 985 988 986 - rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 0, 0, NULL); 989 + rc = tpm_transmit_cmd(chip, NULL, &cmd, sizeof(cmd), 0, 0, NULL); 987 990 if (rc < 0) 988 991 return rc; 989 992 ··· 1021 1024 tpm_buf_append_u32(&buf, 0); 1022 1025 tpm_buf_append_u32(&buf, 1); 1023 1026 1024 - rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, 9, 0, 1027 + rc = 
tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE, 9, 0, 1025 1028 "get tpm pcr allocation"); 1026 1029 if (rc) 1027 1030 goto out; ··· 1064 1067 return rc; 1065 1068 } 1066 1069 1070 + static int tpm2_get_cc_attrs_tbl(struct tpm_chip *chip) 1071 + { 1072 + struct tpm_buf buf; 1073 + u32 nr_commands; 1074 + u32 *attrs; 1075 + u32 cc; 1076 + int i; 1077 + int rc; 1078 + 1079 + rc = tpm2_get_tpm_pt(chip, TPM_PT_TOTAL_COMMANDS, &nr_commands, NULL); 1080 + if (rc) 1081 + goto out; 1082 + 1083 + if (nr_commands > 0xFFFFF) { 1084 + rc = -EFAULT; 1085 + goto out; 1086 + } 1087 + 1088 + chip->cc_attrs_tbl = devm_kzalloc(&chip->dev, 4 * nr_commands, 1089 + GFP_KERNEL); 1090 + 1091 + rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_GET_CAPABILITY); 1092 + if (rc) 1093 + goto out; 1094 + 1095 + tpm_buf_append_u32(&buf, TPM2_CAP_COMMANDS); 1096 + tpm_buf_append_u32(&buf, TPM2_CC_FIRST); 1097 + tpm_buf_append_u32(&buf, nr_commands); 1098 + 1099 + rc = tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE, 1100 + 9 + 4 * nr_commands, 0, NULL); 1101 + if (rc) { 1102 + tpm_buf_destroy(&buf); 1103 + goto out; 1104 + } 1105 + 1106 + if (nr_commands != 1107 + be32_to_cpup((__be32 *)&buf.data[TPM_HEADER_SIZE + 5])) { 1108 + tpm_buf_destroy(&buf); 1109 + goto out; 1110 + } 1111 + 1112 + chip->nr_commands = nr_commands; 1113 + 1114 + attrs = (u32 *)&buf.data[TPM_HEADER_SIZE + 9]; 1115 + for (i = 0; i < nr_commands; i++, attrs++) { 1116 + chip->cc_attrs_tbl[i] = be32_to_cpup(attrs); 1117 + cc = chip->cc_attrs_tbl[i] & 0xFFFF; 1118 + 1119 + if (cc == TPM2_CC_CONTEXT_SAVE || cc == TPM2_CC_FLUSH_CONTEXT) { 1120 + chip->cc_attrs_tbl[i] &= 1121 + ~(GENMASK(2, 0) << TPM2_CC_ATTR_CHANDLES); 1122 + chip->cc_attrs_tbl[i] |= 1 << TPM2_CC_ATTR_CHANDLES; 1123 + } 1124 + } 1125 + 1126 + tpm_buf_destroy(&buf); 1127 + 1128 + out: 1129 + if (rc > 0) 1130 + rc = -ENODEV; 1131 + return rc; 1132 + } 1133 + 1067 1134 /** 1068 1135 * tpm2_auto_startup - Perform the standard automatic TPM initialization 1069 
1136 * sequence 1070 1137 * @chip: TPM chip to use 1071 1138 * 1072 - * Initializes timeout values for operation and command durations, conducts 1073 - * a self-test and reads the list of active PCR banks. 1074 - * 1075 - * Return: 0 on success. Otherwise, a system error code is returned. 1139 + * Returns 0 on success, < 0 in case of fatal error. 1076 1140 */ 1077 1141 int tpm2_auto_startup(struct tpm_chip *chip) 1078 1142 { ··· 1162 1104 } 1163 1105 1164 1106 rc = tpm2_get_pcr_allocation(chip); 1107 + if (rc) 1108 + goto out; 1109 + 1110 + rc = tpm2_get_cc_attrs_tbl(chip); 1165 1111 1166 1112 out: 1167 1113 if (rc > 0) 1168 1114 rc = -ENODEV; 1169 1115 return rc; 1116 + } 1117 + 1118 + int tpm2_find_cc(struct tpm_chip *chip, u32 cc) 1119 + { 1120 + int i; 1121 + 1122 + for (i = 0; i < chip->nr_commands; i++) 1123 + if (cc == (chip->cc_attrs_tbl[i] & GENMASK(15, 0))) 1124 + return i; 1125 + 1126 + return -1; 1170 1127 }
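tpm2_get_cc_attrs_tbl() caches one 32-bit TPMA_CC word per command and tpm2_find_cc() scans that table linearly. A hedged userspace model of how those words are unpacked, with the command code in bits 15:0 and a 3-bit command-handle count at bit 25 (the constant mirrors the tpm.h hunk; the helper names are made up):

```c
#include <stdint.h>

#define TPM2_CC_ATTR_CHANDLES	25

/* Command code lives in bits 15:0 of the attribute word. */
static uint32_t tpm2_cc_from_attrs(uint32_t attrs)
{
	return attrs & 0xFFFF;
}

/* Number of handles in the command area: 3-bit field at bit 25. */
static unsigned int tpm2_chandles_from_attrs(uint32_t attrs)
{
	return (attrs >> TPM2_CC_ATTR_CHANDLES) & 0x7;
}

/* Linear scan matching tpm2_find_cc(): index of a command code in
 * the cached table, or -1 when the TPM does not implement it. */
static int find_cc(const uint32_t *tbl, unsigned int n, uint32_t cc)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		if (cc == tpm2_cc_from_attrs(tbl[i]))
			return (int)i;
	return -1;
}
```

This is also why the patch force-sets the cHandles field to 1 for TPM2_CC_CONTEXT_SAVE and TPM2_CC_FLUSH_CONTEXT: the mapping code trusts this field when deciding how many leading handles to translate.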
+528
drivers/char/tpm/tpm2-space.c
··· 1 + /* 2 + * Copyright (C) 2016 Intel Corporation 3 + * 4 + * Authors: 5 + * Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> 6 + * 7 + * Maintained by: <tpmdd-devel@lists.sourceforge.net> 8 + * 9 + * This file contains TPM2 protocol implementations of the commands 10 + * used by the kernel internally. 11 + * 12 + * This program is free software; you can redistribute it and/or 13 + * modify it under the terms of the GNU General Public License 14 + * as published by the Free Software Foundation; version 2 15 + * of the License. 16 + */ 17 + 18 + #include <linux/gfp.h> 19 + #include <asm/unaligned.h> 20 + #include "tpm.h" 21 + 22 + enum tpm2_handle_types { 23 + TPM2_HT_HMAC_SESSION = 0x02000000, 24 + TPM2_HT_POLICY_SESSION = 0x03000000, 25 + TPM2_HT_TRANSIENT = 0x80000000, 26 + }; 27 + 28 + struct tpm2_context { 29 + __be64 sequence; 30 + __be32 saved_handle; 31 + __be32 hierarchy; 32 + __be16 blob_size; 33 + } __packed; 34 + 35 + static void tpm2_flush_sessions(struct tpm_chip *chip, struct tpm_space *space) 36 + { 37 + int i; 38 + 39 + for (i = 0; i < ARRAY_SIZE(space->session_tbl); i++) { 40 + if (space->session_tbl[i]) 41 + tpm2_flush_context_cmd(chip, space->session_tbl[i], 42 + TPM_TRANSMIT_UNLOCKED); 43 + } 44 + } 45 + 46 + int tpm2_init_space(struct tpm_space *space) 47 + { 48 + space->context_buf = kzalloc(PAGE_SIZE, GFP_KERNEL); 49 + if (!space->context_buf) 50 + return -ENOMEM; 51 + 52 + space->session_buf = kzalloc(PAGE_SIZE, GFP_KERNEL); 53 + if (space->session_buf == NULL) { 54 + kfree(space->context_buf); 55 + return -ENOMEM; 56 + } 57 + 58 + return 0; 59 + } 60 + 61 + void tpm2_del_space(struct tpm_chip *chip, struct tpm_space *space) 62 + { 63 + mutex_lock(&chip->tpm_mutex); 64 + tpm2_flush_sessions(chip, space); 65 + mutex_unlock(&chip->tpm_mutex); 66 + kfree(space->context_buf); 67 + kfree(space->session_buf); 68 + } 69 + 70 + static int tpm2_load_context(struct tpm_chip *chip, u8 *buf, 71 + unsigned int *offset, u32 *handle) 72 + { 73 + 
struct tpm_buf tbuf; 74 + struct tpm2_context *ctx; 75 + unsigned int body_size; 76 + int rc; 77 + 78 + rc = tpm_buf_init(&tbuf, TPM2_ST_NO_SESSIONS, TPM2_CC_CONTEXT_LOAD); 79 + if (rc) 80 + return rc; 81 + 82 + ctx = (struct tpm2_context *)&buf[*offset]; 83 + body_size = sizeof(*ctx) + be16_to_cpu(ctx->blob_size); 84 + tpm_buf_append(&tbuf, &buf[*offset], body_size); 85 + 86 + rc = tpm_transmit_cmd(chip, NULL, tbuf.data, PAGE_SIZE, 4, 87 + TPM_TRANSMIT_UNLOCKED, NULL); 88 + if (rc < 0) { 89 + dev_warn(&chip->dev, "%s: failed with a system error %d\n", 90 + __func__, rc); 91 + tpm_buf_destroy(&tbuf); 92 + return -EFAULT; 93 + } else if (tpm2_rc_value(rc) == TPM2_RC_HANDLE || 94 + rc == TPM2_RC_REFERENCE_H0) { 95 + /* 96 + * TPM_RC_HANDLE means that the session context can't 97 + * be loaded because of an internal counter mismatch 98 + * that makes the TPM think there might have been a 99 + * replay. This might happen if the context was saved 100 + * and loaded outside the space. 101 + * 102 + * TPM_RC_REFERENCE_H0 means the session has been 103 + * flushed outside the space 104 + */ 105 + rc = -ENOENT; 106 + tpm_buf_destroy(&tbuf); 107 + } else if (rc > 0) { 108 + dev_warn(&chip->dev, "%s: failed with a TPM error 0x%04X\n", 109 + __func__, rc); 110 + tpm_buf_destroy(&tbuf); 111 + return -EFAULT; 112 + } 113 + 114 + *handle = be32_to_cpup((__be32 *)&tbuf.data[TPM_HEADER_SIZE]); 115 + *offset += body_size; 116 + 117 + tpm_buf_destroy(&tbuf); 118 + return 0; 119 + } 120 + 121 + static int tpm2_save_context(struct tpm_chip *chip, u32 handle, u8 *buf, 122 + unsigned int buf_size, unsigned int *offset) 123 + { 124 + struct tpm_buf tbuf; 125 + unsigned int body_size; 126 + int rc; 127 + 128 + rc = tpm_buf_init(&tbuf, TPM2_ST_NO_SESSIONS, TPM2_CC_CONTEXT_SAVE); 129 + if (rc) 130 + return rc; 131 + 132 + tpm_buf_append_u32(&tbuf, handle); 133 + 134 + rc = tpm_transmit_cmd(chip, NULL, tbuf.data, PAGE_SIZE, 0, 135 + TPM_TRANSMIT_UNLOCKED, NULL); 136 + if (rc < 0) { 137 + 
dev_warn(&chip->dev, "%s: failed with a system error %d\n", 138 + __func__, rc); 139 + tpm_buf_destroy(&tbuf); 140 + return -EFAULT; 141 + } else if (tpm2_rc_value(rc) == TPM2_RC_REFERENCE_H0) { 142 + tpm_buf_destroy(&tbuf); 143 + return -ENOENT; 144 + } else if (rc) { 145 + dev_warn(&chip->dev, "%s: failed with a TPM error 0x%04X\n", 146 + __func__, rc); 147 + tpm_buf_destroy(&tbuf); 148 + return -EFAULT; 149 + } 150 + 151 + body_size = tpm_buf_length(&tbuf) - TPM_HEADER_SIZE; 152 + if ((*offset + body_size) > buf_size) { 153 + dev_warn(&chip->dev, "%s: out of backing storage\n", __func__); 154 + tpm_buf_destroy(&tbuf); 155 + return -ENOMEM; 156 + } 157 + 158 + memcpy(&buf[*offset], &tbuf.data[TPM_HEADER_SIZE], body_size); 159 + *offset += body_size; 160 + tpm_buf_destroy(&tbuf); 161 + return 0; 162 + } 163 + 164 + static void tpm2_flush_space(struct tpm_chip *chip) 165 + { 166 + struct tpm_space *space = &chip->work_space; 167 + int i; 168 + 169 + for (i = 0; i < ARRAY_SIZE(space->context_tbl); i++) 170 + if (space->context_tbl[i] && ~space->context_tbl[i]) 171 + tpm2_flush_context_cmd(chip, space->context_tbl[i], 172 + TPM_TRANSMIT_UNLOCKED); 173 + 174 + tpm2_flush_sessions(chip, space); 175 + } 176 + 177 + static int tpm2_load_space(struct tpm_chip *chip) 178 + { 179 + struct tpm_space *space = &chip->work_space; 180 + unsigned int offset; 181 + int i; 182 + int rc; 183 + 184 + for (i = 0, offset = 0; i < ARRAY_SIZE(space->context_tbl); i++) { 185 + if (!space->context_tbl[i]) 186 + continue; 187 + 188 + /* sanity check, should never happen */ 189 + if (~space->context_tbl[i]) { 190 + dev_err(&chip->dev, "context table is inconsistent"); 191 + return -EFAULT; 192 + } 193 + 194 + rc = tpm2_load_context(chip, space->context_buf, &offset, 195 + &space->context_tbl[i]); 196 + if (rc) 197 + return rc; 198 + } 199 + 200 + for (i = 0, offset = 0; i < ARRAY_SIZE(space->session_tbl); i++) { 201 + u32 handle; 202 + 203 + if (!space->session_tbl[i]) 204 + continue; 205 + 
206 + rc = tpm2_load_context(chip, space->session_buf, 207 + &offset, &handle); 208 + if (rc == -ENOENT) { 209 + /* load failed, just forget session */ 210 + space->session_tbl[i] = 0; 211 + } else if (rc) { 212 + tpm2_flush_space(chip); 213 + return rc; 214 + } 215 + if (handle != space->session_tbl[i]) { 216 + dev_warn(&chip->dev, "session restored to wrong handle\n"); 217 + tpm2_flush_space(chip); 218 + return -EFAULT; 219 + } 220 + } 221 + 222 + return 0; 223 + } 224 + 225 + static bool tpm2_map_to_phandle(struct tpm_space *space, void *handle) 226 + { 227 + u32 vhandle = be32_to_cpup((__be32 *)handle); 228 + u32 phandle; 229 + int i; 230 + 231 + i = 0xFFFFFF - (vhandle & 0xFFFFFF); 232 + if (i >= ARRAY_SIZE(space->context_tbl) || !space->context_tbl[i]) 233 + return false; 234 + 235 + phandle = space->context_tbl[i]; 236 + *((__be32 *)handle) = cpu_to_be32(phandle); 237 + return true; 238 + } 239 + 240 + static int tpm2_map_command(struct tpm_chip *chip, u32 cc, u8 *cmd) 241 + { 242 + struct tpm_space *space = &chip->work_space; 243 + unsigned int nr_handles; 244 + u32 attrs; 245 + u32 *handle; 246 + int i; 247 + 248 + i = tpm2_find_cc(chip, cc); 249 + if (i < 0) 250 + return -EINVAL; 251 + 252 + attrs = chip->cc_attrs_tbl[i]; 253 + nr_handles = (attrs >> TPM2_CC_ATTR_CHANDLES) & GENMASK(2, 0); 254 + 255 + handle = (u32 *)&cmd[TPM_HEADER_SIZE]; 256 + for (i = 0; i < nr_handles; i++, handle++) { 257 + if ((be32_to_cpu(*handle) & 0xFF000000) == TPM2_HT_TRANSIENT) { 258 + if (!tpm2_map_to_phandle(space, handle)) 259 + return -EINVAL; 260 + } 261 + } 262 + 263 + return 0; 264 + } 265 + 266 + int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u32 cc, 267 + u8 *cmd) 268 + { 269 + int rc; 270 + 271 + if (!space) 272 + return 0; 273 + 274 + memcpy(&chip->work_space.context_tbl, &space->context_tbl, 275 + sizeof(space->context_tbl)); 276 + memcpy(&chip->work_space.session_tbl, &space->session_tbl, 277 + sizeof(space->session_tbl)); 278 + 
memcpy(chip->work_space.context_buf, space->context_buf, PAGE_SIZE); 279 + memcpy(chip->work_space.session_buf, space->session_buf, PAGE_SIZE); 280 + 281 + rc = tpm2_load_space(chip); 282 + if (rc) { 283 + tpm2_flush_space(chip); 284 + return rc; 285 + } 286 + 287 + rc = tpm2_map_command(chip, cc, cmd); 288 + if (rc) { 289 + tpm2_flush_space(chip); 290 + return rc; 291 + } 292 + 293 + return 0; 294 + } 295 + 296 + static bool tpm2_add_session(struct tpm_chip *chip, u32 handle) 297 + { 298 + struct tpm_space *space = &chip->work_space; 299 + int i; 300 + 301 + for (i = 0; i < ARRAY_SIZE(space->session_tbl); i++) 302 + if (space->session_tbl[i] == 0) 303 + break; 304 + 305 + if (i == ARRAY_SIZE(space->session_tbl)) 306 + return false; 307 + 308 + space->session_tbl[i] = handle; 309 + return true; 310 + } 311 + 312 + static u32 tpm2_map_to_vhandle(struct tpm_space *space, u32 phandle, bool alloc) 313 + { 314 + int i; 315 + 316 + for (i = 0; i < ARRAY_SIZE(space->context_tbl); i++) { 317 + if (alloc) { 318 + if (!space->context_tbl[i]) { 319 + space->context_tbl[i] = phandle; 320 + break; 321 + } 322 + } else if (space->context_tbl[i] == phandle) 323 + break; 324 + } 325 + 326 + if (i == ARRAY_SIZE(space->context_tbl)) 327 + return 0; 328 + 329 + return TPM2_HT_TRANSIENT | (0xFFFFFF - i); 330 + } 331 + 332 + static int tpm2_map_response_header(struct tpm_chip *chip, u32 cc, u8 *rsp, 333 + size_t len) 334 + { 335 + struct tpm_space *space = &chip->work_space; 336 + struct tpm_output_header *header = (void *)rsp; 337 + u32 phandle; 338 + u32 phandle_type; 339 + u32 vhandle; 340 + u32 attrs; 341 + int i; 342 + 343 + if (be32_to_cpu(header->return_code) != TPM2_RC_SUCCESS) 344 + return 0; 345 + 346 + i = tpm2_find_cc(chip, cc); 347 + /* sanity check, should never happen */ 348 + if (i < 0) 349 + return -EFAULT; 350 + 351 + attrs = chip->cc_attrs_tbl[i]; 352 + if (!((attrs >> TPM2_CC_ATTR_RHANDLE) & 1)) 353 + return 0; 354 + 355 + phandle = be32_to_cpup((__be32 
*)&rsp[TPM_HEADER_SIZE]); 356 + phandle_type = phandle & 0xFF000000; 357 + 358 + switch (phandle_type) { 359 + case TPM2_HT_TRANSIENT: 360 + vhandle = tpm2_map_to_vhandle(space, phandle, true); 361 + if (!vhandle) 362 + goto out_no_slots; 363 + 364 + *(__be32 *)&rsp[TPM_HEADER_SIZE] = cpu_to_be32(vhandle); 365 + break; 366 + case TPM2_HT_HMAC_SESSION: 367 + case TPM2_HT_POLICY_SESSION: 368 + if (!tpm2_add_session(chip, phandle)) 369 + goto out_no_slots; 370 + break; 371 + default: 372 + dev_err(&chip->dev, "%s: unknown handle 0x%08X\n", 373 + __func__, phandle); 374 + break; 375 + }; 376 + 377 + return 0; 378 + out_no_slots: 379 + tpm2_flush_context_cmd(chip, phandle, TPM_TRANSMIT_UNLOCKED); 380 + dev_warn(&chip->dev, "%s: out of slots for 0x%08X\n", __func__, 381 + phandle); 382 + return -ENOMEM; 383 + } 384 + 385 + struct tpm2_cap_handles { 386 + u8 more_data; 387 + __be32 capability; 388 + __be32 count; 389 + __be32 handles[]; 390 + } __packed; 391 + 392 + static int tpm2_map_response_body(struct tpm_chip *chip, u32 cc, u8 *rsp, 393 + size_t len) 394 + { 395 + struct tpm_space *space = &chip->work_space; 396 + struct tpm_output_header *header = (void *)rsp; 397 + struct tpm2_cap_handles *data; 398 + u32 phandle; 399 + u32 phandle_type; 400 + u32 vhandle; 401 + int i; 402 + int j; 403 + 404 + if (cc != TPM2_CC_GET_CAPABILITY || 405 + be32_to_cpu(header->return_code) != TPM2_RC_SUCCESS) { 406 + return 0; 407 + } 408 + 409 + if (len < TPM_HEADER_SIZE + 9) 410 + return -EFAULT; 411 + 412 + data = (void *)&rsp[TPM_HEADER_SIZE]; 413 + if (be32_to_cpu(data->capability) != TPM2_CAP_HANDLES) 414 + return 0; 415 + 416 + if (len != TPM_HEADER_SIZE + 9 + 4 * be32_to_cpu(data->count)) 417 + return -EFAULT; 418 + 419 + for (i = 0, j = 0; i < be32_to_cpu(data->count); i++) { 420 + phandle = be32_to_cpup((__be32 *)&data->handles[i]); 421 + phandle_type = phandle & 0xFF000000; 422 + 423 + switch (phandle_type) { 424 + case TPM2_HT_TRANSIENT: 425 + vhandle = 
tpm2_map_to_vhandle(space, phandle, false); 426 + if (!vhandle) 427 + break; 428 + 429 + data->handles[j] = cpu_to_be32(vhandle); 430 + j++; 431 + break; 432 + 433 + default: 434 + data->handles[j] = cpu_to_be32(phandle); 435 + j++; 436 + break; 437 + } 438 + 439 + } 440 + 441 + header->length = cpu_to_be32(TPM_HEADER_SIZE + 9 + 4 * j); 442 + data->count = cpu_to_be32(j); 443 + return 0; 444 + } 445 + 446 + static int tpm2_save_space(struct tpm_chip *chip) 447 + { 448 + struct tpm_space *space = &chip->work_space; 449 + unsigned int offset; 450 + int i; 451 + int rc; 452 + 453 + for (i = 0, offset = 0; i < ARRAY_SIZE(space->context_tbl); i++) { 454 + if (!(space->context_tbl[i] && ~space->context_tbl[i])) 455 + continue; 456 + 457 + rc = tpm2_save_context(chip, space->context_tbl[i], 458 + space->context_buf, PAGE_SIZE, 459 + &offset); 460 + if (rc == -ENOENT) { 461 + space->context_tbl[i] = 0; 462 + continue; 463 + } else if (rc) 464 + return rc; 465 + 466 + tpm2_flush_context_cmd(chip, space->context_tbl[i], 467 + TPM_TRANSMIT_UNLOCKED); 468 + space->context_tbl[i] = ~0; 469 + } 470 + 471 + for (i = 0, offset = 0; i < ARRAY_SIZE(space->session_tbl); i++) { 472 + if (!space->session_tbl[i]) 473 + continue; 474 + 475 + rc = tpm2_save_context(chip, space->session_tbl[i], 476 + space->session_buf, PAGE_SIZE, 477 + &offset); 478 + 479 + if (rc == -ENOENT) { 480 + /* handle error saving session, just forget it */ 481 + space->session_tbl[i] = 0; 482 + } else if (rc < 0) { 483 + tpm2_flush_space(chip); 484 + return rc; 485 + } 486 + } 487 + 488 + return 0; 489 + } 490 + 491 + int tpm2_commit_space(struct tpm_chip *chip, struct tpm_space *space, 492 + u32 cc, u8 *buf, size_t *bufsiz) 493 + { 494 + struct tpm_output_header *header = (void *)buf; 495 + int rc; 496 + 497 + if (!space) 498 + return 0; 499 + 500 + rc = tpm2_map_response_header(chip, cc, buf, *bufsiz); 501 + if (rc) { 502 + tpm2_flush_space(chip); 503 + return rc; 504 + } 505 + 506 + rc = 
tpm2_map_response_body(chip, cc, buf, *bufsiz); 507 + if (rc) { 508 + tpm2_flush_space(chip); 509 + return rc; 510 + } 511 + 512 + rc = tpm2_save_space(chip); 513 + if (rc) { 514 + tpm2_flush_space(chip); 515 + return rc; 516 + } 517 + 518 + *bufsiz = be32_to_cpu(header->length); 519 + 520 + memcpy(&space->context_tbl, &chip->work_space.context_tbl, 521 + sizeof(space->context_tbl)); 522 + memcpy(&space->session_tbl, &chip->work_space.session_tbl, 523 + sizeof(space->session_tbl)); 524 + memcpy(space->context_buf, chip->work_space.context_buf, PAGE_SIZE); 525 + memcpy(space->session_buf, chip->work_space.session_buf, PAGE_SIZE); 526 + 527 + return 0; 528 + }
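The slot-to-handle arithmetic in tpm2_map_to_vhandle()/tpm2_map_to_phandle() can be exercised outside the kernel: slot i of context_tbl is exposed as the stable virtual handle TPM2_HT_TRANSIENT | (0xFFFFFF - i), so the physical transient handle may change across context save/load without the client noticing. A small model using the same 3-slot table as struct tpm_space (the harness names are illustrative):

```c
#include <stdint.h>

#define TPM2_HT_TRANSIENT	0x80000000u
#define CONTEXT_SLOTS		3

/* Allocate a free slot for a physical handle and return its virtual
 * handle, or 0 when all slots are taken. */
static uint32_t map_to_vhandle(uint32_t *context_tbl, uint32_t phandle)
{
	uint32_t i;

	for (i = 0; i < CONTEXT_SLOTS; i++) {
		if (!context_tbl[i]) {
			context_tbl[i] = phandle;
			return TPM2_HT_TRANSIENT | (0xFFFFFFu - i);
		}
	}
	return 0;	/* out of slots */
}

/* Reverse mapping: recover the physical handle for a virtual one,
 * or 0 when the virtual handle does not name an occupied slot. */
static uint32_t map_to_phandle(const uint32_t *context_tbl, uint32_t vhandle)
{
	uint32_t i = 0xFFFFFFu - (vhandle & 0xFFFFFFu);

	if (i >= CONTEXT_SLOTS || !context_tbl[i])
		return 0;
	return context_tbl[i];
}
```

Virtual handles count down from 0x80FFFFFF so they are unlikely to collide with the physical transient handles a TPM hands out, which count up from 0x80000000.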
+10 -4
drivers/char/tpm/tpm2_eventlog.c
··· 56 56 57 57 efispecid = (struct tcg_efi_specid_event *)event_header->event; 58 58 59 - for (i = 0; (i < event->count) && (i < TPM2_ACTIVE_PCR_BANKS); 60 - i++) { 59 + /* Check if event is malformed. */ 60 + if (event->count > efispecid->num_algs) 61 + return 0; 62 + 63 + for (i = 0; i < event->count; i++) { 61 64 halg_size = sizeof(event->digests[i].alg_id); 62 65 memcpy(&halg, marker, halg_size); 63 66 marker = marker + halg_size; 64 - for (j = 0; (j < efispecid->num_algs); j++) { 67 + for (j = 0; j < efispecid->num_algs; j++) { 65 68 if (halg == efispecid->digest_sizes[j].alg_id) { 66 - marker = marker + 69 + marker += 67 70 efispecid->digest_sizes[j].digest_size; 68 71 break; 69 72 } 70 73 } 74 + /* Algorithm without known length. Such event is unparseable. */ 75 + if (j == efispecid->num_algs) 76 + return 0; 71 77 } 72 78 73 79 event_field = (struct tcg_event_field *)marker;
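The two new `return 0` bail-outs in the event-log hunk amount to a bounds check on an event's digest list: the count may not exceed the algorithms declared in the spec-ID event, and every digest's alg_id must be one of the declared ones. A standalone sketch of that check with simplified types (these names are not the driver's):

```c
#include <stdint.h>
#include <stddef.h>

struct alg_size {
	uint16_t alg_id;
	uint16_t digest_size;
};

/* Total size in bytes of a TPM2 digest list: per digest, a u16
 * alg_id followed by that algorithm's digest. Returns 0 for a
 * malformed or unparseable event, mirroring the driver. */
static size_t digests_size(uint32_t count, const uint16_t *alg_ids,
			   const struct alg_size *sizes, uint32_t num_algs)
{
	size_t total = 0;
	uint32_t i, j;

	if (count > num_algs)
		return 0;	/* malformed event */

	for (i = 0; i < count; i++) {
		for (j = 0; j < num_algs; j++) {
			if (alg_ids[i] == sizes[j].alg_id) {
				total += sizeof(uint16_t) +
					 sizes[j].digest_size;
				break;
			}
		}
		if (j == num_algs)
			return 0;	/* algorithm without known length */
	}
	return total;
}
```

Without the unknown-algorithm check, the parser cannot know how far to advance the marker past a digest, so the whole remainder of the log would be misread; returning 0 stops parsing at the damaged event instead.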
+230 -51
drivers/char/tpm/tpm_crb.c
··· 20 20 #include <linux/rculist.h> 21 21 #include <linux/module.h> 22 22 #include <linux/pm_runtime.h> 23 + #ifdef CONFIG_ARM64 24 + #include <linux/arm-smccc.h> 25 + #endif 23 26 #include "tpm.h" 24 27 25 28 #define ACPI_SIG_TPM2 "TPM2" ··· 35 32 enum crb_defaults { 36 33 CRB_ACPI_START_REVISION_ID = 1, 37 34 CRB_ACPI_START_INDEX = 1, 35 + }; 36 + 37 + enum crb_loc_ctrl { 38 + CRB_LOC_CTRL_REQUEST_ACCESS = BIT(0), 39 + CRB_LOC_CTRL_RELINQUISH = BIT(1), 40 + }; 41 + 42 + enum crb_loc_state { 43 + CRB_LOC_STATE_LOC_ASSIGNED = BIT(1), 44 + CRB_LOC_STATE_TPM_REG_VALID_STS = BIT(7), 38 45 }; 39 46 40 47 enum crb_ctrl_req { ··· 65 52 CRB_CANCEL_INVOKE = BIT(0), 66 53 }; 67 54 68 - struct crb_control_area { 69 - u32 req; 70 - u32 sts; 71 - u32 cancel; 72 - u32 start; 73 - u32 int_enable; 74 - u32 int_sts; 75 - u32 cmd_size; 76 - u32 cmd_pa_low; 77 - u32 cmd_pa_high; 78 - u32 rsp_size; 79 - u64 rsp_pa; 55 + struct crb_regs_head { 56 + u32 loc_state; 57 + u32 reserved1; 58 + u32 loc_ctrl; 59 + u32 loc_sts; 60 + u8 reserved2[32]; 61 + u64 intf_id; 62 + u64 ctrl_ext; 63 + } __packed; 64 + 65 + struct crb_regs_tail { 66 + u32 ctrl_req; 67 + u32 ctrl_sts; 68 + u32 ctrl_cancel; 69 + u32 ctrl_start; 70 + u32 ctrl_int_enable; 71 + u32 ctrl_int_sts; 72 + u32 ctrl_cmd_size; 73 + u32 ctrl_cmd_pa_low; 74 + u32 ctrl_cmd_pa_high; 75 + u32 ctrl_rsp_size; 76 + u64 ctrl_rsp_pa; 80 77 } __packed; 81 78 82 79 enum crb_status { ··· 96 73 enum crb_flags { 97 74 CRB_FL_ACPI_START = BIT(0), 98 75 CRB_FL_CRB_START = BIT(1), 76 + CRB_FL_CRB_SMC_START = BIT(2), 99 77 }; 100 78 101 79 struct crb_priv { 102 80 unsigned int flags; 103 81 void __iomem *iobase; 104 - struct crb_control_area __iomem *cca; 82 + struct crb_regs_head __iomem *regs_h; 83 + struct crb_regs_tail __iomem *regs_t; 105 84 u8 __iomem *cmd; 106 85 u8 __iomem *rsp; 107 86 u32 cmd_size; 87 + u32 smc_func_id; 88 + }; 89 + 90 + struct tpm2_crb_smc { 91 + u32 interrupt; 92 + u8 interrupt_flags; 93 + u8 op_flags; 94 + u16 reserved2; 
95 + u32 smc_func_id; 108 96 }; 109 97 110 98 /** ··· 135 101 */ 136 102 static int __maybe_unused crb_go_idle(struct device *dev, struct crb_priv *priv) 137 103 { 138 - if (priv->flags & CRB_FL_ACPI_START) 104 + if ((priv->flags & CRB_FL_ACPI_START) || 105 + (priv->flags & CRB_FL_CRB_SMC_START)) 139 106 return 0; 140 107 141 - iowrite32(CRB_CTRL_REQ_GO_IDLE, &priv->cca->req); 108 + iowrite32(CRB_CTRL_REQ_GO_IDLE, &priv->regs_t->ctrl_req); 142 109 /* we don't really care when this settles */ 143 110 144 111 return 0; 112 + } 113 + 114 + static bool crb_wait_for_reg_32(u32 __iomem *reg, u32 mask, u32 value, 115 + unsigned long timeout) 116 + { 117 + ktime_t start; 118 + ktime_t stop; 119 + 120 + start = ktime_get(); 121 + stop = ktime_add(start, ms_to_ktime(timeout)); 122 + 123 + do { 124 + if ((ioread32(reg) & mask) == value) 125 + return true; 126 + 127 + usleep_range(50, 100); 128 + } while (ktime_before(ktime_get(), stop)); 129 + 130 + return false; 145 131 } 146 132 147 133 /** ··· 181 127 static int __maybe_unused crb_cmd_ready(struct device *dev, 182 128 struct crb_priv *priv) 183 129 { 184 - ktime_t stop, start; 185 - 186 - if (priv->flags & CRB_FL_ACPI_START) 130 + if ((priv->flags & CRB_FL_ACPI_START) || 131 + (priv->flags & CRB_FL_CRB_SMC_START)) 187 132 return 0; 188 133 189 - iowrite32(CRB_CTRL_REQ_CMD_READY, &priv->cca->req); 190 - 191 - start = ktime_get(); 192 - stop = ktime_add(start, ms_to_ktime(TPM2_TIMEOUT_C)); 193 - do { 194 - if (!(ioread32(&priv->cca->req) & CRB_CTRL_REQ_CMD_READY)) 195 - return 0; 196 - usleep_range(50, 100); 197 - } while (ktime_before(ktime_get(), stop)); 198 - 199 - if (ioread32(&priv->cca->req) & CRB_CTRL_REQ_CMD_READY) { 134 + iowrite32(CRB_CTRL_REQ_CMD_READY, &priv->regs_t->ctrl_req); 135 + if (!crb_wait_for_reg_32(&priv->regs_t->ctrl_req, 136 + CRB_CTRL_REQ_CMD_READY /* mask */, 137 + 0, /* value */ 138 + TPM2_TIMEOUT_C)) { 200 139 dev_warn(dev, "cmdReady timed out\n"); 201 140 return -ETIME; 202 141 } ··· 197 150 
return 0; 198 151 } 199 152 153 + static int crb_request_locality(struct tpm_chip *chip, int loc) 154 + { 155 + struct crb_priv *priv = dev_get_drvdata(&chip->dev); 156 + u32 value = CRB_LOC_STATE_LOC_ASSIGNED | 157 + CRB_LOC_STATE_TPM_REG_VALID_STS; 158 + 159 + if (!priv->regs_h) 160 + return 0; 161 + 162 + iowrite32(CRB_LOC_CTRL_REQUEST_ACCESS, &priv->regs_h->loc_ctrl); 163 + if (!crb_wait_for_reg_32(&priv->regs_h->loc_state, value, value, 164 + TPM2_TIMEOUT_C)) { 165 + dev_warn(&chip->dev, "TPM_LOC_STATE_x.requestAccess timed out\n"); 166 + return -ETIME; 167 + } 168 + 169 + return 0; 170 + } 171 + 172 + static void crb_relinquish_locality(struct tpm_chip *chip, int loc) 173 + { 174 + struct crb_priv *priv = dev_get_drvdata(&chip->dev); 175 + 176 + if (!priv->regs_h) 177 + return; 178 + 179 + iowrite32(CRB_LOC_CTRL_RELINQUISH, &priv->regs_h->loc_ctrl); 180 + } 181 + 200 182 static u8 crb_status(struct tpm_chip *chip) 201 183 { 202 184 struct crb_priv *priv = dev_get_drvdata(&chip->dev); 203 185 u8 sts = 0; 204 186 205 - if ((ioread32(&priv->cca->start) & CRB_START_INVOKE) != 187 + if ((ioread32(&priv->regs_t->ctrl_start) & CRB_START_INVOKE) != 206 188 CRB_START_INVOKE) 207 189 sts |= CRB_DRV_STS_COMPLETE; 208 190 ··· 247 171 if (count < 6) 248 172 return -EIO; 249 173 250 - if (ioread32(&priv->cca->sts) & CRB_CTRL_STS_ERROR) 174 + if (ioread32(&priv->regs_t->ctrl_sts) & CRB_CTRL_STS_ERROR) 251 175 return -EIO; 252 176 253 177 memcpy_fromio(buf, priv->rsp, 6); 254 178 expected = be32_to_cpup((__be32 *) &buf[2]); 255 - 256 - if (expected > count) 179 + if (expected > count || expected < 6) 257 180 return -EIO; 258 181 259 182 memcpy_fromio(&buf[6], &priv->rsp[6], expected - 6); ··· 277 202 return rc; 278 203 } 279 204 205 + #ifdef CONFIG_ARM64 206 + /* 207 + * This is a TPM Command Response Buffer start method that invokes a 208 + * Secure Monitor Call to request the firmware to execute or cancel 209 + * a TPM 2.0 command. 
210 + */ 211 + static int tpm_crb_smc_start(struct device *dev, unsigned long func_id) 212 + { 213 + struct arm_smccc_res res; 214 + 215 + arm_smccc_smc(func_id, 0, 0, 0, 0, 0, 0, 0, &res); 216 + if (res.a0 != 0) { 217 + dev_err(dev, 218 + FW_BUG "tpm_crb_smc_start() returns res.a0 = 0x%lx\n", 219 + res.a0); 220 + return -EIO; 221 + } 222 + 223 + return 0; 224 + } 225 + #else 226 + static int tpm_crb_smc_start(struct device *dev, unsigned long func_id) 227 + { 228 + dev_err(dev, FW_BUG "tpm_crb: incorrect start method\n"); 229 + return -EINVAL; 230 + } 231 + #endif 232 + 280 233 static int crb_send(struct tpm_chip *chip, u8 *buf, size_t len) 281 234 { 282 235 struct crb_priv *priv = dev_get_drvdata(&chip->dev); ··· 313 210 /* Zero the cancel register so that the next command will not get 314 211 * canceled. 315 212 */ 316 - iowrite32(0, &priv->cca->cancel); 213 + iowrite32(0, &priv->regs_t->ctrl_cancel); 317 214 318 215 if (len > priv->cmd_size) { 319 216 dev_err(&chip->dev, "invalid command count value %zd %d\n", ··· 327 224 wmb(); 328 225 329 226 if (priv->flags & CRB_FL_CRB_START) 330 - iowrite32(CRB_START_INVOKE, &priv->cca->start); 227 + iowrite32(CRB_START_INVOKE, &priv->regs_t->ctrl_start); 331 228 332 229 if (priv->flags & CRB_FL_ACPI_START) 333 230 rc = crb_do_acpi_start(chip); 231 + 232 + if (priv->flags & CRB_FL_CRB_SMC_START) { 233 + iowrite32(CRB_START_INVOKE, &priv->regs_t->ctrl_start); 234 + rc = tpm_crb_smc_start(&chip->dev, priv->smc_func_id); 235 + } 334 236 335 237 return rc; 336 238 } ··· 344 236 { 345 237 struct crb_priv *priv = dev_get_drvdata(&chip->dev); 346 238 347 - iowrite32(CRB_CANCEL_INVOKE, &priv->cca->cancel); 239 + iowrite32(CRB_CANCEL_INVOKE, &priv->regs_t->ctrl_cancel); 348 240 349 241 if ((priv->flags & CRB_FL_ACPI_START) && crb_do_acpi_start(chip)) 350 242 dev_err(&chip->dev, "ACPI Start failed\n"); ··· 353 245 static bool crb_req_canceled(struct tpm_chip *chip, u8 status) 354 246 { 355 247 struct crb_priv *priv = 
dev_get_drvdata(&chip->dev); 356 - u32 cancel = ioread32(&priv->cca->cancel); 248 + u32 cancel = ioread32(&priv->regs_t->ctrl_cancel); 357 249 358 250 return (cancel & CRB_CANCEL_INVOKE) == CRB_CANCEL_INVOKE; 359 251 } ··· 365 257 .send = crb_send, 366 258 .cancel = crb_cancel, 367 259 .req_canceled = crb_req_canceled, 260 + .request_locality = crb_request_locality, 261 + .relinquish_locality = crb_relinquish_locality, 368 262 .req_complete_mask = CRB_DRV_STS_COMPLETE, 369 263 .req_complete_val = CRB_DRV_STS_COMPLETE, 370 264 }; ··· 405 295 return priv->iobase + (new_res.start - io_res->start); 406 296 } 407 297 298 + /* 299 + * Work around broken BIOSs that return inconsistent values from the ACPI 300 + * region vs the registers. Trust the ACPI region. Such broken systems 301 + * probably cannot send large TPM commands since the buffer will be truncated. 302 + */ 303 + static u64 crb_fixup_cmd_size(struct device *dev, struct resource *io_res, 304 + u64 start, u64 size) 305 + { 306 + if (io_res->start > start || io_res->end < start) 307 + return size; 308 + 309 + if (start + size - 1 <= io_res->end) 310 + return size; 311 + 312 + dev_err(dev, 313 + FW_BUG "ACPI region does not cover the entire command/response buffer. %pr vs %llx %llx\n", 314 + io_res, start, size); 315 + 316 + return io_res->end - start + 1; 317 + } 318 + 408 319 static int crb_map_io(struct acpi_device *device, struct crb_priv *priv, 409 320 struct acpi_table_tpm2 *buf) 410 321 { ··· 455 324 if (IS_ERR(priv->iobase)) 456 325 return PTR_ERR(priv->iobase); 457 326 458 - priv->cca = crb_map_res(dev, priv, &io_res, buf->control_address, 459 - sizeof(struct crb_control_area)); 460 - if (IS_ERR(priv->cca)) 461 - return PTR_ERR(priv->cca); 327 + /* The ACPI IO region starts at the head area and continues to include 328 + * the control area, as one nice sane region except for some older 329 + * stuff that puts the control area outside the ACPI IO region. 
330 + */ 331 + if (!(priv->flags & CRB_FL_ACPI_START)) { 332 + if (buf->control_address == io_res.start + 333 + sizeof(*priv->regs_h)) 334 + priv->regs_h = priv->iobase; 335 + else 336 + dev_warn(dev, FW_BUG "Bad ACPI memory layout"); 337 + } 338 + 339 + priv->regs_t = crb_map_res(dev, priv, &io_res, buf->control_address, 340 + sizeof(struct crb_regs_tail)); 341 + if (IS_ERR(priv->regs_t)) 342 + return PTR_ERR(priv->regs_t); 462 343 463 344 /* 464 345 * PTT HW bug w/a: wake up the device to access ··· 480 337 if (ret) 481 338 return ret; 482 339 483 - pa_high = ioread32(&priv->cca->cmd_pa_high); 484 - pa_low = ioread32(&priv->cca->cmd_pa_low); 340 + pa_high = ioread32(&priv->regs_t->ctrl_cmd_pa_high); 341 + pa_low = ioread32(&priv->regs_t->ctrl_cmd_pa_low); 485 342 cmd_pa = ((u64)pa_high << 32) | pa_low; 486 - cmd_size = ioread32(&priv->cca->cmd_size); 343 + cmd_size = crb_fixup_cmd_size(dev, &io_res, cmd_pa, 344 + ioread32(&priv->regs_t->ctrl_cmd_size)); 487 345 488 346 dev_dbg(dev, "cmd_hi = %X cmd_low = %X cmd_size %X\n", 489 347 pa_high, pa_low, cmd_size); ··· 495 351 goto out; 496 352 } 497 353 498 - memcpy_fromio(&rsp_pa, &priv->cca->rsp_pa, 8); 354 + memcpy_fromio(&rsp_pa, &priv->regs_t->ctrl_rsp_pa, 8); 499 355 rsp_pa = le64_to_cpu(rsp_pa); 500 - rsp_size = ioread32(&priv->cca->rsp_size); 356 + rsp_size = crb_fixup_cmd_size(dev, &io_res, rsp_pa, 357 + ioread32(&priv->regs_t->ctrl_rsp_size)); 501 358 502 359 if (cmd_pa != rsp_pa) { 503 360 priv->rsp = crb_map_res(dev, priv, &io_res, rsp_pa, rsp_size); ··· 531 386 struct crb_priv *priv; 532 387 struct tpm_chip *chip; 533 388 struct device *dev = &device->dev; 389 + struct tpm2_crb_smc *crb_smc; 534 390 acpi_status status; 535 391 u32 sm; 536 392 int rc; ··· 563 417 if (sm == ACPI_TPM2_START_METHOD || 564 418 sm == ACPI_TPM2_COMMAND_BUFFER_WITH_START_METHOD) 565 419 priv->flags |= CRB_FL_ACPI_START; 420 + 421 + if (sm == ACPI_TPM2_COMMAND_BUFFER_WITH_SMC) { 422 + if (buf->header.length < (sizeof(*buf) + 
sizeof(*crb_smc))) { 423 + dev_err(dev, 424 + FW_BUG "TPM2 ACPI table has wrong size %u for start method type %d\n", 425 + buf->header.length, 426 + ACPI_TPM2_COMMAND_BUFFER_WITH_SMC); 427 + return -EINVAL; 428 + } 429 + crb_smc = ACPI_ADD_PTR(struct tpm2_crb_smc, buf, sizeof(*buf)); 430 + priv->smc_func_id = crb_smc->smc_func_id; 431 + priv->flags |= CRB_FL_CRB_SMC_START; 432 + } 566 433 567 434 rc = crb_map_io(device, priv, buf); 568 435 if (rc) ··· 622 463 return 0; 623 464 } 624 465 625 - #ifdef CONFIG_PM 626 - static int crb_pm_runtime_suspend(struct device *dev) 466 + static int __maybe_unused crb_pm_runtime_suspend(struct device *dev) 627 467 { 628 468 struct tpm_chip *chip = dev_get_drvdata(dev); 629 469 struct crb_priv *priv = dev_get_drvdata(&chip->dev); ··· 630 472 return crb_go_idle(dev, priv); 631 473 } 632 474 633 - static int crb_pm_runtime_resume(struct device *dev) 475 + static int __maybe_unused crb_pm_runtime_resume(struct device *dev) 634 476 { 635 477 struct tpm_chip *chip = dev_get_drvdata(dev); 636 478 struct crb_priv *priv = dev_get_drvdata(&chip->dev); 637 479 638 480 return crb_cmd_ready(dev, priv); 639 481 } 640 - #endif /* CONFIG_PM */ 482 + 483 + static int __maybe_unused crb_pm_suspend(struct device *dev) 484 + { 485 + int ret; 486 + 487 + ret = tpm_pm_suspend(dev); 488 + if (ret) 489 + return ret; 490 + 491 + return crb_pm_runtime_suspend(dev); 492 + } 493 + 494 + static int __maybe_unused crb_pm_resume(struct device *dev) 495 + { 496 + int ret; 497 + 498 + ret = crb_pm_runtime_resume(dev); 499 + if (ret) 500 + return ret; 501 + 502 + return tpm_pm_resume(dev); 503 + } 641 504 642 505 static const struct dev_pm_ops crb_pm = { 643 - SET_SYSTEM_SLEEP_PM_OPS(tpm_pm_suspend, tpm_pm_resume) 506 + SET_SYSTEM_SLEEP_PM_OPS(crb_pm_suspend, crb_pm_resume) 644 507 SET_RUNTIME_PM_OPS(crb_pm_runtime_suspend, crb_pm_runtime_resume, NULL) 645 508 }; 646 509
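[Editor's note] The new crb_wait_for_reg_32() helper in the tpm_crb diff above replaces several open-coded ktime polling loops with one bounded mask/value poll. A minimal userspace sketch of the same contract, with an iteration budget standing in for the kernel's ktime deadline and usleep_range() pacing (the register is simulated; names here are illustrative, not the driver's API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Simulated MMIO register: each "read" advances a tick counter so the
 * sketch can model a bit that settles after a few polls. */
struct fake_reg {
	uint32_t value_after_settle;
	int settle_after_reads;
	int reads;
};

static uint32_t fake_ioread32(struct fake_reg *r)
{
	r->reads++;
	return (r->reads > r->settle_after_reads) ? r->value_after_settle : 0;
}

/* Same contract as crb_wait_for_reg_32(): succeed once (reg & mask) == value,
 * give up once the poll budget is exhausted. */
static bool wait_for_reg_32(struct fake_reg *reg, uint32_t mask,
			    uint32_t value, int max_polls)
{
	int i;

	for (i = 0; i < max_polls; i++) {
		if ((fake_ioread32(reg) & mask) == value)
			return true;
	}
	return false;
}
```

Both the cmdReady wait and the new locality wait are instances of this helper, differing only in which mask/value pair they pass.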
+6 -6
drivers/char/tpm/tpm_i2c_infineon.c
··· 278 278 #define TPM_DATA_FIFO(l) (0x0005 | ((l) << 4)) 279 279 #define TPM_DID_VID(l) (0x0006 | ((l) << 4)) 280 280 281 - static int check_locality(struct tpm_chip *chip, int loc) 281 + static bool check_locality(struct tpm_chip *chip, int loc) 282 282 { 283 283 u8 buf; 284 284 int rc; 285 285 286 286 rc = iic_tpm_read(TPM_ACCESS(loc), &buf, 1); 287 287 if (rc < 0) 288 - return rc; 288 + return false; 289 289 290 290 if ((buf & (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) == 291 291 (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) { 292 292 tpm_dev.locality = loc; 293 - return loc; 293 + return true; 294 294 } 295 295 296 - return -EIO; 296 + return false; 297 297 } 298 298 299 299 /* implementation similar to tpm_tis */ ··· 315 315 unsigned long stop; 316 316 u8 buf = TPM_ACCESS_REQUEST_USE; 317 317 318 - if (check_locality(chip, loc) >= 0) 318 + if (check_locality(chip, loc)) 319 319 return loc; 320 320 321 321 iic_tpm_write(TPM_ACCESS(loc), &buf, 1); ··· 323 323 /* wait for burstcount */ 324 324 stop = jiffies + chip->timeout_a; 325 325 do { 326 - if (check_locality(chip, loc) >= 0) 326 + if (check_locality(chip, loc)) 327 327 return loc; 328 328 usleep_range(TPM_TIMEOUT_US_LOW, TPM_TIMEOUT_US_HI); 329 329 } while (time_before(jiffies, stop));
+16 -8
drivers/char/tpm/tpm_i2c_nuvoton.c
··· 49 49 */ 50 50 #define TPM_I2C_MAX_BUF_SIZE 32 51 51 #define TPM_I2C_RETRY_COUNT 32 52 - #define TPM_I2C_BUS_DELAY 1 /* msec */ 53 - #define TPM_I2C_RETRY_DELAY_SHORT 2 /* msec */ 54 - #define TPM_I2C_RETRY_DELAY_LONG 10 /* msec */ 52 + #define TPM_I2C_BUS_DELAY 1000 /* usec */ 53 + #define TPM_I2C_RETRY_DELAY_SHORT (2 * 1000) /* usec */ 54 + #define TPM_I2C_RETRY_DELAY_LONG (10 * 1000) /* usec */ 55 + #define TPM_I2C_DELAY_RANGE 300 /* usec */ 55 56 56 57 #define OF_IS_TPM2 ((void *)1) 57 58 #define I2C_IS_TPM2 1 ··· 124 123 /* this causes the current command to be aborted */ 125 124 for (i = 0, status = -1; i < TPM_I2C_RETRY_COUNT && status < 0; i++) { 126 125 status = i2c_nuvoton_write_buf(client, TPM_STS, 1, &data); 127 - msleep(TPM_I2C_BUS_DELAY); 126 + if (status < 0) 127 + usleep_range(TPM_I2C_BUS_DELAY, TPM_I2C_BUS_DELAY 128 + + TPM_I2C_DELAY_RANGE); 128 129 } 129 130 return status; 130 131 } ··· 163 160 burst_count = min_t(u8, TPM_I2C_MAX_BUF_SIZE, data); 164 161 break; 165 162 } 166 - msleep(TPM_I2C_BUS_DELAY); 163 + usleep_range(TPM_I2C_BUS_DELAY, TPM_I2C_BUS_DELAY 164 + + TPM_I2C_DELAY_RANGE); 167 165 } while (time_before(jiffies, stop)); 168 166 169 167 return burst_count; ··· 207 203 return 0; 208 204 209 205 /* use polling to wait for the event */ 210 - ten_msec = jiffies + msecs_to_jiffies(TPM_I2C_RETRY_DELAY_LONG); 206 + ten_msec = jiffies + usecs_to_jiffies(TPM_I2C_RETRY_DELAY_LONG); 211 207 stop = jiffies + timeout; 212 208 do { 213 209 if (time_before(jiffies, ten_msec)) 214 - msleep(TPM_I2C_RETRY_DELAY_SHORT); 210 + usleep_range(TPM_I2C_RETRY_DELAY_SHORT, 211 + TPM_I2C_RETRY_DELAY_SHORT 212 + + TPM_I2C_DELAY_RANGE); 215 213 else 216 - msleep(TPM_I2C_RETRY_DELAY_LONG); 214 + usleep_range(TPM_I2C_RETRY_DELAY_LONG, 215 + TPM_I2C_RETRY_DELAY_LONG 216 + + TPM_I2C_DELAY_RANGE); 217 217 status_valid = i2c_nuvoton_check_status(chip, mask, 218 218 value); 219 219 if (status_valid)
+6 -2
drivers/char/tpm/tpm_ibmvtpm.c
··· 299 299 } 300 300 301 301 kfree(ibmvtpm); 302 + /* For tpm_ibmvtpm_get_desired_dma */ 303 + dev_set_drvdata(&vdev->dev, NULL); 302 304 303 305 return 0; 304 306 } ··· 315 313 static unsigned long tpm_ibmvtpm_get_desired_dma(struct vio_dev *vdev) 316 314 { 317 315 struct tpm_chip *chip = dev_get_drvdata(&vdev->dev); 318 - struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev); 316 + struct ibmvtpm_dev *ibmvtpm; 319 317 320 318 /* 321 319 * ibmvtpm initializes at probe time, so the data we are 322 320 * asking for may not be set yet. Estimate that 4K required 323 321 * for TCE-mapped buffer in addition to CRQ. 324 322 */ 325 - if (!ibmvtpm) 323 + if (chip) 324 + ibmvtpm = dev_get_drvdata(&chip->dev); 325 + else 326 326 return CRQ_RES_BUF_SIZE + PAGE_SIZE; 327 327 328 328 return CRQ_RES_BUF_SIZE + ibmvtpm->rtce_size;
+23 -37
drivers/char/tpm/tpm_tis_core.c
··· 56 56 return -1; 57 57 } 58 58 59 - static int check_locality(struct tpm_chip *chip, int l) 59 + static bool check_locality(struct tpm_chip *chip, int l) 60 60 { 61 61 struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 62 62 int rc; ··· 64 64 65 65 rc = tpm_tis_read8(priv, TPM_ACCESS(l), &access); 66 66 if (rc < 0) 67 - return rc; 67 + return false; 68 68 69 69 if ((access & (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) == 70 - (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) 71 - return priv->locality = l; 70 + (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) { 71 + priv->locality = l; 72 + return true; 73 + } 72 74 73 - return -1; 75 + return false; 74 76 } 75 77 76 - static void release_locality(struct tpm_chip *chip, int l, int force) 78 + static void release_locality(struct tpm_chip *chip, int l) 77 79 { 78 80 struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 79 - int rc; 80 - u8 access; 81 81 82 - rc = tpm_tis_read8(priv, TPM_ACCESS(l), &access); 83 - if (rc < 0) 84 - return; 85 - 86 - if (force || (access & 87 - (TPM_ACCESS_REQUEST_PENDING | TPM_ACCESS_VALID)) == 88 - (TPM_ACCESS_REQUEST_PENDING | TPM_ACCESS_VALID)) 89 - tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_ACTIVE_LOCALITY); 90 - 82 + tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_ACTIVE_LOCALITY); 91 83 } 92 84 93 85 static int request_locality(struct tpm_chip *chip, int l) ··· 88 96 unsigned long stop, timeout; 89 97 long rc; 90 98 91 - if (check_locality(chip, l) >= 0) 99 + if (check_locality(chip, l)) 92 100 return l; 93 101 94 102 rc = tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_REQUEST_USE); ··· 104 112 return -1; 105 113 rc = wait_event_interruptible_timeout(priv->int_queue, 106 114 (check_locality 107 - (chip, l) >= 0), 115 + (chip, l)), 108 116 timeout); 109 117 if (rc > 0) 110 118 return l; ··· 115 123 } else { 116 124 /* wait for burstcount */ 117 125 do { 118 - if (check_locality(chip, l) >= 0) 126 + if (check_locality(chip, l)) 119 127 return l; 120 128 
msleep(TPM_TIMEOUT); 121 129 } while (time_before(jiffies, stop)); ··· 152 160 u32 value; 153 161 154 162 /* wait for burstcount */ 155 - /* which timeout value, spec has 2 answers (c & d) */ 156 - stop = jiffies + chip->timeout_d; 163 + if (chip->flags & TPM_CHIP_FLAG_TPM2) 164 + stop = jiffies + chip->timeout_a; 165 + else 166 + stop = jiffies + chip->timeout_d; 157 167 do { 158 168 rc = tpm_tis_read32(priv, TPM_STS(priv->locality), &value); 159 169 if (rc < 0) ··· 244 250 245 251 out: 246 252 tpm_tis_ready(chip); 247 - release_locality(chip, priv->locality, 0); 248 253 return size; 249 254 } 250 255 ··· 258 265 int rc, status, burstcnt; 259 266 size_t count = 0; 260 267 bool itpm = priv->flags & TPM_TIS_ITPM_WORKAROUND; 261 - 262 - if (request_locality(chip, 0) < 0) 263 - return -EBUSY; 264 268 265 269 status = tpm_tis_status(chip); 266 270 if ((status & TPM_STS_COMMAND_READY) == 0) { ··· 317 327 318 328 out_err: 319 329 tpm_tis_ready(chip); 320 - release_locality(chip, priv->locality, 0); 321 330 return rc; 322 331 } 323 332 ··· 377 388 return len; 378 389 out_err: 379 390 tpm_tis_ready(chip); 380 - release_locality(chip, priv->locality, 0); 381 391 return rc; 382 392 } 383 393 ··· 463 475 if (vendor != TPM_VID_INTEL) 464 476 return 0; 465 477 478 + if (request_locality(chip, 0) != 0) 479 + return -EBUSY; 480 + 466 481 rc = tpm_tis_send_data(chip, cmd_getticks, len); 467 482 if (rc == 0) 468 483 goto out; 469 484 470 485 tpm_tis_ready(chip); 471 - release_locality(chip, priv->locality, 0); 472 486 473 487 priv->flags |= TPM_TIS_ITPM_WORKAROUND; 474 488 ··· 484 494 485 495 out: 486 496 tpm_tis_ready(chip); 487 - release_locality(chip, priv->locality, 0); 497 + release_locality(chip, priv->locality); 488 498 489 499 return rc; 490 500 } ··· 523 533 wake_up_interruptible(&priv->read_queue); 524 534 if (interrupt & TPM_INTF_LOCALITY_CHANGE_INT) 525 535 for (i = 0; i < 5; i++) 526 - if (check_locality(chip, i) >= 0) 536 + if (check_locality(chip, i)) 527 537 break; 
528 538 if (interrupt & 529 539 (TPM_INTF_LOCALITY_CHANGE_INT | TPM_INTF_STS_VALID_INT | ··· 658 668 interrupt = 0; 659 669 660 670 tpm_tis_write32(priv, reg, ~TPM_GLOBAL_INT_ENABLE & interrupt); 661 - release_locality(chip, priv->locality, 1); 662 671 } 663 672 EXPORT_SYMBOL_GPL(tpm_tis_remove); 664 673 ··· 671 682 .req_complete_mask = TPM_STS_DATA_AVAIL | TPM_STS_VALID, 672 683 .req_complete_val = TPM_STS_DATA_AVAIL | TPM_STS_VALID, 673 684 .req_canceled = tpm_tis_req_canceled, 685 + .request_locality = request_locality, 686 + .relinquish_locality = release_locality, 674 687 }; 675 688 676 689 int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq, ··· 714 723 TPM_INTF_DATA_AVAIL_INT | TPM_INTF_STS_VALID_INT; 715 724 intmask &= ~TPM_GLOBAL_INT_ENABLE; 716 725 tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask); 717 - 718 - if (request_locality(chip, 0) != 0) { 719 - rc = -ENODEV; 720 - goto out_err; 721 - } 722 726 723 727 rc = tpm2_probe(chip); 724 728 if (rc)
+70 -94
drivers/char/tpm/tpm_tis_spi.c
··· 47 47 struct tpm_tis_data priv; 48 48 struct spi_device *spi_device; 49 49 50 - u8 tx_buf[MAX_SPI_FRAMESIZE + 4]; 51 - u8 rx_buf[MAX_SPI_FRAMESIZE + 4]; 50 + u8 tx_buf[4]; 51 + u8 rx_buf[4]; 52 52 }; 53 53 54 54 static inline struct tpm_tis_spi_phy *to_tpm_tis_spi_phy(struct tpm_tis_data *data) ··· 56 56 return container_of(data, struct tpm_tis_spi_phy, priv); 57 57 } 58 58 59 - static int tpm_tis_spi_read_bytes(struct tpm_tis_data *data, u32 addr, 60 - u16 len, u8 *result) 59 + static int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len, 60 + u8 *buffer, u8 direction) 61 61 { 62 62 struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data); 63 - int ret, i; 63 + int ret = 0; 64 + int i; 64 65 struct spi_message m; 65 - struct spi_transfer spi_xfer = { 66 - .tx_buf = phy->tx_buf, 67 - .rx_buf = phy->rx_buf, 68 - .len = 4, 69 - }; 70 - 71 - if (len > MAX_SPI_FRAMESIZE) 72 - return -ENOMEM; 73 - 74 - phy->tx_buf[0] = 0x80 | (len - 1); 75 - phy->tx_buf[1] = 0xd4; 76 - phy->tx_buf[2] = (addr >> 8) & 0xFF; 77 - phy->tx_buf[3] = addr & 0xFF; 78 - 79 - spi_xfer.cs_change = 1; 80 - spi_message_init(&m); 81 - spi_message_add_tail(&spi_xfer, &m); 66 + struct spi_transfer spi_xfer; 67 + u8 transfer_len; 82 68 83 69 spi_bus_lock(phy->spi_device->master); 84 - ret = spi_sync_locked(phy->spi_device, &m); 85 - if (ret < 0) 86 - goto exit; 87 70 88 - memset(phy->tx_buf, 0, len); 71 + while (len) { 72 + transfer_len = min_t(u16, len, MAX_SPI_FRAMESIZE); 89 73 90 - /* According to TCG PTP specification, if there is no TPM present at 91 - * all, then the design has a weak pull-up on MISO. If a TPM is not 92 - * present, a pull-up on MISO means that the SB controller sees a 1, 93 - * and will latch in 0xFF on the read. 
94 - */ 95 - for (i = 0; (phy->rx_buf[0] & 0x01) == 0 && i < TPM_RETRY; i++) { 96 - spi_xfer.len = 1; 74 + phy->tx_buf[0] = direction | (transfer_len - 1); 75 + phy->tx_buf[1] = 0xd4; 76 + phy->tx_buf[2] = addr >> 8; 77 + phy->tx_buf[3] = addr; 78 + 79 + memset(&spi_xfer, 0, sizeof(spi_xfer)); 80 + spi_xfer.tx_buf = phy->tx_buf; 81 + spi_xfer.rx_buf = phy->rx_buf; 82 + spi_xfer.len = 4; 83 + spi_xfer.cs_change = 1; 84 + 97 85 spi_message_init(&m); 98 86 spi_message_add_tail(&spi_xfer, &m); 99 87 ret = spi_sync_locked(phy->spi_device, &m); 100 88 if (ret < 0) 101 89 goto exit; 90 + 91 + if ((phy->rx_buf[3] & 0x01) == 0) { 92 + // handle SPI wait states 93 + phy->tx_buf[0] = 0; 94 + 95 + for (i = 0; i < TPM_RETRY; i++) { 96 + spi_xfer.len = 1; 97 + spi_message_init(&m); 98 + spi_message_add_tail(&spi_xfer, &m); 99 + ret = spi_sync_locked(phy->spi_device, &m); 100 + if (ret < 0) 101 + goto exit; 102 + if (phy->rx_buf[0] & 0x01) 103 + break; 104 + } 105 + 106 + if (i == TPM_RETRY) { 107 + ret = -ETIMEDOUT; 108 + goto exit; 109 + } 110 + } 111 + 112 + spi_xfer.cs_change = 0; 113 + spi_xfer.len = transfer_len; 114 + spi_xfer.delay_usecs = 5; 115 + 116 + if (direction) { 117 + spi_xfer.tx_buf = NULL; 118 + spi_xfer.rx_buf = buffer; 119 + } else { 120 + spi_xfer.tx_buf = buffer; 121 + spi_xfer.rx_buf = NULL; 122 + } 123 + 124 + spi_message_init(&m); 125 + spi_message_add_tail(&spi_xfer, &m); 126 + ret = spi_sync_locked(phy->spi_device, &m); 127 + if (ret < 0) 128 + goto exit; 129 + 130 + len -= transfer_len; 131 + buffer += transfer_len; 102 132 } 103 - 104 - spi_xfer.cs_change = 0; 105 - spi_xfer.len = len; 106 - spi_xfer.rx_buf = result; 107 - 108 - spi_message_init(&m); 109 - spi_message_add_tail(&spi_xfer, &m); 110 - ret = spi_sync_locked(phy->spi_device, &m); 111 133 112 134 exit: 113 135 spi_bus_unlock(phy->spi_device->master); 114 136 return ret; 115 137 } 116 138 139 + static int tpm_tis_spi_read_bytes(struct tpm_tis_data *data, u32 addr, 140 + u16 len, u8 *result) 
141 + { 142 + return tpm_tis_spi_transfer(data, addr, len, result, 0x80); 143 + } 144 + 117 145 static int tpm_tis_spi_write_bytes(struct tpm_tis_data *data, u32 addr, 118 146 u16 len, u8 *value) 119 147 { 120 - struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data); 121 - int ret, i; 122 - struct spi_message m; 123 - struct spi_transfer spi_xfer = { 124 - .tx_buf = phy->tx_buf, 125 - .rx_buf = phy->rx_buf, 126 - .len = 4, 127 - }; 128 - 129 - if (len > MAX_SPI_FRAMESIZE) 130 - return -ENOMEM; 131 - 132 - phy->tx_buf[0] = len - 1; 133 - phy->tx_buf[1] = 0xd4; 134 - phy->tx_buf[2] = (addr >> 8) & 0xFF; 135 - phy->tx_buf[3] = addr & 0xFF; 136 - 137 - spi_xfer.cs_change = 1; 138 - spi_message_init(&m); 139 - spi_message_add_tail(&spi_xfer, &m); 140 - 141 - spi_bus_lock(phy->spi_device->master); 142 - ret = spi_sync_locked(phy->spi_device, &m); 143 - if (ret < 0) 144 - goto exit; 145 - 146 - memset(phy->tx_buf, 0, len); 147 - 148 - /* According to TCG PTP specification, if there is no TPM present at 149 - * all, then the design has a weak pull-up on MISO. If a TPM is not 150 - * present, a pull-up on MISO means that the SB controller sees a 1, 151 - * and will latch in 0xFF on the read. 152 - */ 153 - for (i = 0; (phy->rx_buf[0] & 0x01) == 0 && i < TPM_RETRY; i++) { 154 - spi_xfer.len = 1; 155 - spi_message_init(&m); 156 - spi_message_add_tail(&spi_xfer, &m); 157 - ret = spi_sync_locked(phy->spi_device, &m); 158 - if (ret < 0) 159 - goto exit; 160 - } 161 - 162 - spi_xfer.len = len; 163 - spi_xfer.tx_buf = value; 164 - spi_xfer.cs_change = 0; 165 - spi_xfer.tx_buf = value; 166 - spi_message_init(&m); 167 - spi_message_add_tail(&spi_xfer, &m); 168 - ret = spi_sync_locked(phy->spi_device, &m); 169 - 170 - exit: 171 - spi_bus_unlock(phy->spi_device->master); 172 - return ret; 148 + return tpm_tis_spi_transfer(data, addr, len, value, 0); 173 149 } 174 150 175 151 static int tpm_tis_spi_read16(struct tpm_tis_data *data, u32 addr, u16 *result)
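[Editor's note] The rewritten tpm_tis_spi_transfer() above chunks arbitrary-length transfers into MAX_SPI_FRAMESIZE frames and builds a 4-byte header per frame: a direction bit OR'd with (len - 1), then 0xd4, then the 16-bit TIS address. A sketch of that framing arithmetic (the header is packed into a u32 here purely so it is easy to check; the driver writes the four bytes into tx_buf):

```c
#include <stdint.h>

#define MAX_SPI_FRAMESIZE 64

/* Pack the 4-byte frame header big-endian into a u32 for inspection:
 * byte 0 = direction (0x80 read, 0x00 write) | (len - 1),
 * byte 1 = 0xd4, bytes 2-3 = the TIS register address. */
static uint32_t spi_frame_header(uint32_t addr, uint16_t len, uint8_t direction)
{
	return ((uint32_t)(uint8_t)(direction | (len - 1)) << 24) |
	       (0xd4u << 16) | (addr & 0xffff);
}

/* How many frames a transfer of 'len' bytes needs once chunked. */
static unsigned int spi_frames_needed(uint16_t len)
{
	return (len + MAX_SPI_FRAMESIZE - 1) / MAX_SPI_FRAMESIZE;
}
```

The old code rejected len > MAX_SPI_FRAMESIZE with -ENOMEM; after the rewrite, larger transfers simply loop over frames.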
+65
drivers/char/tpm/tpmrm-dev.c
··· 1 + /* 2 + * Copyright (C) 2017 James.Bottomley@HansenPartnership.com 3 + * 4 + * GPLv2 5 + */ 6 + #include <linux/slab.h> 7 + #include "tpm-dev.h" 8 + 9 + struct tpmrm_priv { 10 + struct file_priv priv; 11 + struct tpm_space space; 12 + }; 13 + 14 + static int tpmrm_open(struct inode *inode, struct file *file) 15 + { 16 + struct tpm_chip *chip; 17 + struct tpmrm_priv *priv; 18 + int rc; 19 + 20 + chip = container_of(inode->i_cdev, struct tpm_chip, cdevs); 21 + priv = kzalloc(sizeof(*priv), GFP_KERNEL); 22 + if (priv == NULL) 23 + return -ENOMEM; 24 + 25 + rc = tpm2_init_space(&priv->space); 26 + if (rc) { 27 + kfree(priv); 28 + return -ENOMEM; 29 + } 30 + 31 + tpm_common_open(file, chip, &priv->priv); 32 + 33 + return 0; 34 + } 35 + 36 + static int tpmrm_release(struct inode *inode, struct file *file) 37 + { 38 + struct file_priv *fpriv = file->private_data; 39 + struct tpmrm_priv *priv = container_of(fpriv, struct tpmrm_priv, priv); 40 + 41 + tpm_common_release(file, fpriv); 42 + tpm2_del_space(fpriv->chip, &priv->space); 43 + kfree(priv); 44 + 45 + return 0; 46 + } 47 + 48 + ssize_t tpmrm_write(struct file *file, const char __user *buf, 49 + size_t size, loff_t *off) 50 + { 51 + struct file_priv *fpriv = file->private_data; 52 + struct tpmrm_priv *priv = container_of(fpriv, struct tpmrm_priv, priv); 53 + 54 + return tpm_common_write(file, buf, size, off, &priv->space); 55 + } 56 + 57 + const struct file_operations tpmrm_fops = { 58 + .owner = THIS_MODULE, 59 + .llseek = no_llseek, 60 + .open = tpmrm_open, 61 + .read = tpm_common_read, 62 + .write = tpmrm_write, 63 + .release = tpmrm_release, 64 + }; 65 +
+10 -10
fs/namei.c
··· 340 340 341 341 if (S_ISDIR(inode->i_mode)) { 342 342 /* DACs are overridable for directories */ 343 - if (capable_wrt_inode_uidgid(inode, CAP_DAC_OVERRIDE)) 344 - return 0; 345 343 if (!(mask & MAY_WRITE)) 346 344 if (capable_wrt_inode_uidgid(inode, 347 345 CAP_DAC_READ_SEARCH)) 348 346 return 0; 349 - return -EACCES; 350 - } 351 - /* 352 - * Read/write DACs are always overridable. 353 - * Executable DACs are overridable when there is 354 - * at least one exec bit set. 355 - */ 356 - if (!(mask & MAY_EXEC) || (inode->i_mode & S_IXUGO)) 357 347 if (capable_wrt_inode_uidgid(inode, CAP_DAC_OVERRIDE)) 358 348 return 0; 349 + return -EACCES; 350 + } 359 351 360 352 /* 361 353 * Searching includes executable on directories, else just read. ··· 355 363 mask &= MAY_READ | MAY_WRITE | MAY_EXEC; 356 364 if (mask == MAY_READ) 357 365 if (capable_wrt_inode_uidgid(inode, CAP_DAC_READ_SEARCH)) 366 + return 0; 367 + /* 368 + * Read/write DACs are always overridable. 369 + * Executable DACs are overridable when there is 370 + * at least one exec bit set. 371 + */ 372 + if (!(mask & MAY_EXEC) || (inode->i_mode & S_IXUGO)) 373 + if (capable_wrt_inode_uidgid(inode, CAP_DAC_OVERRIDE)) 358 374 return 0; 359 375 360 376 return -EACCES;
+1
include/acpi/actbl2.h
··· 1294 1294 #define ACPI_TPM2_MEMORY_MAPPED 6 1295 1295 #define ACPI_TPM2_COMMAND_BUFFER 7 1296 1296 #define ACPI_TPM2_COMMAND_BUFFER_WITH_START_METHOD 8 1297 + #define ACPI_TPM2_COMMAND_BUFFER_WITH_SMC 11 1297 1298 1298 1299 /******************************************************************************* 1299 1300 *
+13 -2
include/crypto/public_key.h
··· 50 50 struct key_type; 51 51 union key_payload; 52 52 53 - extern int restrict_link_by_signature(struct key *trust_keyring, 53 + extern int restrict_link_by_signature(struct key *dest_keyring, 54 54 const struct key_type *type, 55 - const union key_payload *payload); 55 + const union key_payload *payload, 56 + struct key *trust_keyring); 57 + 58 + extern int restrict_link_by_key_or_keyring(struct key *dest_keyring, 59 + const struct key_type *type, 60 + const union key_payload *payload, 61 + struct key *trusted); 62 + 63 + extern int restrict_link_by_key_or_keyring_chain(struct key *trust_keyring, 64 + const struct key_type *type, 65 + const union key_payload *payload, 66 + struct key *trusted); 56 67 57 68 extern int verify_signature(const struct key *key, 58 69 const struct public_key_signature *sig);
+16 -2
include/keys/system_keyring.h
··· 18 18 19 19 extern int restrict_link_by_builtin_trusted(struct key *keyring, 20 20 const struct key_type *type, 21 - const union key_payload *payload); 21 + const union key_payload *payload, 22 + struct key *restriction_key); 22 23 23 24 #else 24 25 #define restrict_link_by_builtin_trusted restrict_link_reject ··· 29 28 extern int restrict_link_by_builtin_and_secondary_trusted( 30 29 struct key *keyring, 31 30 const struct key_type *type, 32 - const union key_payload *payload); 31 + const union key_payload *payload, 32 + struct key *restriction_key); 33 33 #else 34 34 #define restrict_link_by_builtin_and_secondary_trusted restrict_link_by_builtin_trusted 35 + #endif 36 + 37 + #ifdef CONFIG_SYSTEM_BLACKLIST_KEYRING 38 + extern int mark_hash_blacklisted(const char *hash); 39 + extern int is_hash_blacklisted(const u8 *hash, size_t hash_len, 40 + const char *type); 41 + #else 42 + static inline int is_hash_blacklisted(const u8 *hash, size_t hash_len, 43 + const char *type) 44 + { 45 + return 0; 46 + } 35 47 #endif 36 48 37 49 #ifdef CONFIG_IMA_BLACKLIST_KEYRING
+7
include/linux/compat.h
··· 295 295 }; 296 296 #endif 297 297 298 + struct compat_keyctl_kdf_params { 299 + compat_uptr_t hashname; 300 + compat_uptr_t otherinfo; 301 + __u32 otherinfolen; 302 + __u32 __spare[8]; 303 + }; 304 + 298 305 struct compat_statfs; 299 306 struct compat_statfs64; 300 307 struct compat_old_linux_dirent;
+7
include/linux/init_task.h
··· 219 219 # define INIT_TASK_TI(tsk) 220 220 #endif 221 221 222 + #ifdef CONFIG_SECURITY 223 + #define INIT_TASK_SECURITY .security = NULL, 224 + #else 225 + #define INIT_TASK_SECURITY 226 + #endif 227 + 222 228 /* 223 229 * INIT_TASK is used to set up the first task table, touch at 224 230 * your own risk!. Base=0, limit=0x1fffff (=2MB) ··· 304 298 INIT_NUMA_BALANCING(tsk) \ 305 299 INIT_KASAN(tsk) \ 306 300 INIT_LIVEPATCH(tsk) \ 301 + INIT_TASK_SECURITY \ 307 302 } 308 303 309 304
+8
include/linux/key-type.h
··· 147 147 */ 148 148 request_key_actor_t request_key; 149 149 150 + /* Look up a keyring access restriction (optional) 151 + * 152 + * - NULL is a valid return value (meaning the requested restriction 153 + * is known but will never block addition of a key) 154 + * - should return -EINVAL if the restriction is unknown 155 + */ 156 + struct key_restriction *(*lookup_restriction)(const char *params); 157 + 150 158 /* internal fields */ 151 159 struct list_head link; /* link in types list */ 152 160 struct lock_class_key lock_class; /* key->sem lock class */
+25 -14
include/linux/key.h
··· 23 23 #include <linux/rwsem.h> 24 24 #include <linux/atomic.h> 25 25 #include <linux/assoc_array.h> 26 + #include <linux/refcount.h> 26 27 27 28 #ifdef __KERNEL__ 28 29 #include <linux/uidgid.h> ··· 127 126 return (unsigned long) key_ref & 1UL; 128 127 } 129 128 129 + typedef int (*key_restrict_link_func_t)(struct key *dest_keyring, 130 + const struct key_type *type, 131 + const union key_payload *payload, 132 + struct key *restriction_key); 133 + 134 + struct key_restriction { 135 + key_restrict_link_func_t check; 136 + struct key *key; 137 + struct key_type *keytype; 138 + }; 139 + 130 140 /*****************************************************************************/ 131 141 /* 132 142 * authentication token / access credential / keyring ··· 147 135 * - Kerberos TGTs and tickets 148 136 */ 149 137 struct key { 150 - atomic_t usage; /* number of references */ 138 + refcount_t usage; /* number of references */ 151 139 key_serial_t serial; /* key serial number */ 152 140 union { 153 141 struct list_head graveyard_link; ··· 217 205 }; 218 206 219 207 /* This is set on a keyring to restrict the addition of a link to a key 220 - * to it. If this method isn't provided then it is assumed that the 208 + * to it. If this structure isn't provided then it is assumed that the 221 209 * keyring is open to any addition. It is ignored for non-keyring 222 - * keys. 210 + * keys. Only set this value using keyring_restrict(), keyring_alloc(), 211 + * or key_alloc(). 223 212 * 224 213 * This is intended for use with rings of trusted keys whereby addition 225 214 * to the keyring needs to be controlled. KEY_ALLOC_BYPASS_RESTRICTION 226 215 * overrides this, allowing the kernel to add extra keys without 227 216 * restriction. 
228 217 */ 229 - int (*restrict_link)(struct key *keyring, 230 - const struct key_type *type, 231 - const union key_payload *payload); 218 + struct key_restriction *restrict_link; 232 219 }; 233 220 234 221 extern struct key *key_alloc(struct key_type *type, ··· 236 225 const struct cred *cred, 237 226 key_perm_t perm, 238 227 unsigned long flags, 239 - int (*restrict_link)(struct key *, 240 - const struct key_type *, 241 - const union key_payload *)); 228 + struct key_restriction *restrict_link); 242 229 243 230 244 231 #define KEY_ALLOC_IN_QUOTA 0x0000 /* add to quota, reject if would overrun */ ··· 251 242 252 243 static inline struct key *__key_get(struct key *key) 253 244 { 254 - atomic_inc(&key->usage); 245 + refcount_inc(&key->usage); 255 246 return key; 256 247 } 257 248 ··· 312 303 const struct cred *cred, 313 304 key_perm_t perm, 314 305 unsigned long flags, 315 - int (*restrict_link)(struct key *, 316 - const struct key_type *, 317 - const union key_payload *), 306 + struct key_restriction *restrict_link, 318 307 struct key *dest); 319 308 320 309 extern int restrict_link_reject(struct key *keyring, 321 310 const struct key_type *type, 322 - const union key_payload *payload); 311 + const union key_payload *payload, 312 + struct key *restriction_key); 323 313 324 314 extern int keyring_clear(struct key *keyring); 325 315 ··· 328 320 329 321 extern int keyring_add_key(struct key *keyring, 330 322 struct key *key); 323 + 324 + extern int keyring_restrict(key_ref_t keyring, const char *type, 325 + const char *restriction); 331 326 332 327 extern struct key *key_lookup(key_serial_t id); 333 328
+30 -4
include/linux/lsm_hooks.h
··· 533 533 * manual page for definitions of the @clone_flags. 534 534 * @clone_flags contains the flags indicating what should be shared. 535 535 * Return 0 if permission is granted. 536 + * @task_alloc: 537 + * @task task being allocated. 538 + * @clone_flags contains the flags indicating what should be shared. 539 + * Handle allocation of task-related resources. 540 + * Returns a zero on success, negative values on failure. 536 541 * @task_free: 537 - * @task task being freed 542 + * @task task about to be freed. 538 543 * Handle release of task-related resources. (Note that this can be called 539 544 * from interrupt context.) 540 545 * @cred_alloc_blank: ··· 635 630 * Check permission before getting the ioprio value of @p. 636 631 * @p contains the task_struct of process. 637 632 * Return 0 if permission is granted. 633 + * @task_prlimit: 634 + * Check permission before getting and/or setting the resource limits of 635 + * another task. 636 + * @cred points to the cred structure for the current task. 637 + * @tcred points to the cred structure for the target task. 638 + * @flags contains the LSM_PRLIMIT_* flag bits indicating whether the 639 + * resource limits are being read, modified, or both. 640 + * Return 0 if permission is granted. 638 641 * @task_setrlimit: 639 - * Check permission before setting the resource limits of the current 640 - * process for @resource to @new_rlim. The old resource limit values can 641 - * be examined by dereferencing (current->signal->rlim + resource). 642 + * Check permission before setting the resource limits of process @p 643 + * for @resource to @new_rlim. The old resource limit values can 644 + * be examined by dereferencing (p->signal->rlim + resource). 645 + * @p points to the task_struct for the target task's group leader. 642 646 * @resource contains the resource whose limit is being set. 643 647 * @new_rlim contains the new limits for @resource. 644 648 * Return 0 if permission is granted. 
··· 1487 1473 int (*file_open)(struct file *file, const struct cred *cred); 1488 1474 1489 1475 int (*task_create)(unsigned long clone_flags); 1476 + int (*task_alloc)(struct task_struct *task, unsigned long clone_flags); 1490 1477 void (*task_free)(struct task_struct *task); 1491 1478 int (*cred_alloc_blank)(struct cred *cred, gfp_t gfp); 1492 1479 void (*cred_free)(struct cred *cred); ··· 1509 1494 int (*task_setnice)(struct task_struct *p, int nice); 1510 1495 int (*task_setioprio)(struct task_struct *p, int ioprio); 1511 1496 int (*task_getioprio)(struct task_struct *p); 1497 + int (*task_prlimit)(const struct cred *cred, const struct cred *tcred, 1498 + unsigned int flags); 1512 1499 int (*task_setrlimit)(struct task_struct *p, unsigned int resource, 1513 1500 struct rlimit *new_rlim); 1514 1501 int (*task_setscheduler)(struct task_struct *p); ··· 1754 1737 struct list_head file_receive; 1755 1738 struct list_head file_open; 1756 1739 struct list_head task_create; 1740 + struct list_head task_alloc; 1757 1741 struct list_head task_free; 1758 1742 struct list_head cred_alloc_blank; 1759 1743 struct list_head cred_free; ··· 1773 1755 struct list_head task_setnice; 1774 1756 struct list_head task_setioprio; 1775 1757 struct list_head task_getioprio; 1758 + struct list_head task_prlimit; 1776 1759 struct list_head task_setrlimit; 1777 1760 struct list_head task_setscheduler; 1778 1761 struct list_head task_getscheduler; ··· 1926 1907 list_del_rcu(&hooks[i].list); 1927 1908 } 1928 1909 #endif /* CONFIG_SECURITY_SELINUX_DISABLE */ 1910 + 1911 + /* Currently required to handle SELinux runtime hook disable. */ 1912 + #ifdef CONFIG_SECURITY_WRITABLE_HOOKS 1913 + #define __lsm_ro_after_init 1914 + #else 1915 + #define __lsm_ro_after_init __ro_after_init 1916 + #endif /* CONFIG_SECURITY_WRITABLE_HOOKS */ 1929 1917 1930 1918 extern int __init security_module_enable(const char *module); 1931 1919 extern void __init capability_add_hooks(void);
+4
include/linux/sched.h
··· 1047 1047 #ifdef CONFIG_LIVEPATCH 1048 1048 int patch_state; 1049 1049 #endif 1050 + #ifdef CONFIG_SECURITY 1051 + /* Used by LSM modules for access restriction: */ 1052 + void *security; 1053 + #endif 1050 1054 /* CPU-specific state of this task: */ 1051 1055 struct thread_struct thread; 1052 1056
+20
include/linux/security.h
··· 133 133 /* setfsuid or setfsgid, id0 == fsuid or fsgid */ 134 134 #define LSM_SETID_FS 8 135 135 136 + /* Flags for security_task_prlimit(). */ 137 + #define LSM_PRLIMIT_READ 1 138 + #define LSM_PRLIMIT_WRITE 2 139 + 136 140 /* forward declares to avoid warnings */ 137 141 struct sched_param; 138 142 struct request_sock; ··· 308 304 int security_file_receive(struct file *file); 309 305 int security_file_open(struct file *file, const struct cred *cred); 310 306 int security_task_create(unsigned long clone_flags); 307 + int security_task_alloc(struct task_struct *task, unsigned long clone_flags); 311 308 void security_task_free(struct task_struct *task); 312 309 int security_cred_alloc_blank(struct cred *cred, gfp_t gfp); 313 310 void security_cred_free(struct cred *cred); ··· 329 324 int security_task_setnice(struct task_struct *p, int nice); 330 325 int security_task_setioprio(struct task_struct *p, int ioprio); 331 326 int security_task_getioprio(struct task_struct *p); 327 + int security_task_prlimit(const struct cred *cred, const struct cred *tcred, 328 + unsigned int flags); 332 329 int security_task_setrlimit(struct task_struct *p, unsigned int resource, 333 330 struct rlimit *new_rlim); 334 331 int security_task_setscheduler(struct task_struct *p); ··· 862 855 return 0; 863 856 } 864 857 858 + static inline int security_task_alloc(struct task_struct *task, 859 + unsigned long clone_flags) 860 + { 861 + return 0; 862 + } 863 + 865 864 static inline void security_task_free(struct task_struct *task) 866 865 { } 867 866 ··· 958 945 } 959 946 960 947 static inline int security_task_getioprio(struct task_struct *p) 948 + { 949 + return 0; 950 + } 951 + 952 + static inline int security_task_prlimit(const struct cred *cred, 953 + const struct cred *tcred, 954 + unsigned int flags) 961 955 { 962 956 return 0; 963 957 }
+2 -1
include/linux/tpm.h
··· 48 48 u8 (*status) (struct tpm_chip *chip); 49 49 bool (*update_timeouts)(struct tpm_chip *chip, 50 50 unsigned long *timeout_cap); 51 - 51 + int (*request_locality)(struct tpm_chip *chip, int loc); 52 + void (*relinquish_locality)(struct tpm_chip *chip, int loc); 52 53 }; 53 54 54 55 #if defined(CONFIG_TCG_TPM) || defined(CONFIG_TCG_TPM_MODULE)
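The new `request_locality`/`relinquish_locality` callbacks in tpm.h are optional: when a driver supplies them, the core can bracket each command with a locality claim; drivers that leave them NULL behave as before. A hedged userspace mock of that dispatch shape (everything here is illustrative, not the TPM core's real transmit path):

```c
#include <assert.h>
#include <stddef.h>

struct mock_chip;

struct mock_ops {
    int  (*request_locality)(struct mock_chip *chip, int loc);
    void (*relinquish_locality)(struct mock_chip *chip, int loc);
};

struct mock_chip {
    const struct mock_ops *ops;
    int locality;           /* -1 when no locality is held */
};

static int mock_request(struct mock_chip *c, int loc)
{
    c->locality = loc;
    return 0;
}

static void mock_relinquish(struct mock_chip *c, int loc)
{
    (void)loc;
    c->locality = -1;
}

/* Bracket a command with the optional locality callbacks. */
static int mock_transmit(struct mock_chip *chip, int loc)
{
    int rc = 0;

    if (chip->ops->request_locality)
        rc = chip->ops->request_locality(chip, loc);
    if (rc)
        return rc;
    /* ... command would be sent to the device here ... */
    if (chip->ops->relinquish_locality)
        chip->ops->relinquish_locality(chip, loc);
    return 0;
}
```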
+8
include/uapi/linux/keyctl.h
··· 60 60 #define KEYCTL_INVALIDATE 21 /* invalidate a key */ 61 61 #define KEYCTL_GET_PERSISTENT 22 /* get a user's persistent keyring */ 62 62 #define KEYCTL_DH_COMPUTE 23 /* Compute Diffie-Hellman values */ 63 + #define KEYCTL_RESTRICT_KEYRING 29 /* Restrict keys allowed to link to a keyring */ 63 64 64 65 /* keyctl structures */ 65 66 struct keyctl_dh_params { 66 67 __s32 private; 67 68 __s32 prime; 68 69 __s32 base; 70 + }; 71 + 72 + struct keyctl_kdf_params { 73 + char *hashname; 74 + char *otherinfo; 75 + __u32 otherinfolen; 76 + __u32 __spare[8]; 69 77 }; 70 78 71 79 #endif /* _LINUX_KEYCTL_H */
+6 -1
kernel/fork.c
··· 1681 1681 goto bad_fork_cleanup_perf; 1682 1682 /* copy all the process information */ 1683 1683 shm_init_task(p); 1684 - retval = copy_semundo(clone_flags, p); 1684 + retval = security_task_alloc(p, clone_flags); 1685 1685 if (retval) 1686 1686 goto bad_fork_cleanup_audit; 1687 + retval = copy_semundo(clone_flags, p); 1688 + if (retval) 1689 + goto bad_fork_cleanup_security; 1687 1690 retval = copy_files(clone_flags, p); 1688 1691 if (retval) 1689 1692 goto bad_fork_cleanup_semundo; ··· 1910 1907 exit_files(p); /* blocking */ 1911 1908 bad_fork_cleanup_semundo: 1912 1909 exit_sem(p); 1910 + bad_fork_cleanup_security: 1911 + security_task_free(p); 1913 1912 bad_fork_cleanup_audit: 1914 1913 audit_free(p); 1915 1914 bad_fork_cleanup_perf:
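The fork.c hunk slots `security_task_alloc()` before `copy_semundo()` and adds a matching `bad_fork_cleanup_security` unwind label. The invariant being preserved is that each allocation's cleanup label sits above the labels of everything allocated before it, so a failure frees only what already succeeded, in reverse order. A compressed sketch of that goto-unwind discipline (stage names are illustrative):

```c
#include <assert.h>

static int freed_security, freed_audit;

/* fail_at selects which stage reports an error (0 = none). */
static int mock_copy_process(int fail_at)
{
    /* stage 1: audit setup */
    if (fail_at == 1)
        goto bad_fork_cleanup_nothing;
    /* stage 2: security_task_alloc() */
    if (fail_at == 2)
        goto bad_fork_cleanup_audit;
    /* stage 3: copy_semundo() -- fails AFTER security succeeded,
     * so it must unwind through the security cleanup first */
    if (fail_at == 3)
        goto bad_fork_cleanup_security;
    return 0;

bad_fork_cleanup_security:
    freed_security = 1;         /* security_task_free() */
bad_fork_cleanup_audit:
    freed_audit = 1;            /* audit_free() */
bad_fork_cleanup_nothing:
    return -1;
}
```

Falling through from one label to the next is exactly how copy_process() releases resources in reverse acquisition order.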
+18 -12
kernel/sys.c
··· 1432 1432 } 1433 1433 1434 1434 /* rcu lock must be held */ 1435 - static int check_prlimit_permission(struct task_struct *task) 1435 + static int check_prlimit_permission(struct task_struct *task, 1436 + unsigned int flags) 1436 1437 { 1437 1438 const struct cred *cred = current_cred(), *tcred; 1439 + bool id_match; 1438 1440 1439 1441 if (current == task) 1440 1442 return 0; 1441 1443 1442 1444 tcred = __task_cred(task); 1443 - if (uid_eq(cred->uid, tcred->euid) && 1444 - uid_eq(cred->uid, tcred->suid) && 1445 - uid_eq(cred->uid, tcred->uid) && 1446 - gid_eq(cred->gid, tcred->egid) && 1447 - gid_eq(cred->gid, tcred->sgid) && 1448 - gid_eq(cred->gid, tcred->gid)) 1449 - return 0; 1450 - if (ns_capable(tcred->user_ns, CAP_SYS_RESOURCE)) 1451 - return 0; 1445 + id_match = (uid_eq(cred->uid, tcred->euid) && 1446 + uid_eq(cred->uid, tcred->suid) && 1447 + uid_eq(cred->uid, tcred->uid) && 1448 + gid_eq(cred->gid, tcred->egid) && 1449 + gid_eq(cred->gid, tcred->sgid) && 1450 + gid_eq(cred->gid, tcred->gid)); 1451 + if (!id_match && !ns_capable(tcred->user_ns, CAP_SYS_RESOURCE)) 1452 + return -EPERM; 1452 1453 1453 - return -EPERM; 1454 + return security_task_prlimit(cred, tcred, flags); 1454 1455 } 1455 1456 1456 1457 SYSCALL_DEFINE4(prlimit64, pid_t, pid, unsigned int, resource, ··· 1461 1460 struct rlimit64 old64, new64; 1462 1461 struct rlimit old, new; 1463 1462 struct task_struct *tsk; 1463 + unsigned int checkflags = 0; 1464 1464 int ret; 1465 + 1466 + if (old_rlim) 1467 + checkflags |= LSM_PRLIMIT_READ; 1465 1468 1466 1469 if (new_rlim) { 1467 1470 if (copy_from_user(&new64, new_rlim, sizeof(new64))) 1468 1471 return -EFAULT; 1469 1472 rlim64_to_rlim(&new64, &new); 1473 + checkflags |= LSM_PRLIMIT_WRITE; 1470 1474 } 1471 1475 1472 1476 rcu_read_lock(); ··· 1480 1474 rcu_read_unlock(); 1481 1475 return -ESRCH; 1482 1476 } 1483 - ret = check_prlimit_permission(tsk); 1477 + ret = check_prlimit_permission(tsk, checkflags); 1484 1478 if (ret) { 1485 1479 
rcu_read_unlock(); 1486 1480 return ret;
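The sys.c hunk derives the `LSM_PRLIMIT_*` flag bits from which user pointers the caller passed: a non-NULL `old_rlim` means the limits are being read, a non-NULL `new_rlim` means they are being written, and both may be set. The derivation in isolation:

```c
#include <assert.h>
#include <stddef.h>

#define LSM_PRLIMIT_READ  1
#define LSM_PRLIMIT_WRITE 2

/* Mirror of the checkflags computation in prlimit64(): read and
 * write access are checked independently by the security hook. */
static unsigned int prlimit_check_flags(const void *old_rlim,
                                        const void *new_rlim)
{
    unsigned int flags = 0;

    if (old_rlim)
        flags |= LSM_PRLIMIT_READ;
    if (new_rlim)
        flags |= LSM_PRLIMIT_WRITE;
    return flags;
}
```

The rewritten `check_prlimit_permission()` also means the DAC id-match/capability test no longer short-circuits past the LSM: even a caller that passes the uid/gid comparison still reaches `security_task_prlimit()` with these flags.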
+1
scripts/selinux/genheaders/genheaders.c
··· 8 8 #include <string.h> 9 9 #include <errno.h> 10 10 #include <ctype.h> 11 + #include <sys/socket.h> 11 12 12 13 struct security_class_mapping { 13 14 const char *name;
+1
scripts/selinux/mdp/mdp.c
··· 32 32 #include <stdlib.h> 33 33 #include <unistd.h> 34 34 #include <string.h> 35 + #include <sys/socket.h> 35 36 36 37 static void usage(char *name) 37 38 {
+5
security/Kconfig
··· 31 31 32 32 If you are unsure how to answer this question, answer N. 33 33 34 + config SECURITY_WRITABLE_HOOKS 35 + depends on SECURITY 36 + bool 37 + default n 38 + 34 39 config SECURITYFS 35 40 bool "Enable the securityfs filesystem" 36 41 help
+13 -19
security/apparmor/crypto.c
··· 31 31 32 32 char *aa_calc_hash(void *data, size_t len) 33 33 { 34 - struct { 35 - struct shash_desc shash; 36 - char ctx[crypto_shash_descsize(apparmor_tfm)]; 37 - } desc; 34 + SHASH_DESC_ON_STACK(desc, apparmor_tfm); 38 35 char *hash = NULL; 39 36 int error = -ENOMEM; 40 37 ··· 42 45 if (!hash) 43 46 goto fail; 44 47 45 - desc.shash.tfm = apparmor_tfm; 46 - desc.shash.flags = 0; 48 + desc->tfm = apparmor_tfm; 49 + desc->flags = 0; 47 50 48 - error = crypto_shash_init(&desc.shash); 51 + error = crypto_shash_init(desc); 49 52 if (error) 50 53 goto fail; 51 - error = crypto_shash_update(&desc.shash, (u8 *) data, len); 54 + error = crypto_shash_update(desc, (u8 *) data, len); 52 55 if (error) 53 56 goto fail; 54 - error = crypto_shash_final(&desc.shash, hash); 57 + error = crypto_shash_final(desc, hash); 55 58 if (error) 56 59 goto fail; 57 60 ··· 66 69 int aa_calc_profile_hash(struct aa_profile *profile, u32 version, void *start, 67 70 size_t len) 68 71 { 69 - struct { 70 - struct shash_desc shash; 71 - char ctx[crypto_shash_descsize(apparmor_tfm)]; 72 - } desc; 72 + SHASH_DESC_ON_STACK(desc, apparmor_tfm); 73 73 int error = -ENOMEM; 74 74 __le32 le32_version = cpu_to_le32(version); 75 75 ··· 80 86 if (!profile->hash) 81 87 goto fail; 82 88 83 - desc.shash.tfm = apparmor_tfm; 84 - desc.shash.flags = 0; 89 + desc->tfm = apparmor_tfm; 90 + desc->flags = 0; 85 91 86 - error = crypto_shash_init(&desc.shash); 92 + error = crypto_shash_init(desc); 87 93 if (error) 88 94 goto fail; 89 - error = crypto_shash_update(&desc.shash, (u8 *) &le32_version, 4); 95 + error = crypto_shash_update(desc, (u8 *) &le32_version, 4); 90 96 if (error) 91 97 goto fail; 92 - error = crypto_shash_update(&desc.shash, (u8 *) start, len); 98 + error = crypto_shash_update(desc, (u8 *) start, len); 93 99 if (error) 94 100 goto fail; 95 - error = crypto_shash_final(&desc.shash, profile->hash); 101 + error = crypto_shash_final(desc, profile->hash); 96 102 if (error) 97 103 goto fail; 98 104
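The apparmor/crypto.c conversion replaces an on-stack struct whose trailing `ctx[]` was sized by a runtime call (a variable-length array) with the `SHASH_DESC_ON_STACK` helper, which reserves a fixed worst-case buffer instead. A rough userspace mock of the idea — the sizes and names are made up, and the real kernel macro also handles alignment and maximum descriptor size differently:

```c
#include <assert.h>
#include <stddef.h>

#define MOCK_MAX_CTXSIZE 128    /* illustrative worst-case ctx size */

struct mock_shash_desc {
    void *tfm;
    unsigned int flags;
};

/* Fixed-size stack buffer replacing the old VLA-in-struct; the union
 * keeps the buffer aligned for the descriptor. */
#define MOCK_SHASH_DESC_ON_STACK(name)                                  \
    union {                                                             \
        struct mock_shash_desc desc;                                    \
        char buf[sizeof(struct mock_shash_desc) + MOCK_MAX_CTXSIZE];    \
    } __##name##_u;                                                     \
    struct mock_shash_desc *name = &__##name##_u.desc

static unsigned int mock_use_desc(void)
{
    MOCK_SHASH_DESC_ON_STACK(desc);

    desc->tfm = NULL;           /* callers then set tfm/flags as the */
    desc->flags = 0;            /* converted aa_calc_hash() does     */
    return desc->flags;
}
```

This is why the converted code switches from `desc.shash` member access to plain `desc->` pointer access.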
+1 -1
security/apparmor/include/lib.h
··· 57 57 pr_err_ratelimited("AppArmor: " fmt, ##args) 58 58 59 59 /* Flag indicating whether initialization completed */ 60 - extern int apparmor_initialized __initdata; 60 + extern int apparmor_initialized; 61 61 62 62 /* fn's in lib */ 63 63 char *aa_split_fqname(char *args, char **ns_name);
+2 -2
security/apparmor/lib.c
··· 180 180 } else 181 181 policy->hname = kstrdup(name, gfp); 182 182 if (!policy->hname) 183 - return 0; 183 + return false; 184 184 /* base.name is a substring of fqname */ 185 185 policy->name = basename(policy->hname); 186 186 INIT_LIST_HEAD(&policy->list); 187 187 INIT_LIST_HEAD(&policy->profiles); 188 188 189 - return 1; 189 + return true; 190 190 } 191 191 192 192 /**
+25 -28
security/apparmor/lsm.c
··· 39 39 #include "include/procattr.h" 40 40 41 41 /* Flag indicating whether initialization completed */ 42 - int apparmor_initialized __initdata; 42 + int apparmor_initialized; 43 43 44 44 DEFINE_PER_CPU(struct aa_buffers, aa_buffers); 45 45 ··· 587 587 return error; 588 588 } 589 589 590 - static struct security_hook_list apparmor_hooks[] = { 590 + static struct security_hook_list apparmor_hooks[] __lsm_ro_after_init = { 591 591 LSM_HOOK_INIT(ptrace_access_check, apparmor_ptrace_access_check), 592 592 LSM_HOOK_INIT(ptrace_traceme, apparmor_ptrace_traceme), 593 593 LSM_HOOK_INIT(capget, apparmor_capget), ··· 681 681 #endif 682 682 683 683 /* Debug mode */ 684 - bool aa_g_debug = IS_ENABLED(CONFIG_SECURITY_DEBUG_MESSAGES); 684 + bool aa_g_debug = IS_ENABLED(CONFIG_SECURITY_APPARMOR_DEBUG_MESSAGES); 685 685 module_param_named(debug, aa_g_debug, aabool, S_IRUSR | S_IWUSR); 686 686 687 687 /* Audit mode */ ··· 710 710 711 711 /* Maximum pathname length before accesses will start getting rejected */ 712 712 unsigned int aa_g_path_max = 2 * PATH_MAX; 713 - module_param_named(path_max, aa_g_path_max, aauint, S_IRUSR | S_IWUSR); 713 + module_param_named(path_max, aa_g_path_max, aauint, S_IRUSR); 714 714 715 715 /* Determines how paranoid loading of policy is and how much verification 716 716 * on the loaded policy is done. 
··· 738 738 /* set global flag turning off the ability to load policy */ 739 739 static int param_set_aalockpolicy(const char *val, const struct kernel_param *kp) 740 740 { 741 - if (!policy_admin_capable(NULL)) 741 + if (!apparmor_enabled) 742 + return -EINVAL; 743 + if (apparmor_initialized && !policy_admin_capable(NULL)) 742 744 return -EPERM; 743 745 return param_set_bool(val, kp); 744 746 } 745 747 746 748 static int param_get_aalockpolicy(char *buffer, const struct kernel_param *kp) 747 749 { 748 - if (!policy_view_capable(NULL)) 749 - return -EPERM; 750 750 if (!apparmor_enabled) 751 751 return -EINVAL; 752 + if (apparmor_initialized && !policy_view_capable(NULL)) 753 + return -EPERM; 752 754 return param_get_bool(buffer, kp); 753 755 } 754 756 755 757 static int param_set_aabool(const char *val, const struct kernel_param *kp) 756 758 { 757 - if (!policy_admin_capable(NULL)) 758 - return -EPERM; 759 759 if (!apparmor_enabled) 760 760 return -EINVAL; 761 + if (apparmor_initialized && !policy_admin_capable(NULL)) 762 + return -EPERM; 761 763 return param_set_bool(val, kp); 762 764 } 763 765 764 766 static int param_get_aabool(char *buffer, const struct kernel_param *kp) 765 767 { 766 - if (!policy_view_capable(NULL)) 767 - return -EPERM; 768 768 if (!apparmor_enabled) 769 769 return -EINVAL; 770 + if (apparmor_initialized && !policy_view_capable(NULL)) 771 + return -EPERM; 770 772 return param_get_bool(buffer, kp); 771 773 } 772 774 773 775 static int param_set_aauint(const char *val, const struct kernel_param *kp) 774 776 { 775 - if (!policy_admin_capable(NULL)) 776 - return -EPERM; 777 777 if (!apparmor_enabled) 778 778 return -EINVAL; 779 + if (apparmor_initialized && !policy_admin_capable(NULL)) 780 + return -EPERM; 779 781 return param_set_uint(val, kp); 780 782 } 781 783 782 784 static int param_get_aauint(char *buffer, const struct kernel_param *kp) 783 785 { 784 - if (!policy_view_capable(NULL)) 785 - return -EPERM; 786 786 if (!apparmor_enabled) 787 
787 return -EINVAL; 788 + if (apparmor_initialized && !policy_view_capable(NULL)) 789 + return -EPERM; 788 790 return param_get_uint(buffer, kp); 789 791 } 790 792 791 793 static int param_get_audit(char *buffer, struct kernel_param *kp) 792 794 { 793 - if (!policy_view_capable(NULL)) 794 - return -EPERM; 795 - 796 795 if (!apparmor_enabled) 797 796 return -EINVAL; 798 - 797 + if (apparmor_initialized && !policy_view_capable(NULL)) 798 + return -EPERM; 799 799 return sprintf(buffer, "%s", audit_mode_names[aa_g_audit]); 800 800 } 801 801 802 802 static int param_set_audit(const char *val, struct kernel_param *kp) 803 803 { 804 804 int i; 805 - if (!policy_admin_capable(NULL)) 806 - return -EPERM; 807 805 808 806 if (!apparmor_enabled) 809 807 return -EINVAL; 810 - 811 808 if (!val) 812 809 return -EINVAL; 810 + if (apparmor_initialized && !policy_admin_capable(NULL)) 811 + return -EPERM; 813 812 814 813 for (i = 0; i < AUDIT_MAX_INDEX; i++) { 815 814 if (strcmp(val, audit_mode_names[i]) == 0) { ··· 822 823 823 824 static int param_get_mode(char *buffer, struct kernel_param *kp) 824 825 { 825 - if (!policy_view_capable(NULL)) 826 - return -EPERM; 827 - 828 826 if (!apparmor_enabled) 829 827 return -EINVAL; 828 + if (apparmor_initialized && !policy_view_capable(NULL)) 829 + return -EPERM; 830 830 831 831 return sprintf(buffer, "%s", aa_profile_mode_names[aa_g_profile_mode]); 832 832 } ··· 833 835 static int param_set_mode(const char *val, struct kernel_param *kp) 834 836 { 835 837 int i; 836 - if (!policy_admin_capable(NULL)) 837 - return -EPERM; 838 838 839 839 if (!apparmor_enabled) 840 840 return -EINVAL; 841 - 842 841 if (!val) 843 842 return -EINVAL; 843 + if (apparmor_initialized && !policy_admin_capable(NULL)) 844 + return -EPERM; 844 845 845 846 for (i = 0; i < APPARMOR_MODE_NAMES_MAX_INDEX; i++) { 846 847 if (strcmp(val, aa_profile_mode_names[i]) == 0) {
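Every AppArmor parameter handler above is reordered to the same gate: a disabled LSM means the parameter is effectively absent (`-EINVAL`), and the capability check (`-EPERM`) applies only once initialization has completed, so boot-time parameter parsing is no longer rejected. The gate in isolation (error values mocked):

```c
#include <assert.h>

#define MOCK_EINVAL 22
#define MOCK_EPERM  1

/* Shared ordering of the param_{set,get}_aa*() checks after the fix:
 * enabled first, then capability only post-initialization. */
static int param_gate(int enabled, int initialized, int capable)
{
    if (!enabled)
        return -MOCK_EINVAL;
    if (initialized && !capable)
        return -MOCK_EPERM;
    return 0;   /* proceed to set/get the parameter */
}
```

The pre-fix ordering ran the capability check first, which failed during early boot before any credentials/policy were in place.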
+4 -2
security/apparmor/policy.c
··· 876 876 if (ns_name) { 877 877 ns = aa_prepare_ns(view, ns_name); 878 878 if (IS_ERR(ns)) { 879 + op = OP_PROF_LOAD; 879 880 info = "failed to prepare namespace"; 880 881 error = PTR_ERR(ns); 881 882 ns = NULL; 883 + ent = NULL; 882 884 goto fail; 883 885 } 884 886 } else ··· 1015 1013 /* audit cause of failure */ 1016 1014 op = (!ent->old) ? OP_PROF_LOAD : OP_PROF_REPL; 1017 1015 fail: 1018 - audit_policy(profile, op, ns_name, ent->new->base.hname, 1016 + audit_policy(profile, op, ns_name, ent ? ent->new->base.hname : NULL, 1019 1017 info, error); 1020 1018 /* audit status that rest of profiles in the atomic set failed too */ 1021 1019 info = "valid profile in failed atomic policy load"; ··· 1025 1023 /* skip entry that caused failure */ 1026 1024 continue; 1027 1025 } 1028 - op = (!ent->old) ? OP_PROF_LOAD : OP_PROF_REPL; 1026 + op = (!tmp->old) ? OP_PROF_LOAD : OP_PROF_REPL; 1029 1027 audit_policy(profile, op, ns_name, 1030 1028 tmp->new->base.hname, info, error); 1031 1029 }
+1 -1
security/commoncap.c
··· 1071 1071 1072 1072 #ifdef CONFIG_SECURITY 1073 1073 1074 - struct security_hook_list capability_hooks[] = { 1074 + struct security_hook_list capability_hooks[] __lsm_ro_after_init = { 1075 1075 LSM_HOOK_INIT(capable, cap_capable), 1076 1076 LSM_HOOK_INIT(settime, cap_settime), 1077 1077 LSM_HOOK_INIT(ptrace_access_check, cap_ptrace_access_check),
+8 -1
security/integrity/digsig.c
··· 81 81 int __init integrity_init_keyring(const unsigned int id) 82 82 { 83 83 const struct cred *cred = current_cred(); 84 + struct key_restriction *restriction; 84 85 int err = 0; 85 86 86 87 if (!init_keyring) 87 88 return 0; 89 + 90 + restriction = kzalloc(sizeof(struct key_restriction), GFP_KERNEL); 91 + if (!restriction) 92 + return -ENOMEM; 93 + 94 + restriction->check = restrict_link_to_ima; 88 95 89 96 keyring[id] = keyring_alloc(keyring_name[id], KUIDT_INIT(0), 90 97 KGIDT_INIT(0), cred, ··· 99 92 KEY_USR_VIEW | KEY_USR_READ | 100 93 KEY_USR_WRITE | KEY_USR_SEARCH), 101 94 KEY_ALLOC_NOT_IN_QUOTA, 102 - restrict_link_to_ima, NULL); 95 + restriction, NULL); 103 96 if (IS_ERR(keyring[id])) { 104 97 err = PTR_ERR(keyring[id]); 105 98 pr_info("Can't allocate %s keyring (%d)\n",
+3 -2
security/integrity/ima/ima_appraise.c
··· 207 207 208 208 cause = "missing-hash"; 209 209 status = INTEGRITY_NOLABEL; 210 - if (opened & FILE_CREATED) { 210 + if (opened & FILE_CREATED) 211 211 iint->flags |= IMA_NEW_FILE; 212 + if ((iint->flags & IMA_NEW_FILE) && 213 + !(iint->flags & IMA_DIGSIG_REQUIRED)) 212 214 status = INTEGRITY_PASS; 213 - } 214 215 goto out; 215 216 } 216 217
+10 -1
security/integrity/ima/ima_mok.c
··· 17 17 #include <linux/cred.h> 18 18 #include <linux/err.h> 19 19 #include <linux/init.h> 20 + #include <linux/slab.h> 20 21 #include <keys/system_keyring.h> 21 22 22 23 ··· 28 27 */ 29 28 __init int ima_mok_init(void) 30 29 { 30 + struct key_restriction *restriction; 31 + 31 32 pr_notice("Allocating IMA blacklist keyring.\n"); 33 + 34 + restriction = kzalloc(sizeof(struct key_restriction), GFP_KERNEL); 35 + if (!restriction) 36 + panic("Can't allocate IMA blacklist restriction."); 37 + 38 + restriction->check = restrict_link_by_builtin_trusted; 32 39 33 40 ima_blacklist_keyring = keyring_alloc(".ima_blacklist", 34 41 KUIDT_INIT(0), KGIDT_INIT(0), current_cred(), ··· 44 35 KEY_USR_VIEW | KEY_USR_READ | 45 36 KEY_USR_WRITE | KEY_USR_SEARCH, 46 37 KEY_ALLOC_NOT_IN_QUOTA, 47 - restrict_link_by_builtin_trusted, NULL); 38 + restriction, NULL); 48 39 49 40 if (IS_ERR(ima_blacklist_keyring)) 50 41 panic("Can't allocate IMA blacklist keyring.");
+91 -32
security/integrity/ima/ima_policy.c
··· 64 64 u8 fsuuid[16]; 65 65 kuid_t uid; 66 66 kuid_t fowner; 67 + bool (*uid_op)(kuid_t, kuid_t); /* Handlers for operators */ 68 + bool (*fowner_op)(kuid_t, kuid_t); /* uid_eq(), uid_gt(), uid_lt() */ 67 69 int pcr; 68 70 struct { 69 71 void *rule; /* LSM file metadata specific */ ··· 85 83 * normal users can easily run the machine out of memory simply building 86 84 * and running executables. 87 85 */ 88 - static struct ima_rule_entry dont_measure_rules[] = { 86 + static struct ima_rule_entry dont_measure_rules[] __ro_after_init = { 89 87 {.action = DONT_MEASURE, .fsmagic = PROC_SUPER_MAGIC, .flags = IMA_FSMAGIC}, 90 88 {.action = DONT_MEASURE, .fsmagic = SYSFS_MAGIC, .flags = IMA_FSMAGIC}, 91 89 {.action = DONT_MEASURE, .fsmagic = DEBUGFS_MAGIC, .flags = IMA_FSMAGIC}, ··· 99 97 {.action = DONT_MEASURE, .fsmagic = NSFS_MAGIC, .flags = IMA_FSMAGIC} 100 98 }; 101 99 102 - static struct ima_rule_entry original_measurement_rules[] = { 100 + static struct ima_rule_entry original_measurement_rules[] __ro_after_init = { 103 101 {.action = MEASURE, .func = MMAP_CHECK, .mask = MAY_EXEC, 104 102 .flags = IMA_FUNC | IMA_MASK}, 105 103 {.action = MEASURE, .func = BPRM_CHECK, .mask = MAY_EXEC, 106 104 .flags = IMA_FUNC | IMA_MASK}, 107 105 {.action = MEASURE, .func = FILE_CHECK, .mask = MAY_READ, 108 - .uid = GLOBAL_ROOT_UID, .flags = IMA_FUNC | IMA_MASK | IMA_UID}, 106 + .uid = GLOBAL_ROOT_UID, .uid_op = &uid_eq, 107 + .flags = IMA_FUNC | IMA_MASK | IMA_UID}, 109 108 {.action = MEASURE, .func = MODULE_CHECK, .flags = IMA_FUNC}, 110 109 {.action = MEASURE, .func = FIRMWARE_CHECK, .flags = IMA_FUNC}, 111 110 }; 112 111 113 - static struct ima_rule_entry default_measurement_rules[] = { 112 + static struct ima_rule_entry default_measurement_rules[] __ro_after_init = { 114 113 {.action = MEASURE, .func = MMAP_CHECK, .mask = MAY_EXEC, 115 114 .flags = IMA_FUNC | IMA_MASK}, 116 115 {.action = MEASURE, .func = BPRM_CHECK, .mask = MAY_EXEC, 117 116 .flags = IMA_FUNC | IMA_MASK}, 
118 117 {.action = MEASURE, .func = FILE_CHECK, .mask = MAY_READ, 119 - .uid = GLOBAL_ROOT_UID, .flags = IMA_FUNC | IMA_INMASK | IMA_EUID}, 118 + .uid = GLOBAL_ROOT_UID, .uid_op = &uid_eq, 119 + .flags = IMA_FUNC | IMA_INMASK | IMA_EUID}, 120 120 {.action = MEASURE, .func = FILE_CHECK, .mask = MAY_READ, 121 - .uid = GLOBAL_ROOT_UID, .flags = IMA_FUNC | IMA_INMASK | IMA_UID}, 121 + .uid = GLOBAL_ROOT_UID, .uid_op = &uid_eq, 122 + .flags = IMA_FUNC | IMA_INMASK | IMA_UID}, 122 123 {.action = MEASURE, .func = MODULE_CHECK, .flags = IMA_FUNC}, 123 124 {.action = MEASURE, .func = FIRMWARE_CHECK, .flags = IMA_FUNC}, 124 125 {.action = MEASURE, .func = POLICY_CHECK, .flags = IMA_FUNC}, 125 126 }; 126 127 127 - static struct ima_rule_entry default_appraise_rules[] = { 128 + static struct ima_rule_entry default_appraise_rules[] __ro_after_init = { 128 129 {.action = DONT_APPRAISE, .fsmagic = PROC_SUPER_MAGIC, .flags = IMA_FSMAGIC}, 129 130 {.action = DONT_APPRAISE, .fsmagic = SYSFS_MAGIC, .flags = IMA_FSMAGIC}, 130 131 {.action = DONT_APPRAISE, .fsmagic = DEBUGFS_MAGIC, .flags = IMA_FSMAGIC}, ··· 144 139 .flags = IMA_FUNC | IMA_DIGSIG_REQUIRED}, 145 140 #endif 146 141 #ifndef CONFIG_IMA_APPRAISE_SIGNED_INIT 147 - {.action = APPRAISE, .fowner = GLOBAL_ROOT_UID, .flags = IMA_FOWNER}, 142 + {.action = APPRAISE, .fowner = GLOBAL_ROOT_UID, .fowner_op = &uid_eq, 143 + .flags = IMA_FOWNER}, 148 144 #else 149 145 /* force signature */ 150 - {.action = APPRAISE, .fowner = GLOBAL_ROOT_UID, 146 + {.action = APPRAISE, .fowner = GLOBAL_ROOT_UID, .fowner_op = &uid_eq, 151 147 .flags = IMA_FOWNER | IMA_DIGSIG_REQUIRED}, 152 148 #endif 153 149 }; ··· 246 240 if ((rule->flags & IMA_FSUUID) && 247 241 memcmp(rule->fsuuid, inode->i_sb->s_uuid, sizeof(rule->fsuuid))) 248 242 return false; 249 - if ((rule->flags & IMA_UID) && !uid_eq(rule->uid, cred->uid)) 243 + if ((rule->flags & IMA_UID) && !rule->uid_op(cred->uid, rule->uid)) 250 244 return false; 251 245 if (rule->flags & IMA_EUID) { 252 
246 if (has_capability_noaudit(current, CAP_SETUID)) { 253 - if (!uid_eq(rule->uid, cred->euid) 254 - && !uid_eq(rule->uid, cred->suid) 255 - && !uid_eq(rule->uid, cred->uid)) 247 + if (!rule->uid_op(cred->euid, rule->uid) 248 + && !rule->uid_op(cred->suid, rule->uid) 249 + && !rule->uid_op(cred->uid, rule->uid)) 256 250 return false; 257 - } else if (!uid_eq(rule->uid, cred->euid)) 251 + } else if (!rule->uid_op(cred->euid, rule->uid)) 258 252 return false; 259 253 } 260 254 261 - if ((rule->flags & IMA_FOWNER) && !uid_eq(rule->fowner, inode->i_uid)) 255 + if ((rule->flags & IMA_FOWNER) && 256 + !rule->fowner_op(inode->i_uid, rule->fowner)) 262 257 return false; 263 258 for (i = 0; i < MAX_LSM_RULES; i++) { 264 259 int rc = 0; ··· 493 486 Opt_obj_user, Opt_obj_role, Opt_obj_type, 494 487 Opt_subj_user, Opt_subj_role, Opt_subj_type, 495 488 Opt_func, Opt_mask, Opt_fsmagic, 496 - Opt_fsuuid, Opt_uid, Opt_euid, Opt_fowner, 489 + Opt_fsuuid, Opt_uid_eq, Opt_euid_eq, Opt_fowner_eq, 490 + Opt_uid_gt, Opt_euid_gt, Opt_fowner_gt, 491 + Opt_uid_lt, Opt_euid_lt, Opt_fowner_lt, 497 492 Opt_appraise_type, Opt_permit_directio, 498 493 Opt_pcr 499 494 }; ··· 516 507 {Opt_mask, "mask=%s"}, 517 508 {Opt_fsmagic, "fsmagic=%s"}, 518 509 {Opt_fsuuid, "fsuuid=%s"}, 519 - {Opt_uid, "uid=%s"}, 520 - {Opt_euid, "euid=%s"}, 521 - {Opt_fowner, "fowner=%s"}, 510 + {Opt_uid_eq, "uid=%s"}, 511 + {Opt_euid_eq, "euid=%s"}, 512 + {Opt_fowner_eq, "fowner=%s"}, 513 + {Opt_uid_gt, "uid>%s"}, 514 + {Opt_euid_gt, "euid>%s"}, 515 + {Opt_fowner_gt, "fowner>%s"}, 516 + {Opt_uid_lt, "uid<%s"}, 517 + {Opt_euid_lt, "euid<%s"}, 518 + {Opt_fowner_lt, "fowner<%s"}, 522 519 {Opt_appraise_type, "appraise_type=%s"}, 523 520 {Opt_permit_directio, "permit_directio"}, 524 521 {Opt_pcr, "pcr=%s"}, ··· 556 541 return result; 557 542 } 558 543 559 - static void ima_log_string(struct audit_buffer *ab, char *key, char *value) 544 + static void ima_log_string_op(struct audit_buffer *ab, char *key, char *value, 545 + 
bool (*rule_operator)(kuid_t, kuid_t)) 560 546 { 561 - audit_log_format(ab, "%s=", key); 547 + if (rule_operator == &uid_gt) 548 + audit_log_format(ab, "%s>", key); 549 + else if (rule_operator == &uid_lt) 550 + audit_log_format(ab, "%s<", key); 551 + else 552 + audit_log_format(ab, "%s=", key); 562 553 audit_log_untrustedstring(ab, value); 563 554 audit_log_format(ab, " "); 555 + } 556 + static void ima_log_string(struct audit_buffer *ab, char *key, char *value) 557 + { 558 + ima_log_string_op(ab, key, value, NULL); 564 559 } 565 560 566 561 static int ima_parse_rule(char *rule, struct ima_rule_entry *entry) ··· 578 553 struct audit_buffer *ab; 579 554 char *from; 580 555 char *p; 556 + bool uid_token; 581 557 int result = 0; 582 558 583 559 ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_INTEGRITY_RULE); 584 560 585 561 entry->uid = INVALID_UID; 586 562 entry->fowner = INVALID_UID; 563 + entry->uid_op = &uid_eq; 564 + entry->fowner_op = &uid_eq; 587 565 entry->action = UNKNOWN; 588 566 while ((p = strsep(&rule, " \t")) != NULL) { 589 567 substring_t args[MAX_OPT_ARGS]; ··· 722 694 if (!result) 723 695 entry->flags |= IMA_FSUUID; 724 696 break; 725 - case Opt_uid: 726 - ima_log_string(ab, "uid", args[0].from); 727 - case Opt_euid: 728 - if (token == Opt_euid) 729 - ima_log_string(ab, "euid", args[0].from); 697 + case Opt_uid_gt: 698 + case Opt_euid_gt: 699 + entry->uid_op = &uid_gt; 700 + case Opt_uid_lt: 701 + case Opt_euid_lt: 702 + if ((token == Opt_uid_lt) || (token == Opt_euid_lt)) 703 + entry->uid_op = &uid_lt; 704 + case Opt_uid_eq: 705 + case Opt_euid_eq: 706 + uid_token = (token == Opt_uid_eq) || 707 + (token == Opt_uid_gt) || 708 + (token == Opt_uid_lt); 709 + 710 + ima_log_string_op(ab, uid_token ? 
"uid" : "euid", 711 + args[0].from, entry->uid_op); 730 712 731 713 if (uid_valid(entry->uid)) { 732 714 result = -EINVAL; ··· 751 713 (uid_t)lnum != lnum) 752 714 result = -EINVAL; 753 715 else 754 - entry->flags |= (token == Opt_uid) 716 + entry->flags |= uid_token 755 717 ? IMA_UID : IMA_EUID; 756 718 } 757 719 break; 758 - case Opt_fowner: 759 - ima_log_string(ab, "fowner", args[0].from); 720 + case Opt_fowner_gt: 721 + entry->fowner_op = &uid_gt; 722 + case Opt_fowner_lt: 723 + if (token == Opt_fowner_lt) 724 + entry->fowner_op = &uid_lt; 725 + case Opt_fowner_eq: 726 + ima_log_string_op(ab, "fowner", args[0].from, 727 + entry->fowner_op); 760 728 761 729 if (uid_valid(entry->fowner)) { 762 730 result = -EINVAL; ··· 1093 1049 1094 1050 if (entry->flags & IMA_UID) { 1095 1051 snprintf(tbuf, sizeof(tbuf), "%d", __kuid_val(entry->uid)); 1096 - seq_printf(m, pt(Opt_uid), tbuf); 1052 + if (entry->uid_op == &uid_gt) 1053 + seq_printf(m, pt(Opt_uid_gt), tbuf); 1054 + else if (entry->uid_op == &uid_lt) 1055 + seq_printf(m, pt(Opt_uid_lt), tbuf); 1056 + else 1057 + seq_printf(m, pt(Opt_uid_eq), tbuf); 1097 1058 seq_puts(m, " "); 1098 1059 } 1099 1060 1100 1061 if (entry->flags & IMA_EUID) { 1101 1062 snprintf(tbuf, sizeof(tbuf), "%d", __kuid_val(entry->uid)); 1102 - seq_printf(m, pt(Opt_euid), tbuf); 1063 + if (entry->uid_op == &uid_gt) 1064 + seq_printf(m, pt(Opt_euid_gt), tbuf); 1065 + else if (entry->uid_op == &uid_lt) 1066 + seq_printf(m, pt(Opt_euid_lt), tbuf); 1067 + else 1068 + seq_printf(m, pt(Opt_euid_eq), tbuf); 1103 1069 seq_puts(m, " "); 1104 1070 } 1105 1071 1106 1072 if (entry->flags & IMA_FOWNER) { 1107 1073 snprintf(tbuf, sizeof(tbuf), "%d", __kuid_val(entry->fowner)); 1108 - seq_printf(m, pt(Opt_fowner), tbuf); 1074 + if (entry->fowner_op == &uid_gt) 1075 + seq_printf(m, pt(Opt_fowner_gt), tbuf); 1076 + else if (entry->fowner_op == &uid_lt) 1077 + seq_printf(m, pt(Opt_fowner_lt), tbuf); 1078 + else 1079 + seq_printf(m, pt(Opt_fowner_eq), tbuf); 1109 
1080 seq_puts(m, " "); 1110 1081 } 1111 1082
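The operator plumbing above extends IMA policy rules beyond exact matches: fowner, uid and euid conditions may now use ">" and "<" as well as "=". A hypothetical pair of rules as written to the securityfs policy file (the func values are only illustrative):

```
measure func=FILE_CHECK fowner>999
audit func=BPRM_CHECK euid<1000
```

Each rule goes on its own line; ima_parse_rule() maps ">" to uid_gt and "<" to uid_lt through the entry->uid_op and entry->fowner_op function pointers introduced in this hunk, and ima_log_string_op() reproduces the operator in the audit record.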
+2
security/keys/Kconfig
··· 90 90 bool "Diffie-Hellman operations on retained keys" 91 91 depends on KEYS 92 92 select MPILIB 93 + select CRYPTO 94 + select CRYPTO_HASH 93 95 help 94 96 This option provides support for calculating Diffie-Hellman 95 97 public keys and shared secrets using values stored as keys
+2 -1
security/keys/Makefile
··· 15 15 request_key.o \ 16 16 request_key_auth.o \ 17 17 user_defined.o 18 - obj-$(CONFIG_KEYS_COMPAT) += compat.o 18 + compat-obj-$(CONFIG_KEY_DH_OPERATIONS) += compat_dh.o 19 + obj-$(CONFIG_KEYS_COMPAT) += compat.o $(compat-obj-y) 19 20 obj-$(CONFIG_PROC_FS) += proc.o 20 21 obj-$(CONFIG_SYSCTL) += sysctl.o 21 22 obj-$(CONFIG_PERSISTENT_KEYRINGS) += persistent.o
+7 -2
security/keys/compat.c
··· 133 133 return keyctl_get_persistent(arg2, arg3); 134 134 135 135 case KEYCTL_DH_COMPUTE: 136 - return keyctl_dh_compute(compat_ptr(arg2), compat_ptr(arg3), 137 - arg4, compat_ptr(arg5)); 136 + return compat_keyctl_dh_compute(compat_ptr(arg2), 137 + compat_ptr(arg3), 138 + arg4, compat_ptr(arg5)); 139 + 140 + case KEYCTL_RESTRICT_KEYRING: 141 + return keyctl_restrict_keyring(arg2, compat_ptr(arg3), 142 + compat_ptr(arg4)); 138 143 139 144 default: 140 145 return -EOPNOTSUPP;
+38
security/keys/compat_dh.c
··· 1 + /* 32-bit compatibility syscall for 64-bit systems for DH operations 2 + * 3 + * Copyright (C) 2016 Stephan Mueller <smueller@chronox.de> 4 + * 5 + * This program is free software; you can redistribute it and/or 6 + * modify it under the terms of the GNU General Public License 7 + * as published by the Free Software Foundation; either version 8 + * 2 of the License, or (at your option) any later version. 9 + */ 10 + 11 + #include <linux/uaccess.h> 12 + 13 + #include "internal.h" 14 + 15 + /* 16 + * Perform the DH computation or DH based key derivation. 17 + * 18 + * If successful, 0 will be returned. 19 + */ 20 + long compat_keyctl_dh_compute(struct keyctl_dh_params __user *params, 21 + char __user *buffer, size_t buflen, 22 + struct compat_keyctl_kdf_params __user *kdf) 23 + { 24 + struct keyctl_kdf_params kdfcopy; 25 + struct compat_keyctl_kdf_params compat_kdfcopy; 26 + 27 + if (!kdf) 28 + return __keyctl_dh_compute(params, buffer, buflen, NULL); 29 + 30 + if (copy_from_user(&compat_kdfcopy, kdf, sizeof(compat_kdfcopy)) != 0) 31 + return -EFAULT; 32 + 33 + kdfcopy.hashname = compat_ptr(compat_kdfcopy.hashname); 34 + kdfcopy.otherinfo = compat_ptr(compat_kdfcopy.otherinfo); 35 + kdfcopy.otherinfolen = compat_kdfcopy.otherinfolen; 36 + 37 + return __keyctl_dh_compute(params, buffer, buflen, &kdfcopy); 38 + }
+208 -12
security/keys/dh.c
··· 11 11 #include <linux/mpi.h> 12 12 #include <linux/slab.h> 13 13 #include <linux/uaccess.h> 14 + #include <linux/crypto.h> 15 + #include <crypto/hash.h> 14 16 #include <keys/user-type.h> 15 17 #include "internal.h" 16 18 ··· 79 77 return ret; 80 78 } 81 79 82 - long keyctl_dh_compute(struct keyctl_dh_params __user *params, 83 - char __user *buffer, size_t buflen, 84 - void __user *reserved) 80 + struct kdf_sdesc { 81 + struct shash_desc shash; 82 + char ctx[]; 83 + }; 84 + 85 + static int kdf_alloc(struct kdf_sdesc **sdesc_ret, char *hashname) 86 + { 87 + struct crypto_shash *tfm; 88 + struct kdf_sdesc *sdesc; 89 + int size; 90 + 91 + /* allocate synchronous hash */ 92 + tfm = crypto_alloc_shash(hashname, 0, 0); 93 + if (IS_ERR(tfm)) { 94 + pr_info("could not allocate digest TFM handle %s\n", hashname); 95 + return PTR_ERR(tfm); 96 + } 97 + 98 + size = sizeof(struct shash_desc) + crypto_shash_descsize(tfm); 99 + sdesc = kmalloc(size, GFP_KERNEL); 100 + if (!sdesc) 101 + return -ENOMEM; 102 + sdesc->shash.tfm = tfm; 103 + sdesc->shash.flags = 0x0; 104 + 105 + *sdesc_ret = sdesc; 106 + 107 + return 0; 108 + } 109 + 110 + static void kdf_dealloc(struct kdf_sdesc *sdesc) 111 + { 112 + if (!sdesc) 113 + return; 114 + 115 + if (sdesc->shash.tfm) 116 + crypto_free_shash(sdesc->shash.tfm); 117 + 118 + kzfree(sdesc); 119 + } 120 + 121 + /* convert 32 bit integer into its string representation */ 122 + static inline void crypto_kw_cpu_to_be32(u32 val, u8 *buf) 123 + { 124 + __be32 *a = (__be32 *)buf; 125 + 126 + *a = cpu_to_be32(val); 127 + } 128 + 129 + /* 130 + * Implementation of the KDF in counter mode according to SP800-108 section 5.1 131 + * as well as SP800-56A section 5.8.1 (Single-step KDF). 132 + * 133 + * SP800-56A: 134 + * The src pointer is defined as Z || other info where Z is the shared secret 135 + * from DH and other info is an arbitrary string (see SP800-56A section 136 + * 5.8.1.2). 
137 + */ 138 + static int kdf_ctr(struct kdf_sdesc *sdesc, const u8 *src, unsigned int slen, 139 + u8 *dst, unsigned int dlen) 140 + { 141 + struct shash_desc *desc = &sdesc->shash; 142 + unsigned int h = crypto_shash_digestsize(desc->tfm); 143 + int err = 0; 144 + u8 *dst_orig = dst; 145 + u32 i = 1; 146 + u8 iteration[sizeof(u32)]; 147 + 148 + while (dlen) { 149 + err = crypto_shash_init(desc); 150 + if (err) 151 + goto err; 152 + 153 + crypto_kw_cpu_to_be32(i, iteration); 154 + err = crypto_shash_update(desc, iteration, sizeof(u32)); 155 + if (err) 156 + goto err; 157 + 158 + if (src && slen) { 159 + err = crypto_shash_update(desc, src, slen); 160 + if (err) 161 + goto err; 162 + } 163 + 164 + if (dlen < h) { 165 + u8 tmpbuffer[h]; 166 + 167 + err = crypto_shash_final(desc, tmpbuffer); 168 + if (err) 169 + goto err; 170 + memcpy(dst, tmpbuffer, dlen); 171 + memzero_explicit(tmpbuffer, h); 172 + return 0; 173 + } else { 174 + err = crypto_shash_final(desc, dst); 175 + if (err) 176 + goto err; 177 + 178 + dlen -= h; 179 + dst += h; 180 + i++; 181 + } 182 + } 183 + 184 + return 0; 185 + 186 + err: 187 + memzero_explicit(dst_orig, dlen); 188 + return err; 189 + } 190 + 191 + static int keyctl_dh_compute_kdf(struct kdf_sdesc *sdesc, 192 + char __user *buffer, size_t buflen, 193 + uint8_t *kbuf, size_t kbuflen) 194 + { 195 + uint8_t *outbuf = NULL; 196 + int ret; 197 + 198 + outbuf = kmalloc(buflen, GFP_KERNEL); 199 + if (!outbuf) { 200 + ret = -ENOMEM; 201 + goto err; 202 + } 203 + 204 + ret = kdf_ctr(sdesc, kbuf, kbuflen, outbuf, buflen); 205 + if (ret) 206 + goto err; 207 + 208 + ret = buflen; 209 + if (copy_to_user(buffer, outbuf, buflen) != 0) 210 + ret = -EFAULT; 211 + 212 + err: 213 + kzfree(outbuf); 214 + return ret; 215 + } 216 + 217 + long __keyctl_dh_compute(struct keyctl_dh_params __user *params, 218 + char __user *buffer, size_t buflen, 219 + struct keyctl_kdf_params *kdfcopy) 85 220 { 86 221 long ret; 87 222 MPI base, private, prime, result; ··· 227 88 
uint8_t *kbuf; 228 89 ssize_t keylen; 229 90 size_t resultlen; 91 + struct kdf_sdesc *sdesc = NULL; 230 92 231 93 if (!params || (!buffer && buflen)) { 232 94 ret = -EINVAL; ··· 238 98 goto out; 239 99 } 240 100 241 - if (reserved) { 242 - ret = -EINVAL; 243 - goto out; 101 + if (kdfcopy) { 102 + char *hashname; 103 + 104 + if (buflen > KEYCTL_KDF_MAX_OUTPUT_LEN || 105 + kdfcopy->otherinfolen > KEYCTL_KDF_MAX_OI_LEN) { 106 + ret = -EMSGSIZE; 107 + goto out; 108 + } 109 + 110 + /* get KDF name string */ 111 + hashname = strndup_user(kdfcopy->hashname, CRYPTO_MAX_ALG_NAME); 112 + if (IS_ERR(hashname)) { 113 + ret = PTR_ERR(hashname); 114 + goto out; 115 + } 116 + 117 + /* allocate KDF from the kernel crypto API */ 118 + ret = kdf_alloc(&sdesc, hashname); 119 + kfree(hashname); 120 + if (ret) 121 + goto out; 244 122 } 245 123 246 - keylen = mpi_from_key(pcopy.prime, buflen, &prime); 124 + /* 125 + * If the caller requests postprocessing with a KDF, allow an 126 + * arbitrary output buffer size since the KDF ensures proper truncation. 127 + */ 128 + keylen = mpi_from_key(pcopy.prime, kdfcopy ? SIZE_MAX : buflen, &prime); 247 129 if (keylen < 0 || !prime) { 248 130 /* buflen == 0 may be used to query the required buffer size, 249 131 * which is the prime key length. ··· 295 133 goto error3; 296 134 } 297 135 298 - kbuf = kmalloc(resultlen, GFP_KERNEL); 136 + /* allocate space for DH shared secret and SP800-56A otherinfo */ 137 + kbuf = kmalloc(kdfcopy ? 
(resultlen + kdfcopy->otherinfolen) : resultlen, 138 + GFP_KERNEL); 299 139 if (!kbuf) { 300 140 ret = -ENOMEM; 301 141 goto error4; 142 + } 143 + 144 + /* 145 + * Concatenate SP800-56A otherinfo past DH shared secret -- the 146 + * input to the KDF is (DH shared secret || otherinfo) 147 + */ 148 + if (kdfcopy && kdfcopy->otherinfo && 149 + copy_from_user(kbuf + resultlen, kdfcopy->otherinfo, 150 + kdfcopy->otherinfolen) != 0) { 151 + ret = -EFAULT; 152 + goto error5; 302 153 } 303 154 304 155 ret = do_dh(result, base, private, prime); ··· 322 147 if (ret != 0) 323 148 goto error5; 324 149 325 - ret = nbytes; 326 - if (copy_to_user(buffer, kbuf, nbytes) != 0) 327 - ret = -EFAULT; 150 + if (kdfcopy) { 151 + ret = keyctl_dh_compute_kdf(sdesc, buffer, buflen, kbuf, 152 + resultlen + kdfcopy->otherinfolen); 153 + } else { 154 + ret = nbytes; 155 + if (copy_to_user(buffer, kbuf, nbytes) != 0) 156 + ret = -EFAULT; 157 + } 328 158 329 159 error5: 330 - kfree(kbuf); 160 + kzfree(kbuf); 331 161 error4: 332 162 mpi_free(result); 333 163 error3: ··· 342 162 error1: 343 163 mpi_free(prime); 344 164 out: 165 + kdf_dealloc(sdesc); 345 166 return ret; 167 + } 168 + 169 + long keyctl_dh_compute(struct keyctl_dh_params __user *params, 170 + char __user *buffer, size_t buflen, 171 + struct keyctl_kdf_params __user *kdf) 172 + { 173 + struct keyctl_kdf_params kdfcopy; 174 + 175 + if (!kdf) 176 + return __keyctl_dh_compute(params, buffer, buflen, NULL); 177 + 178 + if (copy_from_user(&kdfcopy, kdf, sizeof(kdfcopy)) != 0) 179 + return -EFAULT; 180 + 181 + return __keyctl_dh_compute(params, buffer, buflen, &kdfcopy); 346 182 }
+12 -1
security/keys/gc.c
··· 220 220 key = rb_entry(cursor, struct key, serial_node); 221 221 cursor = rb_next(cursor); 222 222 223 - if (atomic_read(&key->usage) == 0) 223 + if (refcount_read(&key->usage) == 0) 224 224 goto found_unreferenced_key; 225 225 226 226 if (unlikely(gc_state & KEY_GC_REAPING_DEAD_1)) { ··· 229 229 set_bit(KEY_FLAG_DEAD, &key->flags); 230 230 key->perm = 0; 231 231 goto skip_dead_key; 232 + } else if (key->type == &key_type_keyring && 233 + key->restrict_link) { 234 + goto found_restricted_keyring; 232 235 } 233 236 } 234 237 ··· 335 332 336 333 list_add_tail(&key->graveyard_link, &graveyard); 337 334 gc_state |= KEY_GC_REAP_AGAIN; 335 + goto maybe_resched; 336 + 337 + /* We found a restricted keyring and need to update the restriction if 338 + * it is associated with the dead key type. 339 + */ 340 + found_restricted_keyring: 341 + spin_unlock(&key_serial_lock); 342 + keyring_restriction_gc(key, key_gc_dead_keytype); 338 343 goto maybe_resched; 339 344 340 345 /* We found a keyring and we need to check the payload for links to
+29 -3
security/keys/internal.h
··· 17 17 #include <linux/key-type.h> 18 18 #include <linux/task_work.h> 19 19 #include <linux/keyctl.h> 20 + #include <linux/refcount.h> 21 + #include <linux/compat.h> 20 22 21 23 struct iovec; 22 24 ··· 55 53 struct rb_node node; 56 54 struct mutex cons_lock; /* construction initiation lock */ 57 55 spinlock_t lock; 58 - atomic_t usage; /* for accessing qnkeys & qnbytes */ 56 + refcount_t usage; /* for accessing qnkeys & qnbytes */ 59 57 atomic_t nkeys; /* number of keys */ 60 58 atomic_t nikeys; /* number of instantiated keys */ 61 59 kuid_t uid; ··· 169 167 extern struct work_struct key_gc_work; 170 168 extern unsigned key_gc_delay; 171 169 extern void keyring_gc(struct key *keyring, time_t limit); 170 + extern void keyring_restriction_gc(struct key *keyring, 171 + struct key_type *dead_type); 172 172 extern void key_schedule_gc(time_t gc_at); 173 173 extern void key_schedule_gc_links(void); 174 174 extern void key_gc_keytype(struct key_type *ktype); ··· 253 249 extern long keyctl_instantiate_key_common(key_serial_t, 254 250 struct iov_iter *, 255 251 key_serial_t); 252 + extern long keyctl_restrict_keyring(key_serial_t id, 253 + const char __user *_type, 254 + const char __user *_restriction); 256 255 #ifdef CONFIG_PERSISTENT_KEYRINGS 257 256 extern long keyctl_get_persistent(uid_t, key_serial_t); 258 257 extern unsigned persistent_keyring_expiry; ··· 268 261 269 262 #ifdef CONFIG_KEY_DH_OPERATIONS 270 263 extern long keyctl_dh_compute(struct keyctl_dh_params __user *, char __user *, 271 - size_t, void __user *); 264 + size_t, struct keyctl_kdf_params __user *); 265 + extern long __keyctl_dh_compute(struct keyctl_dh_params __user *, char __user *, 266 + size_t, struct keyctl_kdf_params *); 267 + #ifdef CONFIG_KEYS_COMPAT 268 + extern long compat_keyctl_dh_compute(struct keyctl_dh_params __user *params, 269 + char __user *buffer, size_t buflen, 270 + struct compat_keyctl_kdf_params __user *kdf); 271 + #endif 272 + #define KEYCTL_KDF_MAX_OUTPUT_LEN 1024 /* max 
length of KDF output */ 273 + #define KEYCTL_KDF_MAX_OI_LEN 64 /* max length of otherinfo */ 272 274 #else 273 275 static inline long keyctl_dh_compute(struct keyctl_dh_params __user *params, 274 276 char __user *buffer, size_t buflen, 275 - void __user *reserved) 277 + struct keyctl_kdf_params __user *kdf) 276 278 { 277 279 return -EOPNOTSUPP; 278 280 } 281 + 282 + #ifdef CONFIG_KEYS_COMPAT 283 + static inline long compat_keyctl_dh_compute( 284 + struct keyctl_dh_params __user *params, 285 + char __user *buffer, size_t buflen, 286 + struct keyctl_kdf_params __user *kdf) 287 + { 288 + return -EOPNOTSUPP; 289 + } 290 + #endif 279 291 #endif 280 292 281 293 /*
+31 -27
security/keys/key.c
··· 93 93 94 94 /* if we get here, then the user record still hadn't appeared on the 95 95 * second pass - so we use the candidate record */ 96 - atomic_set(&candidate->usage, 1); 96 + refcount_set(&candidate->usage, 1); 97 97 atomic_set(&candidate->nkeys, 0); 98 98 atomic_set(&candidate->nikeys, 0); 99 99 candidate->uid = uid; ··· 110 110 111 111 /* okay - we found a user record for this UID */ 112 112 found: 113 - atomic_inc(&user->usage); 113 + refcount_inc(&user->usage); 114 114 spin_unlock(&key_user_lock); 115 115 kfree(candidate); 116 116 out: ··· 122 122 */ 123 123 void key_user_put(struct key_user *user) 124 124 { 125 - if (atomic_dec_and_lock(&user->usage, &key_user_lock)) { 125 + if (refcount_dec_and_lock(&user->usage, &key_user_lock)) { 126 126 rb_erase(&user->node, &key_user_tree); 127 127 spin_unlock(&key_user_lock); 128 128 ··· 201 201 * @cred: The credentials specifying UID namespace. 202 202 * @perm: The permissions mask of the new key. 203 203 * @flags: Flags specifying quota properties. 204 - * @restrict_link: Optional link restriction method for new keyrings. 204 + * @restrict_link: Optional link restriction for new keyrings. 205 205 * 206 206 * Allocate a key of the specified type with the attributes given. The key is 207 207 * returned in an uninstantiated state and the caller needs to instantiate the 208 208 * key before returning. 209 + * 210 + * The restrict_link structure (if not NULL) will be freed when the 211 + * keyring is destroyed, so it must be dynamically allocated. 209 212 * 210 213 * The user's key count quota is updated to reflect the creation of the key and 211 214 * the user's key data quota has the default for the key type reserved. 
The ··· 228 225 struct key *key_alloc(struct key_type *type, const char *desc, 229 226 kuid_t uid, kgid_t gid, const struct cred *cred, 230 227 key_perm_t perm, unsigned long flags, 231 - int (*restrict_link)(struct key *, 232 - const struct key_type *, 233 - const union key_payload *)) 228 + struct key_restriction *restrict_link) 234 229 { 235 230 struct key_user *user = NULL; 236 231 struct key *key; ··· 286 285 if (!key->index_key.description) 287 286 goto no_memory_3; 288 287 289 - atomic_set(&key->usage, 1); 288 + refcount_set(&key->usage, 1); 290 289 init_rwsem(&key->sem); 291 290 lockdep_set_class(&key->sem, &type->lock_class); 292 291 key->index_key.type = type; ··· 500 499 } 501 500 502 501 if (keyring) { 503 - if (keyring->restrict_link) { 504 - ret = keyring->restrict_link(keyring, key->type, 505 - &prep.payload); 506 - if (ret < 0) 507 - goto error; 508 - } 509 502 ret = __key_link_begin(keyring, &key->index_key, &edit); 510 503 if (ret < 0) 511 504 goto error; 505 + 506 + if (keyring->restrict_link && keyring->restrict_link->check) { 507 + struct key_restriction *keyres = keyring->restrict_link; 508 + 509 + ret = keyres->check(keyring, key->type, &prep.payload, 510 + keyres->key); 511 + if (ret < 0) 512 + goto error_link_end; 513 + } 512 514 } 513 515 514 516 ret = __key_instantiate_and_link(key, &prep, keyring, authkey, &edit); 515 517 518 + error_link_end: 516 519 if (keyring) 517 520 __key_link_end(keyring, &key->index_key, edit); 518 521 ··· 626 621 if (key) { 627 622 key_check(key); 628 623 629 - if (atomic_dec_and_test(&key->usage)) 624 + if (refcount_dec_and_test(&key->usage)) 630 625 schedule_work(&key_gc_work); 631 626 } 632 627 } ··· 661 656 662 657 found: 663 658 /* pretend it doesn't exist if it is awaiting deletion */ 664 - if (atomic_read(&key->usage) == 0) 659 + if (refcount_read(&key->usage) == 0) 665 660 goto not_found; 666 661 667 662 /* this races with key_put(), but that doesn't matter since key_put() ··· 811 806 struct key 
*keyring, *key = NULL; 812 807 key_ref_t key_ref; 813 808 int ret; 814 - int (*restrict_link)(struct key *, 815 - const struct key_type *, 816 - const union key_payload *) = NULL; 809 + struct key_restriction *restrict_link = NULL; 817 810 818 811 /* look up the key type to see if it's one of the registered kernel 819 812 * types */ ··· 857 854 } 858 855 index_key.desc_len = strlen(index_key.description); 859 856 860 - if (restrict_link) { 861 - ret = restrict_link(keyring, index_key.type, &prep.payload); 862 - if (ret < 0) { 863 - key_ref = ERR_PTR(ret); 864 - goto error_free_prep; 865 - } 866 - } 867 - 868 857 ret = __key_link_begin(keyring, &index_key, &edit); 869 858 if (ret < 0) { 870 859 key_ref = ERR_PTR(ret); 871 860 goto error_free_prep; 861 + } 862 + 863 + if (restrict_link && restrict_link->check) { 864 + ret = restrict_link->check(keyring, index_key.type, 865 + &prep.payload, restrict_link->key); 866 + if (ret < 0) { 867 + key_ref = ERR_PTR(ret); 868 + goto error_link_end; 869 + } 872 870 } 873 871 874 872 /* if we're going to allocate a new key, we're going to have
+59 -1
security/keys/keyctl.c
··· 1585 1585 } 1586 1586 1587 1587 /* 1588 + * Apply a restriction to a given keyring. 1589 + * 1590 + * The caller must have Setattr permission to change keyring restrictions. 1591 + * 1592 + * The requested type name may be a NULL pointer to reject all attempts 1593 + * to link to the keyring. If _type is non-NULL, _restriction can be 1594 + * NULL or a pointer to a string describing the restriction. If _type is 1595 + * NULL, _restriction must also be NULL. 1596 + * 1597 + * Returns 0 if successful. 1598 + */ 1599 + long keyctl_restrict_keyring(key_serial_t id, const char __user *_type, 1600 + const char __user *_restriction) 1601 + { 1602 + key_ref_t key_ref; 1603 + bool link_reject = !_type; 1604 + char type[32]; 1605 + char *restriction = NULL; 1606 + long ret; 1607 + 1608 + key_ref = lookup_user_key(id, 0, KEY_NEED_SETATTR); 1609 + if (IS_ERR(key_ref)) 1610 + return PTR_ERR(key_ref); 1611 + 1612 + if (_type) { 1613 + ret = key_get_type_from_user(type, _type, sizeof(type)); 1614 + if (ret < 0) 1615 + goto error; 1616 + } 1617 + 1618 + if (_restriction) { 1619 + if (!_type) { 1620 + ret = -EINVAL; 1621 + goto error; 1622 + } 1623 + 1624 + restriction = strndup_user(_restriction, PAGE_SIZE); 1625 + if (IS_ERR(restriction)) { 1626 + ret = PTR_ERR(restriction); 1627 + goto error; 1628 + } 1629 + } 1630 + 1631 + ret = keyring_restrict(key_ref, link_reject ? 
NULL : type, restriction); 1632 + kfree(restriction); 1633 + 1634 + error: 1635 + key_ref_put(key_ref); 1636 + 1637 + return ret; 1638 + } 1639 + 1640 + /* 1588 1641 * The key control system call 1589 1642 */ 1590 1643 SYSCALL_DEFINE5(keyctl, int, option, unsigned long, arg2, unsigned long, arg3, ··· 1746 1693 case KEYCTL_DH_COMPUTE: 1747 1694 return keyctl_dh_compute((struct keyctl_dh_params __user *) arg2, 1748 1695 (char __user *) arg3, (size_t) arg4, 1749 - (void __user *) arg5); 1696 + (struct keyctl_kdf_params __user *) arg5); 1697 + 1698 + case KEYCTL_RESTRICT_KEYRING: 1699 + return keyctl_restrict_keyring((key_serial_t) arg2, 1700 + (const char __user *) arg3, 1701 + (const char __user *) arg4); 1750 1702 1751 1703 default: 1752 1704 return -EOPNOTSUPP;
+175 -12
security/keys/keyring.c
··· 394 394 write_unlock(&keyring_name_lock); 395 395 } 396 396 397 + if (keyring->restrict_link) { 398 + struct key_restriction *keyres = keyring->restrict_link; 399 + 400 + key_put(keyres->key); 401 + kfree(keyres); 402 + } 403 + 397 404 assoc_array_destroy(&keyring->keys, &keyring_assoc_array_ops); 398 405 } 399 406 ··· 499 492 struct key *keyring_alloc(const char *description, kuid_t uid, kgid_t gid, 500 493 const struct cred *cred, key_perm_t perm, 501 494 unsigned long flags, 502 - int (*restrict_link)(struct key *, 503 - const struct key_type *, 504 - const union key_payload *), 495 + struct key_restriction *restrict_link, 505 496 struct key *dest) 506 497 { 507 498 struct key *keyring; ··· 524 519 * @keyring: The keyring being added to. 525 520 * @type: The type of key being added. 526 521 * @payload: The payload of the key intended to be added. 522 + * @data: Additional data for evaluating restriction. 527 523 * 528 524 * Reject the addition of any links to a keyring. It can be overridden by 529 525 * passing KEY_ALLOC_BYPASS_RESTRICTION to key_instantiate_and_link() when 530 526 * adding a key to a keyring. 531 527 * 532 - * This is meant to be passed as the restrict_link parameter to 533 - * keyring_alloc(). 528 + * This is meant to be stored in a key_restriction structure which is passed 529 + * in the restrict_link parameter to keyring_alloc(). 
534 530 */ 535 531 int restrict_link_reject(struct key *keyring, 536 532 const struct key_type *type, 537 - const union key_payload *payload) 533 + const union key_payload *payload, 534 + struct key *restriction_key) 538 535 { 539 536 return -EPERM; 540 537 } ··· 947 940 } 948 941 EXPORT_SYMBOL(keyring_search); 949 942 943 + static struct key_restriction *keyring_restriction_alloc( 944 + key_restrict_link_func_t check) 945 + { 946 + struct key_restriction *keyres = 947 + kzalloc(sizeof(struct key_restriction), GFP_KERNEL); 948 + 949 + if (!keyres) 950 + return ERR_PTR(-ENOMEM); 951 + 952 + keyres->check = check; 953 + 954 + return keyres; 955 + } 956 + 957 + /* 958 + * Semaphore to serialise restriction setup to prevent reference count 959 + * cycles through restriction key pointers. 960 + */ 961 + static DECLARE_RWSEM(keyring_serialise_restrict_sem); 962 + 963 + /* 964 + * Check for restriction cycles that would prevent keyring garbage collection. 965 + * keyring_serialise_restrict_sem must be held. 
966 + */ 967 + static bool keyring_detect_restriction_cycle(const struct key *dest_keyring, 968 + struct key_restriction *keyres) 969 + { 970 + while (keyres && keyres->key && 971 + keyres->key->type == &key_type_keyring) { 972 + if (keyres->key == dest_keyring) 973 + return true; 974 + 975 + keyres = keyres->key->restrict_link; 976 + } 977 + 978 + return false; 979 + } 980 + 981 + /** 982 + * keyring_restrict - Look up and apply a restriction to a keyring 983 + * 984 + * @keyring: The keyring to be restricted 985 + * @restriction: The restriction options to apply to the keyring 986 + */ 987 + int keyring_restrict(key_ref_t keyring_ref, const char *type, 988 + const char *restriction) 989 + { 990 + struct key *keyring; 991 + struct key_type *restrict_type = NULL; 992 + struct key_restriction *restrict_link; 993 + int ret = 0; 994 + 995 + keyring = key_ref_to_ptr(keyring_ref); 996 + key_check(keyring); 997 + 998 + if (keyring->type != &key_type_keyring) 999 + return -ENOTDIR; 1000 + 1001 + if (!type) { 1002 + restrict_link = keyring_restriction_alloc(restrict_link_reject); 1003 + } else { 1004 + restrict_type = key_type_lookup(type); 1005 + 1006 + if (IS_ERR(restrict_type)) 1007 + return PTR_ERR(restrict_type); 1008 + 1009 + if (!restrict_type->lookup_restriction) { 1010 + ret = -ENOENT; 1011 + goto error; 1012 + } 1013 + 1014 + restrict_link = restrict_type->lookup_restriction(restriction); 1015 + } 1016 + 1017 + if (IS_ERR(restrict_link)) { 1018 + ret = PTR_ERR(restrict_link); 1019 + goto error; 1020 + } 1021 + 1022 + down_write(&keyring->sem); 1023 + down_write(&keyring_serialise_restrict_sem); 1024 + 1025 + if (keyring->restrict_link) 1026 + ret = -EEXIST; 1027 + else if (keyring_detect_restriction_cycle(keyring, restrict_link)) 1028 + ret = -EDEADLK; 1029 + else 1030 + keyring->restrict_link = restrict_link; 1031 + 1032 + up_write(&keyring_serialise_restrict_sem); 1033 + up_write(&keyring->sem); 1034 + 1035 + if (ret < 0) { 1036 + key_put(restrict_link->key); 
1037 + kfree(restrict_link); 1038 + } 1039 + 1040 + error: 1041 + if (restrict_type) 1042 + key_type_put(restrict_type); 1043 + 1044 + return ret; 1045 + } 1046 + EXPORT_SYMBOL(keyring_restrict); 1047 + 950 1048 /* 951 1049 * Search the given keyring for a key that might be updated. 952 1050 * ··· 1145 1033 /* we've got a match but we might end up racing with 1146 1034 * key_cleanup() if the keyring is currently 'dead' 1147 1035 * (ie. it has a zero usage count) */ 1148 - if (!atomic_inc_not_zero(&keyring->usage)) 1036 + if (!refcount_inc_not_zero(&keyring->usage)) 1149 1037 continue; 1150 1038 keyring->last_used_at = current_kernel_time().tv_sec; 1151 1039 goto out; ··· 1332 1220 */ 1333 1221 static int __key_link_check_restriction(struct key *keyring, struct key *key) 1334 1222 { 1335 - if (!keyring->restrict_link) 1223 + if (!keyring->restrict_link || !keyring->restrict_link->check) 1336 1224 return 0; 1337 - return keyring->restrict_link(keyring, key->type, &key->payload); 1225 + return keyring->restrict_link->check(keyring, key->type, &key->payload, 1226 + keyring->restrict_link->key); 1338 1227 } 1339 1228 1340 1229 /** ··· 1363 1250 struct assoc_array_edit *edit; 1364 1251 int ret; 1365 1252 1366 - kenter("{%d,%d}", keyring->serial, atomic_read(&keyring->usage)); 1253 + kenter("{%d,%d}", keyring->serial, refcount_read(&keyring->usage)); 1367 1254 1368 1255 key_check(keyring); 1369 1256 key_check(key); 1370 1257 1371 1258 ret = __key_link_begin(keyring, &key->index_key, &edit); 1372 1259 if (ret == 0) { 1373 - kdebug("begun {%d,%d}", keyring->serial, atomic_read(&keyring->usage)); 1260 + kdebug("begun {%d,%d}", keyring->serial, refcount_read(&keyring->usage)); 1374 1261 ret = __key_link_check_restriction(keyring, key); 1375 1262 if (ret == 0) 1376 1263 ret = __key_link_check_live_key(keyring, key); ··· 1379 1266 __key_link_end(keyring, &key->index_key, edit); 1380 1267 } 1381 1268 1382 - kleave(" = %d {%d,%d}", ret, keyring->serial, 
atomic_read(&keyring->usage)); 1269 + kleave(" = %d {%d,%d}", ret, keyring->serial, refcount_read(&keyring->usage)); 1383 1270 return ret; 1384 1271 } 1385 1272 EXPORT_SYMBOL(key_link); ··· 1538 1425 keyring_gc_select_iterator, &limit); 1539 1426 up_write(&keyring->sem); 1540 1427 kleave(" [gc]"); 1428 + } 1429 + 1430 + /* 1431 + * Garbage collect restriction pointers from a keyring. 1432 + * 1433 + * Keyring restrictions are associated with a key type, and must be cleaned 1434 + * up if the key type is unregistered. The restriction is altered to always 1435 + * reject additional keys so a keyring cannot be opened up by unregistering 1436 + * a key type. 1437 + * 1438 + * Not called with any keyring locks held. The keyring's key struct will not 1439 + * be deallocated under us as only our caller may deallocate it. 1440 + * 1441 + * The caller is required to hold key_types_sem and dead_type->sem. This is 1442 + * fulfilled by key_gc_keytype() holding the locks on behalf of 1443 + * key_garbage_collector(), which it invokes on a workqueue. 1444 + */ 1445 + void keyring_restriction_gc(struct key *keyring, struct key_type *dead_type) 1446 + { 1447 + struct key_restriction *keyres; 1448 + 1449 + kenter("%x{%s}", keyring->serial, keyring->description ?: ""); 1450 + 1451 + /* 1452 + * keyring->restrict_link is only assigned at key allocation time 1453 + * or with the key type locked, so the only values that could be 1454 + * concurrently assigned to keyring->restrict_link are for key 1455 + * types other than dead_type. Given this, it's ok to check 1456 + * the key type before acquiring keyring->sem. 
1457 + */ 1458 + if (!dead_type || !keyring->restrict_link || 1459 + keyring->restrict_link->keytype != dead_type) { 1460 + kleave(" [no restriction gc]"); 1461 + return; 1462 + } 1463 + 1464 + /* Lock the keyring to ensure that a link is not in progress */ 1465 + down_write(&keyring->sem); 1466 + 1467 + keyres = keyring->restrict_link; 1468 + 1469 + keyres->check = restrict_link_reject; 1470 + 1471 + key_put(keyres->key); 1472 + keyres->key = NULL; 1473 + keyres->keytype = NULL; 1474 + 1475 + up_write(&keyring->sem); 1476 + 1477 + kleave(" [restriction gc]"); 1541 1478 }
+2 -2
security/keys/proc.c
··· 252 252 showflag(key, 'U', KEY_FLAG_USER_CONSTRUCT), 253 253 showflag(key, 'N', KEY_FLAG_NEGATIVE), 254 254 showflag(key, 'i', KEY_FLAG_INVALIDATED), 255 - atomic_read(&key->usage), 255 + refcount_read(&key->usage), 256 256 xbuf, 257 257 key->perm, 258 258 from_kuid_munged(seq_user_ns(m), key->uid), ··· 340 340 341 341 seq_printf(m, "%5u: %5d %d/%d %d/%d %d/%d\n", 342 342 from_kuid_munged(seq_user_ns(m), user->uid), 343 - atomic_read(&user->usage), 343 + refcount_read(&user->usage), 344 344 atomic_read(&user->nkeys), 345 345 atomic_read(&user->nikeys), 346 346 user->qnkeys,
+1 -1
security/keys/process_keys.c
··· 30 30 31 31 /* The root user's tracking struct */ 32 32 struct key_user root_key_user = { 33 - .usage = ATOMIC_INIT(3), 33 + .usage = REFCOUNT_INIT(3), 34 34 .cons_lock = __MUTEX_INITIALIZER(root_key_user.cons_lock), 35 35 .lock = __SPIN_LOCK_UNLOCKED(root_key_user.lock), 36 36 .nkeys = ATOMIC_INIT(2),
+1 -1
security/keys/request_key_auth.c
··· 213 213 if (ret < 0) 214 214 goto error_inst; 215 215 216 - kleave(" = {%d,%d}", authkey->serial, atomic_read(&authkey->usage)); 216 + kleave(" = {%d,%d}", authkey->serial, refcount_read(&authkey->usage)); 217 217 return authkey; 218 218 219 219 auth_key_revoked:
+1 -1
security/loadpin/loadpin.c
··· 174 174 return 0; 175 175 } 176 176 177 - static struct security_hook_list loadpin_hooks[] = { 177 + static struct security_hook_list loadpin_hooks[] __lsm_ro_after_init = { 178 178 LSM_HOOK_INIT(sb_free_security, loadpin_sb_free_security), 179 179 LSM_HOOK_INIT(kernel_read_file, loadpin_read_file), 180 180 };
+18 -352
security/security.c
··· 32 32 /* Maximum number of letters for an LSM name string */ 33 33 #define SECURITY_NAME_MAX 10 34 34 35 + struct security_hook_heads security_hook_heads __lsm_ro_after_init; 35 36 char *lsm_names; 36 37 /* Boot-time LSM user choice */ 37 38 static __initdata char chosen_lsm[SECURITY_NAME_MAX + 1] = ··· 55 54 */ 56 55 int __init security_init(void) 57 56 { 57 + int i; 58 + struct list_head *list = (struct list_head *) &security_hook_heads; 59 + 60 + for (i = 0; i < sizeof(security_hook_heads) / sizeof(struct list_head); 61 + i++) 62 + INIT_LIST_HEAD(&list[i]); 58 63 pr_info("Security Framework initialized\n"); 59 64 60 65 /* ··· 941 934 return call_int_hook(task_create, 0, clone_flags); 942 935 } 943 936 937 + int security_task_alloc(struct task_struct *task, unsigned long clone_flags) 938 + { 939 + return call_int_hook(task_alloc, 0, task, clone_flags); 940 + } 941 + 944 942 void security_task_free(struct task_struct *task) 945 943 { 946 944 call_void_hook(task_free, task); ··· 1050 1038 int security_task_getioprio(struct task_struct *p) 1051 1039 { 1052 1040 return call_int_hook(task_getioprio, 0, p); 1041 + } 1042 + 1043 + int security_task_prlimit(const struct cred *cred, const struct cred *tcred, 1044 + unsigned int flags) 1045 + { 1046 + return call_int_hook(task_prlimit, 0, cred, tcred, flags); 1053 1047 } 1054 1048 1055 1049 int security_task_setrlimit(struct task_struct *p, unsigned int resource, ··· 1643 1625 actx); 1644 1626 } 1645 1627 #endif /* CONFIG_AUDIT */ 1646 - 1647 - struct security_hook_heads security_hook_heads = { 1648 - .binder_set_context_mgr = 1649 - LIST_HEAD_INIT(security_hook_heads.binder_set_context_mgr), 1650 - .binder_transaction = 1651 - LIST_HEAD_INIT(security_hook_heads.binder_transaction), 1652 - .binder_transfer_binder = 1653 - LIST_HEAD_INIT(security_hook_heads.binder_transfer_binder), 1654 - .binder_transfer_file = 1655 - LIST_HEAD_INIT(security_hook_heads.binder_transfer_file), 1656 - 1657 - .ptrace_access_check = 1658 - 
LIST_HEAD_INIT(security_hook_heads.ptrace_access_check), 1659 - .ptrace_traceme = 1660 - LIST_HEAD_INIT(security_hook_heads.ptrace_traceme), 1661 - .capget = LIST_HEAD_INIT(security_hook_heads.capget), 1662 - .capset = LIST_HEAD_INIT(security_hook_heads.capset), 1663 - .capable = LIST_HEAD_INIT(security_hook_heads.capable), 1664 - .quotactl = LIST_HEAD_INIT(security_hook_heads.quotactl), 1665 - .quota_on = LIST_HEAD_INIT(security_hook_heads.quota_on), 1666 - .syslog = LIST_HEAD_INIT(security_hook_heads.syslog), 1667 - .settime = LIST_HEAD_INIT(security_hook_heads.settime), 1668 - .vm_enough_memory = 1669 - LIST_HEAD_INIT(security_hook_heads.vm_enough_memory), 1670 - .bprm_set_creds = 1671 - LIST_HEAD_INIT(security_hook_heads.bprm_set_creds), 1672 - .bprm_check_security = 1673 - LIST_HEAD_INIT(security_hook_heads.bprm_check_security), 1674 - .bprm_secureexec = 1675 - LIST_HEAD_INIT(security_hook_heads.bprm_secureexec), 1676 - .bprm_committing_creds = 1677 - LIST_HEAD_INIT(security_hook_heads.bprm_committing_creds), 1678 - .bprm_committed_creds = 1679 - LIST_HEAD_INIT(security_hook_heads.bprm_committed_creds), 1680 - .sb_alloc_security = 1681 - LIST_HEAD_INIT(security_hook_heads.sb_alloc_security), 1682 - .sb_free_security = 1683 - LIST_HEAD_INIT(security_hook_heads.sb_free_security), 1684 - .sb_copy_data = LIST_HEAD_INIT(security_hook_heads.sb_copy_data), 1685 - .sb_remount = LIST_HEAD_INIT(security_hook_heads.sb_remount), 1686 - .sb_kern_mount = 1687 - LIST_HEAD_INIT(security_hook_heads.sb_kern_mount), 1688 - .sb_show_options = 1689 - LIST_HEAD_INIT(security_hook_heads.sb_show_options), 1690 - .sb_statfs = LIST_HEAD_INIT(security_hook_heads.sb_statfs), 1691 - .sb_mount = LIST_HEAD_INIT(security_hook_heads.sb_mount), 1692 - .sb_umount = LIST_HEAD_INIT(security_hook_heads.sb_umount), 1693 - .sb_pivotroot = LIST_HEAD_INIT(security_hook_heads.sb_pivotroot), 1694 - .sb_set_mnt_opts = 1695 - LIST_HEAD_INIT(security_hook_heads.sb_set_mnt_opts), 1696 - .sb_clone_mnt_opts = 
1697 - LIST_HEAD_INIT(security_hook_heads.sb_clone_mnt_opts), 1698 - .sb_parse_opts_str = 1699 - LIST_HEAD_INIT(security_hook_heads.sb_parse_opts_str), 1700 - .dentry_init_security = 1701 - LIST_HEAD_INIT(security_hook_heads.dentry_init_security), 1702 - .dentry_create_files_as = 1703 - LIST_HEAD_INIT(security_hook_heads.dentry_create_files_as), 1704 - #ifdef CONFIG_SECURITY_PATH 1705 - .path_unlink = LIST_HEAD_INIT(security_hook_heads.path_unlink), 1706 - .path_mkdir = LIST_HEAD_INIT(security_hook_heads.path_mkdir), 1707 - .path_rmdir = LIST_HEAD_INIT(security_hook_heads.path_rmdir), 1708 - .path_mknod = LIST_HEAD_INIT(security_hook_heads.path_mknod), 1709 - .path_truncate = 1710 - LIST_HEAD_INIT(security_hook_heads.path_truncate), 1711 - .path_symlink = LIST_HEAD_INIT(security_hook_heads.path_symlink), 1712 - .path_link = LIST_HEAD_INIT(security_hook_heads.path_link), 1713 - .path_rename = LIST_HEAD_INIT(security_hook_heads.path_rename), 1714 - .path_chmod = LIST_HEAD_INIT(security_hook_heads.path_chmod), 1715 - .path_chown = LIST_HEAD_INIT(security_hook_heads.path_chown), 1716 - .path_chroot = LIST_HEAD_INIT(security_hook_heads.path_chroot), 1717 - #endif 1718 - .inode_alloc_security = 1719 - LIST_HEAD_INIT(security_hook_heads.inode_alloc_security), 1720 - .inode_free_security = 1721 - LIST_HEAD_INIT(security_hook_heads.inode_free_security), 1722 - .inode_init_security = 1723 - LIST_HEAD_INIT(security_hook_heads.inode_init_security), 1724 - .inode_create = LIST_HEAD_INIT(security_hook_heads.inode_create), 1725 - .inode_link = LIST_HEAD_INIT(security_hook_heads.inode_link), 1726 - .inode_unlink = LIST_HEAD_INIT(security_hook_heads.inode_unlink), 1727 - .inode_symlink = 1728 - LIST_HEAD_INIT(security_hook_heads.inode_symlink), 1729 - .inode_mkdir = LIST_HEAD_INIT(security_hook_heads.inode_mkdir), 1730 - .inode_rmdir = LIST_HEAD_INIT(security_hook_heads.inode_rmdir), 1731 - .inode_mknod = LIST_HEAD_INIT(security_hook_heads.inode_mknod), 1732 - .inode_rename = 
LIST_HEAD_INIT(security_hook_heads.inode_rename), 1733 - .inode_readlink = 1734 - LIST_HEAD_INIT(security_hook_heads.inode_readlink), 1735 - .inode_follow_link = 1736 - LIST_HEAD_INIT(security_hook_heads.inode_follow_link), 1737 - .inode_permission = 1738 - LIST_HEAD_INIT(security_hook_heads.inode_permission), 1739 - .inode_setattr = 1740 - LIST_HEAD_INIT(security_hook_heads.inode_setattr), 1741 - .inode_getattr = 1742 - LIST_HEAD_INIT(security_hook_heads.inode_getattr), 1743 - .inode_setxattr = 1744 - LIST_HEAD_INIT(security_hook_heads.inode_setxattr), 1745 - .inode_post_setxattr = 1746 - LIST_HEAD_INIT(security_hook_heads.inode_post_setxattr), 1747 - .inode_getxattr = 1748 - LIST_HEAD_INIT(security_hook_heads.inode_getxattr), 1749 - .inode_listxattr = 1750 - LIST_HEAD_INIT(security_hook_heads.inode_listxattr), 1751 - .inode_removexattr = 1752 - LIST_HEAD_INIT(security_hook_heads.inode_removexattr), 1753 - .inode_need_killpriv = 1754 - LIST_HEAD_INIT(security_hook_heads.inode_need_killpriv), 1755 - .inode_killpriv = 1756 - LIST_HEAD_INIT(security_hook_heads.inode_killpriv), 1757 - .inode_getsecurity = 1758 - LIST_HEAD_INIT(security_hook_heads.inode_getsecurity), 1759 - .inode_setsecurity = 1760 - LIST_HEAD_INIT(security_hook_heads.inode_setsecurity), 1761 - .inode_listsecurity = 1762 - LIST_HEAD_INIT(security_hook_heads.inode_listsecurity), 1763 - .inode_getsecid = 1764 - LIST_HEAD_INIT(security_hook_heads.inode_getsecid), 1765 - .inode_copy_up = 1766 - LIST_HEAD_INIT(security_hook_heads.inode_copy_up), 1767 - .inode_copy_up_xattr = 1768 - LIST_HEAD_INIT(security_hook_heads.inode_copy_up_xattr), 1769 - .file_permission = 1770 - LIST_HEAD_INIT(security_hook_heads.file_permission), 1771 - .file_alloc_security = 1772 - LIST_HEAD_INIT(security_hook_heads.file_alloc_security), 1773 - .file_free_security = 1774 - LIST_HEAD_INIT(security_hook_heads.file_free_security), 1775 - .file_ioctl = LIST_HEAD_INIT(security_hook_heads.file_ioctl), 1776 - .mmap_addr = 
LIST_HEAD_INIT(security_hook_heads.mmap_addr), 1777 - .mmap_file = LIST_HEAD_INIT(security_hook_heads.mmap_file), 1778 - .file_mprotect = 1779 - LIST_HEAD_INIT(security_hook_heads.file_mprotect), 1780 - .file_lock = LIST_HEAD_INIT(security_hook_heads.file_lock), 1781 - .file_fcntl = LIST_HEAD_INIT(security_hook_heads.file_fcntl), 1782 - .file_set_fowner = 1783 - LIST_HEAD_INIT(security_hook_heads.file_set_fowner), 1784 - .file_send_sigiotask = 1785 - LIST_HEAD_INIT(security_hook_heads.file_send_sigiotask), 1786 - .file_receive = LIST_HEAD_INIT(security_hook_heads.file_receive), 1787 - .file_open = LIST_HEAD_INIT(security_hook_heads.file_open), 1788 - .task_create = LIST_HEAD_INIT(security_hook_heads.task_create), 1789 - .task_free = LIST_HEAD_INIT(security_hook_heads.task_free), 1790 - .cred_alloc_blank = 1791 - LIST_HEAD_INIT(security_hook_heads.cred_alloc_blank), 1792 - .cred_free = LIST_HEAD_INIT(security_hook_heads.cred_free), 1793 - .cred_prepare = LIST_HEAD_INIT(security_hook_heads.cred_prepare), 1794 - .cred_transfer = 1795 - LIST_HEAD_INIT(security_hook_heads.cred_transfer), 1796 - .kernel_act_as = 1797 - LIST_HEAD_INIT(security_hook_heads.kernel_act_as), 1798 - .kernel_create_files_as = 1799 - LIST_HEAD_INIT(security_hook_heads.kernel_create_files_as), 1800 - .kernel_module_request = 1801 - LIST_HEAD_INIT(security_hook_heads.kernel_module_request), 1802 - .kernel_read_file = 1803 - LIST_HEAD_INIT(security_hook_heads.kernel_read_file), 1804 - .kernel_post_read_file = 1805 - LIST_HEAD_INIT(security_hook_heads.kernel_post_read_file), 1806 - .task_fix_setuid = 1807 - LIST_HEAD_INIT(security_hook_heads.task_fix_setuid), 1808 - .task_setpgid = LIST_HEAD_INIT(security_hook_heads.task_setpgid), 1809 - .task_getpgid = LIST_HEAD_INIT(security_hook_heads.task_getpgid), 1810 - .task_getsid = LIST_HEAD_INIT(security_hook_heads.task_getsid), 1811 - .task_getsecid = 1812 - LIST_HEAD_INIT(security_hook_heads.task_getsecid), 1813 - .task_setnice = 
LIST_HEAD_INIT(security_hook_heads.task_setnice), 1814 - .task_setioprio = 1815 - LIST_HEAD_INIT(security_hook_heads.task_setioprio), 1816 - .task_getioprio = 1817 - LIST_HEAD_INIT(security_hook_heads.task_getioprio), 1818 - .task_setrlimit = 1819 - LIST_HEAD_INIT(security_hook_heads.task_setrlimit), 1820 - .task_setscheduler = 1821 - LIST_HEAD_INIT(security_hook_heads.task_setscheduler), 1822 - .task_getscheduler = 1823 - LIST_HEAD_INIT(security_hook_heads.task_getscheduler), 1824 - .task_movememory = 1825 - LIST_HEAD_INIT(security_hook_heads.task_movememory), 1826 - .task_kill = LIST_HEAD_INIT(security_hook_heads.task_kill), 1827 - .task_prctl = LIST_HEAD_INIT(security_hook_heads.task_prctl), 1828 - .task_to_inode = 1829 - LIST_HEAD_INIT(security_hook_heads.task_to_inode), 1830 - .ipc_permission = 1831 - LIST_HEAD_INIT(security_hook_heads.ipc_permission), 1832 - .ipc_getsecid = LIST_HEAD_INIT(security_hook_heads.ipc_getsecid), 1833 - .msg_msg_alloc_security = 1834 - LIST_HEAD_INIT(security_hook_heads.msg_msg_alloc_security), 1835 - .msg_msg_free_security = 1836 - LIST_HEAD_INIT(security_hook_heads.msg_msg_free_security), 1837 - .msg_queue_alloc_security = 1838 - LIST_HEAD_INIT(security_hook_heads.msg_queue_alloc_security), 1839 - .msg_queue_free_security = 1840 - LIST_HEAD_INIT(security_hook_heads.msg_queue_free_security), 1841 - .msg_queue_associate = 1842 - LIST_HEAD_INIT(security_hook_heads.msg_queue_associate), 1843 - .msg_queue_msgctl = 1844 - LIST_HEAD_INIT(security_hook_heads.msg_queue_msgctl), 1845 - .msg_queue_msgsnd = 1846 - LIST_HEAD_INIT(security_hook_heads.msg_queue_msgsnd), 1847 - .msg_queue_msgrcv = 1848 - LIST_HEAD_INIT(security_hook_heads.msg_queue_msgrcv), 1849 - .shm_alloc_security = 1850 - LIST_HEAD_INIT(security_hook_heads.shm_alloc_security), 1851 - .shm_free_security = 1852 - LIST_HEAD_INIT(security_hook_heads.shm_free_security), 1853 - .shm_associate = 1854 - LIST_HEAD_INIT(security_hook_heads.shm_associate), 1855 - .shm_shmctl = 
LIST_HEAD_INIT(security_hook_heads.shm_shmctl), 1856 - .shm_shmat = LIST_HEAD_INIT(security_hook_heads.shm_shmat), 1857 - .sem_alloc_security = 1858 - LIST_HEAD_INIT(security_hook_heads.sem_alloc_security), 1859 - .sem_free_security = 1860 - LIST_HEAD_INIT(security_hook_heads.sem_free_security), 1861 - .sem_associate = 1862 - LIST_HEAD_INIT(security_hook_heads.sem_associate), 1863 - .sem_semctl = LIST_HEAD_INIT(security_hook_heads.sem_semctl), 1864 - .sem_semop = LIST_HEAD_INIT(security_hook_heads.sem_semop), 1865 - .netlink_send = LIST_HEAD_INIT(security_hook_heads.netlink_send), 1866 - .d_instantiate = 1867 - LIST_HEAD_INIT(security_hook_heads.d_instantiate), 1868 - .getprocattr = LIST_HEAD_INIT(security_hook_heads.getprocattr), 1869 - .setprocattr = LIST_HEAD_INIT(security_hook_heads.setprocattr), 1870 - .ismaclabel = LIST_HEAD_INIT(security_hook_heads.ismaclabel), 1871 - .secid_to_secctx = 1872 - LIST_HEAD_INIT(security_hook_heads.secid_to_secctx), 1873 - .secctx_to_secid = 1874 - LIST_HEAD_INIT(security_hook_heads.secctx_to_secid), 1875 - .release_secctx = 1876 - LIST_HEAD_INIT(security_hook_heads.release_secctx), 1877 - .inode_invalidate_secctx = 1878 - LIST_HEAD_INIT(security_hook_heads.inode_invalidate_secctx), 1879 - .inode_notifysecctx = 1880 - LIST_HEAD_INIT(security_hook_heads.inode_notifysecctx), 1881 - .inode_setsecctx = 1882 - LIST_HEAD_INIT(security_hook_heads.inode_setsecctx), 1883 - .inode_getsecctx = 1884 - LIST_HEAD_INIT(security_hook_heads.inode_getsecctx), 1885 - #ifdef CONFIG_SECURITY_NETWORK 1886 - .unix_stream_connect = 1887 - LIST_HEAD_INIT(security_hook_heads.unix_stream_connect), 1888 - .unix_may_send = 1889 - LIST_HEAD_INIT(security_hook_heads.unix_may_send), 1890 - .socket_create = 1891 - LIST_HEAD_INIT(security_hook_heads.socket_create), 1892 - .socket_post_create = 1893 - LIST_HEAD_INIT(security_hook_heads.socket_post_create), 1894 - .socket_bind = LIST_HEAD_INIT(security_hook_heads.socket_bind), 1895 - .socket_connect = 1896 - 
LIST_HEAD_INIT(security_hook_heads.socket_connect), 1897 - .socket_listen = 1898 - LIST_HEAD_INIT(security_hook_heads.socket_listen), 1899 - .socket_accept = 1900 - LIST_HEAD_INIT(security_hook_heads.socket_accept), 1901 - .socket_sendmsg = 1902 - LIST_HEAD_INIT(security_hook_heads.socket_sendmsg), 1903 - .socket_recvmsg = 1904 - LIST_HEAD_INIT(security_hook_heads.socket_recvmsg), 1905 - .socket_getsockname = 1906 - LIST_HEAD_INIT(security_hook_heads.socket_getsockname), 1907 - .socket_getpeername = 1908 - LIST_HEAD_INIT(security_hook_heads.socket_getpeername), 1909 - .socket_getsockopt = 1910 - LIST_HEAD_INIT(security_hook_heads.socket_getsockopt), 1911 - .socket_setsockopt = 1912 - LIST_HEAD_INIT(security_hook_heads.socket_setsockopt), 1913 - .socket_shutdown = 1914 - LIST_HEAD_INIT(security_hook_heads.socket_shutdown), 1915 - .socket_sock_rcv_skb = 1916 - LIST_HEAD_INIT(security_hook_heads.socket_sock_rcv_skb), 1917 - .socket_getpeersec_stream = 1918 - LIST_HEAD_INIT(security_hook_heads.socket_getpeersec_stream), 1919 - .socket_getpeersec_dgram = 1920 - LIST_HEAD_INIT(security_hook_heads.socket_getpeersec_dgram), 1921 - .sk_alloc_security = 1922 - LIST_HEAD_INIT(security_hook_heads.sk_alloc_security), 1923 - .sk_free_security = 1924 - LIST_HEAD_INIT(security_hook_heads.sk_free_security), 1925 - .sk_clone_security = 1926 - LIST_HEAD_INIT(security_hook_heads.sk_clone_security), 1927 - .sk_getsecid = LIST_HEAD_INIT(security_hook_heads.sk_getsecid), 1928 - .sock_graft = LIST_HEAD_INIT(security_hook_heads.sock_graft), 1929 - .inet_conn_request = 1930 - LIST_HEAD_INIT(security_hook_heads.inet_conn_request), 1931 - .inet_csk_clone = 1932 - LIST_HEAD_INIT(security_hook_heads.inet_csk_clone), 1933 - .inet_conn_established = 1934 - LIST_HEAD_INIT(security_hook_heads.inet_conn_established), 1935 - .secmark_relabel_packet = 1936 - LIST_HEAD_INIT(security_hook_heads.secmark_relabel_packet), 1937 - .secmark_refcount_inc = 1938 - 
LIST_HEAD_INIT(security_hook_heads.secmark_refcount_inc), 1939 - .secmark_refcount_dec = 1940 - LIST_HEAD_INIT(security_hook_heads.secmark_refcount_dec), 1941 - .req_classify_flow = 1942 - LIST_HEAD_INIT(security_hook_heads.req_classify_flow), 1943 - .tun_dev_alloc_security = 1944 - LIST_HEAD_INIT(security_hook_heads.tun_dev_alloc_security), 1945 - .tun_dev_free_security = 1946 - LIST_HEAD_INIT(security_hook_heads.tun_dev_free_security), 1947 - .tun_dev_create = 1948 - LIST_HEAD_INIT(security_hook_heads.tun_dev_create), 1949 - .tun_dev_attach_queue = 1950 - LIST_HEAD_INIT(security_hook_heads.tun_dev_attach_queue), 1951 - .tun_dev_attach = 1952 - LIST_HEAD_INIT(security_hook_heads.tun_dev_attach), 1953 - .tun_dev_open = LIST_HEAD_INIT(security_hook_heads.tun_dev_open), 1954 - #endif /* CONFIG_SECURITY_NETWORK */ 1955 - #ifdef CONFIG_SECURITY_NETWORK_XFRM 1956 - .xfrm_policy_alloc_security = 1957 - LIST_HEAD_INIT(security_hook_heads.xfrm_policy_alloc_security), 1958 - .xfrm_policy_clone_security = 1959 - LIST_HEAD_INIT(security_hook_heads.xfrm_policy_clone_security), 1960 - .xfrm_policy_free_security = 1961 - LIST_HEAD_INIT(security_hook_heads.xfrm_policy_free_security), 1962 - .xfrm_policy_delete_security = 1963 - LIST_HEAD_INIT(security_hook_heads.xfrm_policy_delete_security), 1964 - .xfrm_state_alloc = 1965 - LIST_HEAD_INIT(security_hook_heads.xfrm_state_alloc), 1966 - .xfrm_state_alloc_acquire = 1967 - LIST_HEAD_INIT(security_hook_heads.xfrm_state_alloc_acquire), 1968 - .xfrm_state_free_security = 1969 - LIST_HEAD_INIT(security_hook_heads.xfrm_state_free_security), 1970 - .xfrm_state_delete_security = 1971 - LIST_HEAD_INIT(security_hook_heads.xfrm_state_delete_security), 1972 - .xfrm_policy_lookup = 1973 - LIST_HEAD_INIT(security_hook_heads.xfrm_policy_lookup), 1974 - .xfrm_state_pol_flow_match = 1975 - LIST_HEAD_INIT(security_hook_heads.xfrm_state_pol_flow_match), 1976 - .xfrm_decode_session = 1977 - LIST_HEAD_INIT(security_hook_heads.xfrm_decode_session), 1978 
- #endif /* CONFIG_SECURITY_NETWORK_XFRM */ 1979 - #ifdef CONFIG_KEYS 1980 - .key_alloc = LIST_HEAD_INIT(security_hook_heads.key_alloc), 1981 - .key_free = LIST_HEAD_INIT(security_hook_heads.key_free), 1982 - .key_permission = 1983 - LIST_HEAD_INIT(security_hook_heads.key_permission), 1984 - .key_getsecurity = 1985 - LIST_HEAD_INIT(security_hook_heads.key_getsecurity), 1986 - #endif /* CONFIG_KEYS */ 1987 - #ifdef CONFIG_AUDIT 1988 - .audit_rule_init = 1989 - LIST_HEAD_INIT(security_hook_heads.audit_rule_init), 1990 - .audit_rule_known = 1991 - LIST_HEAD_INIT(security_hook_heads.audit_rule_known), 1992 - .audit_rule_match = 1993 - LIST_HEAD_INIT(security_hook_heads.audit_rule_match), 1994 - .audit_rule_free = 1995 - LIST_HEAD_INIT(security_hook_heads.audit_rule_free), 1996 - #endif /* CONFIG_AUDIT */ 1997 - };
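The security.c hunks above delete the huge hand-written `LIST_HEAD_INIT` table and instead initialize `security_hook_heads` at boot with a loop that treats the struct as a flat array of `struct list_head` (see the new code in `security_init()`). A minimal userspace sketch of that idiom follows; `struct hook_heads` and `init_hook_heads()` are illustrative stand-ins, not kernel API, and the trick relies on the struct containing nothing but same-typed members (so there is no padding between them):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for the kernel's struct list_head. */
struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *list)
{
	list->next = list;
	list->prev = list;
}

/* Like security_hook_heads, this struct is nothing but list_heads. */
struct hook_heads {
	struct list_head task_alloc;
	struct list_head task_free;
	struct list_head task_prlimit;
};

/* Initialize every member by treating the struct as an array of
 * list_heads -- the same loop the patched security_init() uses. */
static void init_hook_heads(struct hook_heads *heads)
{
	struct list_head *list = (struct list_head *)heads;
	size_t i;

	for (i = 0; i < sizeof(*heads) / sizeof(struct list_head); i++)
		INIT_LIST_HEAD(&list[i]);
}
```

Runtime initialization is what makes the `__lsm_ro_after_init` annotation possible: the heads are written once during early boot and can then be mapped read-only.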
+6
security/selinux/Kconfig
··· 40 40 config SECURITY_SELINUX_DISABLE 41 41 bool "NSA SELinux runtime disable" 42 42 depends on SECURITY_SELINUX 43 + select SECURITY_WRITABLE_HOOKS 43 44 default n 44 45 help 45 46 This option enables writing to a selinuxfs node 'disable', which ··· 50 49 support runtime disabling of SELinux, e.g. from /sbin/init, for 51 50 portability across platforms where boot parameters are difficult 52 51 to employ. 52 + 53 + NOTE: selecting this option will disable the '__ro_after_init' 54 + kernel hardening feature for security hooks. Please consider 55 + using the selinux=0 boot parameter instead of enabling this 56 + option. 53 57 54 58 If you are unsure how to answer this question, answer N. 55 59
+25 -1
security/selinux/hooks.c
··· 3920 3920 PROCESS__GETSCHED, NULL); 3921 3921 } 3922 3922 3923 + int selinux_task_prlimit(const struct cred *cred, const struct cred *tcred, 3924 + unsigned int flags) 3925 + { 3926 + u32 av = 0; 3927 + 3928 + if (!flags) 3929 + return 0; 3930 + if (flags & LSM_PRLIMIT_WRITE) 3931 + av |= PROCESS__SETRLIMIT; 3932 + if (flags & LSM_PRLIMIT_READ) 3933 + av |= PROCESS__GETRLIMIT; 3934 + return avc_has_perm(cred_sid(cred), cred_sid(tcred), 3935 + SECCLASS_PROCESS, av, NULL); 3936 + } 3937 + 3923 3938 static int selinux_task_setrlimit(struct task_struct *p, unsigned int resource, 3924 3939 struct rlimit *new_rlim) 3925 3940 { ··· 4367 4352 u32 sid, node_perm; 4368 4353 4369 4354 if (family == PF_INET) { 4355 + if (addrlen < sizeof(struct sockaddr_in)) { 4356 + err = -EINVAL; 4357 + goto out; 4358 + } 4370 4359 addr4 = (struct sockaddr_in *)address; 4371 4360 snum = ntohs(addr4->sin_port); 4372 4361 addrp = (char *)&addr4->sin_addr.s_addr; 4373 4362 } else { 4363 + if (addrlen < SIN6_LEN_RFC2133) { 4364 + err = -EINVAL; 4365 + goto out; 4366 + } 4374 4367 addr6 = (struct sockaddr_in6 *)address; 4375 4368 snum = ntohs(addr6->sin6_port); 4376 4369 addrp = (char *)&addr6->sin6_addr.s6_addr; ··· 6131 6108 6132 6109 #endif 6133 6110 6134 - static struct security_hook_list selinux_hooks[] = { 6111 + static struct security_hook_list selinux_hooks[] __lsm_ro_after_init = { 6135 6112 LSM_HOOK_INIT(binder_set_context_mgr, selinux_binder_set_context_mgr), 6136 6113 LSM_HOOK_INIT(binder_transaction, selinux_binder_transaction), 6137 6114 LSM_HOOK_INIT(binder_transfer_binder, selinux_binder_transfer_binder), ··· 6229 6206 LSM_HOOK_INIT(task_setnice, selinux_task_setnice), 6230 6207 LSM_HOOK_INIT(task_setioprio, selinux_task_setioprio), 6231 6208 LSM_HOOK_INIT(task_getioprio, selinux_task_getioprio), 6209 + LSM_HOOK_INIT(task_prlimit, selinux_task_prlimit), 6232 6210 LSM_HOOK_INIT(task_setrlimit, selinux_task_setrlimit), 6233 6211 LSM_HOOK_INIT(task_setscheduler, 
selinux_task_setscheduler), 6234 6212 LSM_HOOK_INIT(task_getscheduler, selinux_task_getscheduler),
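Besides the new `task_prlimit` hook, the hooks.c diff adds `addrlen` checks to `selinux_socket_bind()` so the hook never reads past a too-short user-supplied address. A userspace sketch of the same pattern, under the assumption that the caller guarantees at least the `sa_family` field is present (`port_from_sockaddr` is a hypothetical helper, not kernel code):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Extract the port from a user-supplied sockaddr, refusing buffers
 * shorter than the structure about to be read -- mirroring the
 * addrlen checks added to selinux_socket_bind(). */
static int port_from_sockaddr(const struct sockaddr *addr,
			      socklen_t addrlen, uint16_t *port)
{
	if (addr->sa_family == AF_INET) {
		const struct sockaddr_in *a4;

		if (addrlen < sizeof(*a4))
			return -1;	/* too short: reject, don't read */
		a4 = (const struct sockaddr_in *)addr;
		*port = ntohs(a4->sin_port);
		return 0;
	}
	return -1;		/* other families omitted in this sketch */
}
```

Validating the length before the cast is the whole fix: the old code trusted `address` to be at least `sizeof(struct sockaddr_in)` (or `SIN6_LEN_RFC2133` for IPv6) and could read uninitialized or out-of-bounds bytes.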
+1 -1
security/selinux/include/classmap.h
··· 47 47 "getattr", "setexec", "setfscreate", "noatsecure", "siginh", 48 48 "setrlimit", "rlimitinh", "dyntransition", "setcurrent", 49 49 "execmem", "execstack", "execheap", "setkeycreate", 50 - "setsockcreate", NULL } }, 50 + "setsockcreate", "getrlimit", NULL } }, 51 51 { "system", 52 52 { "ipc_info", "syslog_read", "syslog_mod", 53 53 "syslog_console", "module_request", "module_load", NULL } },
+5 -5
security/selinux/nlmsgtab.c
··· 28 28 u32 perm; 29 29 }; 30 30 31 - static struct nlmsg_perm nlmsg_route_perms[] = 31 + static const struct nlmsg_perm nlmsg_route_perms[] = 32 32 { 33 33 { RTM_NEWLINK, NETLINK_ROUTE_SOCKET__NLMSG_WRITE }, 34 34 { RTM_DELLINK, NETLINK_ROUTE_SOCKET__NLMSG_WRITE }, ··· 81 81 { RTM_GETSTATS, NETLINK_ROUTE_SOCKET__NLMSG_READ }, 82 82 }; 83 83 84 - static struct nlmsg_perm nlmsg_tcpdiag_perms[] = 84 + static const struct nlmsg_perm nlmsg_tcpdiag_perms[] = 85 85 { 86 86 { TCPDIAG_GETSOCK, NETLINK_TCPDIAG_SOCKET__NLMSG_READ }, 87 87 { DCCPDIAG_GETSOCK, NETLINK_TCPDIAG_SOCKET__NLMSG_READ }, ··· 89 89 { SOCK_DESTROY, NETLINK_TCPDIAG_SOCKET__NLMSG_WRITE }, 90 90 }; 91 91 92 - static struct nlmsg_perm nlmsg_xfrm_perms[] = 92 + static const struct nlmsg_perm nlmsg_xfrm_perms[] = 93 93 { 94 94 { XFRM_MSG_NEWSA, NETLINK_XFRM_SOCKET__NLMSG_WRITE }, 95 95 { XFRM_MSG_DELSA, NETLINK_XFRM_SOCKET__NLMSG_WRITE }, ··· 116 116 { XFRM_MSG_MAPPING, NETLINK_XFRM_SOCKET__NLMSG_READ }, 117 117 }; 118 118 119 - static struct nlmsg_perm nlmsg_audit_perms[] = 119 + static const struct nlmsg_perm nlmsg_audit_perms[] = 120 120 { 121 121 { AUDIT_GET, NETLINK_AUDIT_SOCKET__NLMSG_READ }, 122 122 { AUDIT_SET, NETLINK_AUDIT_SOCKET__NLMSG_WRITE }, ··· 137 137 }; 138 138 139 139 140 - static int nlmsg_perm(u16 nlmsg_type, u32 *perm, struct nlmsg_perm *tab, size_t tabsize) 140 + static int nlmsg_perm(u16 nlmsg_type, u32 *perm, const struct nlmsg_perm *tab, size_t tabsize) 141 141 { 142 142 int i, err = -EINVAL; 143 143
+4 -4
security/selinux/selinuxfs.c
··· 1456 1456 { 1457 1457 struct avc_cache_stats *st = v; 1458 1458 1459 - if (v == SEQ_START_TOKEN) 1460 - seq_printf(seq, "lookups hits misses allocations reclaims " 1461 - "frees\n"); 1462 - else { 1459 + if (v == SEQ_START_TOKEN) { 1460 + seq_puts(seq, 1461 + "lookups hits misses allocations reclaims frees\n"); 1462 + } else { 1463 1463 unsigned int lookups = st->lookups; 1464 1464 unsigned int misses = st->misses; 1465 1465 unsigned int hits = lookups - misses;
+7 -7
security/selinux/ss/conditional.c
··· 176 176 int cond_init_bool_indexes(struct policydb *p) 177 177 { 178 178 kfree(p->bool_val_to_struct); 179 - p->bool_val_to_struct = 180 - kmalloc(p->p_bools.nprim * sizeof(struct cond_bool_datum *), GFP_KERNEL); 179 + p->bool_val_to_struct = kmalloc_array(p->p_bools.nprim, 180 + sizeof(*p->bool_val_to_struct), 181 + GFP_KERNEL); 181 182 if (!p->bool_val_to_struct) 182 183 return -ENOMEM; 183 184 return 0; ··· 227 226 u32 len; 228 227 int rc; 229 228 230 - booldatum = kzalloc(sizeof(struct cond_bool_datum), GFP_KERNEL); 229 + booldatum = kzalloc(sizeof(*booldatum), GFP_KERNEL); 231 230 if (!booldatum) 232 231 return -ENOMEM; 233 232 ··· 332 331 goto err; 333 332 } 334 333 335 - list = kzalloc(sizeof(struct cond_av_list), GFP_KERNEL); 334 + list = kzalloc(sizeof(*list), GFP_KERNEL); 336 335 if (!list) { 337 336 rc = -ENOMEM; 338 337 goto err; ··· 421 420 goto err; 422 421 423 422 rc = -ENOMEM; 424 - expr = kzalloc(sizeof(struct cond_expr), GFP_KERNEL); 423 + expr = kzalloc(sizeof(*expr), GFP_KERNEL); 425 424 if (!expr) 426 425 goto err; 427 426 ··· 472 471 473 472 for (i = 0; i < len; i++) { 474 473 rc = -ENOMEM; 475 - node = kzalloc(sizeof(struct cond_node), GFP_KERNEL); 474 + node = kzalloc(sizeof(*node), GFP_KERNEL); 476 475 if (!node) 477 476 goto err; 478 477 ··· 664 663 (node->key.specified & AVTAB_XPERMS)) 665 664 services_compute_xperms_drivers(xperms, node); 666 665 } 667 - return; 668 666 }
+5 -5
security/selinux/ss/hashtab.c
··· 17 17 u32 i; 18 18 19 19 p = kzalloc(sizeof(*p), GFP_KERNEL); 20 - if (p == NULL) 20 + if (!p) 21 21 return p; 22 22 23 23 p->size = size; 24 24 p->nel = 0; 25 25 p->hash_value = hash_value; 26 26 p->keycmp = keycmp; 27 - p->htable = kmalloc(sizeof(*(p->htable)) * size, GFP_KERNEL); 28 - if (p->htable == NULL) { 27 + p->htable = kmalloc_array(size, sizeof(*p->htable), GFP_KERNEL); 28 + if (!p->htable) { 29 29 kfree(p); 30 30 return NULL; 31 31 } ··· 58 58 return -EEXIST; 59 59 60 60 newnode = kzalloc(sizeof(*newnode), GFP_KERNEL); 61 - if (newnode == NULL) 61 + if (!newnode) 62 62 return -ENOMEM; 63 63 newnode->key = key; 64 64 newnode->datum = datum; ··· 87 87 while (cur && h->keycmp(h, key, cur->key) > 0) 88 88 cur = cur->next; 89 89 90 - if (cur == NULL || (h->keycmp(h, key, cur->key) != 0)) 90 + if (!cur || (h->keycmp(h, key, cur->key) != 0)) 91 91 return NULL; 92 92 93 93 return cur->datum;
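The hashtab.c change (and the similar ones in conditional.c and sidtab.c) replaces open-coded `kmalloc(sizeof(x) * n, ...)` with `kmalloc_array(n, sizeof(x), ...)`, which fails cleanly instead of silently wrapping when the multiplication overflows. A userspace sketch of the same overflow check (`alloc_array` is an illustrative analogue, not the kernel function):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Overflow-checked array allocation, analogous to the kernel's
 * kmalloc_array(): refuse the request rather than wrapping around
 * and handing back a too-small buffer. */
static void *alloc_array(size_t n, size_t size)
{
	if (size != 0 && n > SIZE_MAX / size)
		return NULL;	/* n * size would overflow */
	return malloc(n * size);
}
```

With a plain `malloc(n * size)`, a huge `n` can wrap the product to a small value, and every subsequent indexed write becomes a heap overflow.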
+24 -35
security/selinux/ss/policydb.c
··· 178 178 int rc; 179 179 struct role_datum *role; 180 180 181 - rc = -ENOMEM; 182 181 role = kzalloc(sizeof(*role), GFP_KERNEL); 183 182 if (!role) 184 - goto out; 183 + return -ENOMEM; 185 184 186 185 rc = -EINVAL; 187 186 role->value = ++p->p_roles.nprim; ··· 539 540 #endif 540 541 541 542 rc = -ENOMEM; 542 - p->class_val_to_struct = 543 - kzalloc(p->p_classes.nprim * sizeof(*(p->class_val_to_struct)), 544 - GFP_KERNEL); 543 + p->class_val_to_struct = kcalloc(p->p_classes.nprim, 544 + sizeof(*p->class_val_to_struct), 545 + GFP_KERNEL); 545 546 if (!p->class_val_to_struct) 546 547 goto out; 547 548 548 549 rc = -ENOMEM; 549 - p->role_val_to_struct = 550 - kzalloc(p->p_roles.nprim * sizeof(*(p->role_val_to_struct)), 551 - GFP_KERNEL); 550 + p->role_val_to_struct = kcalloc(p->p_roles.nprim, 551 + sizeof(*p->role_val_to_struct), 552 + GFP_KERNEL); 552 553 if (!p->role_val_to_struct) 553 554 goto out; 554 555 555 556 rc = -ENOMEM; 556 - p->user_val_to_struct = 557 - kzalloc(p->p_users.nprim * sizeof(*(p->user_val_to_struct)), 558 - GFP_KERNEL); 557 + p->user_val_to_struct = kcalloc(p->p_users.nprim, 558 + sizeof(*p->user_val_to_struct), 559 + GFP_KERNEL); 559 560 if (!p->user_val_to_struct) 560 561 goto out; 561 562 ··· 879 880 ebitmap_destroy(&p->filename_trans_ttypes); 880 881 ebitmap_destroy(&p->policycaps); 881 882 ebitmap_destroy(&p->permissive_map); 882 - 883 - return; 884 883 } 885 884 886 885 /* ··· 1117 1120 __le32 buf[2]; 1118 1121 u32 len; 1119 1122 1120 - rc = -ENOMEM; 1121 1123 perdatum = kzalloc(sizeof(*perdatum), GFP_KERNEL); 1122 1124 if (!perdatum) 1123 - goto bad; 1125 + return -ENOMEM; 1124 1126 1125 1127 rc = next_entry(buf, fp, sizeof buf); 1126 1128 if (rc) ··· 1150 1154 u32 len, nel; 1151 1155 int i, rc; 1152 1156 1153 - rc = -ENOMEM; 1154 1157 comdatum = kzalloc(sizeof(*comdatum), GFP_KERNEL); 1155 1158 if (!comdatum) 1156 - goto bad; 1159 + return -ENOMEM; 1157 1160 1158 1161 rc = next_entry(buf, fp, sizeof buf); 1159 1162 if (rc) ··· 1315 
1320 u32 len, len2, ncons, nel; 1316 1321 int i, rc; 1317 1322 1318 - rc = -ENOMEM; 1319 1323 cladatum = kzalloc(sizeof(*cladatum), GFP_KERNEL); 1320 1324 if (!cladatum) 1321 - goto bad; 1325 + return -ENOMEM; 1322 1326 1323 1327 rc = next_entry(buf, fp, sizeof(u32)*6); 1324 1328 if (rc) ··· 1408 1414 __le32 buf[3]; 1409 1415 u32 len; 1410 1416 1411 - rc = -ENOMEM; 1412 1417 role = kzalloc(sizeof(*role), GFP_KERNEL); 1413 1418 if (!role) 1414 - goto bad; 1419 + return -ENOMEM; 1415 1420 1416 1421 if (p->policyvers >= POLICYDB_VERSION_BOUNDARY) 1417 1422 to_read = 3; ··· 1464 1471 __le32 buf[4]; 1465 1472 u32 len; 1466 1473 1467 - rc = -ENOMEM; 1468 1474 typdatum = kzalloc(sizeof(*typdatum), GFP_KERNEL); 1469 1475 if (!typdatum) 1470 - goto bad; 1476 + return -ENOMEM; 1471 1477 1472 1478 if (p->policyvers >= POLICYDB_VERSION_BOUNDARY) 1473 1479 to_read = 4; ··· 1538 1546 __le32 buf[3]; 1539 1547 u32 len; 1540 1548 1541 - rc = -ENOMEM; 1542 1549 usrdatum = kzalloc(sizeof(*usrdatum), GFP_KERNEL); 1543 1550 if (!usrdatum) 1544 - goto bad; 1551 + return -ENOMEM; 1545 1552 1546 1553 if (p->policyvers >= POLICYDB_VERSION_BOUNDARY) 1547 1554 to_read = 3; ··· 1588 1597 __le32 buf[2]; 1589 1598 u32 len; 1590 1599 1591 - rc = -ENOMEM; 1592 1600 levdatum = kzalloc(sizeof(*levdatum), GFP_ATOMIC); 1593 1601 if (!levdatum) 1594 - goto bad; 1602 + return -ENOMEM; 1595 1603 1596 1604 rc = next_entry(buf, fp, sizeof buf); 1597 1605 if (rc) ··· 1604 1614 goto bad; 1605 1615 1606 1616 rc = -ENOMEM; 1607 - levdatum->level = kmalloc(sizeof(struct mls_level), GFP_ATOMIC); 1617 + levdatum->level = kmalloc(sizeof(*levdatum->level), GFP_ATOMIC); 1608 1618 if (!levdatum->level) 1609 1619 goto bad; 1610 1620 ··· 1629 1639 __le32 buf[3]; 1630 1640 u32 len; 1631 1641 1632 - rc = -ENOMEM; 1633 1642 catdatum = kzalloc(sizeof(*catdatum), GFP_ATOMIC); 1634 1643 if (!catdatum) 1635 - goto bad; 1644 + return -ENOMEM; 1636 1645 1637 1646 rc = next_entry(buf, fp, sizeof buf); 1638 1647 if (rc) ··· 1843 
1854 1844 1855 rc = next_entry(buf, fp, sizeof(u32)); 1845 1856 if (rc) 1846 - goto out; 1857 + return rc; 1847 1858 1848 1859 nel = le32_to_cpu(buf[0]); 1849 1860 for (i = 0; i < nel; i++) { ··· 1920 1931 nel = le32_to_cpu(buf[0]); 1921 1932 1922 1933 for (i = 0; i < nel; i++) { 1923 - ft = NULL; 1924 1934 otype = NULL; 1925 1935 name = NULL; 1926 1936 ··· 1996 2008 1997 2009 rc = next_entry(buf, fp, sizeof(u32)); 1998 2010 if (rc) 1999 - goto out; 2011 + return rc; 2000 2012 nel = le32_to_cpu(buf[0]); 2001 2013 2002 2014 for (i = 0; i < nel; i++) { ··· 2088 2100 } 2089 2101 rc = 0; 2090 2102 out: 2091 - if (newgenfs) 2103 + if (newgenfs) { 2092 2104 kfree(newgenfs->fstype); 2093 - kfree(newgenfs); 2105 + kfree(newgenfs); 2106 + } 2094 2107 ocontext_destroy(newc, OCON_FSUSE); 2095 2108 2096 2109 return rc;
+1 -1
security/selinux/ss/services.c
··· 157 157 } 158 158 159 159 k = 0; 160 - while (p_in->perms && p_in->perms[k]) { 160 + while (p_in->perms[k]) { 161 161 /* An empty permission string skips ahead */ 162 162 if (!*p_in->perms[k]) { 163 163 k++;
+3 -3
security/selinux/ss/sidtab.c
··· 18 18 { 19 19 int i; 20 20 21 - s->htable = kmalloc(sizeof(*(s->htable)) * SIDTAB_SIZE, GFP_ATOMIC); 21 + s->htable = kmalloc_array(SIDTAB_SIZE, sizeof(*s->htable), GFP_ATOMIC); 22 22 if (!s->htable) 23 23 return -ENOMEM; 24 24 for (i = 0; i < SIDTAB_SIZE; i++) ··· 54 54 } 55 55 56 56 newnode = kmalloc(sizeof(*newnode), GFP_ATOMIC); 57 - if (newnode == NULL) { 57 + if (!newnode) { 58 58 rc = -ENOMEM; 59 59 goto out; 60 60 } ··· 98 98 if (force && cur && sid == cur->sid && cur->context.len) 99 99 return &cur->context; 100 100 101 - if (cur == NULL || sid != cur->sid || cur->context.len) { 101 + if (!cur || sid != cur->sid || cur->context.len) { 102 102 /* Remap invalid SIDs to the unlabeled SID. */ 103 103 sid = SECINITSID_UNLABELED; 104 104 hvalue = SIDTAB_HASH(sid);
+1 -1
security/smack/smack_access.c
··· 504 504 if ((m & *cp) == 0) 505 505 continue; 506 506 rc = netlbl_catmap_setbit(&sap->attr.mls.cat, 507 - cat, GFP_ATOMIC); 507 + cat, GFP_KERNEL); 508 508 if (rc < 0) { 509 509 netlbl_catmap_free(sap->attr.mls.cat); 510 510 return rc;
+2 -4
security/smack/smack_lsm.c
··· 695 695 696 696 opts->mnt_opts_flags = kcalloc(NUM_SMK_MNT_OPTS, sizeof(int), 697 697 GFP_KERNEL); 698 - if (!opts->mnt_opts_flags) { 699 - kfree(opts->mnt_opts); 698 + if (!opts->mnt_opts_flags) 700 699 goto out_err; 701 - } 702 700 703 701 if (fsdefault) { 704 702 opts->mnt_opts[num_mnt_opts] = fsdefault; ··· 4631 4633 return 0; 4632 4634 } 4633 4635 4634 - static struct security_hook_list smack_hooks[] = { 4636 + static struct security_hook_list smack_hooks[] __lsm_ro_after_init = { 4635 4637 LSM_HOOK_INIT(ptrace_access_check, smack_ptrace_access_check), 4636 4638 LSM_HOOK_INIT(ptrace_traceme, smack_ptrace_traceme), 4637 4639 LSM_HOOK_INIT(syslog, smack_syslog),
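The smack_lsm.c hunk fixes a double free in `smack_parse_opts_str()`: the failure branch used to `kfree(opts->mnt_opts)` and then jump to `out_err`, which freed the same pointer again. The fix is to free each allocation on exactly one path. A simplified userspace sketch of the corrected shape (`struct opts` and `opts_alloc` are illustrative, not the Smack code):

```c
#include <assert.h>
#include <stdlib.h>

struct opts {
	char **mnt_opts;
	int *mnt_opts_flags;
};

/* Allocate both arrays, releasing everything on ONE cleanup path.
 * The bug was an extra free() before the goto, duplicating the
 * free() the shared error path already performs. */
static int opts_alloc(struct opts *o, size_t n)
{
	o->mnt_opts = calloc(n, sizeof(*o->mnt_opts));
	if (!o->mnt_opts)
		goto out_err;
	o->mnt_opts_flags = calloc(n, sizeof(*o->mnt_opts_flags));
	if (!o->mnt_opts_flags)
		goto out_err;	/* do NOT free mnt_opts here ... */
	return 0;
out_err:
	free(o->mnt_opts);	/* ... it is freed exactly once, here */
	o->mnt_opts = NULL;
	return -1;
}
```

The general rule the fix restores: when a function uses a shared `goto`-cleanup label, individual error sites must not also release resources the label releases.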
+6 -6
security/tomoyo/file.c
··· 692 692 { 693 693 struct tomoyo_request_info r; 694 694 struct tomoyo_obj_info obj = { 695 - .path1 = *path, 695 + .path1 = { .mnt = path->mnt, .dentry = path->dentry }, 696 696 }; 697 697 int error = -ENOMEM; 698 698 struct tomoyo_path_info buf; ··· 740 740 struct tomoyo_path_info buf; 741 741 struct tomoyo_request_info r; 742 742 struct tomoyo_obj_info obj = { 743 - .path1 = *path, 743 + .path1 = { .mnt = path->mnt, .dentry = path->dentry }, 744 744 }; 745 745 int idx; 746 746 ··· 786 786 { 787 787 struct tomoyo_request_info r; 788 788 struct tomoyo_obj_info obj = { 789 - .path1 = *path, 789 + .path1 = { .mnt = path->mnt, .dentry = path->dentry }, 790 790 }; 791 791 int error; 792 792 struct tomoyo_path_info buf; ··· 843 843 { 844 844 struct tomoyo_request_info r; 845 845 struct tomoyo_obj_info obj = { 846 - .path1 = *path, 846 + .path1 = { .mnt = path->mnt, .dentry = path->dentry }, 847 847 }; 848 848 int error = -ENOMEM; 849 849 struct tomoyo_path_info buf; ··· 890 890 struct tomoyo_path_info buf2; 891 891 struct tomoyo_request_info r; 892 892 struct tomoyo_obj_info obj = { 893 - .path1 = *path1, 894 - .path2 = *path2, 893 + .path1 = { .mnt = path1->mnt, .dentry = path1->dentry }, 894 + .path2 = { .mnt = path2->mnt, .dentry = path2->dentry } 895 895 }; 896 896 int idx; 897 897
+11 -11
security/tomoyo/tomoyo.c
··· 165 165 */ 166 166 static int tomoyo_path_unlink(const struct path *parent, struct dentry *dentry) 167 167 { 168 - struct path path = { parent->mnt, dentry }; 168 + struct path path = { .mnt = parent->mnt, .dentry = dentry }; 169 169 return tomoyo_path_perm(TOMOYO_TYPE_UNLINK, &path, NULL); 170 170 } 171 171 ··· 181 181 static int tomoyo_path_mkdir(const struct path *parent, struct dentry *dentry, 182 182 umode_t mode) 183 183 { 184 - struct path path = { parent->mnt, dentry }; 184 + struct path path = { .mnt = parent->mnt, .dentry = dentry }; 185 185 return tomoyo_path_number_perm(TOMOYO_TYPE_MKDIR, &path, 186 186 mode & S_IALLUGO); 187 187 } ··· 196 196 */ 197 197 static int tomoyo_path_rmdir(const struct path *parent, struct dentry *dentry) 198 198 { 199 - struct path path = { parent->mnt, dentry }; 199 + struct path path = { .mnt = parent->mnt, .dentry = dentry }; 200 200 return tomoyo_path_perm(TOMOYO_TYPE_RMDIR, &path, NULL); 201 201 } 202 202 ··· 212 212 static int tomoyo_path_symlink(const struct path *parent, struct dentry *dentry, 213 213 const char *old_name) 214 214 { 215 - struct path path = { parent->mnt, dentry }; 215 + struct path path = { .mnt = parent->mnt, .dentry = dentry }; 216 216 return tomoyo_path_perm(TOMOYO_TYPE_SYMLINK, &path, old_name); 217 217 } 218 218 ··· 229 229 static int tomoyo_path_mknod(const struct path *parent, struct dentry *dentry, 230 230 umode_t mode, unsigned int dev) 231 231 { 232 - struct path path = { parent->mnt, dentry }; 232 + struct path path = { .mnt = parent->mnt, .dentry = dentry }; 233 233 int type = TOMOYO_TYPE_CREATE; 234 234 const unsigned int perm = mode & S_IALLUGO; 235 235 ··· 268 268 static int tomoyo_path_link(struct dentry *old_dentry, const struct path *new_dir, 269 269 struct dentry *new_dentry) 270 270 { 271 - struct path path1 = { new_dir->mnt, old_dentry }; 272 - struct path path2 = { new_dir->mnt, new_dentry }; 271 + struct path path1 = { .mnt = new_dir->mnt, .dentry = old_dentry }; 272 + struct path path2 = { .mnt = new_dir->mnt, .dentry = new_dentry }; 273 273 return tomoyo_path2_perm(TOMOYO_TYPE_LINK, &path1, &path2); 274 274 } 275 275 ··· 288 288 const struct path *new_parent, 289 289 struct dentry *new_dentry) 290 290 { 291 - struct path path1 = { old_parent->mnt, old_dentry }; 292 - struct path path2 = { new_parent->mnt, new_dentry }; 291 + struct path path1 = { .mnt = old_parent->mnt, .dentry = old_dentry }; 292 + struct path path2 = { .mnt = new_parent->mnt, .dentry = new_dentry }; 293 293 return tomoyo_path2_perm(TOMOYO_TYPE_RENAME, &path1, &path2); 294 294 } 295 295 ··· 417 417 */ 418 418 static int tomoyo_sb_umount(struct vfsmount *mnt, int flags) 419 419 { 420 - struct path path = { mnt, mnt->mnt_root }; 420 + struct path path = { .mnt = mnt, .dentry = mnt->mnt_root }; 421 421 return tomoyo_path_perm(TOMOYO_TYPE_UMOUNT, &path, NULL); 422 422 } 423 423 ··· 496 496 * tomoyo_security_ops is a "struct security_operations" which is used for 497 497 * registering TOMOYO. 498 498 */ 499 - static struct security_hook_list tomoyo_hooks[] = { 499 + static struct security_hook_list tomoyo_hooks[] __lsm_ro_after_init = { 500 500 LSM_HOOK_INIT(cred_alloc_blank, tomoyo_cred_alloc_blank), 501 501 LSM_HOOK_INIT(cred_prepare, tomoyo_cred_prepare), 502 502 LSM_HOOK_INIT(cred_transfer, tomoyo_cred_transfer),
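The TOMOYO hunks above convert every positional `struct path` initializer to designated-initializer form. A small compilable sketch of why (the struct definitions here are simplified stand-ins, not the real kernel types):

```c
/* Simplified stand-ins for the kernel's types, just to show the
 * initializer styles -- not the real definitions from <linux/path.h>. */
struct vfsmount { int id; };
struct dentry { int id; };
struct path {
	struct vfsmount *mnt;
	struct dentry *dentry;
};

/* Positional form (the old code): silently assigns members by order,
 * so it breaks if struct path is ever reordered, e.g. by the
 * structure-layout randomization plugin. */
static struct path make_path_positional(struct vfsmount *m, struct dentry *d)
{
	struct path p = { m, d };
	return p;
}

/* Designated form (the new code): immune to member reordering and
 * self-documenting at the call site. */
static struct path make_path_designated(struct vfsmount *m, struct dentry *d)
{
	struct path p = { .mnt = m, .dentry = d };
	return p;
}
```

Both forms produce identical objects today; the designated form simply stops depending on the declaration order of `mnt` and `dentry`.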
+1 -1
security/yama/yama_lsm.c
··· 428 428 return rc; 429 429 } 430 430 431 - static struct security_hook_list yama_hooks[] = { 431 + static struct security_hook_list yama_hooks[] __lsm_ro_after_init = { 432 432 LSM_HOOK_INIT(ptrace_access_check, yama_ptrace_access_check), 433 433 LSM_HOOK_INIT(ptrace_traceme, yama_ptrace_traceme), 434 434 LSM_HOOK_INIT(task_prctl, yama_task_prctl),
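The recurring `__lsm_ro_after_init` annotation on the smack, tomoyo, and yama hook tables is the "harden LSM API with __ro_after_init" item from the pull summary: once boot-time initialization completes, the kernel write-protects data placed in the ro-after-init section, so a runtime kernel-write primitive can no longer redirect LSM hooks. A userspace-compilable sketch of the usage pattern (the macro is stubbed out here; in the kernel it expands to the `__ro_after_init` section attribute when writable hooks are not configured, and `security_hook_list`/`demo_ptrace_check` below are simplified stand-ins):

```c
/* Stubbed for userspace; in the kernel this places the array in
 * .data..ro_after_init, which is remapped read-only after boot. */
#define __lsm_ro_after_init

/* Simplified stand-in for the kernel's hook-list entry. */
struct security_hook_list {
	const char *name;
	int (*hook)(void);
};

/* Illustrative hook implementation. */
static int demo_ptrace_check(void)
{
	return 0;	/* permit */
}

/* The pattern applied throughout this merge: hook tables are filled in
 * once at registration time and never legitimately written again, so
 * they can be made read-only after init. */
static struct security_hook_list demo_hooks[] __lsm_ro_after_init = {
	{ "ptrace_access_check", demo_ptrace_check },
};
```

The payoff is purely defensive: behavior is unchanged, but overwriting a hook pointer after boot now faults instead of silently hijacking the security policy.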