
Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security

Pull security subsystem updates from James Morris:
"Highlights:

- Integrity: add local fs integrity verification to detect offline
attacks
- Integrity: add digital signature verification
- Simple stacking of Yama with other LSMs (per LSS discussions)
- IBM vTPM support on ppc64
- Add new driver for Infineon I2C TIS TPM
- Smack: add rule revocation for subject labels"

Fixed conflicts with the user namespace support in kernel/auditsc.c and
security/integrity/ima/ima_policy.c.

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (39 commits)
Documentation: Update git repository URL for Smack userland tools
ima: change flags container data type
Smack: setprocattr memory leak fix
Smack: implement revoking all rules for a subject label
Smack: remove task_wait() hook.
ima: audit log hashes
ima: generic IMA action flag handling
ima: rename ima_must_appraise_or_measure
audit: export audit_log_task_info
tpm: fix tpm_acpi sparse warning on different address spaces
samples/seccomp: fix 31 bit build on s390
ima: digital signature verification support
ima: add support for different security.ima data types
ima: add ima_inode_setxattr/removexattr function and calls
ima: add inode_post_setattr call
ima: replace iint spinblock with rwlock/read_lock
ima: allocating iint improvements
ima: add appraise action keywords and default rules
ima: integrity appraisal extension
vfs: move ima_file_free before releasing the file
...

+3547 -464
+21 -4
Documentation/ABI/testing/ima_policy
···
 	then closing the file. The new policy takes effect after
 	the file ima/policy is closed.
 
+	IMA appraisal, if configured, uses these file measurements
+	for local measurement appraisal.
+
 	rule format: action [condition ...]
 
-	action: measure | dont_measure
+	action: measure | dont_measure | appraise | dont_appraise | audit
 	condition:= base | lsm
-	base:	[[func=] [mask=] [fsmagic=] [uid=]]
+	base:	[[func=] [mask=] [fsmagic=] [uid=] [fowner=]]
 	lsm:	[[subj_user=] [subj_role=] [subj_type=]
 		 [obj_user=] [obj_role=] [obj_type=]]
···
 		mask:= [MAY_READ] [MAY_WRITE] [MAY_APPEND] [MAY_EXEC]
 		fsmagic:= hex value
 		uid:= decimal value
+		fowner:= decimal value
 		lsm: are LSM specific
 
 	default policy:
 		# PROC_SUPER_MAGIC
 		dont_measure fsmagic=0x9fa0
+		dont_appraise fsmagic=0x9fa0
 		# SYSFS_MAGIC
 		dont_measure fsmagic=0x62656572
+		dont_appraise fsmagic=0x62656572
 		# DEBUGFS_MAGIC
 		dont_measure fsmagic=0x64626720
+		dont_appraise fsmagic=0x64626720
 		# TMPFS_MAGIC
 		dont_measure fsmagic=0x01021994
+		dont_appraise fsmagic=0x01021994
+		# RAMFS_MAGIC
+		dont_measure fsmagic=0x858458f6
+		dont_appraise fsmagic=0x858458f6
 		# SECURITYFS_MAGIC
 		dont_measure fsmagic=0x73636673
+		dont_appraise fsmagic=0x73636673
 
 		measure func=BPRM_CHECK
 		measure func=FILE_MMAP mask=MAY_EXEC
 		measure func=FILE_CHECK mask=MAY_READ uid=0
+		appraise fowner=0
 
 	The default policy measures all executables in bprm_check,
 	all files mmapped executable in file_mmap, and all files
-	open for read by root in do_filp_open.
+	open for read by root in do_filp_open. The default appraisal
+	policy appraises all files owned by root.
 
 	Examples of LSM specific definitions:
 
 	SELinux:
 		# SELINUX_MAGIC
-		dont_measure fsmagic=0xF97CFF8C
+		dont_measure fsmagic=0xf97cff8c
+		dont_appraise fsmagic=0xf97cff8c
 
 		dont_measure obj_type=var_log_t
+		dont_appraise obj_type=var_log_t
 		dont_measure obj_type=auditd_log_t
+		dont_appraise obj_type=auditd_log_t
 		measure subj_user=system_u func=FILE_CHECK mask=MAY_READ
 		measure subj_role=system_r func=FILE_CHECK mask=MAY_READ
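The new appraise actions compose with the existing condition syntax. As an illustration only (a hypothetical local policy, not part of this patch), a policy that skips both measurement and appraisal on tmpfs but appraises everything root owns could be written to the securityfs policy file (typically `<securityfs>/ima/policy`):

```
# hypothetical custom IMA policy
dont_measure fsmagic=0x01021994
dont_appraise fsmagic=0x01021994
measure func=BPRM_CHECK
appraise fowner=0
```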
+70
Documentation/ABI/testing/sysfs-driver-ppi
···
+What:		/sys/devices/pnp0/<bus-num>/ppi/
+Date:		August 2012
+Kernel Version:	3.6
+Contact:	xiaoyan.zhang@intel.com
+Description:
+		This folder includes the attributes related to PPI (Physical
+		Presence Interface). It is only meaningful if the BIOS
+		supports a TPM. The folder path can be found with the command
+		'find /sys/ -name 'pcrs''. For detailed information on PPI,
+		please refer to the PPI specification from
+		http://www.trustedcomputinggroup.org/
+
+What:		/sys/devices/pnp0/<bus-num>/ppi/version
+Date:		August 2012
+Contact:	xiaoyan.zhang@intel.com
+Description:
+		This attribute shows the version of the PPI supported by the
+		platform.
+		This file is readonly.
+
+What:		/sys/devices/pnp0/<bus-num>/ppi/request
+Date:		August 2012
+Contact:	xiaoyan.zhang@intel.com
+Description:
+		This attribute shows the request for an operation to be
+		executed in the pre-OS environment. It is the only input from
+		the OS to the pre-OS environment. The request should be an
+		integer value in the range 1 to 160; 0 means no request.
+		This file can be read and written.
+
+What:		/sys/devices/pnp0/<bus-num>/ppi/response
+Date:		August 2012
+Contact:	xiaoyan.zhang@intel.com
+Description:
+		This attribute shows the response to the most recent operation
+		request it acted upon. The format is "<request> <response num>
+		: <response description>".
+		This file is readonly.
+
+What:		/sys/devices/pnp0/<bus-num>/ppi/transition_action
+Date:		August 2012
+Contact:	xiaoyan.zhang@intel.com
+Description:
+		This attribute shows the platform-specific action that should
+		take place in order to transition to the BIOS for execution of
+		a requested operation. The format is "<action num>: <action
+		description>".
+		This file is readonly.
+
+What:		/sys/devices/pnp0/<bus-num>/ppi/tcg_operations
+Date:		August 2012
+Contact:	xiaoyan.zhang@intel.com
+Description:
+		This attribute shows whether it is allowed to request an
+		operation to be executed in the pre-OS environment by the BIOS
+		for the requests defined by TCG, i.e. requests from 1 to 22.
+		The format is "<request> <status num>: <status description>".
+		This attribute is only supported by PPI version 1.2+.
+		This file is readonly.
+
+What:		/sys/devices/pnp0/<bus-num>/ppi/vs_operations
+Date:		August 2012
+Contact:	xiaoyan.zhang@intel.com
+Description:
+		This attribute shows whether it is allowed to request an
+		operation to be executed in the pre-OS environment by the BIOS
+		for the vendor-specific requests, i.e. requests from 128 to
+		255. The format is the same as tcg_operations. This attribute
+		is also only supported by PPI version 1.2+.
+		This file is readonly.
+8
Documentation/kernel-parameters.txt
···
 	ihash_entries=	[KNL]
 			Set number of hash buckets for inode cache.
 
+	ima_appraise=	[IMA] appraise integrity measurements
+			Format: { "off" | "enforce" | "fix" }
+			default: "enforce"
+
+	ima_appraise_tcb [IMA]
+			The builtin appraise policy appraises all files
+			owned by uid=0.
+
 	ima_audit=	[IMA]
 			Format: { "0" | "1" }
 			0 -- integrity auditing messages. (Default)
+6 -4
Documentation/security/Smack.txt
···
 	configurations are intolerant of IP options and can impede
 	access to systems that use them as Smack does.
 
-	The current git repositories for Smack user space are:
+	The current git repository for Smack user space is:
 
-		git@gitorious.org:meego-platform-security/smackutil.git
-		git@gitorious.org:meego-platform-security/libsmack.git
+		git://github.com/smack-team/smack.git
 
-	These should make and install on most modern distributions.
+	This should make and install on most modern distributions.
 	There are three commands included in smackutil:
 
 	smackload  - properly formats data for writing to /smack/load
···
 	these capabilities are effective at for processes with any
 	label. The value is set by writing the desired label to the
 	file or cleared by writing "-" to the file.
+revoke-subject
+	Writing a Smack label here sets the access to '-' for all access
+	rules with that subject label.
 
 You can add access rules in /etc/smack/accesses. They take the form:
+62
arch/powerpc/kernel/prom_init.c
···
 
 #ifdef CONFIG_PPC64
 /*
+ * Allocate room for and instantiate Stored Measurement Log (SML)
+ */
+static void __init prom_instantiate_sml(void)
+{
+	phandle ibmvtpm_node;
+	ihandle ibmvtpm_inst;
+	u32 entry = 0, size = 0;
+	u64 base;
+
+	prom_debug("prom_instantiate_sml: start...\n");
+
+	ibmvtpm_node = call_prom("finddevice", 1, 1, ADDR("/ibm,vtpm"));
+	prom_debug("ibmvtpm_node: %x\n", ibmvtpm_node);
+	if (!PHANDLE_VALID(ibmvtpm_node))
+		return;
+
+	ibmvtpm_inst = call_prom("open", 1, 1, ADDR("/ibm,vtpm"));
+	if (!IHANDLE_VALID(ibmvtpm_inst)) {
+		prom_printf("opening vtpm package failed (%x)\n", ibmvtpm_inst);
+		return;
+	}
+
+	if (call_prom_ret("call-method", 2, 2, &size,
+			  ADDR("sml-get-handover-size"),
+			  ibmvtpm_inst) != 0 || size == 0) {
+		prom_printf("SML get handover size failed\n");
+		return;
+	}
+
+	base = alloc_down(size, PAGE_SIZE, 0);
+	if (base == 0)
+		prom_panic("Could not allocate memory for sml\n");
+
+	prom_printf("instantiating sml at 0x%x...", base);
+
+	if (call_prom_ret("call-method", 4, 2, &entry,
+			  ADDR("sml-handover"),
+			  ibmvtpm_inst, size, base) != 0 || entry == 0) {
+		prom_printf("SML handover failed\n");
+		return;
+	}
+	prom_printf(" done\n");
+
+	reserve_mem(base, size);
+
+	prom_setprop(ibmvtpm_node, "/ibm,vtpm", "linux,sml-base",
+		     &base, sizeof(base));
+	prom_setprop(ibmvtpm_node, "/ibm,vtpm", "linux,sml-size",
+		     &size, sizeof(size));
+
+	prom_debug("sml base = 0x%x\n", base);
+	prom_debug("sml size = 0x%x\n", (long)size);
+
+	prom_debug("prom_instantiate_sml: end...\n");
+}
+
+/*
  * Allocate room for and initialize TCE tables
  */
 static void __init prom_initialize_tce_table(void)
···
 		}
 	} else if (RELOC(of_platform) == PLATFORM_OPAL)
 		prom_instantiate_opal();
+#endif
+
+#ifdef CONFIG_PPC64
+	/* instantiate sml */
+	prom_instantiate_sml();
 #endif
 
 	/*
+13
drivers/char/hw_random/Kconfig
···
 	  module will be called exynos-rng.
 
 	  If unsure, say Y.
+
+config HW_RANDOM_TPM
+	tristate "TPM HW Random Number Generator support"
+	depends on HW_RANDOM && TCG_TPM
+	default HW_RANDOM
+	---help---
+	  This driver provides kernel-side support for the Random Number
+	  Generator in the Trusted Platform Module.
+
+	  To compile this driver as a module, choose M here: the
+	  module will be called tpm-rng.
+
+	  If unsure, say Y.
+1
drivers/char/hw_random/Makefile
···
 obj-$(CONFIG_HW_RANDOM_PPC4XX) += ppc4xx-rng.o
 obj-$(CONFIG_HW_RANDOM_PSERIES) += pseries-rng.o
 obj-$(CONFIG_HW_RANDOM_EXYNOS) += exynos-rng.o
+obj-$(CONFIG_HW_RANDOM_TPM) += tpm-rng.o
+50
drivers/char/hw_random/tpm-rng.c
···
+/*
+ * Copyright (C) 2012 Kent Yoder IBM Corporation
+ *
+ * HWRNG interfaces to pull RNG data from a TPM
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/module.h>
+#include <linux/hw_random.h>
+#include <linux/tpm.h>
+
+#define MODULE_NAME "tpm-rng"
+
+static int tpm_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
+{
+	return tpm_get_random(TPM_ANY_NUM, data, max);
+}
+
+static struct hwrng tpm_rng = {
+	.name = MODULE_NAME,
+	.read = tpm_rng_read,
+};
+
+static int __init rng_init(void)
+{
+	return hwrng_register(&tpm_rng);
+}
+module_init(rng_init);
+
+static void __exit rng_exit(void)
+{
+	hwrng_unregister(&tpm_rng);
+}
+module_exit(rng_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Kent Yoder <key@linux.vnet.ibm.com>");
+MODULE_DESCRIPTION("RNG driver for TPM devices");
+19
drivers/char/tpm/Kconfig
···
 	  from within Linux. To compile this driver as a module, choose
 	  M here; the module will be called tpm_tis.
 
+config TCG_TIS_I2C_INFINEON
+	tristate "TPM Interface Specification 1.2 Interface (I2C - Infineon)"
+	depends on I2C
+	---help---
+	  If you have a TPM security chip that is compliant with the
+	  TCG TIS 1.2 TPM specification and Infineon's I2C Protocol Stack
+	  Specification 0.20 say Yes and it will be accessible from within
+	  Linux.
+	  To compile this driver as a module, choose M here; the module
+	  will be called tpm_tis_i2c_infineon.
+
 config TCG_NSC
 	tristate "National Semiconductor TPM Interface"
 	depends on X86
···
 	  will be called tpm_infineon.
 	  Further information on this driver and the supported hardware
 	  can be found at http://www.trust.rub.de/projects/linux-device-driver-infineon-tpm/
+
+config TCG_IBMVTPM
+	tristate "IBM VTPM Interface"
+	depends on PPC64
+	---help---
+	  If you have IBM virtual TPM (VTPM) support say Yes and it
+	  will be accessible from within Linux. To compile this driver
+	  as a module, choose M here; the module will be called tpm_ibmvtpm.
 
 endif # TCG_TPM
+8
drivers/char/tpm/Makefile
···
 obj-$(CONFIG_TCG_TPM) += tpm.o
 ifdef CONFIG_ACPI
 	obj-$(CONFIG_TCG_TPM) += tpm_bios.o
+	tpm_bios-objs += tpm_eventlog.o tpm_acpi.o tpm_ppi.o
+else
+ifdef CONFIG_TCG_IBMVTPM
+	obj-$(CONFIG_TCG_TPM) += tpm_bios.o
+	tpm_bios-objs += tpm_eventlog.o tpm_of.o
+endif
 endif
 obj-$(CONFIG_TCG_TIS) += tpm_tis.o
+obj-$(CONFIG_TCG_TIS_I2C_INFINEON) += tpm_i2c_infineon.o
 obj-$(CONFIG_TCG_NSC) += tpm_nsc.o
 obj-$(CONFIG_TCG_ATMEL) += tpm_atmel.o
 obj-$(CONFIG_TCG_INFINEON) += tpm_infineon.o
+obj-$(CONFIG_TCG_IBMVTPM) += tpm_ibmvtpm.o
+64 -10
drivers/char/tpm/tpm.c
···
 #include <linux/freezer.h>
 
 #include "tpm.h"
-
-enum tpm_const {
-	TPM_MINOR = 224,	/* officially assigned */
-	TPM_BUFSIZE = 4096,
-	TPM_NUM_DEVICES = 256,
-};
+#include "tpm_eventlog.h"
 
 enum tpm_duration {
 	TPM_SHORT = 0,
···
 #define TPM_INTERNAL_RESULT_SIZE 200
 #define TPM_TAG_RQU_COMMAND cpu_to_be16(193)
 #define TPM_ORD_GET_CAP cpu_to_be32(101)
+#define TPM_ORD_GET_RANDOM cpu_to_be32(70)
 
 static const struct tpm_input_header tpm_getcap_header = {
 	.tag = TPM_TAG_RQU_COMMAND,
···
 #define READ_PUBEK_RESULT_SIZE 314
 #define TPM_ORD_READPUBEK cpu_to_be32(124)
-struct tpm_input_header tpm_readpubek_header = {
+static struct tpm_input_header tpm_readpubek_header = {
 	.tag = TPM_TAG_RQU_COMMAND,
 	.length = cpu_to_be32(30),
 	.ordinal = TPM_ORD_READPUBEK
···
 	flush_work(&chip->work);
 	file->private_data = NULL;
 	atomic_set(&chip->data_pending, 0);
-	kfree(chip->data_buffer);
+	kzfree(chip->data_buffer);
 	clear_bit(0, &chip->is_open);
 	put_device(chip->dev);
 	return 0;
···
 	del_singleshot_timer_sync(&chip->user_read_timer);
 	flush_work(&chip->work);
 	ret_size = atomic_read(&chip->data_pending);
-	atomic_set(&chip->data_pending, 0);
 	if (ret_size > 0) {	/* relay data */
 		ssize_t orig_ret_size = ret_size;
 		if (size < ret_size)
···
 
 		mutex_unlock(&chip->buffer_mutex);
 	}
+
+	atomic_set(&chip->data_pending, 0);
 
 	return ret_size;
 }
···
 }
 EXPORT_SYMBOL_GPL(tpm_pm_resume);
 
+#define TPM_GETRANDOM_RESULT_SIZE	18
+static struct tpm_input_header tpm_getrandom_header = {
+	.tag = TPM_TAG_RQU_COMMAND,
+	.length = cpu_to_be32(14),
+	.ordinal = TPM_ORD_GET_RANDOM
+};
+
+/**
+ * tpm_get_random() - Get random bytes from the tpm's RNG
+ * @chip_num: A specific chip number for the request or TPM_ANY_NUM
+ * @out: destination buffer for the random bytes
+ * @max: the max number of bytes to write to @out
+ *
+ * Returns < 0 on error and the number of bytes read on success
+ */
+int tpm_get_random(u32 chip_num, u8 *out, size_t max)
+{
+	struct tpm_chip *chip;
+	struct tpm_cmd_t tpm_cmd;
+	u32 recd, num_bytes = min_t(u32, max, TPM_MAX_RNG_DATA);
+	int err, total = 0, retries = 5;
+	u8 *dest = out;
+
+	chip = tpm_chip_find_get(chip_num);
+	if (chip == NULL)
+		return -ENODEV;
+
+	if (!out || !num_bytes || max > TPM_MAX_RNG_DATA)
+		return -EINVAL;
+
+	do {
+		tpm_cmd.header.in = tpm_getrandom_header;
+		tpm_cmd.params.getrandom_in.num_bytes = cpu_to_be32(num_bytes);
+
+		err = transmit_cmd(chip, &tpm_cmd,
+				   TPM_GETRANDOM_RESULT_SIZE + num_bytes,
+				   "attempting get random");
+		if (err)
+			break;
+
+		recd = be32_to_cpu(tpm_cmd.params.getrandom_out.rng_data_len);
+		memcpy(dest, tpm_cmd.params.getrandom_out.rng_data, recd);
+
+		dest += recd;
+		total += recd;
+		num_bytes -= recd;
+	} while (retries-- && total < max);
+
+	return total ? total : -EIO;
+}
+EXPORT_SYMBOL_GPL(tpm_get_random);
+
 /* In case vendor provided release function, call it too.*/
 
 void tpm_dev_vendor_release(struct tpm_chip *chip)
···
  * Once all references to platform device are down to 0,
  * release all allocated structures.
  */
-void tpm_dev_release(struct device *dev)
+static void tpm_dev_release(struct device *dev)
 {
 	struct tpm_chip *chip = dev_get_drvdata(dev);
···
 	}
 
 	if (sysfs_create_group(&dev->kobj, chip->vendor.attr_group)) {
+		misc_deregister(&chip->vendor.miscdev);
+		goto put_device;
+	}
+
+	if (sys_add_ppi(&dev->kobj)) {
 		misc_deregister(&chip->vendor.miscdev);
 		goto put_device;
 	}
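The new tpm_get_random() caps each TPM transaction at TPM_MAX_RNG_DATA bytes and loops, accumulating partial reads, until it has collected the requested total or exhausts its retries. That chunking/retry pattern can be sketched in plain userspace C (a sketch only: `source_read()`, `MAX_CHUNK`, and `get_random_sketch()` are hypothetical stand-ins for the TPM transaction and the kernel function, not real kernel APIs):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_CHUNK 128	/* stands in for TPM_MAX_RNG_DATA */

/* Hypothetical stand-in for one TPM_ORD_GET_RANDOM transaction: fills buf
 * with up to want bytes and returns how many it actually produced (a real
 * TPM may return fewer bytes than requested). */
static size_t source_read(unsigned char *buf, size_t want)
{
	size_t got = want > 50 ? 50 : want;	/* simulate short reads */
	size_t i;

	for (i = 0; i < got; i++)
		buf[i] = (unsigned char)i;
	return got;
}

/* Mirrors the shape of the tpm_get_random() loop: cap each request at
 * MAX_CHUNK, accumulate partial reads, give up after a few retries. */
static int get_random_sketch(unsigned char *out, size_t max)
{
	size_t total = 0;
	int retries = 5;

	do {
		size_t want = max - total;
		size_t recd;

		if (want > MAX_CHUNK)
			want = MAX_CHUNK;
		recd = source_read(out + total, want);
		total += recd;
	} while (retries-- && total < max);

	return total ? (int)total : -1;
}
```

With a source that returns at most 50 bytes per call, a 200-byte request completes in four iterations, well within the retry budget, mirroring how short reads from a real TPM are tolerated.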
+28 -7
drivers/char/tpm/tpm.h
···
 #include <linux/io.h>
 #include <linux/tpm.h>
 
+enum tpm_const {
+	TPM_MINOR = 224,	/* officially assigned */
+	TPM_BUFSIZE = 4096,
+	TPM_NUM_DEVICES = 256,
+};
+
 enum tpm_timeout {
 	TPM_TIMEOUT = 5,	/* msecs */
 };
···
 	bool timeout_adjusted;
 	unsigned long duration[3];	/* jiffies */
 	bool duration_adjusted;
+	void *data;
 
 	wait_queue_head_t read_queue;
 	wait_queue_head_t int_queue;
···
 	u8	hash[TPM_DIGEST_SIZE];
 } __attribute__((packed));
 
+/* 128 bytes is an arbitrary cap. This could be as large as TPM_BUFSIZE - 18
+ * bytes, but 128 is still a relatively large number of random bytes and
+ * anything much bigger causes users of struct tpm_cmd_t to start getting
+ * compiler warnings about stack frame size. */
+#define TPM_MAX_RNG_DATA	128
+
+struct tpm_getrandom_out {
+	__be32 rng_data_len;
+	u8     rng_data[TPM_MAX_RNG_DATA];
+} __attribute__((packed));
+
+struct tpm_getrandom_in {
+	__be32 num_bytes;
+} __attribute__((packed));
+
 typedef union {
 	struct	tpm_getcap_params_out getcap_out;
 	struct	tpm_readpubek_params_out readpubek_out;
···
 	struct	tpm_pcrread_in	pcrread_in;
 	struct	tpm_pcrread_out	pcrread_out;
 	struct	tpm_pcrextend_in pcrextend_in;
+	struct	tpm_getrandom_in getrandom_in;
+	struct	tpm_getrandom_out getrandom_out;
 } tpm_cmd_params;
 
 struct tpm_cmd_t {
···
 extern int tpm_pm_resume(struct device *);
 extern int wait_for_tpm_stat(struct tpm_chip *, u8, unsigned long,
 			     wait_queue_head_t *);
+
 #ifdef CONFIG_ACPI
-extern struct dentry ** tpm_bios_log_setup(char *);
-extern void tpm_bios_log_teardown(struct dentry **);
+extern ssize_t sys_add_ppi(struct kobject *parent);
 #else
-static inline struct dentry ** tpm_bios_log_setup(char *name)
+static inline ssize_t sys_add_ppi(struct kobject *parent)
 {
-	return NULL;
-}
-static inline void tpm_bios_log_teardown(struct dentry **dir)
-{
+	return 0;
 }
 #endif
+109
drivers/char/tpm/tpm_acpi.c
···
+/*
+ * Copyright (C) 2005 IBM Corporation
+ *
+ * Authors:
+ *	Seiji Munetoh <munetoh@jp.ibm.com>
+ *	Stefan Berger <stefanb@us.ibm.com>
+ *	Reiner Sailer <sailer@watson.ibm.com>
+ *	Kylene Hall <kjhall@us.ibm.com>
+ *
+ * Maintained by: <tpmdd-devel@lists.sourceforge.net>
+ *
+ * Access to the eventlog extended by the TCG BIOS of PC platform
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <linux/seq_file.h>
+#include <linux/fs.h>
+#include <linux/security.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <acpi/acpi.h>
+
+#include "tpm.h"
+#include "tpm_eventlog.h"
+
+struct acpi_tcpa {
+	struct acpi_table_header hdr;
+	u16 platform_class;
+	union {
+		struct client_hdr {
+			u32 log_max_len __attribute__ ((packed));
+			u64 log_start_addr __attribute__ ((packed));
+		} client;
+		struct server_hdr {
+			u16 reserved;
+			u64 log_max_len __attribute__ ((packed));
+			u64 log_start_addr __attribute__ ((packed));
+		} server;
+	};
+};
+
+/* read binary bios log */
+int read_log(struct tpm_bios_log *log)
+{
+	struct acpi_tcpa *buff;
+	acpi_status status;
+	void __iomem *virt;
+	u64 len, start;
+
+	if (log->bios_event_log != NULL) {
+		printk(KERN_ERR
+		       "%s: ERROR - Eventlog already initialized\n",
+		       __func__);
+		return -EFAULT;
+	}
+
+	/* Find TCPA entry in RSDT (ACPI_LOGICAL_ADDRESSING) */
+	status = acpi_get_table(ACPI_SIG_TCPA, 1,
+				(struct acpi_table_header **)&buff);
+
+	if (ACPI_FAILURE(status)) {
+		printk(KERN_ERR "%s: ERROR - Could not get TCPA table\n",
+		       __func__);
+		return -EIO;
+	}
+
+	switch (buff->platform_class) {
+	case BIOS_SERVER:
+		len = buff->server.log_max_len;
+		start = buff->server.log_start_addr;
+		break;
+	case BIOS_CLIENT:
+	default:
+		len = buff->client.log_max_len;
+		start = buff->client.log_start_addr;
+		break;
+	}
+	if (!len) {
+		printk(KERN_ERR "%s: ERROR - TCPA log area empty\n", __func__);
+		return -EIO;
+	}
+
+	/* malloc EventLog space */
+	log->bios_event_log = kmalloc(len, GFP_KERNEL);
+	if (!log->bios_event_log) {
+		printk("%s: ERROR - Not enough Memory for BIOS measurements\n",
+		       __func__);
+		return -ENOMEM;
+	}
+
+	log->bios_event_log_end = log->bios_event_log + len;
+
+	virt = acpi_os_map_memory(start, len);
+	if (!virt) {
+		kfree(log->bios_event_log);
+		printk("%s: ERROR - Unable to map memory\n", __func__);
+		return -EIO;
+	}
+
+	memcpy_fromio(log->bios_event_log, virt, len);
+
+	acpi_os_unmap_memory(virt, len);
+	return 0;
+}
+5 -142
drivers/char/tpm/tpm_bios.c → drivers/char/tpm/tpm_eventlog.c
···
 /*
- * Copyright (C) 2005 IBM Corporation
+ * Copyright (C) 2005, 2012 IBM Corporation
  *
  * Authors:
+ *	Kent Yoder <key@linux.vnet.ibm.com>
  *	Seiji Munetoh <munetoh@jp.ibm.com>
  *	Stefan Berger <stefanb@us.ibm.com>
  *	Reiner Sailer <sailer@watson.ibm.com>
···
  *
  * Maintained by: <tpmdd-devel@lists.sourceforge.net>
  *
- * Access to the eventlog extended by the TCG BIOS of PC platform
+ * Access to the eventlog created by a system's firmware / BIOS
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of the GNU General Public License
···
 #include <linux/security.h>
 #include <linux/module.h>
 #include <linux/slab.h>
-#include <acpi/acpi.h>
+
 #include "tpm.h"
+#include "tpm_eventlog.h"
 
-#define TCG_EVENT_NAME_LEN_MAX	255
-#define MAX_TEXT_EVENT		1000	/* Max event string length */
-#define ACPI_TCPA_SIG		"TCPA"	/* 0x41504354 /'TCPA' */
-
-enum bios_platform_class {
-	BIOS_CLIENT = 0x00,
-	BIOS_SERVER = 0x01,
-};
-
-struct tpm_bios_log {
-	void *bios_event_log;
-	void *bios_event_log_end;
-};
-
-struct acpi_tcpa {
-	struct acpi_table_header hdr;
-	u16 platform_class;
-	union {
-		struct client_hdr {
-			u32 log_max_len __attribute__ ((packed));
-			u64 log_start_addr __attribute__ ((packed));
-		} client;
-		struct server_hdr {
-			u16 reserved;
-			u64 log_max_len __attribute__ ((packed));
-			u64 log_start_addr __attribute__ ((packed));
-		} server;
-	};
-};
-
-struct tcpa_event {
-	u32 pcr_index;
-	u32 event_type;
-	u8 pcr_value[20];	/* SHA1 */
-	u32 event_size;
-	u8 event_data[0];
-};
-
-enum tcpa_event_types {
-	PREBOOT = 0,
-	POST_CODE,
-	UNUSED,
-	NO_ACTION,
-	SEPARATOR,
-	ACTION,
-	EVENT_TAG,
-	SCRTM_CONTENTS,
-	SCRTM_VERSION,
-	CPU_MICROCODE,
-	PLATFORM_CONFIG_FLAGS,
-	TABLE_OF_DEVICES,
-	COMPACT_HASH,
-	IPL,
-	IPL_PARTITION_DATA,
-	NONHOST_CODE,
-	NONHOST_CONFIG,
-	NONHOST_INFO,
-};
 
 static const char* tcpa_event_type_strings[] = {
 	"PREBOOT",
···
 	"Non-Host Code",
 	"Non-Host Config",
 	"Non-Host Info"
-};
-
-struct tcpa_pc_event {
-	u32 event_id;
-	u32 event_size;
-	u8 event_data[0];
-};
-
-enum tcpa_pc_event_ids {
-	SMBIOS = 1,
-	BIS_CERT,
-	POST_BIOS_ROM,
-	ESCD,
-	CMOS,
-	NVRAM,
-	OPTION_ROM_EXEC,
-	OPTION_ROM_CONFIG,
-	OPTION_ROM_MICROCODE = 10,
-	S_CRTM_VERSION,
-	S_CRTM_CONTENTS,
-	POST_CONTENTS,
-	HOST_TABLE_OF_DEVICES,
 };
 
 static const char* tcpa_pc_event_id_strings[] = {
···
 	.stop = tpm_bios_measurements_stop,
 	.show = tpm_binary_bios_measurements_show,
 };
-
-/* read binary bios log */
-static int read_log(struct tpm_bios_log *log)
-{
-	struct acpi_tcpa *buff;
-	acpi_status status;
-	struct acpi_table_header *virt;
-	u64 len, start;
-
-	if (log->bios_event_log != NULL) {
-		printk(KERN_ERR
-		       "%s: ERROR - Eventlog already initialized\n",
-		       __func__);
-		return -EFAULT;
-	}
-
-	/* Find TCPA entry in RSDT (ACPI_LOGICAL_ADDRESSING) */
-	status = acpi_get_table(ACPI_SIG_TCPA, 1,
-				(struct acpi_table_header **)&buff);
-
-	if (ACPI_FAILURE(status)) {
-		printk(KERN_ERR "%s: ERROR - Could not get TCPA table\n",
-		       __func__);
-		return -EIO;
-	}
-
-	switch (buff->platform_class) {
-	case BIOS_SERVER:
-		len = buff->server.log_max_len;
-		start = buff->server.log_start_addr;
-		break;
-	case BIOS_CLIENT:
-	default:
-		len = buff->client.log_max_len;
-		start = buff->client.log_start_addr;
-		break;
-	}
-	if (!len) {
-		printk(KERN_ERR "%s: ERROR - TCPA log area empty\n", __func__);
-		return -EIO;
-	}
-
-	/* malloc EventLog space */
-	log->bios_event_log = kmalloc(len, GFP_KERNEL);
-	if (!log->bios_event_log) {
-		printk("%s: ERROR - Not enough Memory for BIOS measurements\n",
-		       __func__);
-		return -ENOMEM;
-	}
-
-	log->bios_event_log_end = log->bios_event_log + len;
-
-	virt = acpi_os_map_memory(start, len);
-
-	memcpy(log->bios_event_log, virt, len);
-
-	acpi_os_unmap_memory(virt, len);
-	return 0;
-}
 
 static int tpm_ascii_bios_measurements_open(struct inode *inode,
 					    struct file *file)
+86
drivers/char/tpm/tpm_eventlog.h
···
+#ifndef __TPM_EVENTLOG_H__
+#define __TPM_EVENTLOG_H__
+
+#define TCG_EVENT_NAME_LEN_MAX	255
+#define MAX_TEXT_EVENT		1000	/* Max event string length */
+#define ACPI_TCPA_SIG		"TCPA"	/* 0x41504354 /'TCPA' */
+
+enum bios_platform_class {
+	BIOS_CLIENT = 0x00,
+	BIOS_SERVER = 0x01,
+};
+
+struct tpm_bios_log {
+	void *bios_event_log;
+	void *bios_event_log_end;
+};
+
+struct tcpa_event {
+	u32 pcr_index;
+	u32 event_type;
+	u8 pcr_value[20];	/* SHA1 */
+	u32 event_size;
+	u8 event_data[0];
+};
+
+enum tcpa_event_types {
+	PREBOOT = 0,
+	POST_CODE,
+	UNUSED,
+	NO_ACTION,
+	SEPARATOR,
+	ACTION,
+	EVENT_TAG,
+	SCRTM_CONTENTS,
+	SCRTM_VERSION,
+	CPU_MICROCODE,
+	PLATFORM_CONFIG_FLAGS,
+	TABLE_OF_DEVICES,
+	COMPACT_HASH,
+	IPL,
+	IPL_PARTITION_DATA,
+	NONHOST_CODE,
+	NONHOST_CONFIG,
+	NONHOST_INFO,
+};
+
+struct tcpa_pc_event {
+	u32 event_id;
+	u32 event_size;
+	u8 event_data[0];
+};
+
+enum tcpa_pc_event_ids {
+	SMBIOS = 1,
+	BIS_CERT,
+	POST_BIOS_ROM,
+	ESCD,
+	CMOS,
+	NVRAM,
+	OPTION_ROM_EXEC,
+	OPTION_ROM_CONFIG,
+	OPTION_ROM_MICROCODE = 10,
+	S_CRTM_VERSION,
+	S_CRTM_CONTENTS,
+	POST_CONTENTS,
+	HOST_TABLE_OF_DEVICES,
+};
+
+int read_log(struct tpm_bios_log *log);
+
+#if defined(CONFIG_TCG_IBMVTPM) || defined(CONFIG_TCG_IBMVTPM_MODULE) || \
+	defined(CONFIG_ACPI)
+extern struct dentry **tpm_bios_log_setup(char *);
+extern void tpm_bios_log_teardown(struct dentry **);
+#else
+static inline struct dentry **tpm_bios_log_setup(char *name)
+{
+	return NULL;
+}
+static inline void tpm_bios_log_teardown(struct dentry **dir)
+{
+}
+#endif
+
+#endif
+695
drivers/char/tpm/tpm_i2c_infineon.c
··· 1 + /* 2 + * Copyright (C) 2012 Infineon Technologies 3 + * 4 + * Authors: 5 + * Peter Huewe <peter.huewe@infineon.com> 6 + * 7 + * Device driver for TCG/TCPA TPM (trusted platform module). 8 + * Specifications at www.trustedcomputinggroup.org 9 + * 10 + * This device driver implements the TPM interface as defined in 11 + * the TCG TPM Interface Spec version 1.2, revision 1.0 and the 12 + * Infineon I2C Protocol Stack Specification v0.20. 13 + * 14 + * It is based on the original tpm_tis device driver from Leendert van 15 + * Dorn and Kyleen Hall. 16 + * 17 + * This program is free software; you can redistribute it and/or 18 + * modify it under the terms of the GNU General Public License as 19 + * published by the Free Software Foundation, version 2 of the 20 + * License. 21 + * 22 + * 23 + */ 24 + #include <linux/init.h> 25 + #include <linux/i2c.h> 26 + #include <linux/module.h> 27 + #include <linux/moduleparam.h> 28 + #include <linux/wait.h> 29 + #include "tpm.h" 30 + 31 + /* max. buffer size supported by our TPM */ 32 + #define TPM_BUFSIZE 1260 33 + 34 + /* max. number of iterations after I2C NAK */ 35 + #define MAX_COUNT 3 36 + 37 + #define SLEEP_DURATION_LOW 55 38 + #define SLEEP_DURATION_HI 65 39 + 40 + /* max. number of iterations after I2C NAK for 'long' commands 41 + * we need this especially for sending TPM_READY, since the cleanup after the 42 + * transtion to the ready state may take some time, but it is unpredictable 43 + * how long it will take. 
44 + */ 45 + #define MAX_COUNT_LONG 50 46 + 47 + #define SLEEP_DURATION_LONG_LOW 200 48 + #define SLEEP_DURATION_LONG_HI 220 49 + 50 + /* After sending TPM_READY to 'reset' the TPM we have to sleep even longer */ 51 + #define SLEEP_DURATION_RESET_LOW 2400 52 + #define SLEEP_DURATION_RESET_HI 2600 53 + 54 + /* we want to use usleep_range instead of msleep for the 5ms TPM_TIMEOUT */ 55 + #define TPM_TIMEOUT_US_LOW (TPM_TIMEOUT * 1000) 56 + #define TPM_TIMEOUT_US_HI (TPM_TIMEOUT_US_LOW + 2000) 57 + 58 + /* expected value for DIDVID register */ 59 + #define TPM_TIS_I2C_DID_VID 0x000b15d1L 60 + 61 + /* Structure to store I2C TPM specific stuff */ 62 + struct tpm_inf_dev { 63 + struct i2c_client *client; 64 + u8 buf[TPM_BUFSIZE + sizeof(u8)]; /* max. buffer size + addr */ 65 + struct tpm_chip *chip; 66 + }; 67 + 68 + static struct tpm_inf_dev tpm_dev; 69 + static struct i2c_driver tpm_tis_i2c_driver; 70 + 71 + /* 72 + * iic_tpm_read() - read from TPM register 73 + * @addr: register address to read from 74 + * @buffer: provided by caller 75 + * @len: number of bytes to read 76 + * 77 + * Read len bytes from TPM register and put them into 78 + * buffer (little-endian format, i.e. first byte is put into buffer[0]). 79 + * 80 + * NOTE: TPM is big-endian for multi-byte values. Multi-byte 81 + * values have to be swapped. 82 + * 83 + * NOTE: Unfortunately we can't use the combined read/write functions 84 + * provided by the i2c core, as the TPM currently does not support the 85 + * repeated start condition and has special requirements. 86 + * The i2c_smbus* functions do not work for this chip. 87 + * 88 + * Return -EIO on error, 0 on success. 89 + */ 90 + static int iic_tpm_read(u8 addr, u8 *buffer, size_t len) 91 + { 92 + 93 + struct i2c_msg msg1 = { tpm_dev.client->addr, 0, 1, &addr }; 94 + struct i2c_msg msg2 = { tpm_dev.client->addr, I2C_M_RD, len, buffer }; 95 + 96 + int rc; 97 + int count; 98 + 99 + /* Lock the adapter for the duration of the whole sequence. 
*/ 100 + if (!tpm_dev.client->adapter->algo->master_xfer) 101 + return -EOPNOTSUPP; 102 + i2c_lock_adapter(tpm_dev.client->adapter); 103 + 104 + for (count = 0; count < MAX_COUNT; count++) { 105 + rc = __i2c_transfer(tpm_dev.client->adapter, &msg1, 1); 106 + if (rc > 0) 107 + break; /* break here to skip sleep */ 108 + 109 + usleep_range(SLEEP_DURATION_LOW, SLEEP_DURATION_HI); 110 + } 111 + 112 + if (rc <= 0) 113 + goto out; 114 + 115 + /* After the TPM has successfully received the register address it needs 116 + * some time, thus we're sleeping here again, before retrieving the data 117 + */ 118 + for (count = 0; count < MAX_COUNT; count++) { 119 + usleep_range(SLEEP_DURATION_LOW, SLEEP_DURATION_HI); 120 + rc = __i2c_transfer(tpm_dev.client->adapter, &msg2, 1); 121 + if (rc > 0) 122 + break; 123 + 124 + } 125 + 126 + out: 127 + i2c_unlock_adapter(tpm_dev.client->adapter); 128 + if (rc <= 0) 129 + return -EIO; 130 + 131 + return 0; 132 + } 133 + 134 + static int iic_tpm_write_generic(u8 addr, u8 *buffer, size_t len, 135 + unsigned int sleep_low, 136 + unsigned int sleep_hi, u8 max_count) 137 + { 138 + int rc = -EIO; 139 + int count; 140 + 141 + struct i2c_msg msg1 = { tpm_dev.client->addr, 0, len + 1, tpm_dev.buf }; 142 + 143 + if (len > TPM_BUFSIZE) 144 + return -EINVAL; 145 + 146 + if (!tpm_dev.client->adapter->algo->master_xfer) 147 + return -EOPNOTSUPP; 148 + i2c_lock_adapter(tpm_dev.client->adapter); 149 + 150 + /* prepend the 'register address' to the buffer */ 151 + tpm_dev.buf[0] = addr; 152 + memcpy(&(tpm_dev.buf[1]), buffer, len); 153 + 154 + /* 155 + * NOTE: We have to use these special mechanisms here and unfortunately 156 + * cannot rely on the standard behavior of i2c_transfer. 
157 + */ 158 + for (count = 0; count < max_count; count++) { 159 + rc = __i2c_transfer(tpm_dev.client->adapter, &msg1, 1); 160 + if (rc > 0) 161 + break; 162 + 163 + usleep_range(sleep_low, sleep_hi); 164 + } 165 + 166 + i2c_unlock_adapter(tpm_dev.client->adapter); 167 + if (rc <= 0) 168 + return -EIO; 169 + 170 + return 0; 171 + } 172 + 173 + /* 174 + * iic_tpm_write() - write to TPM register 175 + * @addr: register address to write to 176 + * @buffer: containing data to be written 177 + * @len: number of bytes to write 178 + * 179 + * Write len bytes from provided buffer to TPM register (little 180 + * endian format, i.e. buffer[0] is written as first byte). 181 + * 182 + * NOTE: TPM is big-endian for multi-byte values. Multi-byte 183 + * values have to be swapped. 184 + * 185 + * NOTE: use this function instead of the iic_tpm_write_generic function. 186 + * 187 + * Return -EIO on error, 0 on success 188 + */ 189 + static int iic_tpm_write(u8 addr, u8 *buffer, size_t len) 190 + { 191 + return iic_tpm_write_generic(addr, buffer, len, SLEEP_DURATION_LOW, 192 + SLEEP_DURATION_HI, MAX_COUNT); 193 + } 194 + 195 + /* 196 + * This function is needed especially for the cleanup situation after 197 + * sending TPM_READY 198 + * */ 199 + static int iic_tpm_write_long(u8 addr, u8 *buffer, size_t len) 200 + { 201 + return iic_tpm_write_generic(addr, buffer, len, SLEEP_DURATION_LONG_LOW, 202 + SLEEP_DURATION_LONG_HI, MAX_COUNT_LONG); 203 + } 204 + 205 + enum tis_access { 206 + TPM_ACCESS_VALID = 0x80, 207 + TPM_ACCESS_ACTIVE_LOCALITY = 0x20, 208 + TPM_ACCESS_REQUEST_PENDING = 0x04, 209 + TPM_ACCESS_REQUEST_USE = 0x02, 210 + }; 211 + 212 + enum tis_status { 213 + TPM_STS_VALID = 0x80, 214 + TPM_STS_COMMAND_READY = 0x40, 215 + TPM_STS_GO = 0x20, 216 + TPM_STS_DATA_AVAIL = 0x10, 217 + TPM_STS_DATA_EXPECT = 0x08, 218 + }; 219 + 220 + enum tis_defaults { 221 + TIS_SHORT_TIMEOUT = 750, /* ms */ 222 + TIS_LONG_TIMEOUT = 2000, /* 2 sec */ 223 + }; 224 + 225 + #define TPM_ACCESS(l) 
(0x0000 | ((l) << 4)) 226 + #define TPM_STS(l) (0x0001 | ((l) << 4)) 227 + #define TPM_DATA_FIFO(l) (0x0005 | ((l) << 4)) 228 + #define TPM_DID_VID(l) (0x0006 | ((l) << 4)) 229 + 230 + static int check_locality(struct tpm_chip *chip, int loc) 231 + { 232 + u8 buf; 233 + int rc; 234 + 235 + rc = iic_tpm_read(TPM_ACCESS(loc), &buf, 1); 236 + if (rc < 0) 237 + return rc; 238 + 239 + if ((buf & (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) == 240 + (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) { 241 + chip->vendor.locality = loc; 242 + return loc; 243 + } 244 + 245 + return -EIO; 246 + } 247 + 248 + /* implementation similar to tpm_tis */ 249 + static void release_locality(struct tpm_chip *chip, int loc, int force) 250 + { 251 + u8 buf; 252 + if (iic_tpm_read(TPM_ACCESS(loc), &buf, 1) < 0) 253 + return; 254 + 255 + if (force || (buf & (TPM_ACCESS_REQUEST_PENDING | TPM_ACCESS_VALID)) == 256 + (TPM_ACCESS_REQUEST_PENDING | TPM_ACCESS_VALID)) { 257 + buf = TPM_ACCESS_ACTIVE_LOCALITY; 258 + iic_tpm_write(TPM_ACCESS(loc), &buf, 1); 259 + } 260 + } 261 + 262 + static int request_locality(struct tpm_chip *chip, int loc) 263 + { 264 + unsigned long stop; 265 + u8 buf = TPM_ACCESS_REQUEST_USE; 266 + 267 + if (check_locality(chip, loc) >= 0) 268 + return loc; 269 + 270 + iic_tpm_write(TPM_ACCESS(loc), &buf, 1); 271 + 272 + /* wait for burstcount */ 273 + stop = jiffies + chip->vendor.timeout_a; 274 + do { 275 + if (check_locality(chip, loc) >= 0) 276 + return loc; 277 + usleep_range(TPM_TIMEOUT_US_LOW, TPM_TIMEOUT_US_HI); 278 + } while (time_before(jiffies, stop)); 279 + 280 + return -ETIME; 281 + } 282 + 283 + static u8 tpm_tis_i2c_status(struct tpm_chip *chip) 284 + { 285 + /* NOTE: since I2C read may fail, return 0 in this case --> time-out */ 286 + u8 buf; 287 + if (iic_tpm_read(TPM_STS(chip->vendor.locality), &buf, 1) < 0) 288 + return 0; 289 + else 290 + return buf; 291 + } 292 + 293 + static void tpm_tis_i2c_ready(struct tpm_chip *chip) 294 + { 295 + /* this causes 
the current command to be aborted */ 296 + u8 buf = TPM_STS_COMMAND_READY; 297 + iic_tpm_write_long(TPM_STS(chip->vendor.locality), &buf, 1); 298 + } 299 + 300 + static ssize_t get_burstcount(struct tpm_chip *chip) 301 + { 302 + unsigned long stop; 303 + ssize_t burstcnt; 304 + u8 buf[3]; 305 + 306 + /* wait for burstcount */ 307 + /* which timeout value, spec has 2 answers (c & d) */ 308 + stop = jiffies + chip->vendor.timeout_d; 309 + do { 310 + /* Note: STS is little endian */ 311 + if (iic_tpm_read(TPM_STS(chip->vendor.locality)+1, buf, 3) < 0) 312 + burstcnt = 0; 313 + else 314 + burstcnt = (buf[2] << 16) + (buf[1] << 8) + buf[0]; 315 + 316 + if (burstcnt) 317 + return burstcnt; 318 + 319 + usleep_range(TPM_TIMEOUT_US_LOW, TPM_TIMEOUT_US_HI); 320 + } while (time_before(jiffies, stop)); 321 + return -EBUSY; 322 + } 323 + 324 + static int wait_for_stat(struct tpm_chip *chip, u8 mask, unsigned long timeout, 325 + int *status) 326 + { 327 + unsigned long stop; 328 + 329 + /* check current status */ 330 + *status = tpm_tis_i2c_status(chip); 331 + if ((*status & mask) == mask) 332 + return 0; 333 + 334 + stop = jiffies + timeout; 335 + do { 336 + /* since we just checked the status, give the TPM some time */ 337 + usleep_range(TPM_TIMEOUT_US_LOW, TPM_TIMEOUT_US_HI); 338 + *status = tpm_tis_i2c_status(chip); 339 + if ((*status & mask) == mask) 340 + return 0; 341 + 342 + } while (time_before(jiffies, stop)); 343 + 344 + return -ETIME; 345 + } 346 + 347 + static int recv_data(struct tpm_chip *chip, u8 *buf, size_t count) 348 + { 349 + size_t size = 0; 350 + ssize_t burstcnt; 351 + u8 retries = 0; 352 + int rc; 353 + 354 + while (size < count) { 355 + burstcnt = get_burstcount(chip); 356 + 357 + /* burstcnt < 0 = TPM is busy */ 358 + if (burstcnt < 0) 359 + return burstcnt; 360 + 361 + /* limit received data to max. 
left */ 362 + if (burstcnt > (count - size)) 363 + burstcnt = count - size; 364 + 365 + rc = iic_tpm_read(TPM_DATA_FIFO(chip->vendor.locality), 366 + &(buf[size]), burstcnt); 367 + if (rc == 0) 368 + size += burstcnt; 369 + else if (rc < 0) 370 + retries++; 371 + 372 + /* avoid endless loop in case of broken HW */ 373 + if (retries > MAX_COUNT_LONG) 374 + return -EIO; 375 + 376 + } 377 + return size; 378 + } 379 + 380 + static int tpm_tis_i2c_recv(struct tpm_chip *chip, u8 *buf, size_t count) 381 + { 382 + int size = 0; 383 + int expected, status; 384 + 385 + if (count < TPM_HEADER_SIZE) { 386 + size = -EIO; 387 + goto out; 388 + } 389 + 390 + /* read first 10 bytes, including tag, paramsize, and result */ 391 + size = recv_data(chip, buf, TPM_HEADER_SIZE); 392 + if (size < TPM_HEADER_SIZE) { 393 + dev_err(chip->dev, "Unable to read header\n"); 394 + goto out; 395 + } 396 + 397 + expected = be32_to_cpu(*(__be32 *)(buf + 2)); 398 + if ((size_t) expected > count) { 399 + size = -EIO; 400 + goto out; 401 + } 402 + 403 + size += recv_data(chip, &buf[TPM_HEADER_SIZE], 404 + expected - TPM_HEADER_SIZE); 405 + if (size < expected) { 406 + dev_err(chip->dev, "Unable to read remainder of result\n"); 407 + size = -ETIME; 408 + goto out; 409 + } 410 + 411 + wait_for_stat(chip, TPM_STS_VALID, chip->vendor.timeout_c, &status); 412 + if (status & TPM_STS_DATA_AVAIL) { /* retry? 
*/ 413 + dev_err(chip->dev, "Error left over data\n"); 414 + size = -EIO; 415 + goto out; 416 + } 417 + 418 + out: 419 + tpm_tis_i2c_ready(chip); 420 + /* The TPM needs some time to clean up here, 421 + * so we sleep rather than keeping the bus busy 422 + */ 423 + usleep_range(SLEEP_DURATION_RESET_LOW, SLEEP_DURATION_RESET_HI); 424 + release_locality(chip, chip->vendor.locality, 0); 425 + return size; 426 + } 427 + 428 + static int tpm_tis_i2c_send(struct tpm_chip *chip, u8 *buf, size_t len) 429 + { 430 + int rc, status; 431 + ssize_t burstcnt; 432 + size_t count = 0; 433 + u8 retries = 0; 434 + u8 sts = TPM_STS_GO; 435 + 436 + if (len > TPM_BUFSIZE) 437 + return -E2BIG; /* command is too long for our tpm, sorry */ 438 + 439 + if (request_locality(chip, 0) < 0) 440 + return -EBUSY; 441 + 442 + status = tpm_tis_i2c_status(chip); 443 + if ((status & TPM_STS_COMMAND_READY) == 0) { 444 + tpm_tis_i2c_ready(chip); 445 + if (wait_for_stat 446 + (chip, TPM_STS_COMMAND_READY, 447 + chip->vendor.timeout_b, &status) < 0) { 448 + rc = -ETIME; 449 + goto out_err; 450 + } 451 + } 452 + 453 + while (count < len - 1) { 454 + burstcnt = get_burstcount(chip); 455 + 456 + /* burstcnt < 0 = TPM is busy */ 457 + if (burstcnt < 0) 458 + return burstcnt; 459 + 460 + if (burstcnt > (len - 1 - count)) 461 + burstcnt = len - 1 - count; 462 + 463 + rc = iic_tpm_write(TPM_DATA_FIFO(chip->vendor.locality), 464 + &(buf[count]), burstcnt); 465 + if (rc == 0) 466 + count += burstcnt; 467 + else if (rc < 0) 468 + retries++; 469 + 470 + /* avoid endless loop in case of broken HW */ 471 + if (retries > MAX_COUNT_LONG) { 472 + rc = -EIO; 473 + goto out_err; 474 + } 475 + 476 + wait_for_stat(chip, TPM_STS_VALID, 477 + chip->vendor.timeout_c, &status); 478 + 479 + if ((status & TPM_STS_DATA_EXPECT) == 0) { 480 + rc = -EIO; 481 + goto out_err; 482 + } 483 + 484 + } 485 + 486 + /* write last byte */ 487 + iic_tpm_write(TPM_DATA_FIFO(chip->vendor.locality), &(buf[count]), 1); 488 + wait_for_stat(chip, 
TPM_STS_VALID, chip->vendor.timeout_c, &status); 489 + if ((status & TPM_STS_DATA_EXPECT) != 0) { 490 + rc = -EIO; 491 + goto out_err; 492 + } 493 + 494 + /* go and do it */ 495 + iic_tpm_write(TPM_STS(chip->vendor.locality), &sts, 1); 496 + 497 + return len; 498 + out_err: 499 + tpm_tis_i2c_ready(chip); 500 + /* The TPM needs some time to clean up here, 501 + * so we sleep rather than keeping the bus busy 502 + */ 503 + usleep_range(SLEEP_DURATION_RESET_LOW, SLEEP_DURATION_RESET_HI); 504 + release_locality(chip, chip->vendor.locality, 0); 505 + return rc; 506 + } 507 + 508 + static const struct file_operations tis_ops = { 509 + .owner = THIS_MODULE, 510 + .llseek = no_llseek, 511 + .open = tpm_open, 512 + .read = tpm_read, 513 + .write = tpm_write, 514 + .release = tpm_release, 515 + }; 516 + 517 + static DEVICE_ATTR(pubek, S_IRUGO, tpm_show_pubek, NULL); 518 + static DEVICE_ATTR(pcrs, S_IRUGO, tpm_show_pcrs, NULL); 519 + static DEVICE_ATTR(enabled, S_IRUGO, tpm_show_enabled, NULL); 520 + static DEVICE_ATTR(active, S_IRUGO, tpm_show_active, NULL); 521 + static DEVICE_ATTR(owned, S_IRUGO, tpm_show_owned, NULL); 522 + static DEVICE_ATTR(temp_deactivated, S_IRUGO, tpm_show_temp_deactivated, NULL); 523 + static DEVICE_ATTR(caps, S_IRUGO, tpm_show_caps_1_2, NULL); 524 + static DEVICE_ATTR(cancel, S_IWUSR | S_IWGRP, NULL, tpm_store_cancel); 525 + static DEVICE_ATTR(durations, S_IRUGO, tpm_show_durations, NULL); 526 + static DEVICE_ATTR(timeouts, S_IRUGO, tpm_show_timeouts, NULL); 527 + 528 + static struct attribute *tis_attrs[] = { 529 + &dev_attr_pubek.attr, 530 + &dev_attr_pcrs.attr, 531 + &dev_attr_enabled.attr, 532 + &dev_attr_active.attr, 533 + &dev_attr_owned.attr, 534 + &dev_attr_temp_deactivated.attr, 535 + &dev_attr_caps.attr, 536 + &dev_attr_cancel.attr, 537 + &dev_attr_durations.attr, 538 + &dev_attr_timeouts.attr, 539 + NULL, 540 + }; 541 + 542 + static struct attribute_group tis_attr_grp = { 543 + .attrs = tis_attrs 544 + }; 545 + 546 + static struct 
tpm_vendor_specific tpm_tis_i2c = { 547 + .status = tpm_tis_i2c_status, 548 + .recv = tpm_tis_i2c_recv, 549 + .send = tpm_tis_i2c_send, 550 + .cancel = tpm_tis_i2c_ready, 551 + .req_complete_mask = TPM_STS_DATA_AVAIL | TPM_STS_VALID, 552 + .req_complete_val = TPM_STS_DATA_AVAIL | TPM_STS_VALID, 553 + .req_canceled = TPM_STS_COMMAND_READY, 554 + .attr_group = &tis_attr_grp, 555 + .miscdev.fops = &tis_ops, 556 + }; 557 + 558 + static int __devinit tpm_tis_i2c_init(struct device *dev) 559 + { 560 + u32 vendor; 561 + int rc = 0; 562 + struct tpm_chip *chip; 563 + 564 + chip = tpm_register_hardware(dev, &tpm_tis_i2c); 565 + if (!chip) { 566 + rc = -ENODEV; 567 + goto out_err; 568 + } 569 + 570 + /* Disable interrupts */ 571 + chip->vendor.irq = 0; 572 + 573 + /* Default timeouts */ 574 + chip->vendor.timeout_a = msecs_to_jiffies(TIS_SHORT_TIMEOUT); 575 + chip->vendor.timeout_b = msecs_to_jiffies(TIS_LONG_TIMEOUT); 576 + chip->vendor.timeout_c = msecs_to_jiffies(TIS_SHORT_TIMEOUT); 577 + chip->vendor.timeout_d = msecs_to_jiffies(TIS_SHORT_TIMEOUT); 578 + 579 + if (request_locality(chip, 0) != 0) { 580 + rc = -ENODEV; 581 + goto out_vendor; 582 + } 583 + 584 + /* read four bytes from DID_VID register */ 585 + if (iic_tpm_read(TPM_DID_VID(0), (u8 *)&vendor, 4) < 0) { 586 + rc = -EIO; 587 + goto out_release; 588 + } 589 + 590 + /* create DID_VID register value, after swapping to little-endian */ 591 + vendor = be32_to_cpu((__be32) vendor); 592 + 593 + if (vendor != TPM_TIS_I2C_DID_VID) { 594 + rc = -ENODEV; 595 + goto out_release; 596 + } 597 + 598 + dev_info(dev, "1.2 TPM (device-id 0x%X)\n", vendor >> 16); 599 + 600 + INIT_LIST_HEAD(&chip->vendor.list); 601 + tpm_dev.chip = chip; 602 + 603 + tpm_get_timeouts(chip); 604 + tpm_do_selftest(chip); 605 + 606 + return 0; 607 + 608 + out_release: 609 + release_locality(chip, chip->vendor.locality, 1); 610 + 611 + out_vendor: 612 + /* close file handles */ 613 + tpm_dev_vendor_release(chip); 614 + 615 + /* remove hardware */ 616 
+ tpm_remove_hardware(chip->dev); 617 + 618 + /* reset these pointers, otherwise we oops */ 619 + chip->dev->release = NULL; 620 + chip->release = NULL; 621 + tpm_dev.client = NULL; 622 + dev_set_drvdata(chip->dev, chip); 623 + out_err: 624 + return rc; 625 + } 626 + 627 + static const struct i2c_device_id tpm_tis_i2c_table[] = { 628 + {"tpm_i2c_infineon", 0}, 629 + {}, 630 + }; 631 + 632 + MODULE_DEVICE_TABLE(i2c, tpm_tis_i2c_table); 633 + static SIMPLE_DEV_PM_OPS(tpm_tis_i2c_ops, tpm_pm_suspend, tpm_pm_resume); 634 + 635 + static int __devinit tpm_tis_i2c_probe(struct i2c_client *client, 636 + const struct i2c_device_id *id) 637 + { 638 + int rc; 639 + if (tpm_dev.client != NULL) 640 + return -EBUSY; /* We only support one client */ 641 + 642 + if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) { 643 + dev_err(&client->dev, 644 + "no algorithms associated to the i2c bus\n"); 645 + return -ENODEV; 646 + } 647 + 648 + client->driver = &tpm_tis_i2c_driver; 649 + tpm_dev.client = client; 650 + rc = tpm_tis_i2c_init(&client->dev); 651 + if (rc != 0) { 652 + client->driver = NULL; 653 + tpm_dev.client = NULL; 654 + rc = -ENODEV; 655 + } 656 + return rc; 657 + } 658 + 659 + static int __devexit tpm_tis_i2c_remove(struct i2c_client *client) 660 + { 661 + struct tpm_chip *chip = tpm_dev.chip; 662 + release_locality(chip, chip->vendor.locality, 1); 663 + 664 + /* close file handles */ 665 + tpm_dev_vendor_release(chip); 666 + 667 + /* remove hardware */ 668 + tpm_remove_hardware(chip->dev); 669 + 670 + /* reset these pointers, otherwise we oops */ 671 + chip->dev->release = NULL; 672 + chip->release = NULL; 673 + tpm_dev.client = NULL; 674 + dev_set_drvdata(chip->dev, chip); 675 + 676 + return 0; 677 + } 678 + 679 + static struct i2c_driver tpm_tis_i2c_driver = { 680 + 681 + .id_table = tpm_tis_i2c_table, 682 + .probe = tpm_tis_i2c_probe, 683 + .remove = tpm_tis_i2c_remove, 684 + .driver = { 685 + .name = "tpm_i2c_infineon", 686 + .owner = THIS_MODULE, 687 + .pm 
= &tpm_tis_i2c_ops, 688 + }, 689 + }; 690 + 691 + module_i2c_driver(tpm_tis_i2c_driver); 692 + MODULE_AUTHOR("Peter Huewe <peter.huewe@infineon.com>"); 693 + MODULE_DESCRIPTION("TPM TIS I2C Infineon Driver"); 694 + MODULE_VERSION("2.1.5"); 695 + MODULE_LICENSE("GPL");
+749
drivers/char/tpm/tpm_ibmvtpm.c
··· 1 + /* 2 + * Copyright (C) 2012 IBM Corporation 3 + * 4 + * Author: Ashley Lai <adlai@us.ibm.com> 5 + * 6 + * Maintained by: <tpmdd-devel@lists.sourceforge.net> 7 + * 8 + * Device driver for TCG/TCPA TPM (trusted platform module). 9 + * Specifications at www.trustedcomputinggroup.org 10 + * 11 + * This program is free software; you can redistribute it and/or 12 + * modify it under the terms of the GNU General Public License as 13 + * published by the Free Software Foundation, version 2 of the 14 + * License. 15 + * 16 + */ 17 + 18 + #include <linux/dma-mapping.h> 19 + #include <linux/dmapool.h> 20 + #include <linux/slab.h> 21 + #include <asm/vio.h> 22 + #include <asm/irq.h> 23 + #include <linux/types.h> 24 + #include <linux/list.h> 25 + #include <linux/spinlock.h> 26 + #include <linux/interrupt.h> 27 + #include <linux/wait.h> 28 + #include <asm/prom.h> 29 + 30 + #include "tpm.h" 31 + #include "tpm_ibmvtpm.h" 32 + 33 + static const char tpm_ibmvtpm_driver_name[] = "tpm_ibmvtpm"; 34 + 35 + static struct vio_device_id tpm_ibmvtpm_device_table[] __devinitdata = { 36 + { "IBM,vtpm", "IBM,vtpm"}, 37 + { "", "" } 38 + }; 39 + MODULE_DEVICE_TABLE(vio, tpm_ibmvtpm_device_table); 40 + 41 + DECLARE_WAIT_QUEUE_HEAD(wq); 42 + 43 + /** 44 + * ibmvtpm_send_crq - Send a CRQ request 45 + * @vdev: vio device struct 46 + * @w1: first word 47 + * @w2: second word 48 + * 49 + * Return value: 50 + * 0 - Success 51 + * Non-zero - Failure 52 + */ 53 + static int ibmvtpm_send_crq(struct vio_dev *vdev, u64 w1, u64 w2) 54 + { 55 + return plpar_hcall_norets(H_SEND_CRQ, vdev->unit_address, w1, w2); 56 + } 57 + 58 + /** 59 + * ibmvtpm_get_data - Retrieve ibm vtpm data 60 + * @dev: device struct 61 + * 62 + * Return value: 63 + * vtpm device struct 64 + */ 65 + static struct ibmvtpm_dev *ibmvtpm_get_data(const struct device *dev) 66 + { 67 + struct tpm_chip *chip = dev_get_drvdata(dev); 68 + if (chip) 69 + return (struct ibmvtpm_dev *)chip->vendor.data; 70 + return NULL; 71 + } 72 + 73 + /** 
74 + * tpm_ibmvtpm_recv - Receive data after send 75 + * @chip: tpm chip struct 76 + * @buf: buffer to read 77 + * @count: size of buffer 78 + * 79 + * Return value: 80 + * Number of bytes read 81 + */ 82 + static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count) 83 + { 84 + struct ibmvtpm_dev *ibmvtpm; 85 + u16 len; 86 + 87 + ibmvtpm = (struct ibmvtpm_dev *)chip->vendor.data; 88 + 89 + if (!ibmvtpm->rtce_buf) { 90 + dev_err(ibmvtpm->dev, "ibmvtpm device is not ready\n"); 91 + return 0; 92 + } 93 + 94 + wait_event_interruptible(wq, ibmvtpm->crq_res.len != 0); 95 + 96 + if (count < ibmvtpm->crq_res.len) { 97 + dev_err(ibmvtpm->dev, 98 + "Invalid size in recv: count=%ld, crq_size=%d\n", 99 + count, ibmvtpm->crq_res.len); 100 + return -EIO; 101 + } 102 + 103 + spin_lock(&ibmvtpm->rtce_lock); 104 + memcpy((void *)buf, (void *)ibmvtpm->rtce_buf, ibmvtpm->crq_res.len); 105 + memset(ibmvtpm->rtce_buf, 0, ibmvtpm->crq_res.len); 106 + ibmvtpm->crq_res.valid = 0; 107 + ibmvtpm->crq_res.msg = 0; 108 + len = ibmvtpm->crq_res.len; 109 + ibmvtpm->crq_res.len = 0; 110 + spin_unlock(&ibmvtpm->rtce_lock); 111 + return len; 112 + } 113 + 114 + /** 115 + * tpm_ibmvtpm_send - Send tpm request 116 + * @chip: tpm chip struct 117 + * @buf: buffer containing data to send 118 + * @count: size of buffer 119 + * 120 + * Return value: 121 + * Number of bytes sent 122 + */ 123 + static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count) 124 + { 125 + struct ibmvtpm_dev *ibmvtpm; 126 + struct ibmvtpm_crq crq; 127 + u64 *word = (u64 *) &crq; 128 + int rc; 129 + 130 + ibmvtpm = (struct ibmvtpm_dev *)chip->vendor.data; 131 + 132 + if (!ibmvtpm->rtce_buf) { 133 + dev_err(ibmvtpm->dev, "ibmvtpm device is not ready\n"); 134 + return 0; 135 + } 136 + 137 + if (count > ibmvtpm->rtce_size) { 138 + dev_err(ibmvtpm->dev, 139 + "Invalid size in send: count=%ld, rtce_size=%d\n", 140 + count, ibmvtpm->rtce_size); 141 + return -EIO; 142 + } 143 + 144 + 
spin_lock(&ibmvtpm->rtce_lock); 145 + memcpy((void *)ibmvtpm->rtce_buf, (void *)buf, count); 146 + crq.valid = (u8)IBMVTPM_VALID_CMD; 147 + crq.msg = (u8)VTPM_TPM_COMMAND; 148 + crq.len = (u16)count; 149 + crq.data = ibmvtpm->rtce_dma_handle; 150 + 151 + rc = ibmvtpm_send_crq(ibmvtpm->vdev, word[0], word[1]); 152 + if (rc != H_SUCCESS) { 153 + dev_err(ibmvtpm->dev, "tpm_ibmvtpm_send failed rc=%d\n", rc); 154 + rc = 0; 155 + } else 156 + rc = count; 157 + 158 + spin_unlock(&ibmvtpm->rtce_lock); 159 + return rc; 160 + } 161 + 162 + static void tpm_ibmvtpm_cancel(struct tpm_chip *chip) 163 + { 164 + return; 165 + } 166 + 167 + static u8 tpm_ibmvtpm_status(struct tpm_chip *chip) 168 + { 169 + return 0; 170 + } 171 + 172 + /** 173 + * ibmvtpm_crq_get_rtce_size - Send a CRQ request to get rtce size 174 + * @ibmvtpm: vtpm device struct 175 + * 176 + * Return value: 177 + * 0 - Success 178 + * Non-zero - Failure 179 + */ 180 + static int ibmvtpm_crq_get_rtce_size(struct ibmvtpm_dev *ibmvtpm) 181 + { 182 + struct ibmvtpm_crq crq; 183 + u64 *buf = (u64 *) &crq; 184 + int rc; 185 + 186 + crq.valid = (u8)IBMVTPM_VALID_CMD; 187 + crq.msg = (u8)VTPM_GET_RTCE_BUFFER_SIZE; 188 + 189 + rc = ibmvtpm_send_crq(ibmvtpm->vdev, buf[0], buf[1]); 190 + if (rc != H_SUCCESS) 191 + dev_err(ibmvtpm->dev, 192 + "ibmvtpm_crq_get_rtce_size failed rc=%d\n", rc); 193 + 194 + return rc; 195 + } 196 + 197 + /** 198 + * ibmvtpm_crq_get_version - Send a CRQ request to get vtpm version 199 + * - Note that this is vtpm version and not tpm version 200 + * @ibmvtpm: vtpm device struct 201 + * 202 + * Return value: 203 + * 0 - Success 204 + * Non-zero - Failure 205 + */ 206 + static int ibmvtpm_crq_get_version(struct ibmvtpm_dev *ibmvtpm) 207 + { 208 + struct ibmvtpm_crq crq; 209 + u64 *buf = (u64 *) &crq; 210 + int rc; 211 + 212 + crq.valid = (u8)IBMVTPM_VALID_CMD; 213 + crq.msg = (u8)VTPM_GET_VERSION; 214 + 215 + rc = ibmvtpm_send_crq(ibmvtpm->vdev, buf[0], buf[1]); 216 + if (rc != H_SUCCESS) 217 + 
dev_err(ibmvtpm->dev, 218 + "ibmvtpm_crq_get_version failed rc=%d\n", rc); 219 + 220 + return rc; 221 + } 222 + 223 + /** 224 + * ibmvtpm_crq_send_init_complete - Send a CRQ initialize complete message 225 + * @ibmvtpm: vtpm device struct 226 + * 227 + * Return value: 228 + * 0 - Success 229 + * Non-zero - Failure 230 + */ 231 + static int ibmvtpm_crq_send_init_complete(struct ibmvtpm_dev *ibmvtpm) 232 + { 233 + int rc; 234 + 235 + rc = ibmvtpm_send_crq(ibmvtpm->vdev, INIT_CRQ_COMP_CMD, 0); 236 + if (rc != H_SUCCESS) 237 + dev_err(ibmvtpm->dev, 238 + "ibmvtpm_crq_send_init_complete failed rc=%d\n", rc); 239 + 240 + return rc; 241 + } 242 + 243 + /** 244 + * ibmvtpm_crq_send_init - Send a CRQ initialize message 245 + * @ibmvtpm: vtpm device struct 246 + * 247 + * Return value: 248 + * 0 - Success 249 + * Non-zero - Failure 250 + */ 251 + static int ibmvtpm_crq_send_init(struct ibmvtpm_dev *ibmvtpm) 252 + { 253 + int rc; 254 + 255 + rc = ibmvtpm_send_crq(ibmvtpm->vdev, INIT_CRQ_CMD, 0); 256 + if (rc != H_SUCCESS) 257 + dev_err(ibmvtpm->dev, 258 + "ibmvtpm_crq_send_init failed rc=%d\n", rc); 259 + 260 + return rc; 261 + } 262 + 263 + /** 264 + * tpm_ibmvtpm_remove - ibm vtpm remove entry point 265 + * @vdev: vio device struct 266 + * 267 + * Return value: 268 + * 0 269 + */ 270 + static int __devexit tpm_ibmvtpm_remove(struct vio_dev *vdev) 271 + { 272 + struct ibmvtpm_dev *ibmvtpm = ibmvtpm_get_data(&vdev->dev); 273 + int rc = 0; 274 + 275 + free_irq(vdev->irq, ibmvtpm); 276 + tasklet_kill(&ibmvtpm->tasklet); 277 + 278 + do { 279 + if (rc) 280 + msleep(100); 281 + rc = plpar_hcall_norets(H_FREE_CRQ, vdev->unit_address); 282 + } while (rc == H_BUSY || H_IS_LONG_BUSY(rc)); 283 + 284 + dma_unmap_single(ibmvtpm->dev, ibmvtpm->crq_dma_handle, 285 + CRQ_RES_BUF_SIZE, DMA_BIDIRECTIONAL); 286 + free_page((unsigned long)ibmvtpm->crq_queue.crq_addr); 287 + 288 + if (ibmvtpm->rtce_buf) { 289 + dma_unmap_single(ibmvtpm->dev, ibmvtpm->rtce_dma_handle, 290 + ibmvtpm->rtce_size, 
DMA_BIDIRECTIONAL); 291 + kfree(ibmvtpm->rtce_buf); 292 + } 293 + 294 + tpm_remove_hardware(ibmvtpm->dev); 295 + 296 + kfree(ibmvtpm); 297 + 298 + return 0; 299 + } 300 + 301 + /** 302 + * tpm_ibmvtpm_get_desired_dma - Get DMA size needed by this driver 303 + * @vdev: vio device struct 304 + * 305 + * Return value: 306 + * Number of bytes the driver needs to DMA map 307 + */ 308 + static unsigned long tpm_ibmvtpm_get_desired_dma(struct vio_dev *vdev) 309 + { 310 + struct ibmvtpm_dev *ibmvtpm = ibmvtpm_get_data(&vdev->dev); 311 + return CRQ_RES_BUF_SIZE + ibmvtpm->rtce_size; 312 + } 313 + 314 + /** 315 + * tpm_ibmvtpm_suspend - Suspend 316 + * @dev: device struct 317 + * 318 + * Return value: 319 + * 0 320 + */ 321 + static int tpm_ibmvtpm_suspend(struct device *dev) 322 + { 323 + struct ibmvtpm_dev *ibmvtpm = ibmvtpm_get_data(dev); 324 + struct ibmvtpm_crq crq; 325 + u64 *buf = (u64 *) &crq; 326 + int rc = 0; 327 + 328 + crq.valid = (u8)IBMVTPM_VALID_CMD; 329 + crq.msg = (u8)VTPM_PREPARE_TO_SUSPEND; 330 + 331 + rc = ibmvtpm_send_crq(ibmvtpm->vdev, buf[0], buf[1]); 332 + if (rc != H_SUCCESS) 333 + dev_err(ibmvtpm->dev, 334 + "tpm_ibmvtpm_suspend failed rc=%d\n", rc); 335 + 336 + return rc; 337 + } 338 + 339 + /** 340 + * ibmvtpm_reset_crq - Reset CRQ 341 + * @ibmvtpm: ibm vtpm struct 342 + * 343 + * Return value: 344 + * 0 - Success 345 + * Non-zero - Failure 346 + */ 347 + static int ibmvtpm_reset_crq(struct ibmvtpm_dev *ibmvtpm) 348 + { 349 + int rc = 0; 350 + 351 + do { 352 + if (rc) 353 + msleep(100); 354 + rc = plpar_hcall_norets(H_FREE_CRQ, 355 + ibmvtpm->vdev->unit_address); 356 + } while (rc == H_BUSY || H_IS_LONG_BUSY(rc)); 357 + 358 + memset(ibmvtpm->crq_queue.crq_addr, 0, CRQ_RES_BUF_SIZE); 359 + ibmvtpm->crq_queue.index = 0; 360 + 361 + return plpar_hcall_norets(H_REG_CRQ, ibmvtpm->vdev->unit_address, 362 + ibmvtpm->crq_dma_handle, CRQ_RES_BUF_SIZE); 363 + } 364 + 365 + /** 366 + * tpm_ibmvtpm_resume - Resume from suspend 367 + * @dev: device struct 368 
+ * 369 + * Return value: 370 + * 0 371 + */ 372 + static int tpm_ibmvtpm_resume(struct device *dev) 373 + { 374 + struct ibmvtpm_dev *ibmvtpm = ibmvtpm_get_data(dev); 375 + unsigned long flags; 376 + int rc = 0; 377 + 378 + do { 379 + if (rc) 380 + msleep(100); 381 + rc = plpar_hcall_norets(H_ENABLE_CRQ, 382 + ibmvtpm->vdev->unit_address); 383 + } while (rc == H_IN_PROGRESS || rc == H_BUSY || H_IS_LONG_BUSY(rc)); 384 + 385 + if (rc) { 386 + dev_err(dev, "Error enabling ibmvtpm rc=%d\n", rc); 387 + return rc; 388 + } 389 + 390 + spin_lock_irqsave(&ibmvtpm->lock, flags); 391 + vio_disable_interrupts(ibmvtpm->vdev); 392 + tasklet_schedule(&ibmvtpm->tasklet); 393 + spin_unlock_irqrestore(&ibmvtpm->lock, flags); 394 + 395 + rc = ibmvtpm_crq_send_init(ibmvtpm); 396 + if (rc) 397 + dev_err(dev, "Error send_init rc=%d\n", rc); 398 + 399 + return rc; 400 + } 401 + 402 + static const struct file_operations ibmvtpm_ops = { 403 + .owner = THIS_MODULE, 404 + .llseek = no_llseek, 405 + .open = tpm_open, 406 + .read = tpm_read, 407 + .write = tpm_write, 408 + .release = tpm_release, 409 + }; 410 + 411 + static DEVICE_ATTR(pubek, S_IRUGO, tpm_show_pubek, NULL); 412 + static DEVICE_ATTR(pcrs, S_IRUGO, tpm_show_pcrs, NULL); 413 + static DEVICE_ATTR(enabled, S_IRUGO, tpm_show_enabled, NULL); 414 + static DEVICE_ATTR(active, S_IRUGO, tpm_show_active, NULL); 415 + static DEVICE_ATTR(owned, S_IRUGO, tpm_show_owned, NULL); 416 + static DEVICE_ATTR(temp_deactivated, S_IRUGO, tpm_show_temp_deactivated, 417 + NULL); 418 + static DEVICE_ATTR(caps, S_IRUGO, tpm_show_caps_1_2, NULL); 419 + static DEVICE_ATTR(cancel, S_IWUSR | S_IWGRP, NULL, tpm_store_cancel); 420 + static DEVICE_ATTR(durations, S_IRUGO, tpm_show_durations, NULL); 421 + static DEVICE_ATTR(timeouts, S_IRUGO, tpm_show_timeouts, NULL); 422 + 423 + static struct attribute *ibmvtpm_attrs[] = { 424 + &dev_attr_pubek.attr, 425 + &dev_attr_pcrs.attr, 426 + &dev_attr_enabled.attr, 427 + &dev_attr_active.attr, 428 + 
&dev_attr_owned.attr, 429 + &dev_attr_temp_deactivated.attr, 430 + &dev_attr_caps.attr, 431 + &dev_attr_cancel.attr, 432 + &dev_attr_durations.attr, 433 + &dev_attr_timeouts.attr, NULL, 434 + }; 435 + 436 + static struct attribute_group ibmvtpm_attr_grp = { .attrs = ibmvtpm_attrs }; 437 + 438 + static const struct tpm_vendor_specific tpm_ibmvtpm = { 439 + .recv = tpm_ibmvtpm_recv, 440 + .send = tpm_ibmvtpm_send, 441 + .cancel = tpm_ibmvtpm_cancel, 442 + .status = tpm_ibmvtpm_status, 443 + .req_complete_mask = 0, 444 + .req_complete_val = 0, 445 + .req_canceled = 0, 446 + .attr_group = &ibmvtpm_attr_grp, 447 + .miscdev = { .fops = &ibmvtpm_ops, }, 448 + }; 449 + 450 + static const struct dev_pm_ops tpm_ibmvtpm_pm_ops = { 451 + .suspend = tpm_ibmvtpm_suspend, 452 + .resume = tpm_ibmvtpm_resume, 453 + }; 454 + 455 + /** 456 + * ibmvtpm_crq_get_next - Get next responded crq 457 + * @ibmvtpm vtpm device struct 458 + * 459 + * Return value: 460 + * vtpm crq pointer 461 + */ 462 + static struct ibmvtpm_crq *ibmvtpm_crq_get_next(struct ibmvtpm_dev *ibmvtpm) 463 + { 464 + struct ibmvtpm_crq_queue *crq_q = &ibmvtpm->crq_queue; 465 + struct ibmvtpm_crq *crq = &crq_q->crq_addr[crq_q->index]; 466 + 467 + if (crq->valid & VTPM_MSG_RES) { 468 + if (++crq_q->index == crq_q->num_entry) 469 + crq_q->index = 0; 470 + rmb(); 471 + } else 472 + crq = NULL; 473 + return crq; 474 + } 475 + 476 + /** 477 + * ibmvtpm_crq_process - Process responded crq 478 + * @crq crq to be processed 479 + * @ibmvtpm vtpm device struct 480 + * 481 + * Return value: 482 + * Nothing 483 + */ 484 + static void ibmvtpm_crq_process(struct ibmvtpm_crq *crq, 485 + struct ibmvtpm_dev *ibmvtpm) 486 + { 487 + int rc = 0; 488 + 489 + switch (crq->valid) { 490 + case VALID_INIT_CRQ: 491 + switch (crq->msg) { 492 + case INIT_CRQ_RES: 493 + dev_info(ibmvtpm->dev, "CRQ initialized\n"); 494 + rc = ibmvtpm_crq_send_init_complete(ibmvtpm); 495 + if (rc) 496 + dev_err(ibmvtpm->dev, "Unable to send CRQ init complete 
rc=%d\n", rc); 497 + return; 498 + case INIT_CRQ_COMP_RES: 499 + dev_info(ibmvtpm->dev, 500 + "CRQ initialization completed\n"); 501 + return; 502 + default: 503 + dev_err(ibmvtpm->dev, "Unknown crq message type: %d\n", crq->msg); 504 + return; 505 + } 506 + return; 507 + case IBMVTPM_VALID_CMD: 508 + switch (crq->msg) { 509 + case VTPM_GET_RTCE_BUFFER_SIZE_RES: 510 + if (crq->len <= 0) { 511 + dev_err(ibmvtpm->dev, "Invalid rtce size\n"); 512 + return; 513 + } 514 + ibmvtpm->rtce_size = crq->len; 515 + ibmvtpm->rtce_buf = kmalloc(ibmvtpm->rtce_size, 516 + GFP_KERNEL); 517 + if (!ibmvtpm->rtce_buf) { 518 + dev_err(ibmvtpm->dev, "Failed to allocate memory for rtce buffer\n"); 519 + return; 520 + } 521 + 522 + ibmvtpm->rtce_dma_handle = dma_map_single(ibmvtpm->dev, 523 + ibmvtpm->rtce_buf, ibmvtpm->rtce_size, 524 + DMA_BIDIRECTIONAL); 525 + 526 + if (dma_mapping_error(ibmvtpm->dev, 527 + ibmvtpm->rtce_dma_handle)) { 528 + kfree(ibmvtpm->rtce_buf); 529 + ibmvtpm->rtce_buf = NULL; 530 + dev_err(ibmvtpm->dev, "Failed to dma map rtce buffer\n"); 531 + } 532 + 533 + return; 534 + case VTPM_GET_VERSION_RES: 535 + ibmvtpm->vtpm_version = crq->data; 536 + return; 537 + case VTPM_TPM_COMMAND_RES: 538 + ibmvtpm->crq_res.valid = crq->valid; 539 + ibmvtpm->crq_res.msg = crq->msg; 540 + ibmvtpm->crq_res.len = crq->len; 541 + ibmvtpm->crq_res.data = crq->data; 542 + wake_up_interruptible(&wq); 543 + return; 544 + default: 545 + return; 546 + } 547 + } 548 + return; 549 + } 550 + 551 + /** 552 + * ibmvtpm_interrupt - Interrupt handler 553 + * @irq: irq number to handle 554 + * @vtpm_instance: vtpm that received interrupt 555 + * 556 + * Returns: 557 + * IRQ_HANDLED 558 + **/ 559 + static irqreturn_t ibmvtpm_interrupt(int irq, void *vtpm_instance) 560 + { 561 + struct ibmvtpm_dev *ibmvtpm = (struct ibmvtpm_dev *) vtpm_instance; 562 + unsigned long flags; 563 + 564 + spin_lock_irqsave(&ibmvtpm->lock, flags); 565 + vio_disable_interrupts(ibmvtpm->vdev); 566 + 
tasklet_schedule(&ibmvtpm->tasklet); 567 + spin_unlock_irqrestore(&ibmvtpm->lock, flags); 568 + 569 + return IRQ_HANDLED; 570 + } 571 + 572 + /** 573 + * ibmvtpm_tasklet - Interrupt handler tasklet 574 + * @data: ibm vtpm device struct 575 + * 576 + * Returns: 577 + * Nothing 578 + **/ 579 + static void ibmvtpm_tasklet(void *data) 580 + { 581 + struct ibmvtpm_dev *ibmvtpm = data; 582 + struct ibmvtpm_crq *crq; 583 + unsigned long flags; 584 + 585 + spin_lock_irqsave(&ibmvtpm->lock, flags); 586 + while ((crq = ibmvtpm_crq_get_next(ibmvtpm)) != NULL) { 587 + ibmvtpm_crq_process(crq, ibmvtpm); 588 + crq->valid = 0; 589 + wmb(); 590 + } 591 + 592 + vio_enable_interrupts(ibmvtpm->vdev); 593 + spin_unlock_irqrestore(&ibmvtpm->lock, flags); 594 + } 595 + 596 + /** 597 + * tpm_ibmvtpm_probe - ibm vtpm initialize entry point 598 + * @vio_dev: vio device struct 599 + * @id: vio device id struct 600 + * 601 + * Return value: 602 + * 0 - Success 603 + * Non-zero - Failure 604 + */ 605 + static int __devinit tpm_ibmvtpm_probe(struct vio_dev *vio_dev, 606 + const struct vio_device_id *id) 607 + { 608 + struct ibmvtpm_dev *ibmvtpm; 609 + struct device *dev = &vio_dev->dev; 610 + struct ibmvtpm_crq_queue *crq_q; 611 + struct tpm_chip *chip; 612 + int rc = -ENOMEM, rc1; 613 + 614 + chip = tpm_register_hardware(dev, &tpm_ibmvtpm); 615 + if (!chip) { 616 + dev_err(dev, "tpm_register_hardware failed\n"); 617 + return -ENODEV; 618 + } 619 + 620 + ibmvtpm = kzalloc(sizeof(struct ibmvtpm_dev), GFP_KERNEL); 621 + if (!ibmvtpm) { 622 + dev_err(dev, "kzalloc for ibmvtpm failed\n"); 623 + goto cleanup; 624 + } 625 + 626 + crq_q = &ibmvtpm->crq_queue; 627 + crq_q->crq_addr = (struct ibmvtpm_crq *)get_zeroed_page(GFP_KERNEL); 628 + if (!crq_q->crq_addr) { 629 + dev_err(dev, "Unable to allocate memory for crq_addr\n"); 630 + goto cleanup; 631 + } 632 + 633 + crq_q->num_entry = CRQ_RES_BUF_SIZE / sizeof(*crq_q->crq_addr); 634 + ibmvtpm->crq_dma_handle = dma_map_single(dev, crq_q->crq_addr, 635 + 
CRQ_RES_BUF_SIZE, 636 + DMA_BIDIRECTIONAL); 637 + 638 + if (dma_mapping_error(dev, ibmvtpm->crq_dma_handle)) { 639 + dev_err(dev, "dma mapping failed\n"); 640 + goto cleanup; 641 + } 642 + 643 + rc = plpar_hcall_norets(H_REG_CRQ, vio_dev->unit_address, 644 + ibmvtpm->crq_dma_handle, CRQ_RES_BUF_SIZE); 645 + if (rc == H_RESOURCE) 646 + rc = ibmvtpm_reset_crq(ibmvtpm); 647 + 648 + if (rc) { 649 + dev_err(dev, "Unable to register CRQ rc=%d\n", rc); 650 + goto reg_crq_cleanup; 651 + } 652 + 653 + tasklet_init(&ibmvtpm->tasklet, (void *)ibmvtpm_tasklet, 654 + (unsigned long)ibmvtpm); 655 + 656 + rc = request_irq(vio_dev->irq, ibmvtpm_interrupt, 0, 657 + tpm_ibmvtpm_driver_name, ibmvtpm); 658 + if (rc) { 659 + dev_err(dev, "Error %d register irq 0x%x\n", rc, vio_dev->irq); 660 + goto init_irq_cleanup; 661 + } 662 + 663 + rc = vio_enable_interrupts(vio_dev); 664 + if (rc) { 665 + dev_err(dev, "Error %d enabling interrupts\n", rc); 666 + goto init_irq_cleanup; 667 + } 668 + 669 + crq_q->index = 0; 670 + 671 + ibmvtpm->dev = dev; 672 + ibmvtpm->vdev = vio_dev; 673 + chip->vendor.data = (void *)ibmvtpm; 674 + 675 + spin_lock_init(&ibmvtpm->lock); 676 + spin_lock_init(&ibmvtpm->rtce_lock); 677 + 678 + rc = ibmvtpm_crq_send_init(ibmvtpm); 679 + if (rc) 680 + goto init_irq_cleanup; 681 + 682 + rc = ibmvtpm_crq_get_version(ibmvtpm); 683 + if (rc) 684 + goto init_irq_cleanup; 685 + 686 + rc = ibmvtpm_crq_get_rtce_size(ibmvtpm); 687 + if (rc) 688 + goto init_irq_cleanup; 689 + 690 + return rc; 691 + init_irq_cleanup: 692 + tasklet_kill(&ibmvtpm->tasklet); 693 + do { 694 + rc1 = plpar_hcall_norets(H_FREE_CRQ, vio_dev->unit_address); 695 + } while (rc1 == H_BUSY || H_IS_LONG_BUSY(rc1)); 696 + reg_crq_cleanup: 697 + dma_unmap_single(dev, ibmvtpm->crq_dma_handle, CRQ_RES_BUF_SIZE, 698 + DMA_BIDIRECTIONAL); 699 + cleanup: 700 + if (ibmvtpm) { 701 + if (crq_q->crq_addr) 702 + free_page((unsigned long)crq_q->crq_addr); 703 + kfree(ibmvtpm); 704 + } 705 + 706 + tpm_remove_hardware(dev); 
707 + 
708 + 	return rc;
709 + }
710 + 
711 + static struct vio_driver ibmvtpm_driver = {
712 + 	.id_table	 = tpm_ibmvtpm_device_table,
713 + 	.probe		 = tpm_ibmvtpm_probe,
714 + 	.remove		 = tpm_ibmvtpm_remove,
715 + 	.get_desired_dma = tpm_ibmvtpm_get_desired_dma,
716 + 	.name		 = tpm_ibmvtpm_driver_name,
717 + 	.pm		 = &tpm_ibmvtpm_pm_ops,
718 + };
719 + 
720 + /**
721 +  * ibmvtpm_module_init - Initialize ibm vtpm module
722 +  *
723 +  * Return value:
724 +  *	0 -Success
725 +  *	Non-zero - Failure
726 +  */
727 + static int __init ibmvtpm_module_init(void)
728 + {
729 + 	return vio_register_driver(&ibmvtpm_driver);
730 + }
731 + 
732 + /**
733 +  * ibmvtpm_module_exit - Teardown ibm vtpm module
734 +  *
735 +  * Return value:
736 +  *	Nothing
737 +  */
738 + static void __exit ibmvtpm_module_exit(void)
739 + {
740 + 	vio_unregister_driver(&ibmvtpm_driver);
741 + }
742 + 
743 + module_init(ibmvtpm_module_init);
744 + module_exit(ibmvtpm_module_exit);
745 + 
746 + MODULE_AUTHOR("adlai@us.ibm.com");
747 + MODULE_DESCRIPTION("IBM vTPM Driver");
748 + MODULE_VERSION("1.0");
749 + MODULE_LICENSE("GPL");
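The responded-CRQ walk in ibmvtpm_crq_get_next() above checks the response bit in a slot's valid byte and wraps the cursor when it reaches num_entry. A minimal userspace C sketch of that ring-consume step, under the assumption that stripping locking and barriers still shows the shape of the logic (all names here are illustrative stand-ins, not the kernel's types):

```c
/* Sketch of the CRQ ring-buffer consume step: a slot is ready when its
 * valid byte has the response bit (0x80) set, and the read index wraps
 * at num_entry. Illustrative only; not the kernel's implementation. */
#include <stdint.h>
#include <stddef.h>

#define MSG_RES 0x80 /* response bit, mirrors VTPM_MSG_RES */

struct crq {
	uint8_t valid;
	uint8_t msg;
};

struct crq_queue {
	struct crq *addr;
	uint32_t index;
	uint32_t num_entry;
};

/* Return the next responded slot and advance the cursor, or NULL. */
static struct crq *crq_get_next(struct crq_queue *q)
{
	struct crq *c = &q->addr[q->index];

	if (!(c->valid & MSG_RES))
		return NULL;
	if (++q->index == q->num_entry)
		q->index = 0; /* wrap around */
	return c;
}
```

The kernel version additionally issues an rmb() after the valid-byte check so the slot payload is not read ahead of the flag.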
+77
drivers/char/tpm/tpm_ibmvtpm.h
··· 1 + /*
2 +  * Copyright (C) 2012 IBM Corporation
3 +  *
4 +  * Author: Ashley Lai <adlai@us.ibm.com>
5 +  *
6 +  * Maintained by: <tpmdd-devel@lists.sourceforge.net>
7 +  *
8 +  * Device driver for TCG/TCPA TPM (trusted platform module).
9 +  * Specifications at www.trustedcomputinggroup.org
10 +  *
11 +  * This program is free software; you can redistribute it and/or
12 +  * modify it under the terms of the GNU General Public License as
13 +  * published by the Free Software Foundation, version 2 of the
14 +  * License.
15 +  *
16 +  */
17 + 
18 + #ifndef __TPM_IBMVTPM_H__
19 + #define __TPM_IBMVTPM_H__
20 + 
21 + /* vTPM Message Format 1 */
22 + struct ibmvtpm_crq {
23 + 	u8 valid;
24 + 	u8 msg;
25 + 	u16 len;
26 + 	u32 data;
27 + 	u64 reserved;
28 + } __attribute__((packed, aligned(8)));
29 + 
30 + struct ibmvtpm_crq_queue {
31 + 	struct ibmvtpm_crq *crq_addr;
32 + 	u32 index;
33 + 	u32 num_entry;
34 + };
35 + 
36 + struct ibmvtpm_dev {
37 + 	struct device *dev;
38 + 	struct vio_dev *vdev;
39 + 	struct ibmvtpm_crq_queue crq_queue;
40 + 	dma_addr_t crq_dma_handle;
41 + 	spinlock_t lock;
42 + 	struct tasklet_struct tasklet;
43 + 	u32 rtce_size;
44 + 	void __iomem *rtce_buf;
45 + 	dma_addr_t rtce_dma_handle;
46 + 	spinlock_t rtce_lock;
47 + 	struct ibmvtpm_crq crq_res;
48 + 	u32 vtpm_version;
49 + };
50 + 
51 + #define CRQ_RES_BUF_SIZE	PAGE_SIZE
52 + 
53 + /* Initialize CRQ */
54 + #define INIT_CRQ_CMD		0xC001000000000000LL /* Init cmd */
55 + #define INIT_CRQ_COMP_CMD	0xC002000000000000LL /* Init complete cmd */
56 + #define INIT_CRQ_RES		0x01 /* Init respond */
57 + #define INIT_CRQ_COMP_RES	0x02 /* Init complete respond */
58 + #define VALID_INIT_CRQ		0xC0 /* Valid command for init crq */
59 + 
60 + /* vTPM CRQ response is the message type | 0x80 */
61 + #define VTPM_MSG_RES		0x80
62 + #define IBMVTPM_VALID_CMD	0x80
63 + 
64 + /* vTPM CRQ message types */
65 + #define VTPM_GET_VERSION		0x01
66 + #define VTPM_GET_VERSION_RES		(0x01 | VTPM_MSG_RES)
67 + 
68 + #define VTPM_TPM_COMMAND		0x02
69 + #define VTPM_TPM_COMMAND_RES		(0x02 | VTPM_MSG_RES)
70 + 
71 + #define VTPM_GET_RTCE_BUFFER_SIZE	0x03
72 + #define VTPM_GET_RTCE_BUFFER_SIZE_RES	(0x03 | VTPM_MSG_RES)
73 + 
74 + #define VTPM_PREPARE_TO_SUSPEND		0x04
75 + #define VTPM_PREPARE_TO_SUSPEND_RES	(0x04 | VTPM_MSG_RES)
76 + 
77 + #endif
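Per the comment in the header above, every vTPM CRQ response code is the command's message type with the high bit (VTPM_MSG_RES, 0x80) ORed in. A tiny self-contained sketch of that convention, with the constant mirrored locally:

```c
/* The vTPM CRQ protocol encodes a response as "command byte | 0x80";
 * MSG_RES here mirrors the header's VTPM_MSG_RES. */
#include <stdint.h>

#define MSG_RES 0x80

static uint8_t to_response(uint8_t cmd)
{
	return cmd | MSG_RES; /* e.g. VTPM_GET_VERSION -> VTPM_GET_VERSION_RES */
}

static int is_response(uint8_t msg)
{
	return (msg & MSG_RES) != 0;
}
```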
+73
drivers/char/tpm/tpm_of.c
··· 1 + /*
2 +  * Copyright 2012 IBM Corporation
3 +  *
4 +  * Author: Ashley Lai <adlai@us.ibm.com>
5 +  *
6 +  * Maintained by: <tpmdd-devel@lists.sourceforge.net>
7 +  *
8 +  * Read the event log created by the firmware on PPC64
9 +  *
10 +  * This program is free software; you can redistribute it and/or
11 +  * modify it under the terms of the GNU General Public License
12 +  * as published by the Free Software Foundation; either version
13 +  * 2 of the License, or (at your option) any later version.
14 +  *
15 +  */
16 + 
17 + #include <linux/slab.h>
18 + #include <linux/of.h>
19 + 
20 + #include "tpm.h"
21 + #include "tpm_eventlog.h"
22 + 
23 + int read_log(struct tpm_bios_log *log)
24 + {
25 + 	struct device_node *np;
26 + 	const u32 *sizep;
27 + 	const __be64 *basep;
28 + 
29 + 	if (log->bios_event_log != NULL) {
30 + 		pr_err("%s: ERROR - Eventlog already initialized\n", __func__);
31 + 		return -EFAULT;
32 + 	}
33 + 
34 + 	np = of_find_node_by_name(NULL, "ibm,vtpm");
35 + 	if (!np) {
36 + 		pr_err("%s: ERROR - IBMVTPM not supported\n", __func__);
37 + 		return -ENODEV;
38 + 	}
39 + 
40 + 	sizep = of_get_property(np, "linux,sml-size", NULL);
41 + 	if (sizep == NULL) {
42 + 		pr_err("%s: ERROR - SML size not found\n", __func__);
43 + 		goto cleanup_eio;
44 + 	}
45 + 	if (*sizep == 0) {
46 + 		pr_err("%s: ERROR - event log area empty\n", __func__);
47 + 		goto cleanup_eio;
48 + 	}
49 + 
50 + 	basep = of_get_property(np, "linux,sml-base", NULL);
51 + 	if (basep == NULL) {
52 + 		pr_err("%s: ERROR - SML not found\n", __func__);
53 + 		goto cleanup_eio;
54 + 	}
55 + 
56 + 	of_node_put(np);
57 + 	log->bios_event_log = kmalloc(*sizep, GFP_KERNEL);
58 + 	if (!log->bios_event_log) {
59 + 		pr_err("%s: ERROR - Not enough memory for BIOS measurements\n",
60 + 		       __func__);
61 + 		return -ENOMEM;
62 + 	}
63 + 
64 + 	log->bios_event_log_end = log->bios_event_log + *sizep;
65 + 
66 + 	memcpy(log->bios_event_log, __va(be64_to_cpup(basep)), *sizep);
67 + 
68 + 	return 0;
69 + 
70 + cleanup_eio:
71 + 	of_node_put(np);
72 + 	return -EIO;
73 + }
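read_log() above converts the big-endian "linux,sml-base" device-tree property with be64_to_cpup() before treating it as an address. That kernel helper is not available in userspace, so here is a byte-by-byte approximation of what it computes (a sketch for illustration, not the kernel's implementation):

```c
/* Decode a big-endian 64-bit value from memory, as be64_to_cpup() does
 * for the "linux,sml-base" property: most-significant byte first. */
#include <stdint.h>

static uint64_t be64_to_cpu_sketch(const uint8_t *p)
{
	uint64_t v = 0;
	int i;

	for (i = 0; i < 8; i++)
		v = (v << 8) | p[i]; /* fold bytes in, MSB first */
	return v;
}
```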
+461
drivers/char/tpm/tpm_ppi.c
··· 1 + #include <linux/acpi.h> 2 + #include <acpi/acpi_drivers.h> 3 + #include "tpm.h" 4 + 5 + static const u8 tpm_ppi_uuid[] = { 6 + 0xA6, 0xFA, 0xDD, 0x3D, 7 + 0x1B, 0x36, 8 + 0xB4, 0x4E, 9 + 0xA4, 0x24, 10 + 0x8D, 0x10, 0x08, 0x9D, 0x16, 0x53 11 + }; 12 + static char *tpm_device_name = "TPM"; 13 + 14 + #define TPM_PPI_REVISION_ID 1 15 + #define TPM_PPI_FN_VERSION 1 16 + #define TPM_PPI_FN_SUBREQ 2 17 + #define TPM_PPI_FN_GETREQ 3 18 + #define TPM_PPI_FN_GETACT 4 19 + #define TPM_PPI_FN_GETRSP 5 20 + #define TPM_PPI_FN_SUBREQ2 7 21 + #define TPM_PPI_FN_GETOPR 8 22 + #define PPI_TPM_REQ_MAX 22 23 + #define PPI_VS_REQ_START 128 24 + #define PPI_VS_REQ_END 255 25 + #define PPI_VERSION_LEN 3 26 + 27 + static acpi_status ppi_callback(acpi_handle handle, u32 level, void *context, 28 + void **return_value) 29 + { 30 + acpi_status status; 31 + struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; 32 + status = acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer); 33 + if (strstr(buffer.pointer, context) != NULL) { 34 + *return_value = handle; 35 + kfree(buffer.pointer); 36 + return AE_CTRL_TERMINATE; 37 + } 38 + return AE_OK; 39 + } 40 + 41 + static inline void ppi_assign_params(union acpi_object params[4], 42 + u64 function_num) 43 + { 44 + params[0].type = ACPI_TYPE_BUFFER; 45 + params[0].buffer.length = sizeof(tpm_ppi_uuid); 46 + params[0].buffer.pointer = (char *)tpm_ppi_uuid; 47 + params[1].type = ACPI_TYPE_INTEGER; 48 + params[1].integer.value = TPM_PPI_REVISION_ID; 49 + params[2].type = ACPI_TYPE_INTEGER; 50 + params[2].integer.value = function_num; 51 + params[3].type = ACPI_TYPE_PACKAGE; 52 + params[3].package.count = 0; 53 + params[3].package.elements = NULL; 54 + } 55 + 56 + static ssize_t tpm_show_ppi_version(struct device *dev, 57 + struct device_attribute *attr, char *buf) 58 + { 59 + acpi_handle handle; 60 + acpi_status status; 61 + struct acpi_object_list input; 62 + struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL }; 63 + union acpi_object 
params[4]; 64 + union acpi_object *obj; 65 + 66 + input.count = 4; 67 + ppi_assign_params(params, TPM_PPI_FN_VERSION); 68 + input.pointer = params; 69 + status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT, 70 + ACPI_UINT32_MAX, ppi_callback, NULL, 71 + tpm_device_name, &handle); 72 + if (ACPI_FAILURE(status)) 73 + return -ENXIO; 74 + 75 + status = acpi_evaluate_object_typed(handle, "_DSM", &input, &output, 76 + ACPI_TYPE_STRING); 77 + if (ACPI_FAILURE(status)) 78 + return -ENOMEM; 79 + obj = (union acpi_object *)output.pointer; 80 + status = scnprintf(buf, PAGE_SIZE, "%s\n", obj->string.pointer); 81 + kfree(output.pointer); 82 + return status; 83 + } 84 + 85 + static ssize_t tpm_show_ppi_request(struct device *dev, 86 + struct device_attribute *attr, char *buf) 87 + { 88 + acpi_handle handle; 89 + acpi_status status; 90 + struct acpi_object_list input; 91 + struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL }; 92 + union acpi_object params[4]; 93 + union acpi_object *ret_obj; 94 + 95 + input.count = 4; 96 + ppi_assign_params(params, TPM_PPI_FN_GETREQ); 97 + input.pointer = params; 98 + status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT, 99 + ACPI_UINT32_MAX, ppi_callback, NULL, 100 + tpm_device_name, &handle); 101 + if (ACPI_FAILURE(status)) 102 + return -ENXIO; 103 + 104 + status = acpi_evaluate_object_typed(handle, "_DSM", &input, &output, 105 + ACPI_TYPE_PACKAGE); 106 + if (ACPI_FAILURE(status)) 107 + return -ENOMEM; 108 + /* 109 + * output.pointer should be of package type, including two integers. 110 + * The first is function return code, 0 means success and 1 means 111 + * error. The second is pending TPM operation requested by the OS, 0 112 + * means none and >0 means operation value. 
113 + */ 114 + ret_obj = ((union acpi_object *)output.pointer)->package.elements; 115 + if (ret_obj->type == ACPI_TYPE_INTEGER) { 116 + if (ret_obj->integer.value) { 117 + status = -EFAULT; 118 + goto cleanup; 119 + } 120 + ret_obj++; 121 + if (ret_obj->type == ACPI_TYPE_INTEGER) 122 + status = scnprintf(buf, PAGE_SIZE, "%llu\n", 123 + ret_obj->integer.value); 124 + else 125 + status = -EINVAL; 126 + } else { 127 + status = -EINVAL; 128 + } 129 + cleanup: 130 + kfree(output.pointer); 131 + return status; 132 + } 133 + 134 + static ssize_t tpm_store_ppi_request(struct device *dev, 135 + struct device_attribute *attr, 136 + const char *buf, size_t count) 137 + { 138 + char version[PPI_VERSION_LEN + 1]; 139 + acpi_handle handle; 140 + acpi_status status; 141 + struct acpi_object_list input; 142 + struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL }; 143 + union acpi_object params[4]; 144 + union acpi_object obj; 145 + u32 req; 146 + u64 ret; 147 + 148 + input.count = 4; 149 + ppi_assign_params(params, TPM_PPI_FN_VERSION); 150 + input.pointer = params; 151 + status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT, 152 + ACPI_UINT32_MAX, ppi_callback, NULL, 153 + tpm_device_name, &handle); 154 + if (ACPI_FAILURE(status)) 155 + return -ENXIO; 156 + 157 + status = acpi_evaluate_object_typed(handle, "_DSM", &input, &output, 158 + ACPI_TYPE_STRING); 159 + if (ACPI_FAILURE(status)) 160 + return -ENOMEM; 161 + strncpy(version, 162 + ((union acpi_object *)output.pointer)->string.pointer, 163 + PPI_VERSION_LEN); 164 + kfree(output.pointer); 165 + output.length = ACPI_ALLOCATE_BUFFER; 166 + output.pointer = NULL; 167 + /* 168 + * the function to submit TPM operation request to pre-os environment 169 + * is updated with function index from SUBREQ to SUBREQ2 since PPI 170 + * version 1.1 171 + */ 172 + if (strcmp(version, "1.1") == -1) 173 + params[2].integer.value = TPM_PPI_FN_SUBREQ; 174 + else 175 + params[2].integer.value = TPM_PPI_FN_SUBREQ2; 176 + /* 177 + * 
PPI spec defines params[3].type as ACPI_TYPE_PACKAGE. Some BIOS 178 + * accept buffer/string/integer type, but some BIOS accept buffer/ 179 + * string/package type. For PPI version 1.0 and 1.1, use buffer type 180 + * for compatibility, and use package type since 1.2 according to spec. 181 + */ 182 + if (strcmp(version, "1.2") == -1) { 183 + params[3].type = ACPI_TYPE_BUFFER; 184 + params[3].buffer.length = sizeof(req); 185 + sscanf(buf, "%d", &req); 186 + params[3].buffer.pointer = (char *)&req; 187 + } else { 188 + params[3].package.count = 1; 189 + obj.type = ACPI_TYPE_INTEGER; 190 + sscanf(buf, "%llu", &obj.integer.value); 191 + params[3].package.elements = &obj; 192 + } 193 + 194 + status = acpi_evaluate_object_typed(handle, "_DSM", &input, &output, 195 + ACPI_TYPE_INTEGER); 196 + if (ACPI_FAILURE(status)) 197 + return -ENOMEM; 198 + ret = ((union acpi_object *)output.pointer)->integer.value; 199 + if (ret == 0) 200 + status = (acpi_status)count; 201 + else if (ret == 1) 202 + status = -EPERM; 203 + else 204 + status = -EFAULT; 205 + kfree(output.pointer); 206 + return status; 207 + } 208 + 209 + static ssize_t tpm_show_ppi_transition_action(struct device *dev, 210 + struct device_attribute *attr, 211 + char *buf) 212 + { 213 + char version[PPI_VERSION_LEN + 1]; 214 + acpi_handle handle; 215 + acpi_status status; 216 + struct acpi_object_list input; 217 + struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL }; 218 + union acpi_object params[4]; 219 + u32 ret; 220 + char *info[] = { 221 + "None", 222 + "Shutdown", 223 + "Reboot", 224 + "OS Vendor-specific", 225 + "Error", 226 + }; 227 + input.count = 4; 228 + ppi_assign_params(params, TPM_PPI_FN_VERSION); 229 + input.pointer = params; 230 + status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT, 231 + ACPI_UINT32_MAX, ppi_callback, NULL, 232 + tpm_device_name, &handle); 233 + if (ACPI_FAILURE(status)) 234 + return -ENXIO; 235 + 236 + status = acpi_evaluate_object_typed(handle, "_DSM", &input, 
&output, 237 + ACPI_TYPE_STRING); 238 + if (ACPI_FAILURE(status)) 239 + return -ENOMEM; 240 + strncpy(version, 241 + ((union acpi_object *)output.pointer)->string.pointer, 242 + PPI_VERSION_LEN); 243 + /* 244 + * PPI spec defines params[3].type as empty package, but some platforms 245 + * (e.g. Capella with PPI 1.0) need integer/string/buffer type, so for 246 + * compatibility, define params[3].type as buffer, if PPI version < 1.2 247 + */ 248 + if (strcmp(version, "1.2") == -1) { 249 + params[3].type = ACPI_TYPE_BUFFER; 250 + params[3].buffer.length = 0; 251 + params[3].buffer.pointer = NULL; 252 + } 253 + params[2].integer.value = TPM_PPI_FN_GETACT; 254 + kfree(output.pointer); 255 + output.length = ACPI_ALLOCATE_BUFFER; 256 + output.pointer = NULL; 257 + status = acpi_evaluate_object_typed(handle, "_DSM", &input, &output, 258 + ACPI_TYPE_INTEGER); 259 + if (ACPI_FAILURE(status)) 260 + return -ENOMEM; 261 + ret = ((union acpi_object *)output.pointer)->integer.value; 262 + if (ret < ARRAY_SIZE(info) - 1) 263 + status = scnprintf(buf, PAGE_SIZE, "%d: %s\n", ret, info[ret]); 264 + else 265 + status = scnprintf(buf, PAGE_SIZE, "%d: %s\n", ret, 266 + info[ARRAY_SIZE(info)-1]); 267 + kfree(output.pointer); 268 + return status; 269 + } 270 + 271 + static ssize_t tpm_show_ppi_response(struct device *dev, 272 + struct device_attribute *attr, 273 + char *buf) 274 + { 275 + acpi_handle handle; 276 + acpi_status status; 277 + struct acpi_object_list input; 278 + struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL }; 279 + union acpi_object params[4]; 280 + union acpi_object *ret_obj; 281 + u64 req; 282 + 283 + input.count = 4; 284 + ppi_assign_params(params, TPM_PPI_FN_GETRSP); 285 + input.pointer = params; 286 + status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT, 287 + ACPI_UINT32_MAX, ppi_callback, NULL, 288 + tpm_device_name, &handle); 289 + if (ACPI_FAILURE(status)) 290 + return -ENXIO; 291 + 292 + status = acpi_evaluate_object_typed(handle, "_DSM", 
&input, &output, 293 + ACPI_TYPE_PACKAGE); 294 + if (ACPI_FAILURE(status)) 295 + return -ENOMEM; 296 + /* 297 + * parameter output.pointer should be of package type, including 298 + * 3 integers. The first means function return code, the second means 299 + * most recent TPM operation request, and the last means response to 300 + * the most recent TPM operation request. Only if the first is 0, and 301 + * the second integer is not 0, the response makes sense. 302 + */ 303 + ret_obj = ((union acpi_object *)output.pointer)->package.elements; 304 + if (ret_obj->type != ACPI_TYPE_INTEGER) { 305 + status = -EINVAL; 306 + goto cleanup; 307 + } 308 + if (ret_obj->integer.value) { 309 + status = -EFAULT; 310 + goto cleanup; 311 + } 312 + ret_obj++; 313 + if (ret_obj->type != ACPI_TYPE_INTEGER) { 314 + status = -EINVAL; 315 + goto cleanup; 316 + } 317 + if (ret_obj->integer.value) { 318 + req = ret_obj->integer.value; 319 + ret_obj++; 320 + if (ret_obj->type != ACPI_TYPE_INTEGER) { 321 + status = -EINVAL; 322 + goto cleanup; 323 + } 324 + if (ret_obj->integer.value == 0) 325 + status = scnprintf(buf, PAGE_SIZE, "%llu %s\n", req, 326 + "0: Success"); 327 + else if (ret_obj->integer.value == 0xFFFFFFF0) 328 + status = scnprintf(buf, PAGE_SIZE, "%llu %s\n", req, 329 + "0xFFFFFFF0: User Abort"); 330 + else if (ret_obj->integer.value == 0xFFFFFFF1) 331 + status = scnprintf(buf, PAGE_SIZE, "%llu %s\n", req, 332 + "0xFFFFFFF1: BIOS Failure"); 333 + else if (ret_obj->integer.value >= 1 && 334 + ret_obj->integer.value <= 0x00000FFF) 335 + status = scnprintf(buf, PAGE_SIZE, "%llu %llu: %s\n", 336 + req, ret_obj->integer.value, 337 + "Corresponding TPM error"); 338 + else 339 + status = scnprintf(buf, PAGE_SIZE, "%llu %llu: %s\n", 340 + req, ret_obj->integer.value, 341 + "Error"); 342 + } else { 343 + status = scnprintf(buf, PAGE_SIZE, "%llu: %s\n", 344 + ret_obj->integer.value, "No Recent Request"); 345 + } 346 + cleanup: 347 + kfree(output.pointer); 348 + return status; 349 + } 350 + 
351 + static ssize_t show_ppi_operations(char *buf, u32 start, u32 end) 352 + { 353 + char *str = buf; 354 + char version[PPI_VERSION_LEN]; 355 + acpi_handle handle; 356 + acpi_status status; 357 + struct acpi_object_list input; 358 + struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL }; 359 + union acpi_object params[4]; 360 + union acpi_object obj; 361 + int i; 362 + u32 ret; 363 + char *info[] = { 364 + "Not implemented", 365 + "BIOS only", 366 + "Blocked for OS by BIOS", 367 + "User required", 368 + "User not required", 369 + }; 370 + input.count = 4; 371 + ppi_assign_params(params, TPM_PPI_FN_VERSION); 372 + input.pointer = params; 373 + status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT, 374 + ACPI_UINT32_MAX, ppi_callback, NULL, 375 + tpm_device_name, &handle); 376 + if (ACPI_FAILURE(status)) 377 + return -ENXIO; 378 + 379 + status = acpi_evaluate_object_typed(handle, "_DSM", &input, &output, 380 + ACPI_TYPE_STRING); 381 + if (ACPI_FAILURE(status)) 382 + return -ENOMEM; 383 + 384 + strncpy(version, 385 + ((union acpi_object *)output.pointer)->string.pointer, 386 + PPI_VERSION_LEN); 387 + kfree(output.pointer); 388 + output.length = ACPI_ALLOCATE_BUFFER; 389 + output.pointer = NULL; 390 + if (strcmp(version, "1.2") == -1) 391 + return -EPERM; 392 + 393 + params[2].integer.value = TPM_PPI_FN_GETOPR; 394 + params[3].package.count = 1; 395 + obj.type = ACPI_TYPE_INTEGER; 396 + params[3].package.elements = &obj; 397 + for (i = start; i <= end; i++) { 398 + obj.integer.value = i; 399 + status = acpi_evaluate_object_typed(handle, "_DSM", 400 + &input, &output, ACPI_TYPE_INTEGER); 401 + if (ACPI_FAILURE(status)) 402 + return -ENOMEM; 403 + 404 + ret = ((union acpi_object *)output.pointer)->integer.value; 405 + if (ret > 0 && ret < ARRAY_SIZE(info)) 406 + str += scnprintf(str, PAGE_SIZE, "%d %d: %s\n", 407 + i, ret, info[ret]); 408 + kfree(output.pointer); 409 + output.length = ACPI_ALLOCATE_BUFFER; 410 + output.pointer = NULL; 411 + } 412 + return 
str - buf;
413 + }
414 + 
415 + static ssize_t tpm_show_ppi_tcg_operations(struct device *dev,
416 + 					   struct device_attribute *attr,
417 + 					   char *buf)
418 + {
419 + 	return show_ppi_operations(buf, 0, PPI_TPM_REQ_MAX);
420 + }
421 + 
422 + static ssize_t tpm_show_ppi_vs_operations(struct device *dev,
423 + 					  struct device_attribute *attr,
424 + 					  char *buf)
425 + {
426 + 	return show_ppi_operations(buf, PPI_VS_REQ_START, PPI_VS_REQ_END);
427 + }
428 + 
429 + static DEVICE_ATTR(version, S_IRUGO, tpm_show_ppi_version, NULL);
430 + static DEVICE_ATTR(request, S_IRUGO | S_IWUSR | S_IWGRP,
431 + 		   tpm_show_ppi_request, tpm_store_ppi_request);
432 + static DEVICE_ATTR(transition_action, S_IRUGO,
433 + 		   tpm_show_ppi_transition_action, NULL);
434 + static DEVICE_ATTR(response, S_IRUGO, tpm_show_ppi_response, NULL);
435 + static DEVICE_ATTR(tcg_operations, S_IRUGO, tpm_show_ppi_tcg_operations, NULL);
436 + static DEVICE_ATTR(vs_operations, S_IRUGO, tpm_show_ppi_vs_operations, NULL);
437 + 
438 + static struct attribute *ppi_attrs[] = {
439 + 	&dev_attr_version.attr,
440 + 	&dev_attr_request.attr,
441 + 	&dev_attr_transition_action.attr,
442 + 	&dev_attr_response.attr,
443 + 	&dev_attr_tcg_operations.attr,
444 + 	&dev_attr_vs_operations.attr, NULL,
445 + };
446 + static struct attribute_group ppi_attr_grp = {
447 + 	.attrs = ppi_attrs
448 + };
449 + 
450 + ssize_t sys_add_ppi(struct kobject *parent)
451 + {
452 + 	struct kobject *ppi;
453 + 	ppi = kobject_create_and_add("ppi", parent);
454 + 	if (sysfs_create_group(ppi, &ppi_attr_grp))
455 + 		return -EFAULT;
456 + 	else
457 + 		return 0;
458 + }
459 + EXPORT_SYMBOL_GPL(sys_add_ppi);
460 + 
461 + MODULE_LICENSE("GPL");
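tpm_ppi.c selects _DSM function numbers by lexicographically comparing the PPI version string against "1.1" and "1.2". Note that C only guarantees the sign of strcmp()'s result, not the exact value -1, so the portable form of the same check tests "< 0". A sketch of that comparison (helper name is made up for illustration):

```c
/* Lexicographic version comparison, as tpm_ppi.c relies on for strings
 * like "1.0".."1.2". Portable code tests the sign of strcmp(), never a
 * specific value such as -1. */
#include <string.h>

static int version_before(const char *v, const char *ref)
{
	return strcmp(v, ref) < 0;
}
```

This single-digit-component comparison is adequate for the PPI versions involved; it would not order "1.10" after "1.2".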
+2 -1
drivers/char/tpm/tpm_tis.c
··· 705 705 	return rc;
706 706 }
707 707 
708 + #if defined(CONFIG_PNP) || defined(CONFIG_PM_SLEEP)
708 709 static void tpm_tis_reenable_interrupts(struct tpm_chip *chip)
709 710 {
710 711 	u32 intmask;
··· 726 725 	iowrite32(intmask,
727 726 		  chip->vendor.iobase + TPM_INT_ENABLE(chip->vendor.locality));
728 727 }
729 - 
728 + #endif
730 729 
731 730 #ifdef CONFIG_PNP
732 731 static int __devinit tpm_tis_pnp_init(struct pnp_dev *pnp_dev,
+2
fs/attr.c
··· 14 14 #include <linux/fcntl.h>
15 15 #include <linux/security.h>
16 16 #include <linux/evm.h>
17 + #include <linux/ima.h>
17 18 
18 19 /**
19 20  * inode_change_ok - check if attribute changes to an inode are allowed
··· 248 247 
249 248 	if (!error) {
250 249 		fsnotify_change(dentry, ia_valid);
250 + 		ima_inode_post_setattr(dentry);
251 251 		evm_inode_post_setattr(dentry, ia_valid);
252 252 	}
253 253 
+1 -1
fs/file_table.c
··· 243 243 	if (file->f_op && file->f_op->fasync)
244 244 		file->f_op->fasync(-1, file, 0);
245 245 	}
246 + 	ima_file_free(file);
246 247 	if (file->f_op && file->f_op->release)
247 248 		file->f_op->release(inode, file);
248 249 	security_file_free(file);
249 - 	ima_file_free(file);
250 250 	if (unlikely(S_ISCHR(inode->i_mode) && inode->i_cdev != NULL &&
251 251 		     !(file->f_mode & FMODE_PATH))) {
252 252 		cdev_put(inode->i_cdev);
+6 -4
fs/xattr.c
··· 296 296 	if (error)
297 297 		return error;
298 298 
299 - 	error = security_inode_removexattr(dentry, name);
300 - 	if (error)
301 - 		return error;
302 - 
303 299 	mutex_lock(&inode->i_mutex);
300 + 	error = security_inode_removexattr(dentry, name);
301 + 	if (error) {
302 + 		mutex_unlock(&inode->i_mutex);
303 + 		return error;
304 + 	}
305 + 
304 306 	error = inode->i_op->removexattr(dentry, name);
305 307 	mutex_unlock(&inode->i_mutex);
306 308 
+2
include/linux/audit.h
··· 531 531 #define audit_get_loginuid(t) ((t)->loginuid)
532 532 #define audit_get_sessionid(t) ((t)->sessionid)
533 533 extern void audit_log_task_context(struct audit_buffer *ab);
534 + extern void audit_log_task_info(struct audit_buffer *ab, struct task_struct *tsk);
534 535 extern void __audit_ipc_obj(struct kern_ipc_perm *ipcp);
535 536 extern void __audit_ipc_set_perm(unsigned long qbytes, uid_t uid, gid_t gid, umode_t mode);
536 537 extern int __audit_bprm(struct linux_binprm *bprm);
··· 643 642 #define audit_get_loginuid(t) (INVALID_UID)
644 643 #define audit_get_sessionid(t) (-1)
645 644 #define audit_log_task_context(b) do { ; } while (0)
645 + #define audit_log_task_info(b, t) do { ; } while (0)
646 646 #define audit_ipc_obj(i) ((void)0)
647 647 #define audit_ipc_set_perm(q,u,g,m) ((void)0)
648 648 #define audit_bprm(p) ({ 0; })
+27
include/linux/ima.h
··· 39 39 {
40 40 	return 0;
41 41 }
42 + 
42 43 #endif /* CONFIG_IMA_H */
44 + 
45 + #ifdef CONFIG_IMA_APPRAISE
46 + extern void ima_inode_post_setattr(struct dentry *dentry);
47 + extern int ima_inode_setxattr(struct dentry *dentry, const char *xattr_name,
48 + 			      const void *xattr_value, size_t xattr_value_len);
49 + extern int ima_inode_removexattr(struct dentry *dentry, const char *xattr_name);
50 + #else
51 + static inline void ima_inode_post_setattr(struct dentry *dentry)
52 + {
53 + 	return;
54 + }
55 + 
56 + static inline int ima_inode_setxattr(struct dentry *dentry,
57 + 				     const char *xattr_name,
58 + 				     const void *xattr_value,
59 + 				     size_t xattr_value_len)
60 + {
61 + 	return 0;
62 + }
63 + 
64 + static inline int ima_inode_removexattr(struct dentry *dentry,
65 + 					const char *xattr_name)
66 + {
67 + 	return 0;
68 + }
69 + #endif /* CONFIG_IMA_APPRAISE_H */
43 70 #endif /* _LINUX_IMA_H */
+4 -3
include/linux/integrity.h
··· 22 22 
23 23 /* List of EVM protected security xattrs */
24 24 #ifdef CONFIG_INTEGRITY
25 - extern int integrity_inode_alloc(struct inode *inode);
25 + extern struct integrity_iint_cache *integrity_inode_get(struct inode *inode);
26 26 extern void integrity_inode_free(struct inode *inode);
27 27 
28 28 #else
29 - static inline int integrity_inode_alloc(struct inode *inode)
29 + static inline struct integrity_iint_cache *
30 + 				integrity_inode_get(struct inode *inode)
30 31 {
31 - 	return 0;
32 + 	return NULL;
32 33 }
33 34 
34 35 static inline void integrity_inode_free(struct inode *inode)
-2
include/linux/ptrace.h
··· 130 130 #define PTRACE_MODE_READ	0x01
131 131 #define PTRACE_MODE_ATTACH	0x02
132 132 #define PTRACE_MODE_NOAUDIT	0x04
133 - /* Returns 0 on success, -errno on denial. */
134 - extern int __ptrace_may_access(struct task_struct *task, unsigned int mode);
135 133 /* Returns true on success, false on denial. */
136 134 extern bool ptrace_may_access(struct task_struct *task, unsigned int mode);
137 135 
+31
include/linux/security.h
··· 3022 3022 { }
3023 3023 #endif /* CONFIG_SECURITY */
3024 3024 
3025 + #ifdef CONFIG_SECURITY_YAMA
3026 + extern int yama_ptrace_access_check(struct task_struct *child,
3027 + 				    unsigned int mode);
3028 + extern int yama_ptrace_traceme(struct task_struct *parent);
3029 + extern void yama_task_free(struct task_struct *task);
3030 + extern int yama_task_prctl(int option, unsigned long arg2, unsigned long arg3,
3031 + 			   unsigned long arg4, unsigned long arg5);
3032 + #else
3033 + static inline int yama_ptrace_access_check(struct task_struct *child,
3034 + 					   unsigned int mode)
3035 + {
3036 + 	return 0;
3037 + }
3038 + 
3039 + static inline int yama_ptrace_traceme(struct task_struct *parent)
3040 + {
3041 + 	return 0;
3042 + }
3043 + 
3044 + static inline void yama_task_free(struct task_struct *task)
3045 + {
3046 + }
3047 + 
3048 + static inline int yama_task_prctl(int option, unsigned long arg2,
3049 + 				  unsigned long arg3, unsigned long arg4,
3050 + 				  unsigned long arg5)
3051 + {
3052 + 	return -ENOSYS;
3053 + }
3054 + #endif /* CONFIG_SECURITY_YAMA */
3055 + 
3025 3056 #endif /* ! __LINUX_SECURITY_H */
3026 3057 
+4
include/linux/tpm.h
··· 32 32 extern int tpm_pcr_read(u32 chip_num, int pcr_idx, u8 *res_buf); 33 33 extern int tpm_pcr_extend(u32 chip_num, int pcr_idx, const u8 *hash); 34 34 extern int tpm_send(u32 chip_num, void *cmd, size_t buflen); 35 + extern int tpm_get_random(u32 chip_num, u8 *data, size_t max); 35 36 #else 36 37 static inline int tpm_pcr_read(u32 chip_num, int pcr_idx, u8 *res_buf) { 37 38 return -ENODEV; ··· 41 40 return -ENODEV; 42 41 } 43 42 static inline int tpm_send(u32 chip_num, void *cmd, size_t buflen) { 43 + return -ENODEV; 44 + } 45 + static inline int tpm_get_random(u32 chip_num, u8 *data, size_t max) { 44 46 return -ENODEV; 45 47 } 46 48 #endif
+3
include/linux/xattr.h
··· 33 33 #define XATTR_EVM_SUFFIX "evm" 34 34 #define XATTR_NAME_EVM XATTR_SECURITY_PREFIX XATTR_EVM_SUFFIX 35 35 36 + #define XATTR_IMA_SUFFIX "ima" 37 + #define XATTR_NAME_IMA XATTR_SECURITY_PREFIX XATTR_IMA_SUFFIX 38 + 36 39 #define XATTR_SELINUX_SUFFIX "selinux" 37 40 #define XATTR_NAME_SELINUX XATTR_SECURITY_PREFIX XATTR_SELINUX_SUFFIX 38 41
+40 -45
kernel/auditsc.c
··· 1146 1146 1147 1147 EXPORT_SYMBOL(audit_log_task_context); 1148 1148 1149 - static void audit_log_task_info(struct audit_buffer *ab, struct task_struct *tsk) 1149 + void audit_log_task_info(struct audit_buffer *ab, struct task_struct *tsk) 1150 1150 { 1151 + const struct cred *cred; 1151 1152 char name[sizeof(tsk->comm)]; 1152 1153 struct mm_struct *mm = tsk->mm; 1153 1154 struct vm_area_struct *vma; 1155 + char *tty; 1156 + 1157 + if (!ab) 1158 + return; 1154 1159 1155 1160 /* tsk == current */ 1161 + cred = current_cred(); 1162 + 1163 + spin_lock_irq(&tsk->sighand->siglock); 1164 + if (tsk->signal && tsk->signal->tty && tsk->signal->tty->name) 1165 + tty = tsk->signal->tty->name; 1166 + else 1167 + tty = "(none)"; 1168 + spin_unlock_irq(&tsk->sighand->siglock); 1169 + 1170 + 1171 + audit_log_format(ab, 1172 + " ppid=%ld pid=%d auid=%u uid=%u gid=%u" 1173 + " euid=%u suid=%u fsuid=%u" 1174 + " egid=%u sgid=%u fsgid=%u ses=%u tty=%s", 1175 + sys_getppid(), 1176 + tsk->pid, 1177 + from_kuid(&init_user_ns, tsk->loginuid), 1178 + from_kuid(&init_user_ns, cred->uid), 1179 + from_kgid(&init_user_ns, cred->gid), 1180 + from_kuid(&init_user_ns, cred->euid), 1181 + from_kuid(&init_user_ns, cred->suid), 1182 + from_kuid(&init_user_ns, cred->fsuid), 1183 + from_kgid(&init_user_ns, cred->egid), 1184 + from_kgid(&init_user_ns, cred->sgid), 1185 + from_kgid(&init_user_ns, cred->fsgid), 1186 + tsk->sessionid, tty); 1156 1187 1157 1188 get_task_comm(name, tsk); 1158 1189 audit_log_format(ab, " comm="); ··· 1205 1174 } 1206 1175 audit_log_task_context(ab); 1207 1176 } 1177 + 1178 + EXPORT_SYMBOL(audit_log_task_info); 1208 1179 1209 1180 static int audit_log_pid_context(struct audit_context *context, pid_t pid, 1210 1181 kuid_t auid, kuid_t uid, unsigned int sessionid, ··· 1613 1580 1614 1581 static void audit_log_exit(struct audit_context *context, struct task_struct *tsk) 1615 1582 { 1616 - const struct cred *cred; 1617 1583 int i, call_panic = 0; 1618 1584 struct 
audit_buffer *ab; 1619 1585 struct audit_aux_data *aux; 1620 - const char *tty; 1621 1586 struct audit_names *n; 1622 1587 1623 1588 /* tsk == current */ 1624 - context->pid = tsk->pid; 1625 - if (!context->ppid) 1626 - context->ppid = sys_getppid(); 1627 - cred = current_cred(); 1628 - context->uid = cred->uid; 1629 - context->gid = cred->gid; 1630 - context->euid = cred->euid; 1631 - context->suid = cred->suid; 1632 - context->fsuid = cred->fsuid; 1633 - context->egid = cred->egid; 1634 - context->sgid = cred->sgid; 1635 - context->fsgid = cred->fsgid; 1636 1589 context->personality = tsk->personality; 1637 1590 1638 1591 ab = audit_log_start(context, GFP_KERNEL, AUDIT_SYSCALL); ··· 1633 1614 (context->return_valid==AUDITSC_SUCCESS)?"yes":"no", 1634 1615 context->return_code); 1635 1616 1636 - spin_lock_irq(&tsk->sighand->siglock); 1637 - if (tsk->signal && tsk->signal->tty && tsk->signal->tty->name) 1638 - tty = tsk->signal->tty->name; 1639 - else 1640 - tty = "(none)"; 1641 - spin_unlock_irq(&tsk->sighand->siglock); 1642 - 1643 1617 audit_log_format(ab, 1644 - " a0=%lx a1=%lx a2=%lx a3=%lx items=%d" 1645 - " ppid=%d pid=%d auid=%u uid=%u gid=%u" 1646 - " euid=%u suid=%u fsuid=%u" 1647 - " egid=%u sgid=%u fsgid=%u tty=%s ses=%u", 1648 - context->argv[0], 1649 - context->argv[1], 1650 - context->argv[2], 1651 - context->argv[3], 1652 - context->name_count, 1653 - context->ppid, 1654 - context->pid, 1655 - from_kuid(&init_user_ns, tsk->loginuid), 1656 - from_kuid(&init_user_ns, context->uid), 1657 - from_kgid(&init_user_ns, context->gid), 1658 - from_kuid(&init_user_ns, context->euid), 1659 - from_kuid(&init_user_ns, context->suid), 1660 - from_kuid(&init_user_ns, context->fsuid), 1661 - from_kgid(&init_user_ns, context->egid), 1662 - from_kgid(&init_user_ns, context->sgid), 1663 - from_kgid(&init_user_ns, context->fsgid), 1664 - tty, 1665 - tsk->sessionid); 1666 - 1618 + " a0=%lx a1=%lx a2=%lx a3=%lx items=%d", 1619 + context->argv[0], 1620 + context->argv[1], 
1621 + context->argv[2], 1622 + context->argv[3], 1623 + context->name_count); 1667 1624 1668 1625 audit_log_task_info(ab, tsk); 1669 1626 audit_log_key(ab, context->filterkey);
+2 -1
kernel/ptrace.c
··· 180 180 return has_ns_capability(current, ns, CAP_SYS_PTRACE); 181 181 } 182 182 183 - int __ptrace_may_access(struct task_struct *task, unsigned int mode) 183 + /* Returns 0 on success, -errno on denial. */ 184 + static int __ptrace_may_access(struct task_struct *task, unsigned int mode) 184 185 { 185 186 const struct cred *cred = current_cred(), *tcred; 186 187
+16 -8
samples/seccomp/Makefile
··· 18 18 bpf-direct-objs := bpf-direct.o 19 19 20 20 # Try to match the kernel target. 21 - ifeq ($(CONFIG_64BIT),) 22 - HOSTCFLAGS_bpf-direct.o += -m32 23 - HOSTCFLAGS_dropper.o += -m32 24 - HOSTCFLAGS_bpf-helper.o += -m32 25 - HOSTCFLAGS_bpf-fancy.o += -m32 26 - HOSTLOADLIBES_bpf-direct += -m32 27 - HOSTLOADLIBES_bpf-fancy += -m32 28 - HOSTLOADLIBES_dropper += -m32 21 + ifndef CONFIG_64BIT 22 + 23 + # s390 has -m31 flag to build 31 bit binaries 24 + ifndef CONFIG_S390 25 + MFLAG = -m32 26 + else 27 + MFLAG = -m31 28 + endif 29 + 30 + HOSTCFLAGS_bpf-direct.o += $(MFLAG) 31 + HOSTCFLAGS_dropper.o += $(MFLAG) 32 + HOSTCFLAGS_bpf-helper.o += $(MFLAG) 33 + HOSTCFLAGS_bpf-fancy.o += $(MFLAG) 34 + HOSTLOADLIBES_bpf-direct += $(MFLAG) 35 + HOSTLOADLIBES_bpf-fancy += $(MFLAG) 36 + HOSTLOADLIBES_dropper += $(MFLAG) 29 37 endif 30 38 31 39 # Tell kbuild to always build the programs
+10 -5
samples/seccomp/bpf-helper.h
··· 59 59 #define FIND_LABEL(labels, label) seccomp_bpf_label((labels), #label) 60 60 61 61 #define EXPAND(...) __VA_ARGS__ 62 + 63 + /* Ensure that we load the logically correct offset. */ 64 + #if __BYTE_ORDER == __LITTLE_ENDIAN 65 + #define LO_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) 66 + #elif __BYTE_ORDER == __BIG_ENDIAN 67 + #define LO_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) + sizeof(__u32) 68 + #else 69 + #error "Unknown endianness" 70 + #endif 71 + 62 72 /* Map all width-sensitive operations */ 63 73 #if __BITS_PER_LONG == 32 64 74 ··· 80 70 #define JLE(x, jt) JLE32(x, EXPAND(jt)) 81 71 #define JA(x, jt) JA32(x, EXPAND(jt)) 82 72 #define ARG(i) ARG_32(i) 83 - #define LO_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) 84 73 85 74 #elif __BITS_PER_LONG == 64 86 75 87 76 /* Ensure that we load the logically correct offset. */ 88 77 #if __BYTE_ORDER == __LITTLE_ENDIAN 89 78 #define ENDIAN(_lo, _hi) _lo, _hi 90 - #define LO_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) 91 79 #define HI_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) + sizeof(__u32) 92 80 #elif __BYTE_ORDER == __BIG_ENDIAN 93 81 #define ENDIAN(_lo, _hi) _hi, _lo 94 - #define LO_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) + sizeof(__u32) 95 82 #define HI_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) 96 - #else 97 - #error "Unknown endianness" 98 83 #endif 99 84 100 85 union arg64 {
+3
security/integrity/evm/evm_main.c
··· 34 34 #ifdef CONFIG_SECURITY_SMACK 35 35 XATTR_NAME_SMACK, 36 36 #endif 37 + #ifdef CONFIG_IMA_APPRAISE 38 + XATTR_NAME_IMA, 39 + #endif 37 40 XATTR_NAME_CAPS, 38 41 NULL 39 42 };
+28 -36
security/integrity/iint.c
··· 22 22 #include "integrity.h" 23 23 24 24 static struct rb_root integrity_iint_tree = RB_ROOT; 25 - static DEFINE_SPINLOCK(integrity_iint_lock); 25 + static DEFINE_RWLOCK(integrity_iint_lock); 26 26 static struct kmem_cache *iint_cache __read_mostly; 27 27 28 28 int iint_initialized; ··· 34 34 { 35 35 struct integrity_iint_cache *iint; 36 36 struct rb_node *n = integrity_iint_tree.rb_node; 37 - 38 - assert_spin_locked(&integrity_iint_lock); 39 37 40 38 while (n) { 41 39 iint = rb_entry(n, struct integrity_iint_cache, rb_node); ··· 61 63 if (!IS_IMA(inode)) 62 64 return NULL; 63 65 64 - spin_lock(&integrity_iint_lock); 66 + read_lock(&integrity_iint_lock); 65 67 iint = __integrity_iint_find(inode); 66 - spin_unlock(&integrity_iint_lock); 68 + read_unlock(&integrity_iint_lock); 67 69 68 70 return iint; 69 71 } ··· 72 74 { 73 75 iint->version = 0; 74 76 iint->flags = 0UL; 77 + iint->ima_status = INTEGRITY_UNKNOWN; 75 78 iint->evm_status = INTEGRITY_UNKNOWN; 76 79 kmem_cache_free(iint_cache, iint); 77 80 } 78 81 79 82 /** 80 - * integrity_inode_alloc - allocate an iint associated with an inode 83 + * integrity_inode_get - find or allocate an iint associated with an inode 81 84 * @inode: pointer to the inode 85 + * @return: allocated iint 86 + * 87 + * Caller must lock i_mutex 82 88 */ 83 - int integrity_inode_alloc(struct inode *inode) 89 + struct integrity_iint_cache *integrity_inode_get(struct inode *inode) 84 90 { 85 91 struct rb_node **p; 86 - struct rb_node *new_node, *parent = NULL; 87 - struct integrity_iint_cache *new_iint, *test_iint; 88 - int rc; 92 + struct rb_node *node, *parent = NULL; 93 + struct integrity_iint_cache *iint, *test_iint; 89 94 90 - new_iint = kmem_cache_alloc(iint_cache, GFP_NOFS); 91 - if (!new_iint) 92 - return -ENOMEM; 95 + iint = integrity_iint_find(inode); 96 + if (iint) 97 + return iint; 93 98 94 - new_iint->inode = inode; 95 - new_node = &new_iint->rb_node; 99 + iint = kmem_cache_alloc(iint_cache, GFP_NOFS); 100 + if (!iint) 101 + 
return NULL; 96 102 97 - mutex_lock(&inode->i_mutex); /* i_flags */ 98 - spin_lock(&integrity_iint_lock); 103 + write_lock(&integrity_iint_lock); 99 104 100 105 p = &integrity_iint_tree.rb_node; 101 106 while (*p) { 102 107 parent = *p; 103 108 test_iint = rb_entry(parent, struct integrity_iint_cache, 104 109 rb_node); 105 - rc = -EEXIST; 106 110 if (inode < test_iint->inode) 107 111 p = &(*p)->rb_left; 108 - else if (inode > test_iint->inode) 109 - p = &(*p)->rb_right; 110 112 else 111 - goto out_err; 113 + p = &(*p)->rb_right; 112 114 } 113 115 116 + iint->inode = inode; 117 + node = &iint->rb_node; 114 118 inode->i_flags |= S_IMA; 115 - rb_link_node(new_node, parent, p); 116 - rb_insert_color(new_node, &integrity_iint_tree); 119 + rb_link_node(node, parent, p); 120 + rb_insert_color(node, &integrity_iint_tree); 117 121 118 - spin_unlock(&integrity_iint_lock); 119 - mutex_unlock(&inode->i_mutex); /* i_flags */ 120 - 121 - return 0; 122 - out_err: 123 - spin_unlock(&integrity_iint_lock); 124 - mutex_unlock(&inode->i_mutex); /* i_flags */ 125 - iint_free(new_iint); 126 - 127 - return rc; 122 + write_unlock(&integrity_iint_lock); 123 + return iint; 128 124 } 129 125 130 126 /** ··· 134 142 if (!IS_IMA(inode)) 135 143 return; 136 144 137 - spin_lock(&integrity_iint_lock); 145 + write_lock(&integrity_iint_lock); 138 146 iint = __integrity_iint_find(inode); 139 147 rb_erase(&iint->rb_node, &integrity_iint_tree); 140 - spin_unlock(&integrity_iint_lock); 148 + write_unlock(&integrity_iint_lock); 141 149 142 150 iint_free(iint); 143 151 } ··· 149 157 memset(iint, 0, sizeof *iint); 150 158 iint->version = 0; 151 159 iint->flags = 0UL; 152 - mutex_init(&iint->mutex); 160 + iint->ima_status = INTEGRITY_UNKNOWN; 153 161 iint->evm_status = INTEGRITY_UNKNOWN; 154 162 } 155 163
+16
security/integrity/ima/Kconfig
··· 11 11 select CRYPTO_SHA1 12 12 select TCG_TPM if HAS_IOMEM && !UML 13 13 select TCG_TIS if TCG_TPM && X86 14 + select TCG_IBMVTPM if TCG_TPM && PPC64 14 15 help 15 16 The Trusted Computing Group(TCG) runtime Integrity 16 17 Measurement Architecture(IMA) maintains a list of hash ··· 56 55 default y 57 56 help 58 57 Disabling this option will disregard LSM based policy rules. 58 + 59 + config IMA_APPRAISE 60 + bool "Appraise integrity measurements" 61 + depends on IMA 62 + default n 63 + help 64 + This option enables local measurement integrity appraisal. 65 + It requires the system to be labeled with a security extended 66 + attribute containing the file hash measurement. To protect 67 + the security extended attributes from offline attack, enable 68 + and configure EVM. 69 + 70 + For more information on integrity appraisal refer to: 71 + <http://linux-ima.sourceforge.net> 72 + If unsure, say N.
+1
security/integrity/ima/Makefile
··· 8 8 ima-y := ima_fs.o ima_queue.o ima_init.o ima_main.o ima_crypto.o ima_api.o \ 9 9 ima_policy.o 10 10 ima-$(CONFIG_IMA_AUDIT) += ima_audit.o 11 + ima-$(CONFIG_IMA_APPRAISE) += ima_appraise.o
+37 -2
security/integrity/ima/ima.h
··· 40 40 extern int ima_initialized; 41 41 extern int ima_used_chip; 42 42 extern char *ima_hash; 43 + extern int ima_appraise; 43 44 44 45 /* IMA inode template definition */ 45 46 struct ima_template_data { ··· 108 107 } 109 108 110 109 /* LIM API function definitions */ 110 + int ima_get_action(struct inode *inode, int mask, int function); 111 111 int ima_must_measure(struct inode *inode, int mask, int function); 112 112 int ima_collect_measurement(struct integrity_iint_cache *iint, 113 113 struct file *file); 114 114 void ima_store_measurement(struct integrity_iint_cache *iint, struct file *file, 115 + const unsigned char *filename); 116 + void ima_audit_measurement(struct integrity_iint_cache *iint, 115 117 const unsigned char *filename); 116 118 int ima_store_template(struct ima_template_entry *entry, int violation, 117 119 struct inode *inode); ··· 127 123 struct integrity_iint_cache *integrity_iint_find(struct inode *inode); 128 124 129 125 /* IMA policy related functions */ 130 - enum ima_hooks { FILE_CHECK = 1, FILE_MMAP, BPRM_CHECK }; 126 + enum ima_hooks { FILE_CHECK = 1, FILE_MMAP, BPRM_CHECK, POST_SETATTR }; 131 127 132 - int ima_match_policy(struct inode *inode, enum ima_hooks func, int mask); 128 + int ima_match_policy(struct inode *inode, enum ima_hooks func, int mask, 129 + int flags); 133 130 void ima_init_policy(void); 134 131 void ima_update_policy(void); 135 132 ssize_t ima_parse_add_rule(char *); 136 133 void ima_delete_rules(void); 134 + 135 + /* Appraise integrity measurements */ 136 + #define IMA_APPRAISE_ENFORCE 0x01 137 + #define IMA_APPRAISE_FIX 0x02 138 + 139 + #ifdef CONFIG_IMA_APPRAISE 140 + int ima_appraise_measurement(struct integrity_iint_cache *iint, 141 + struct file *file, const unsigned char *filename); 142 + int ima_must_appraise(struct inode *inode, enum ima_hooks func, int mask); 143 + void ima_update_xattr(struct integrity_iint_cache *iint, struct file *file); 144 + 145 + #else 146 + static inline int 
ima_appraise_measurement(struct integrity_iint_cache *iint, 147 + struct file *file, 148 + const unsigned char *filename) 149 + { 150 + return INTEGRITY_UNKNOWN; 151 + } 152 + 153 + static inline int ima_must_appraise(struct inode *inode, 154 + enum ima_hooks func, int mask) 155 + { 156 + return 0; 157 + } 158 + 159 + static inline void ima_update_xattr(struct integrity_iint_cache *iint, 160 + struct file *file) 161 + { 162 + } 163 + #endif 137 164 138 165 /* LSM based policy rules require audit */ 139 166 #ifdef CONFIG_IMA_LSM_RULES
+69 -17
security/integrity/ima/ima_api.c
··· 9 9 * License. 10 10 * 11 11 * File: ima_api.c 12 - * Implements must_measure, collect_measurement, store_measurement, 13 - * and store_template. 12 + * Implements must_appraise_or_measure, collect_measurement, 13 + * appraise_measurement, store_measurement and store_template. 14 14 */ 15 15 #include <linux/module.h> 16 16 #include <linux/slab.h> 17 - 17 + #include <linux/file.h> 18 + #include <linux/fs.h> 19 + #include <linux/xattr.h> 20 + #include <linux/evm.h> 18 21 #include "ima.h" 22 + 19 23 static const char *IMA_TEMPLATE_NAME = "ima"; 20 24 21 25 /* ··· 97 93 } 98 94 99 95 /** 100 - * ima_must_measure - measure decision based on policy. 96 + * ima_get_action - appraise & measure decision based on policy. 101 97 * @inode: pointer to inode to measure 102 98 * @mask: contains the permission mask (MAY_READ, MAY_WRITE, MAY_EXECUTE) 103 99 * @function: calling function (FILE_CHECK, BPRM_CHECK, FILE_MMAP) ··· 109 105 * mask: contains the permission mask 110 106 * fsmagic: hex value 111 107 * 112 - * Return 0 to measure. For matching a DONT_MEASURE policy, no policy, 113 - * or other error, return an error code. 114 - */ 108 + * Returns IMA_MEASURE, IMA_APPRAISE mask. 109 + * 110 + */ 111 + int ima_get_action(struct inode *inode, int mask, int function) 112 + { 113 + int flags = IMA_MEASURE | IMA_AUDIT | IMA_APPRAISE; 114 + 115 + if (!ima_appraise) 116 + flags &= ~IMA_APPRAISE; 117 + 118 + return ima_match_policy(inode, function, mask, flags); 119 + } 120 + 115 121 int ima_must_measure(struct inode *inode, int mask, int function) 116 122 { 117 - int must_measure; 118 - 119 - must_measure = ima_match_policy(inode, function, mask); 120 - return must_measure ? 
0 : -EACCES; 123 + return ima_match_policy(inode, function, mask, IMA_MEASURE); 121 124 } 122 125 123 126 /* ··· 140 129 int ima_collect_measurement(struct integrity_iint_cache *iint, 141 130 struct file *file) 142 131 { 143 - int result = -EEXIST; 132 + struct inode *inode = file->f_dentry->d_inode; 133 + const char *filename = file->f_dentry->d_name.name; 134 + int result = 0; 144 135 145 - if (!(iint->flags & IMA_MEASURED)) { 136 + if (!(iint->flags & IMA_COLLECTED)) { 146 137 u64 i_version = file->f_dentry->d_inode->i_version; 147 138 148 - memset(iint->digest, 0, IMA_DIGEST_SIZE); 149 - result = ima_calc_hash(file, iint->digest); 150 - if (!result) 139 + iint->ima_xattr.type = IMA_XATTR_DIGEST; 140 + result = ima_calc_hash(file, iint->ima_xattr.digest); 141 + if (!result) { 151 142 iint->version = i_version; 143 + iint->flags |= IMA_COLLECTED; 144 + } 152 145 } 146 + if (result) 147 + integrity_audit_msg(AUDIT_INTEGRITY_DATA, inode, 148 + filename, "collect_data", "failed", 149 + result, 0); 153 150 return result; 154 151 } 155 152 ··· 186 167 struct ima_template_entry *entry; 187 168 int violation = 0; 188 169 170 + if (iint->flags & IMA_MEASURED) 171 + return; 172 + 189 173 entry = kmalloc(sizeof(*entry), GFP_KERNEL); 190 174 if (!entry) { 191 175 integrity_audit_msg(AUDIT_INTEGRITY_PCR, inode, filename, ··· 196 174 return; 197 175 } 198 176 memset(&entry->template, 0, sizeof(entry->template)); 199 - memcpy(entry->template.digest, iint->digest, IMA_DIGEST_SIZE); 177 + memcpy(entry->template.digest, iint->ima_xattr.digest, IMA_DIGEST_SIZE); 200 178 strcpy(entry->template.file_name, 201 179 (strlen(filename) > IMA_EVENT_NAME_LEN_MAX) ? 
202 180 file->f_dentry->d_name.name : filename); ··· 206 184 iint->flags |= IMA_MEASURED; 207 185 if (result < 0) 208 186 kfree(entry); 187 + } 188 + 189 + void ima_audit_measurement(struct integrity_iint_cache *iint, 190 + const unsigned char *filename) 191 + { 192 + struct audit_buffer *ab; 193 + char hash[(IMA_DIGEST_SIZE * 2) + 1]; 194 + int i; 195 + 196 + if (iint->flags & IMA_AUDITED) 197 + return; 198 + 199 + for (i = 0; i < IMA_DIGEST_SIZE; i++) 200 + hex_byte_pack(hash + (i * 2), iint->ima_xattr.digest[i]); 201 + hash[i * 2] = '\0'; 202 + 203 + ab = audit_log_start(current->audit_context, GFP_KERNEL, 204 + AUDIT_INTEGRITY_RULE); 205 + if (!ab) 206 + return; 207 + 208 + audit_log_format(ab, "file="); 209 + audit_log_untrustedstring(ab, filename); 210 + audit_log_format(ab, " hash="); 211 + audit_log_untrustedstring(ab, hash); 212 + 213 + audit_log_task_info(ab, current); 214 + audit_log_end(ab); 215 + 216 + iint->flags |= IMA_AUDITED; 209 217 }
+263
security/integrity/ima/ima_appraise.c
··· 1 + /* 2 + * Copyright (C) 2011 IBM Corporation 3 + * 4 + * Author: 5 + * Mimi Zohar <zohar@us.ibm.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation, version 2 of the License. 10 + */ 11 + #include <linux/module.h> 12 + #include <linux/file.h> 13 + #include <linux/fs.h> 14 + #include <linux/xattr.h> 15 + #include <linux/magic.h> 16 + #include <linux/ima.h> 17 + #include <linux/evm.h> 18 + 19 + #include "ima.h" 20 + 21 + static int __init default_appraise_setup(char *str) 22 + { 23 + if (strncmp(str, "off", 3) == 0) 24 + ima_appraise = 0; 25 + else if (strncmp(str, "fix", 3) == 0) 26 + ima_appraise = IMA_APPRAISE_FIX; 27 + return 1; 28 + } 29 + 30 + __setup("ima_appraise=", default_appraise_setup); 31 + 32 + /* 33 + * ima_must_appraise - set appraise flag 34 + * 35 + * Return 1 to appraise 36 + */ 37 + int ima_must_appraise(struct inode *inode, enum ima_hooks func, int mask) 38 + { 39 + if (!ima_appraise) 40 + return 0; 41 + 42 + return ima_match_policy(inode, func, mask, IMA_APPRAISE); 43 + } 44 + 45 + static void ima_fix_xattr(struct dentry *dentry, 46 + struct integrity_iint_cache *iint) 47 + { 48 + iint->ima_xattr.type = IMA_XATTR_DIGEST; 49 + __vfs_setxattr_noperm(dentry, XATTR_NAME_IMA, (u8 *)&iint->ima_xattr, 50 + sizeof iint->ima_xattr, 0); 51 + } 52 + 53 + /* 54 + * ima_appraise_measurement - appraise file measurement 55 + * 56 + * Call evm_verifyxattr() to verify the integrity of 'security.ima'. 57 + * Assuming success, compare the xattr hash with the collected measurement. 
58 + * 59 + * Return 0 on success, error code otherwise 60 + */ 61 + int ima_appraise_measurement(struct integrity_iint_cache *iint, 62 + struct file *file, const unsigned char *filename) 63 + { 64 + struct dentry *dentry = file->f_dentry; 65 + struct inode *inode = dentry->d_inode; 66 + struct evm_ima_xattr_data *xattr_value = NULL; 67 + enum integrity_status status = INTEGRITY_UNKNOWN; 68 + const char *op = "appraise_data"; 69 + char *cause = "unknown"; 70 + int rc; 71 + 72 + if (!ima_appraise) 73 + return 0; 74 + if (!inode->i_op->getxattr) 75 + return INTEGRITY_UNKNOWN; 76 + 77 + if (iint->flags & IMA_APPRAISED) 78 + return iint->ima_status; 79 + 80 + rc = vfs_getxattr_alloc(dentry, XATTR_NAME_IMA, (char **)&xattr_value, 81 + 0, GFP_NOFS); 82 + if (rc <= 0) { 83 + if (rc && rc != -ENODATA) 84 + goto out; 85 + 86 + cause = "missing-hash"; 87 + status = 88 + (inode->i_size == 0) ? INTEGRITY_PASS : INTEGRITY_NOLABEL; 89 + goto out; 90 + } 91 + 92 + status = evm_verifyxattr(dentry, XATTR_NAME_IMA, xattr_value, rc, iint); 93 + if ((status != INTEGRITY_PASS) && (status != INTEGRITY_UNKNOWN)) { 94 + if ((status == INTEGRITY_NOLABEL) 95 + || (status == INTEGRITY_NOXATTRS)) 96 + cause = "missing-HMAC"; 97 + else if (status == INTEGRITY_FAIL) 98 + cause = "invalid-HMAC"; 99 + goto out; 100 + } 101 + 102 + switch (xattr_value->type) { 103 + case IMA_XATTR_DIGEST: 104 + rc = memcmp(xattr_value->digest, iint->ima_xattr.digest, 105 + IMA_DIGEST_SIZE); 106 + if (rc) { 107 + cause = "invalid-hash"; 108 + status = INTEGRITY_FAIL; 109 + print_hex_dump_bytes("security.ima: ", DUMP_PREFIX_NONE, 110 + xattr_value, sizeof(*xattr_value)); 111 + print_hex_dump_bytes("collected: ", DUMP_PREFIX_NONE, 112 + (u8 *)&iint->ima_xattr, 113 + sizeof iint->ima_xattr); 114 + break; 115 + } 116 + status = INTEGRITY_PASS; 117 + break; 118 + case EVM_IMA_XATTR_DIGSIG: 119 + iint->flags |= IMA_DIGSIG; 120 + rc = integrity_digsig_verify(INTEGRITY_KEYRING_IMA, 121 + xattr_value->digest, rc - 1, 122 + 
iint->ima_xattr.digest, 123 + IMA_DIGEST_SIZE); 124 + if (rc == -EOPNOTSUPP) { 125 + status = INTEGRITY_UNKNOWN; 126 + } else if (rc) { 127 + cause = "invalid-signature"; 128 + status = INTEGRITY_FAIL; 129 + } else { 130 + status = INTEGRITY_PASS; 131 + } 132 + break; 133 + default: 134 + status = INTEGRITY_UNKNOWN; 135 + cause = "unknown-ima-data"; 136 + break; 137 + } 138 + 139 + out: 140 + if (status != INTEGRITY_PASS) { 141 + if ((ima_appraise & IMA_APPRAISE_FIX) && 142 + (!xattr_value || 143 + xattr_value->type != EVM_IMA_XATTR_DIGSIG)) { 144 + ima_fix_xattr(dentry, iint); 145 + status = INTEGRITY_PASS; 146 + } 147 + integrity_audit_msg(AUDIT_INTEGRITY_DATA, inode, filename, 148 + op, cause, rc, 0); 149 + } else { 150 + iint->flags |= IMA_APPRAISED; 151 + } 152 + iint->ima_status = status; 153 + kfree(xattr_value); 154 + return status; 155 + } 156 + 157 + /* 158 + * ima_update_xattr - update 'security.ima' hash value 159 + */ 160 + void ima_update_xattr(struct integrity_iint_cache *iint, struct file *file) 161 + { 162 + struct dentry *dentry = file->f_dentry; 163 + int rc = 0; 164 + 165 + /* do not collect and update hash for digital signatures */ 166 + if (iint->flags & IMA_DIGSIG) 167 + return; 168 + 169 + rc = ima_collect_measurement(iint, file); 170 + if (rc < 0) 171 + return; 172 + 173 + ima_fix_xattr(dentry, iint); 174 + } 175 + 176 + /** 177 + * ima_inode_post_setattr - reflect file metadata changes 178 + * @dentry: pointer to the affected dentry 179 + * 180 + * Changes to a dentry's metadata might result in needing to appraise. 181 + * 182 + * This function is called from notify_change(), which expects the caller 183 + * to lock the inode's i_mutex. 
184 + */ 185 + void ima_inode_post_setattr(struct dentry *dentry) 186 + { 187 + struct inode *inode = dentry->d_inode; 188 + struct integrity_iint_cache *iint; 189 + int must_appraise, rc; 190 + 191 + if (!ima_initialized || !ima_appraise || !S_ISREG(inode->i_mode) 192 + || !inode->i_op->removexattr) 193 + return; 194 + 195 + must_appraise = ima_must_appraise(inode, MAY_ACCESS, POST_SETATTR); 196 + iint = integrity_iint_find(inode); 197 + if (iint) { 198 + if (must_appraise) 199 + iint->flags |= IMA_APPRAISE; 200 + else 201 + iint->flags &= ~(IMA_APPRAISE | IMA_APPRAISED); 202 + } 203 + if (!must_appraise) 204 + rc = inode->i_op->removexattr(dentry, XATTR_NAME_IMA); 205 + return; 206 + } 207 + 208 + /* 209 + * ima_protect_xattr - protect 'security.ima' 210 + * 211 + * Ensure that not just anyone can modify or remove 'security.ima'. 212 + */ 213 + static int ima_protect_xattr(struct dentry *dentry, const char *xattr_name, 214 + const void *xattr_value, size_t xattr_value_len) 215 + { 216 + if (strcmp(xattr_name, XATTR_NAME_IMA) == 0) { 217 + if (!capable(CAP_SYS_ADMIN)) 218 + return -EPERM; 219 + return 1; 220 + } 221 + return 0; 222 + } 223 + 224 + static void ima_reset_appraise_flags(struct inode *inode) 225 + { 226 + struct integrity_iint_cache *iint; 227 + 228 + if (!ima_initialized || !ima_appraise || !S_ISREG(inode->i_mode)) 229 + return; 230 + 231 + iint = integrity_iint_find(inode); 232 + if (!iint) 233 + return; 234 + 235 + iint->flags &= ~IMA_DONE_MASK; 236 + return; 237 + } 238 + 239 + int ima_inode_setxattr(struct dentry *dentry, const char *xattr_name, 240 + const void *xattr_value, size_t xattr_value_len) 241 + { 242 + int result; 243 + 244 + result = ima_protect_xattr(dentry, xattr_name, xattr_value, 245 + xattr_value_len); 246 + if (result == 1) { 247 + ima_reset_appraise_flags(dentry->d_inode); 248 + result = 0; 249 + } 250 + return result; 251 + } 252 + 253 + int ima_inode_removexattr(struct dentry *dentry, const char *xattr_name) 254 + { 255 + int 
result; 256 + 257 + result = ima_protect_xattr(dentry, xattr_name, NULL, 0); 258 + if (result == 1) { 259 + ima_reset_appraise_flags(dentry->d_inode); 260 + result = 0; 261 + } 262 + return result; 263 + }
+7 -1
security/integrity/ima/ima_crypto.c
··· 48 48 struct scatterlist sg[1]; 49 49 loff_t i_size, offset = 0; 50 50 char *rbuf; 51 - int rc; 51 + int rc, read = 0; 52 52 53 53 rc = init_desc(&desc); 54 54 if (rc != 0) ··· 58 58 if (!rbuf) { 59 59 rc = -ENOMEM; 60 60 goto out; 61 + } 62 + if (!(file->f_mode & FMODE_READ)) { 63 + file->f_mode |= FMODE_READ; 64 + read = 1; 61 65 } 62 66 i_size = i_size_read(file->f_dentry->d_inode); 63 67 while (offset < i_size) { ··· 84 80 kfree(rbuf); 85 81 if (!rc) 86 82 rc = crypto_hash_final(&desc, digest); 83 + if (read) 84 + file->f_mode &= ~FMODE_READ; 87 85 out: 88 86 crypto_free_hash(desc.tfm); 89 87 return rc;
+60 -33
security/integrity/ima/ima_main.c
··· 22 22 #include <linux/mount.h>
23 23 #include <linux/mman.h>
24 24 #include <linux/slab.h>
25 + #include <linux/xattr.h>
25 26 #include <linux/ima.h>
26 27
27 28 #include "ima.h"
28 29
29 30 int ima_initialized;
31 +
32 + #ifdef CONFIG_IMA_APPRAISE
33 + int ima_appraise = IMA_APPRAISE_ENFORCE;
34 + #else
35 + int ima_appraise;
36 + #endif
30 37
31 38 char *ima_hash = "sha1";
32 39 static int __init hash_setup(char *str)
··· 59 52     struct dentry *dentry = file->f_path.dentry;
60 53     struct inode *inode = dentry->d_inode;
61 54     fmode_t mode = file->f_mode;
62 -     int rc;
55 +     int must_measure;
63 56     bool send_tomtou = false, send_writers = false;
64 57     unsigned char *pathname = NULL, *pathbuf = NULL;
65 58
··· 74 67         goto out;
75 68     }
76 69
77 -     rc = ima_must_measure(inode, MAY_READ, FILE_CHECK);
78 -     if (rc < 0)
70 +     must_measure = ima_must_measure(inode, MAY_READ, FILE_CHECK);
71 +     if (!must_measure)
79 72         goto out;
80 73
81 74     if (atomic_read(&inode->i_writecount) > 0)
··· 107 100 }
108 101
109 102 static void ima_check_last_writer(struct integrity_iint_cache *iint,
110 -                                   struct inode *inode,
111 -                                   struct file *file)
103 +                                   struct inode *inode, struct file *file)
112 104 {
113 105     fmode_t mode = file->f_mode;
114 106
115 -     mutex_lock(&iint->mutex);
116 -     if (mode & FMODE_WRITE &&
117 -         atomic_read(&inode->i_writecount) == 1 &&
118 -         iint->version != inode->i_version)
119 -         iint->flags &= ~IMA_MEASURED;
120 -     mutex_unlock(&iint->mutex);
107 +     if (!(mode & FMODE_WRITE))
108 +         return;
109 +
110 +     mutex_lock(&inode->i_mutex);
111 +     if (atomic_read(&inode->i_writecount) == 1 &&
112 +         iint->version != inode->i_version) {
113 +         iint->flags &= ~IMA_DONE_MASK;
114 +         if (iint->flags & IMA_APPRAISE)
115 +             ima_update_xattr(iint, file);
116 +     }
117 +     mutex_unlock(&inode->i_mutex);
121 118 }
122 119
123 120 /**
··· 151 140     struct inode *inode = file->f_dentry->d_inode;
152 141     struct integrity_iint_cache *iint;
153 142     unsigned char *pathname = NULL, *pathbuf = NULL;
154 -     int rc = 0;
143 +     int rc = -ENOMEM, action, must_appraise;
155 144
156 145     if (!ima_initialized || !S_ISREG(inode->i_mode))
157 146         return 0;
158 147
159 -     rc = ima_must_measure(inode, mask, function);
160 -     if (rc != 0)
161 -         return rc;
162 - retry:
163 -     iint = integrity_iint_find(inode);
164 -     if (!iint) {
165 -         rc = integrity_inode_alloc(inode);
166 -         if (!rc || rc == -EEXIST)
167 -             goto retry;
168 -         return rc;
169 -     }
148 +     /* Determine if in appraise/audit/measurement policy,
149 +      * returns IMA_MEASURE, IMA_APPRAISE, IMA_AUDIT bitmask. */
150 +     action = ima_get_action(inode, mask, function);
151 +     if (!action)
152 +         return 0;
170 153
171 -     mutex_lock(&iint->mutex);
154 +     must_appraise = action & IMA_APPRAISE;
172 155
173 -     rc = iint->flags & IMA_MEASURED ? 1 : 0;
174 -     if (rc != 0)
156 +     mutex_lock(&inode->i_mutex);
157 +
158 +     iint = integrity_inode_get(inode);
159 +     if (!iint)
175 160         goto out;
161 +
162 +     /* Determine if already appraised/measured based on bitmask
163 +      * (IMA_MEASURE, IMA_MEASURED, IMA_APPRAISE, IMA_APPRAISED,
164 +      * IMA_AUDIT, IMA_AUDITED) */
165 +     iint->flags |= action;
166 +     action &= ~((iint->flags & IMA_DONE_MASK) >> 1);
167 +
168 +     /* Nothing to do, just return existing appraised status */
169 +     if (!action) {
170 +         if (iint->flags & IMA_APPRAISED)
171 +             rc = iint->ima_status;
172 +         goto out;
173 +     }
176 174
177 175     rc = ima_collect_measurement(iint, file);
178 176     if (rc != 0)
··· 197 177             pathname = NULL;
198 178         }
199 179     }
200 -     ima_store_measurement(iint, file, !pathname ? filename : pathname);
180 +     if (action & IMA_MEASURE)
181 +         ima_store_measurement(iint, file,
182 +                               !pathname ? filename : pathname);
183 +     if (action & IMA_APPRAISE)
184 +         rc = ima_appraise_measurement(iint, file,
185 +                                       !pathname ? filename : pathname);
186 +     if (action & IMA_AUDIT)
187 +         ima_audit_measurement(iint, !pathname ? filename : pathname);
201 188     kfree(pathbuf);
202 189 out:
203 -     mutex_unlock(&iint->mutex);
204 -     return rc;
190 +     mutex_unlock(&inode->i_mutex);
191 +     return (rc && must_appraise) ? -EACCES : 0;
205 192 }
206 193
207 194 /**
··· 224 197  */
225 198 int ima_file_mmap(struct file *file, unsigned long prot)
226 199 {
227 -     int rc;
200 +     int rc = 0;
228 201
229 202     if (!file)
230 203         return 0;
231 204     if (prot & PROT_EXEC)
232 205         rc = process_measurement(file, file->f_dentry->d_name.name,
233 206                                  MAY_EXEC, FILE_MMAP);
234 -     return 0;
207 +     return (ima_appraise & IMA_APPRAISE_ENFORCE) ? rc : 0;
235 208 }
236 209
237 210 /**
··· 255 228                                 (strcmp(bprm->filename, bprm->interp) == 0) ?
256 229                                 bprm->filename : bprm->interp,
257 230                                 MAY_EXEC, BPRM_CHECK);
258 -     return 0;
231 +     return (ima_appraise & IMA_APPRAISE_ENFORCE) ? rc : 0;
259 232 }
260 233
261 234 /**
··· 276 249     rc = process_measurement(file, file->f_dentry->d_name.name,
277 250                              mask & (MAY_READ | MAY_WRITE | MAY_EXEC),
278 251                              FILE_CHECK);
279 -     return 0;
252 +     return (ima_appraise & IMA_APPRAISE_ENFORCE) ? rc : 0;
280 253 }
281 254 EXPORT_SYMBOL_GPL(ima_file_check);
282 255
+147 -46
security/integrity/ima/ima_policy.c
··· 24 24 #define IMA_MASK 0x0002
25 25 #define IMA_FSMAGIC 0x0004
26 26 #define IMA_UID 0x0008
27 + #define IMA_FOWNER 0x0010
27 28
28 - enum ima_action { UNKNOWN = -1, DONT_MEASURE = 0, MEASURE };
29 + #define UNKNOWN 0
30 + #define MEASURE 0x0001 /* same as IMA_MEASURE */
31 + #define DONT_MEASURE 0x0002
32 + #define APPRAISE 0x0004 /* same as IMA_APPRAISE */
33 + #define DONT_APPRAISE 0x0008
34 + #define AUDIT 0x0040
29 35
30 36 #define MAX_LSM_RULES 6
31 37 enum lsm_rule_types { LSM_OBJ_USER, LSM_OBJ_ROLE, LSM_OBJ_TYPE,
32 38     LSM_SUBJ_USER, LSM_SUBJ_ROLE, LSM_SUBJ_TYPE
33 39 };
34 40
35 - struct ima_measure_rule_entry {
41 + struct ima_rule_entry {
36 42     struct list_head list;
37 -     enum ima_action action;
43 +     int action;
38 44     unsigned int flags;
39 45     enum ima_hooks func;
40 46     int mask;
41 47     unsigned long fsmagic;
42 48     kuid_t uid;
49 +     kuid_t fowner;
43 50     struct {
44 51         void *rule; /* LSM file metadata specific */
45 52         int type;   /* audit type */
··· 55 48
56 49 /*
57 50  * Without LSM specific knowledge, the default policy can only be
58 -  * written in terms of .action, .func, .mask, .fsmagic, and .uid
51 +  * written in terms of .action, .func, .mask, .fsmagic, .uid, and .fowner
59 52  */
60 53
61 54 /*
··· 64 57  * normal users can easily run the machine out of memory simply building
65 58  * and running executables.
66 59  */
67 - static struct ima_measure_rule_entry default_rules[] = {
60 + static struct ima_rule_entry default_rules[] = {
68 61     {.action = DONT_MEASURE,.fsmagic = PROC_SUPER_MAGIC,.flags = IMA_FSMAGIC},
69 62     {.action = DONT_MEASURE,.fsmagic = SYSFS_MAGIC,.flags = IMA_FSMAGIC},
70 63     {.action = DONT_MEASURE,.fsmagic = DEBUGFS_MAGIC,.flags = IMA_FSMAGIC},
··· 82 75      .flags = IMA_FUNC | IMA_MASK | IMA_UID},
83 76 };
84 77
85 - static LIST_HEAD(measure_default_rules);
86 - static LIST_HEAD(measure_policy_rules);
87 - static struct list_head *ima_measure;
78 + static struct ima_rule_entry default_appraise_rules[] = {
79 +     {.action = DONT_APPRAISE,.fsmagic = PROC_SUPER_MAGIC,.flags = IMA_FSMAGIC},
80 +     {.action = DONT_APPRAISE,.fsmagic = SYSFS_MAGIC,.flags = IMA_FSMAGIC},
81 +     {.action = DONT_APPRAISE,.fsmagic = DEBUGFS_MAGIC,.flags = IMA_FSMAGIC},
82 +     {.action = DONT_APPRAISE,.fsmagic = TMPFS_MAGIC,.flags = IMA_FSMAGIC},
83 +     {.action = DONT_APPRAISE,.fsmagic = RAMFS_MAGIC,.flags = IMA_FSMAGIC},
84 +     {.action = DONT_APPRAISE,.fsmagic = DEVPTS_SUPER_MAGIC,.flags = IMA_FSMAGIC},
85 +     {.action = DONT_APPRAISE,.fsmagic = BINFMTFS_MAGIC,.flags = IMA_FSMAGIC},
86 +     {.action = DONT_APPRAISE,.fsmagic = SECURITYFS_MAGIC,.flags = IMA_FSMAGIC},
87 +     {.action = DONT_APPRAISE,.fsmagic = SELINUX_MAGIC,.flags = IMA_FSMAGIC},
88 +     {.action = DONT_APPRAISE,.fsmagic = CGROUP_SUPER_MAGIC,.flags = IMA_FSMAGIC},
89 +     {.action = APPRAISE,.fowner = GLOBAL_ROOT_UID,.flags = IMA_FOWNER},
90 + };
92 + static LIST_HEAD(ima_default_rules);
93 + static LIST_HEAD(ima_policy_rules);
94 + static struct list_head *ima_rules;
95 +
96 + static DEFINE_MUTEX(ima_rules_mutex);
90 97
91 98 static bool ima_use_tcb __initdata;
92 - static int __init default_policy_setup(char *str)
99 + static int __init default_measure_policy_setup(char *str)
93 100 {
94 101     ima_use_tcb = 1;
95 102     return 1;
96 103 }
97 - __setup("ima_tcb", default_policy_setup);
104 + __setup("ima_tcb", default_measure_policy_setup);
105 +
106 + static bool ima_use_appraise_tcb __initdata;
107 + static int __init default_appraise_policy_setup(char *str)
108 + {
109 +     ima_use_appraise_tcb = 1;
110 +     return 1;
111 + }
112 + __setup("ima_appraise_tcb", default_appraise_policy_setup);
98 113
99 114 /**
100 115  * ima_match_rules - determine whether an inode matches the measure rule.
··· 127 98  *
128 99  * Returns true on rule match, false on failure.
129 100  */
130 - static bool ima_match_rules(struct ima_measure_rule_entry *rule,
101 + static bool ima_match_rules(struct ima_rule_entry *rule,
131 102                             struct inode *inode, enum ima_hooks func, int mask)
132 103 {
133 104     struct task_struct *tsk = current;
··· 142 113         && rule->fsmagic != inode->i_sb->s_magic)
143 114         return false;
144 115     if ((rule->flags & IMA_UID) && !uid_eq(rule->uid, cred->uid))
116 +         return false;
117 +     if ((rule->flags & IMA_FOWNER) && !uid_eq(rule->fowner, inode->i_uid))
145 118         return false;
146 119     for (i = 0; i < MAX_LSM_RULES; i++) {
147 120         int rc = 0;
··· 194 163  * as elements in the list are never deleted, nor does the list
195 164  * change.)
196 165  */
197 - int ima_match_policy(struct inode *inode, enum ima_hooks func, int mask)
166 + int ima_match_policy(struct inode *inode, enum ima_hooks func, int mask,
167 +                      int flags)
198 168 {
199 -     struct ima_measure_rule_entry *entry;
169 +     struct ima_rule_entry *entry;
170 +     int action = 0, actmask = flags | (flags << 1);
200 171
201 -     list_for_each_entry(entry, ima_measure, list) {
202 -         bool rc;
172 +     list_for_each_entry(entry, ima_rules, list) {
203 173
204 -         rc = ima_match_rules(entry, inode, func, mask);
205 -         if (rc)
206 -             return entry->action;
174 +         if (!(entry->action & actmask))
175 +             continue;
176 +
177 +         if (!ima_match_rules(entry, inode, func, mask))
178 +             continue;
179 +
180 +         action |= entry->action & IMA_DO_MASK;
181 +         if (entry->action & IMA_DO_MASK)
182 +             actmask &= ~(entry->action | entry->action << 1);
183 +         else
184 +             actmask &= ~(entry->action | entry->action >> 1);
185 +
186 +         if (!actmask)
187 +             break;
207 188     }
208 -     return 0;
189 +
190 +     return action;
209 191 }
210 192
211 193 /**
212 194  * ima_init_policy - initialize the default measure rules.
213 195  *
214 -  * ima_measure points to either the measure_default_rules or the
215 -  * the new measure_policy_rules.
196 +  * ima_rules points to either the ima_default_rules or the
197 +  * the new ima_policy_rules.
216 198  */
217 199 void __init ima_init_policy(void)
218 200 {
219 -     int i, entries;
201 +     int i, measure_entries, appraise_entries;
220 202
221 203     /* if !ima_use_tcb set entries = 0 so we load NO default rules */
222 -     if (ima_use_tcb)
223 -         entries = ARRAY_SIZE(default_rules);
224 -     else
225 -         entries = 0;
204 +     measure_entries = ima_use_tcb ? ARRAY_SIZE(default_rules) : 0;
205 +     appraise_entries = ima_use_appraise_tcb ?
206 +                        ARRAY_SIZE(default_appraise_rules) : 0;
207 +
208 +     for (i = 0; i < measure_entries + appraise_entries; i++) {
209 +         if (i < measure_entries)
210 +             list_add_tail(&default_rules[i].list,
211 +                           &ima_default_rules);
212 +         else {
213 +             int j = i - measure_entries;
226 214
227 -     for (i = 0; i < entries; i++)
228 -         list_add_tail(&default_rules[i].list, &measure_default_rules);
229 -     ima_measure = &measure_default_rules;
215 +             list_add_tail(&default_appraise_rules[j].list,
216 +                           &ima_default_rules);
217 +         }
218 +     }
219 +
220 +     ima_rules = &ima_default_rules;
230 221 }
231 222
232 223 /**
··· 265 212     int result = 1;
266 213     int audit_info = 0;
267 214
268 -     if (ima_measure == &measure_default_rules) {
269 -         ima_measure = &measure_policy_rules;
215 +     if (ima_rules == &ima_default_rules) {
216 +         ima_rules = &ima_policy_rules;
270 217         cause = "complete";
271 218         result = 0;
272 219     }
··· 277 224 enum {
278 225     Opt_err = -1,
279 226     Opt_measure = 1, Opt_dont_measure,
227 +     Opt_appraise, Opt_dont_appraise,
228 +     Opt_audit,
280 229     Opt_obj_user, Opt_obj_role, Opt_obj_type,
281 230     Opt_subj_user, Opt_subj_role, Opt_subj_type,
282 -     Opt_func, Opt_mask, Opt_fsmagic, Opt_uid
231 +     Opt_func, Opt_mask, Opt_fsmagic, Opt_uid, Opt_fowner
283 232 };
284 233
285 234 static match_table_t policy_tokens = {
286 235     {Opt_measure, "measure"},
287 236     {Opt_dont_measure, "dont_measure"},
237 +     {Opt_appraise, "appraise"},
238 +     {Opt_dont_appraise, "dont_appraise"},
239 +     {Opt_audit, "audit"},
288 240     {Opt_obj_user, "obj_user=%s"},
289 241     {Opt_obj_role, "obj_role=%s"},
290 242     {Opt_obj_type, "obj_type=%s"},
··· 300 242     {Opt_mask, "mask=%s"},
301 243     {Opt_fsmagic, "fsmagic=%s"},
302 244     {Opt_uid, "uid=%s"},
245 +     {Opt_fowner, "fowner=%s"},
303 246     {Opt_err, NULL}
304 247 };
305 248
306 - static int ima_lsm_rule_init(struct ima_measure_rule_entry *entry,
249 + static int ima_lsm_rule_init(struct ima_rule_entry *entry,
307 250                              char *args, int lsm_rule, int audit_type)
308 251 {
309 252     int result;
··· 328 269         audit_log_format(ab, " ");
329 270 }
330 271
331 - static int ima_parse_rule(char *rule, struct ima_measure_rule_entry *entry)
272 + static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
332 273 {
333 274     struct audit_buffer *ab;
334 275     char *p;
··· 337 278     ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_INTEGRITY_RULE);
338 279
339 280     entry->uid = INVALID_UID;
281 +     entry->fowner = INVALID_UID;
340 282     entry->action = UNKNOWN;
341 283     while ((p = strsep(&rule, " \t")) != NULL) {
342 284         substring_t args[MAX_OPT_ARGS];
··· 366 306
367 307             entry->action = DONT_MEASURE;
368 308             break;
309 +         case Opt_appraise:
310 +             ima_log_string(ab, "action", "appraise");
311 +
312 +             if (entry->action != UNKNOWN)
313 +                 result = -EINVAL;
314 +
315 +             entry->action = APPRAISE;
316 +             break;
317 +         case Opt_dont_appraise:
318 +             ima_log_string(ab, "action", "dont_appraise");
319 +
320 +             if (entry->action != UNKNOWN)
321 +                 result = -EINVAL;
322 +
323 +             entry->action = DONT_APPRAISE;
324 +             break;
325 +         case Opt_audit:
326 +             ima_log_string(ab, "action", "audit");
327 +
328 +             if (entry->action != UNKNOWN)
329 +                 result = -EINVAL;
330 +
331 +             entry->action = AUDIT;
332 +             break;
369 333         case Opt_func:
370 334             ima_log_string(ab, "func", args[0].from);
371 335
372 336             if (entry->func)
373 -                 result = -EINVAL;
337 +                 result = -EINVAL;
374 338
375 339             if (strcmp(args[0].from, "FILE_CHECK") == 0)
376 340                 entry->func = FILE_CHECK;
··· 459 375                 entry->flags |= IMA_UID;
460 376             }
461 377             break;
378 +         case Opt_fowner:
379 +             ima_log_string(ab, "fowner", args[0].from);
380 +
381 +             if (uid_valid(entry->fowner)) {
382 +                 result = -EINVAL;
383 +                 break;
384 +             }
385 +
386 +             result = strict_strtoul(args[0].from, 10, &lnum);
387 +             if (!result) {
388 +                 entry->fowner = make_kuid(current_user_ns(), (uid_t)lnum);
389 +                 if (!uid_valid(entry->fowner) || (((uid_t)lnum) != lnum))
390 +                     result = -EINVAL;
391 +                 else
392 +                     entry->flags |= IMA_FOWNER;
393 +             }
394 +             break;
462 395         case Opt_obj_user:
463 396             ima_log_string(ab, "obj_user", args[0].from);
464 397             result = ima_lsm_rule_init(entry, args[0].from,
··· 527 426 }
528 427
529 428 /**
530 -  * ima_parse_add_rule - add a rule to measure_policy_rules
429 +  * ima_parse_add_rule - add a rule to ima_policy_rules
531 430  * @rule - ima measurement policy rule
532 431  *
533 432  * Uses a mutex to protect the policy list from multiple concurrent writers.
··· 537 436 {
538 437     const char *op = "update_policy";
539 438     char *p;
540 -     struct ima_measure_rule_entry *entry;
439 +     struct ima_rule_entry *entry;
541 440     ssize_t result, len;
542 441     int audit_info = 0;
543 442
544 443     /* Prevent installed policy from changing */
545 -     if (ima_measure != &measure_default_rules) {
444 +     if (ima_rules != &ima_default_rules) {
546 445         integrity_audit_msg(AUDIT_INTEGRITY_STATUS, NULL,
547 446                             NULL, op, "already exists",
548 447                             -EACCES, audit_info);
··· 575 474         return result;
576 475     }
577 476
578 -     mutex_lock(&ima_measure_mutex);
579 -     list_add_tail(&entry->list, &measure_policy_rules);
580 -     mutex_unlock(&ima_measure_mutex);
477 +     mutex_lock(&ima_rules_mutex);
478 +     list_add_tail(&entry->list, &ima_policy_rules);
479 +     mutex_unlock(&ima_rules_mutex);
581 480
582 481     return len;
583 482 }
··· 585 484 /* ima_delete_rules called to cleanup invalid policy */
586 485 void ima_delete_rules(void)
587 486 {
588 -     struct ima_measure_rule_entry *entry, *tmp;
487 +     struct ima_rule_entry *entry, *tmp;
589 488
590 -     mutex_lock(&ima_measure_mutex);
591 -     list_for_each_entry_safe(entry, tmp, &measure_policy_rules, list) {
489 +     mutex_lock(&ima_rules_mutex);
490 +     list_for_each_entry_safe(entry, tmp, &ima_policy_rules, list) {
592 491         list_del(&entry->list);
593 492         kfree(entry);
594 493     }
595 -     mutex_unlock(&ima_measure_mutex);
494 +     mutex_unlock(&ima_rules_mutex);
596 495 }
+18 -4
security/integrity/integrity.h
··· 15 15 #include <linux/integrity.h>
16 16 #include <crypto/sha.h>
17 17
18 + /* iint action cache flags */
19 + #define IMA_MEASURE     0x0001
20 + #define IMA_MEASURED    0x0002
21 + #define IMA_APPRAISE    0x0004
22 + #define IMA_APPRAISED   0x0008
23 + /*#define IMA_COLLECT   0x0010  do not use this flag */
24 + #define IMA_COLLECTED   0x0020
25 + #define IMA_AUDIT       0x0040
26 + #define IMA_AUDITED     0x0080
27 +
18 28 /* iint cache flags */
19 - #define IMA_MEASURED    0x01
29 + #define IMA_DIGSIG      0x0100
30 +
31 + #define IMA_DO_MASK     (IMA_MEASURE | IMA_APPRAISE | IMA_AUDIT)
32 + #define IMA_DONE_MASK   (IMA_MEASURED | IMA_APPRAISED | IMA_AUDITED \
33 +                          | IMA_COLLECTED)
20 34
21 35 enum evm_ima_xattr_type {
22 36     IMA_XATTR_DIGEST = 0x01,
··· 48 34     struct rb_node rb_node;  /* rooted in integrity_iint_tree */
49 35     struct inode *inode;     /* back pointer to inode in question */
50 36     u64 version;             /* track inode changes */
51 -     unsigned char flags;
52 -     u8 digest[SHA1_DIGEST_SIZE];
53 -     struct mutex mutex;      /* protects: version, flags, digest */
37 +     unsigned short flags;
38 +     struct evm_ima_xattr_data ima_xattr;
39 +     enum integrity_status ima_status;
54 40     enum integrity_status evm_status;
55 41 };
56 42
+12 -42
security/keys/trusted.c
··· 369 369 }
370 370
371 371 /*
372 -  * get a random value from TPM
373 -  */
374 - static int tpm_get_random(struct tpm_buf *tb, unsigned char *buf, uint32_t len)
375 - {
376 -     int ret;
377 -
378 -     INIT_BUF(tb);
379 -     store16(tb, TPM_TAG_RQU_COMMAND);
380 -     store32(tb, TPM_GETRANDOM_SIZE);
381 -     store32(tb, TPM_ORD_GETRANDOM);
382 -     store32(tb, len);
383 -     ret = trusted_tpm_send(TPM_ANY_NUM, tb->data, sizeof tb->data);
384 -     if (!ret)
385 -         memcpy(buf, tb->data + TPM_GETRANDOM_SIZE, len);
386 -     return ret;
387 - }
388 -
389 - static int my_get_random(unsigned char *buf, int len)
390 - {
391 -     struct tpm_buf *tb;
392 -     int ret;
393 -
394 -     tb = kmalloc(sizeof *tb, GFP_KERNEL);
395 -     if (!tb)
396 -         return -ENOMEM;
397 -     ret = tpm_get_random(tb, buf, len);
398 -
399 -     kfree(tb);
400 -     return ret;
401 - }
402 -
403 - /*
404 372  * Lock a trusted key, by extending a selected PCR.
405 373  *
406 374  * Prevents a trusted key that is sealed to PCRs from being accessed.
··· 381 413
382 414     if (!capable(CAP_SYS_ADMIN))
383 415         return -EPERM;
384 -     ret = my_get_random(hash, SHA1_DIGEST_SIZE);
385 -     if (ret < 0)
416 +     ret = tpm_get_random(TPM_ANY_NUM, hash, SHA1_DIGEST_SIZE);
417 +     if (ret != SHA1_DIGEST_SIZE)
386 418         return ret;
387 419     return tpm_pcr_extend(TPM_ANY_NUM, pcrnum, hash) ? -EINVAL : 0;
388 420 }
··· 397 429     unsigned char ononce[TPM_NONCE_SIZE];
398 430     int ret;
399 431
400 -     ret = tpm_get_random(tb, ononce, TPM_NONCE_SIZE);
401 -     if (ret < 0)
432 +     ret = tpm_get_random(TPM_ANY_NUM, ononce, TPM_NONCE_SIZE);
433 +     if (ret != TPM_NONCE_SIZE)
402 434         return ret;
403 435
404 436     INIT_BUF(tb);
··· 492 524     if (ret < 0)
493 525         goto out;
494 526
495 -     ret = tpm_get_random(tb, td->nonceodd, TPM_NONCE_SIZE);
496 -     if (ret < 0)
527 +     ret = tpm_get_random(TPM_ANY_NUM, td->nonceodd, TPM_NONCE_SIZE);
528 +     if (ret != TPM_NONCE_SIZE)
497 529         goto out;
498 530     ordinal = htonl(TPM_ORD_SEAL);
499 531     datsize = htonl(datalen);
··· 602 634
603 635     ordinal = htonl(TPM_ORD_UNSEAL);
604 636     keyhndl = htonl(SRKHANDLE);
605 -     ret = tpm_get_random(tb, nonceodd, TPM_NONCE_SIZE);
606 -     if (ret < 0) {
637 +     ret = tpm_get_random(TPM_ANY_NUM, nonceodd, TPM_NONCE_SIZE);
638 +     if (ret != TPM_NONCE_SIZE) {
607 639         pr_info("trusted_key: tpm_get_random failed (%d)\n", ret);
608 640         return ret;
609 641     }
··· 903 935     char *datablob;
904 936     int ret = 0;
905 937     int key_cmd;
938 +     size_t key_len;
906 939
907 940     if (datalen <= 0 || datalen > 32767 || !data)
908 941         return -EINVAL;
··· 943 974         pr_info("trusted_key: key_unseal failed (%d)\n", ret);
944 975         break;
945 976     case Opt_new:
946 -         ret = my_get_random(payload->key, payload->key_len);
947 -         if (ret < 0) {
977 +         key_len = payload->key_len;
978 +         ret = tpm_get_random(TPM_ANY_NUM, payload->key, key_len);
979 +         if (ret != key_len) {
948 980             pr_info("trusted_key: key_create failed (%d)\n", ret);
949 981             goto out;
950 982         }
+27
security/security.c
··· 136 136
137 137 int security_ptrace_access_check(struct task_struct *child, unsigned int mode)
138 138 {
139 + #ifdef CONFIG_SECURITY_YAMA_STACKED
140 +     int rc;
141 +     rc = yama_ptrace_access_check(child, mode);
142 +     if (rc)
143 +         return rc;
144 + #endif
139 145     return security_ops->ptrace_access_check(child, mode);
140 146 }
141 147
142 148 int security_ptrace_traceme(struct task_struct *parent)
143 149 {
150 + #ifdef CONFIG_SECURITY_YAMA_STACKED
151 +     int rc;
152 +     rc = yama_ptrace_traceme(parent);
153 +     if (rc)
154 +         return rc;
155 + #endif
144 156     return security_ops->ptrace_traceme(parent);
145 157 }
146 158
··· 573 561     ret = security_ops->inode_setxattr(dentry, name, value, size, flags);
574 562     if (ret)
575 563         return ret;
564 +     ret = ima_inode_setxattr(dentry, name, value, size);
565 +     if (ret)
566 +         return ret;
576 567     return evm_inode_setxattr(dentry, name, value, size);
577 568 }
578 569
··· 609 594     if (unlikely(IS_PRIVATE(dentry->d_inode)))
610 595         return 0;
611 596     ret = security_ops->inode_removexattr(dentry, name);
597 +     if (ret)
598 +         return ret;
599 +     ret = ima_inode_removexattr(dentry, name);
612 600     if (ret)
613 601         return ret;
614 602     return evm_inode_removexattr(dentry, name);
··· 779 761
780 762 void security_task_free(struct task_struct *task)
781 763 {
764 + #ifdef CONFIG_SECURITY_YAMA_STACKED
765 +     yama_task_free(task);
766 + #endif
782 767     security_ops->task_free(task);
783 768 }
784 769
··· 897 876 int security_task_prctl(int option, unsigned long arg2, unsigned long arg3,
898 877                         unsigned long arg4, unsigned long arg5)
899 878 {
879 + #ifdef CONFIG_SECURITY_YAMA_STACKED
880 +     int rc;
881 +     rc = yama_task_prctl(option, arg2, arg3, arg4, arg5);
882 +     if (rc != -ENOSYS)
883 +         return rc;
884 + #endif
900 885     return security_ops->task_prctl(option, arg2, arg3, arg4, arg5);
901 886 }
902 887
+10 -41
security/smack/smack_lsm.c
··· 1691 1691  * smack_task_wait - Smack access check for waiting
1692 1692  * @p: task to wait for
1693 1693  *
1694 -  * Returns 0 if current can wait for p, error code otherwise
1694 +  * Returns 0
1695 1695  */
1696 1696 static int smack_task_wait(struct task_struct *p)
1697 1697 {
1698 -     struct smk_audit_info ad;
1699 -     char *sp = smk_of_current();
1700 -     char *tsp = smk_of_forked(task_security(p));
1701 -     int rc;
1702 -
1703 -     /* we don't log here, we can be overriden */
1704 -     rc = smk_access(tsp, sp, MAY_WRITE, NULL);
1705 -     if (rc == 0)
1706 -         goto out_log;
1707 -
1708 1698     /*
1709 -      * Allow the operation to succeed if either task
1710 -      * has privilege to perform operations that might
1711 -      * account for the smack labels having gotten to
1712 -      * be different in the first place.
1713 -      *
1714 -      * This breaks the strict subject/object access
1715 -      * control ideal, taking the object's privilege
1716 -      * state into account in the decision as well as
1717 -      * the smack value.
1699 +      * Allow the operation to succeed.
1700 +      * Zombies are bad.
1701 +      * In userless environments (e.g. phones) programs
1702 +      * get marked with SMACK64EXEC and even if the parent
1703 +      * and child shouldn't be talking the parent still
1704 +      * may expect to know when the child exits.
1718 1705      */
1719 -     if (smack_privileged(CAP_MAC_OVERRIDE) ||
1720 -         has_capability(p, CAP_MAC_OVERRIDE))
1721 -         rc = 0;
1722 -     /* we log only if we didn't get overriden */
1723 - out_log:
1724 -     smk_ad_init(&ad, __func__, LSM_AUDIT_DATA_TASK);
1725 -     smk_ad_setfield_u_tsk(&ad, p);
1726 -     smack_log(tsp, sp, MAY_WRITE, rc, &ad);
1727 -     return rc;
1706 +     return 0;
1728 1707 }
1729 1708
1730 1709 /**
··· 2684 2705 static int smack_setprocattr(struct task_struct *p, char *name,
2685 2706                              void *value, size_t size)
2686 2707 {
2687 -     int rc;
2688 2708     struct task_smack *tsp;
2689 -     struct task_smack *oldtsp;
2690 2709     struct cred *new;
2691 2710     char *newsmack;
2692 2711
··· 2714 2737     if (newsmack == smack_known_web.smk_known)
2715 2738         return -EPERM;
2716 2739
2717 -     oldtsp = p->cred->security;
2718 2740     new = prepare_creds();
2719 2741     if (new == NULL)
2720 2742         return -ENOMEM;
2721 2743
2722 -     tsp = new_task_smack(newsmack, oldtsp->smk_forked, GFP_KERNEL);
2723 -     if (tsp == NULL) {
2724 -         kfree(new);
2725 -         return -ENOMEM;
2726 -     }
2727 -     rc = smk_copy_rules(&tsp->smk_rules, &oldtsp->smk_rules, GFP_KERNEL);
2728 -     if (rc != 0)
2729 -         return rc;
2744 +     tsp = new->security;
2745 +     tsp->smk_task = newsmack;
2730 2746
2731 -     new->security = tsp;
2732 2747     commit_creds(new);
2733 2748     return size;
2734 2749 }
+75
security/smack/smackfs.c
··· 49 49 SMK_LOAD_SELF2 = 15, /* load task specific rules with long labels */ 50 50 SMK_ACCESS2 = 16, /* make an access check with long labels */ 51 51 SMK_CIPSO2 = 17, /* load long label -> CIPSO mapping */ 52 + SMK_REVOKE_SUBJ = 18, /* set rules with subject label to '-' */ 52 53 }; 53 54 54 55 /* ··· 1993 1992 }; 1994 1993 1995 1994 /** 1995 + * smk_write_revoke_subj - write() for /smack/revoke-subject 1996 + * @file: file pointer 1997 + * @buf: data from user space 1998 + * @count: bytes sent 1999 + * @ppos: where to start - must be 0 2000 + */ 2001 + static ssize_t smk_write_revoke_subj(struct file *file, const char __user *buf, 2002 + size_t count, loff_t *ppos) 2003 + { 2004 + char *data = NULL; 2005 + const char *cp = NULL; 2006 + struct smack_known *skp; 2007 + struct smack_rule *sp; 2008 + struct list_head *rule_list; 2009 + struct mutex *rule_lock; 2010 + int rc = count; 2011 + 2012 + if (*ppos != 0) 2013 + return -EINVAL; 2014 + 2015 + if (!smack_privileged(CAP_MAC_ADMIN)) 2016 + return -EPERM; 2017 + 2018 + if (count == 0 || count > SMK_LONGLABEL) 2019 + return -EINVAL; 2020 + 2021 + data = kzalloc(count, GFP_KERNEL); 2022 + if (data == NULL) 2023 + return -ENOMEM; 2024 + 2025 + if (copy_from_user(data, buf, count) != 0) { 2026 + rc = -EFAULT; 2027 + goto free_out; 2028 + } 2029 + 2030 + cp = smk_parse_smack(data, count); 2031 + if (cp == NULL) { 2032 + rc = -EINVAL; 2033 + goto free_out; 2034 + } 2035 + 2036 + skp = smk_find_entry(cp); 2037 + if (skp == NULL) { 2038 + rc = -EINVAL; 2039 + goto free_out; 2040 + } 2041 + 2042 + rule_list = &skp->smk_rules; 2043 + rule_lock = &skp->smk_rules_lock; 2044 + 2045 + mutex_lock(rule_lock); 2046 + 2047 + list_for_each_entry_rcu(sp, rule_list, list) 2048 + sp->smk_access = 0; 2049 + 2050 + mutex_unlock(rule_lock); 2051 + 2052 + free_out: 2053 + kfree(data); 2054 + kfree(cp); 2055 + return rc; 2056 + } 2057 + 2058 + static const struct file_operations smk_revoke_subj_ops = { 2059 + .write = 
smk_write_revoke_subj, 2060 + .read = simple_transaction_read, 2061 + .release = simple_transaction_release, 2062 + .llseek = generic_file_llseek, 2063 + }; 2064 + 2065 + /** 1996 2066 * smk_fill_super - fill the /smackfs superblock 1997 2067 * @sb: the empty superblock 1998 2068 * @data: unused ··· 2109 2037 "access2", &smk_access2_ops, S_IRUGO|S_IWUGO}, 2110 2038 [SMK_CIPSO2] = { 2111 2039 "cipso2", &smk_cipso2_ops, S_IRUGO|S_IWUSR}, 2040 + [SMK_REVOKE_SUBJ] = { 2041 + "revoke-subject", &smk_revoke_subj_ops, 2042 + S_IRUGO|S_IWUSR}, 2112 2043 /* last one */ 2113 2044 {""} 2114 2045 };
+8
security/yama/Kconfig
··· 11 11       Further information can be found in Documentation/security/Yama.txt.
12 12
13 13       If you are unsure how to answer this question, answer N.
14 +
15 + config SECURITY_YAMA_STACKED
16 +     bool "Yama stacked with other LSMs"
17 +     depends on SECURITY_YAMA
18 +     default n
19 +     help
20 +       When Yama is built into the kernel, force it to stack with the
21 +       selected primary LSM.
+11 -5
security/yama/yama_lsm.c
··· 100 100 * yama_task_free - check for task_pid to remove from exception list 101 101 * @task: task being removed 102 102 */ 103 - static void yama_task_free(struct task_struct *task) 103 + void yama_task_free(struct task_struct *task) 104 104 { 105 105 yama_ptracer_del(task, task); 106 106 } ··· 116 116 * Return 0 on success, -ve on error. -ENOSYS is returned when Yama 117 117 * does not handle the given option. 118 118 */ 119 - static int yama_task_prctl(int option, unsigned long arg2, unsigned long arg3, 119 + int yama_task_prctl(int option, unsigned long arg2, unsigned long arg3, 120 120 unsigned long arg4, unsigned long arg5) 121 121 { 122 122 int rc; ··· 143 143 if (arg2 == 0) { 144 144 yama_ptracer_del(NULL, myself); 145 145 rc = 0; 146 - } else if (arg2 == PR_SET_PTRACER_ANY) { 146 + } else if (arg2 == PR_SET_PTRACER_ANY || (int)arg2 == -1) { 147 147 rc = yama_ptracer_add(NULL, myself); 148 148 } else { 149 149 struct task_struct *tracer; ··· 243 243 * 244 244 * Returns 0 if following the ptrace is allowed, -ve on error. 245 245 */ 246 - static int yama_ptrace_access_check(struct task_struct *child, 246 + int yama_ptrace_access_check(struct task_struct *child, 247 247 unsigned int mode) 248 248 { 249 249 int rc; ··· 293 293 * 294 294 * Returns 0 if following the ptrace is allowed, -ve on error. 
295 295 */ 296 - static int yama_ptrace_traceme(struct task_struct *parent) 296 + int yama_ptrace_traceme(struct task_struct *parent) 297 297 { 298 298 int rc; 299 299 ··· 324 324 return rc; 325 325 } 326 326 327 + #ifndef CONFIG_SECURITY_YAMA_STACKED 327 328 static struct security_operations yama_ops = { 328 329 .name = "yama", 329 330 ··· 333 332 .task_prctl = yama_task_prctl, 334 333 .task_free = yama_task_free, 335 334 }; 335 + #endif 336 336 337 337 #ifdef CONFIG_SYSCTL 338 338 static int yama_dointvec_minmax(struct ctl_table *table, int write, ··· 380 378 381 379 static __init int yama_init(void) 382 380 { 381 + #ifndef CONFIG_SECURITY_YAMA_STACKED 383 382 if (!security_module_enable(&yama_ops)) 384 383 return 0; 384 + #endif 385 385 386 386 printk(KERN_INFO "Yama: becoming mindful.\n"); 387 387 388 + #ifndef CONFIG_SECURITY_YAMA_STACKED 388 389 if (register_security(&yama_ops)) 389 390 panic("Yama: kernel registration failed.\n"); 391 + #endif 390 392 391 393 #ifdef CONFIG_SYSCTL 392 394 if (!register_sysctl_paths(yama_sysctl_path, yama_sysctl_table))