Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security

Pull security subsystem updates from James Morris:
"Highlights:

- TPM core and driver updates/fixes
- IPv6 security labeling (CALIPSO)
- Lots of Apparmor fixes
- Seccomp: remove 2-phase API, close hole where ptrace can change
syscall #"

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (156 commits)
apparmor: fix SECURITY_APPARMOR_HASH_DEFAULT parameter handling
tpm: Add TPM 2.0 support to the Nuvoton i2c driver (NPCT6xx family)
tpm: Factor out common startup code
tpm: use devm_add_action_or_reset
tpm2_i2c_nuvoton: add irq validity check
tpm: read burstcount from TPM_STS in one 32-bit transaction
tpm: fix byte-order for the value read by tpm2_get_tpm_pt
tpm_tis_core: convert max timeouts from msec to jiffies
apparmor: fix arg_size computation for when setprocattr is null terminated
apparmor: fix oops, validate buffer size in apparmor_setprocattr()
apparmor: do not expose kernel stack
apparmor: fix module parameters can be changed after policy is locked
apparmor: fix oops in profile_unpack() when policy_db is not present
apparmor: don't check for vmalloc_addr if kvzalloc() failed
apparmor: add missing id bounds check on dfa verification
apparmor: allow SYS_CAP_RESOURCE to be sufficient to prlimit another task
apparmor: use list_next_entry instead of list_entry_next
apparmor: fix refcount race when finding a child profile
apparmor: fix ref count leak when profile sha1 hash is read
apparmor: check that xindex is in trans_table bounds
...

+7294 -2144
+1
Documentation/devicetree/bindings/i2c/trivial-devices.txt
···
 national,lm85		Temperature sensor with integrated fan control
 national,lm92		±0.33°C Accurate, 12-Bit + Sign Temperature Sensor and Thermal Window Comparator with Two-Wire Interface
 nuvoton,npct501		i2c trusted platform module (TPM)
+nuvoton,npct601		i2c trusted platform module (TPM2)
 nxp,pca9556		Octal SMBus and I2C registered interface
 nxp,pca9557		8-bit I2C-bus and SMBus I/O port with reset
 nxp,pcf8563		Real-time clock/calendar
+24
Documentation/devicetree/bindings/security/tpm/tpm_tis_spi.txt
···
+Required properties:
+- compatible: should be one of the following
+    "st,st33htpm-spi"
+    "infineon,slb9670"
+    "tcg,tpm_tis-spi"
+- spi-max-frequency: Maximum SPI frequency (depends on TPMs).
+
+Optional SoC Specific Properties:
+- pinctrl-names: Contains only one value - "default".
+- pintctrl-0: Specifies the pin control groups used for this controller.
+
+Example (for ARM-based BeagleBoard xM with TPM_TIS on SPI4):
+
+&mcspi4 {
+
+        status = "okay";
+
+        tpm_tis@0 {
+
+                compatible = "tcg,tpm_tis-spi";
+
+                spi-max-frequency = <10000000>;
+        };
+};
+2
Documentation/devicetree/bindings/vendor-prefixes.txt
···
 ifi	Ingenieurburo Fur Ic-Technologie (I/F/I)
 iom	Iomega Corporation
 img	Imagination Technologies Ltd.
+infineon	Infineon Technologies
 inforce	Inforce Computing
 ingenic	Ingenic Semiconductor
 innolux	Innolux Corporation
···
 synology	Synology, Inc.
 SUNW	Sun Microsystems, Inc
 tbs	TBS Technologies
+tcg	Trusted Computing Group
 tcl	Toby Churchill Ltd.
 technexion	TechNexion
 technologic	Technologic Systems
+1
Documentation/ioctl/ioctl-number.txt
···
 					<mailto:buk@buks.ipn.de>
 0xA0	all	linux/sdp/sdp.h		Industrial Device Project
 					<mailto:kenji@bitgate.com>
+0xA1	0	linux/vtpm_proxy.h	TPM Emulator Proxy Driver
 0xA2	00-0F	arch/tile/include/asm/hardwall.h
 0xA3	80-8F	Port ACL		in development:
 					<mailto:tlewis@mindspring.com>
+71
Documentation/tpm/tpm_vtpm_proxy.txt
···
+Virtual TPM Proxy Driver for Linux Containers
+
+Authors: Stefan Berger (IBM)
+
+This document describes the virtual Trusted Platform Module (vTPM)
+proxy device driver for Linux containers.
+
+INTRODUCTION
+------------
+
+The goal of this work is to provide TPM functionality to each Linux
+container. This allows programs to interact with a TPM in a container
+the same way they interact with a TPM on the physical system. Each
+container gets its own unique, emulated, software TPM.
+
+
+DESIGN
+------
+
+To make an emulated software TPM available to each container, the container
+management stack needs to create a device pair consisting of a client TPM
+character device /dev/tpmX (with X=0,1,2...) and a 'server side' file
+descriptor. The former is moved into the container by creating a character
+device with the appropriate major and minor numbers while the file descriptor
+is passed to the TPM emulator. Software inside the container can then send
+TPM commands using the character device and the emulator will receive the
+commands via the file descriptor and use it for sending back responses.
+
+To support this, the virtual TPM proxy driver provides a device /dev/vtpmx
+that is used to create device pairs using an ioctl. The ioctl takes as
+an input flags for configuring the device. The flags for example indicate
+whether TPM 1.2 or TPM 2 functionality is supported by the TPM emulator.
+The result of the ioctl are the file descriptor for the 'server side'
+as well as the major and minor numbers of the character device that was created.
+Besides that the number of the TPM character device is return. If for
+example /dev/tpm10 was created, the number (dev_num) 10 is returned.
+
+The following is the data structure of the TPM_PROXY_IOC_NEW_DEV ioctl:
+
+struct vtpm_proxy_new_dev {
+	__u32 flags;         /* input */
+	__u32 tpm_num;       /* output */
+	__u32 fd;            /* output */
+	__u32 major;         /* output */
+	__u32 minor;         /* output */
+};
+
+Note that if unsupported flags are passed to the device driver, the ioctl will
+fail and errno will be set to EOPNOTSUPP. Similarly, if an unsupported ioctl is
+called on the device driver, the ioctl will fail and errno will be set to
+ENOTTY.
+
+See /usr/include/linux/vtpm_proxy.h for definitions related to the public interface
+of this vTPM device driver.
+
+Once the device has been created, the driver will immediately try to talk
+to the TPM. All commands from the driver can be read from the file descriptor
+returned by the ioctl. The commands should be responded to immediately.
+
+Depending on the version of TPM the following commands will be sent by the
+driver:
+
+- TPM 1.2:
+  - the driver will send a TPM_Startup command to the TPM emulator
+  - the driver will send commands to read the command durations and
+    interface timeouts from the TPM emulator
+- TPM 2:
+  - the driver will send a TPM2_Startup command to the TPM emulator
+
+The TPM device /dev/tpmX will only appear if all of the relevant commands
+were responded to properly.
+2 -2
MAINTAINERS
···
 F:	include/uapi/linux/can/netlink.h
 
 CAPABILITIES
-M:	Serge Hallyn <serge.hallyn@canonical.com>
+M:	Serge Hallyn <serge@hallyn.com>
 L:	linux-security-module@vger.kernel.org
 S:	Supported
 F:	include/linux/capability.h
···
 M:	Casey Schaufler <casey@schaufler-ca.com>
 L:	linux-security-module@vger.kernel.org
 W:	http://schaufler-ca.com
-T:	git git://git.gitorious.org/smack-next/kernel.git
+T:	git git://github.com/cschaufler/smack-next
 S:	Maintained
 F:	Documentation/security/Smack.txt
 F:	security/smack/
+10 -9
arch/arm/kernel/ptrace.c
···
 {
 	current_thread_info()->syscall = scno;
 
-	/* Do the secure computing check first; failures should be fast. */
-#ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
-	if (secure_computing() == -1)
-		return -1;
-#else
-	/* XXX: remove this once OABI gets fixed */
-	secure_computing_strict(scno);
-#endif
-
 	if (test_thread_flag(TIF_SYSCALL_TRACE))
 		tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);
 
+	/* Do seccomp after ptrace; syscall may have changed. */
+#ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
+	if (secure_computing(NULL) == -1)
+		return -1;
+#else
+	/* XXX: remove this once OABI gets fixed */
+	secure_computing_strict(current_thread_info()->syscall);
+#endif
+
+	/* Tracer or seccomp may have changed syscall. */
 	scno = current_thread_info()->syscall;
 
 	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+4 -4
arch/arm64/kernel/ptrace.c
···
 
 asmlinkage int syscall_trace_enter(struct pt_regs *regs)
 {
-	/* Do the secure computing check first; failures should be fast. */
-	if (secure_computing() == -1)
-		return -1;
-
 	if (test_thread_flag(TIF_SYSCALL_TRACE))
 		tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);
+
+	/* Do the secure computing after ptrace; failures should be fast. */
+	if (secure_computing(NULL) == -1)
+		return -1;
 
 	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
 		trace_sys_enter(regs, regs->syscallno);
+4 -5
arch/mips/kernel/ptrace.c
···
  */
 asmlinkage long syscall_trace_enter(struct pt_regs *regs, long syscall)
 {
-	long ret = 0;
 	user_exit();
 
 	current_thread_info()->syscall = syscall;
 
-	if (secure_computing() == -1)
-		return -1;
-
 	if (test_thread_flag(TIF_SYSCALL_TRACE) &&
 	    tracehook_report_syscall_entry(regs))
-		ret = -1;
+		return -1;
+
+	if (secure_computing(NULL) == -1)
+		return -1;
 
 	if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
 		trace_sys_enter(regs, regs->regs[2]);
+5 -4
arch/parisc/kernel/ptrace.c
···
 
 long do_syscall_trace_enter(struct pt_regs *regs)
 {
-	/* Do the secure computing check first. */
-	if (secure_computing() == -1)
-		return -1;
-
 	if (test_thread_flag(TIF_SYSCALL_TRACE) &&
 	    tracehook_report_syscall_entry(regs)) {
 		/*
···
 		regs->gr[20] = -1UL;
 		goto out;
 	}
+
+	/* Do the secure computing check after ptrace. */
+	if (secure_computing(NULL) == -1)
+		return -1;
+
 #ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
 	if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
 		trace_sys_enter(regs, regs->gr[20]);
+24 -22
arch/powerpc/kernel/ptrace.c
···
 	 * have already loaded -ENOSYS into r3, or seccomp has put
 	 * something else in r3 (via SECCOMP_RET_ERRNO/TRACE).
 	 */
-	if (__secure_computing())
+	if (__secure_computing(NULL))
 		return -1;
 
 	/*
 	 * The syscall was allowed by seccomp, restore the register
-	 * state to what ptrace and audit expect.
+	 * state to what audit expects.
 	 * Note that we use orig_gpr3, which means a seccomp tracer can
 	 * modify the first syscall parameter (in orig_gpr3) and also
 	 * allow the syscall to proceed.
···
  */
 long do_syscall_trace_enter(struct pt_regs *regs)
 {
-	bool abort = false;
-
 	user_exit();
 
+	/*
+	 * The tracer may decide to abort the syscall, if so tracehook
+	 * will return !0. Note that the tracer may also just change
+	 * regs->gpr[0] to an invalid syscall number, that is handled
+	 * below on the exit path.
+	 */
+	if (test_thread_flag(TIF_SYSCALL_TRACE) &&
+	    tracehook_report_syscall_entry(regs))
+		goto skip;
+
+	/* Run seccomp after ptrace; allow it to set gpr[3]. */
 	if (do_seccomp(regs))
 		return -1;
 
-	if (test_thread_flag(TIF_SYSCALL_TRACE)) {
-		/*
-		 * The tracer may decide to abort the syscall, if so tracehook
-		 * will return !0. Note that the tracer may also just change
-		 * regs->gpr[0] to an invalid syscall number, that is handled
-		 * below on the exit path.
-		 */
-		abort = tracehook_report_syscall_entry(regs) != 0;
-	}
+	/* Avoid trace and audit when syscall is invalid. */
+	if (regs->gpr[0] >= NR_syscalls)
+		goto skip;
 
 	if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
 		trace_sys_enter(regs, regs->gpr[0]);
···
 			    regs->gpr[5] & 0xffffffff,
 			    regs->gpr[6] & 0xffffffff);
 
-	if (abort || regs->gpr[0] >= NR_syscalls) {
-		/*
-		 * If we are aborting explicitly, or if the syscall number is
-		 * now invalid, set the return value to -ENOSYS.
-		 */
-		regs->gpr[3] = -ENOSYS;
-		return -1;
-	}
-
 	/* Return the possibly modified but valid syscall number */
 	return regs->gpr[0];
+
+skip:
+	/*
+	 * If we are aborting explicitly, or if the syscall number is
+	 * now invalid, set the return value to -ENOSYS.
+	 */
+	regs->gpr[3] = -ENOSYS;
+	return -1;
 }
 
 void do_syscall_trace_leave(struct pt_regs *regs)
+9 -12
arch/s390/kernel/ptrace.c
···
 
 asmlinkage long do_syscall_trace_enter(struct pt_regs *regs)
 {
-	long ret = 0;
-
-	/* Do the secure computing check first. */
-	if (secure_computing()) {
-		/* seccomp failures shouldn't expose any additional code. */
-		ret = -1;
-		goto out;
-	}
-
 	/*
 	 * The sysc_tracesys code in entry.S stored the system
 	 * call number to gprs[2].
···
 		 * the system call and the system call restart handling.
 		 */
 		clear_pt_regs_flag(regs, PIF_SYSCALL);
-		ret = -1;
+		return -1;
+	}
+
+	/* Do the secure computing check after ptrace. */
+	if (secure_computing(NULL)) {
+		/* seccomp failures shouldn't expose any additional code. */
+		return -1;
 	}
 
 	if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
···
 		audit_syscall_entry(regs->gprs[2], regs->orig_gpr2,
 				    regs->gprs[3], regs->gprs[4],
 				    regs->gprs[5]);
-out:
-	return ret ?: regs->gprs[2];
+
+	return regs->gprs[2];
 }
 
 asmlinkage void do_syscall_trace_exit(struct pt_regs *regs)
+6 -5
arch/tile/kernel/ptrace.c
···
 {
 	u32 work = ACCESS_ONCE(current_thread_info()->flags);
 
-	if (secure_computing() == -1)
+	if ((work & _TIF_SYSCALL_TRACE) &&
+	    tracehook_report_syscall_entry(regs)) {
+		regs->regs[TREG_SYSCALL_NR] = -1;
 		return -1;
-
-	if (work & _TIF_SYSCALL_TRACE) {
-		if (tracehook_report_syscall_entry(regs))
-			regs->regs[TREG_SYSCALL_NR] = -1;
 	}
+
+	if (secure_computing(NULL) == -1)
+		return -1;
 
 	if (work & _TIF_SYSCALL_TRACEPOINT)
 		trace_sys_enter(regs, regs->regs[TREG_SYSCALL_NR]);
+4 -5
arch/um/kernel/skas/syscall.c
···
 	UPT_SYSCALL_NR(r) = PT_SYSCALL_NR(r->gp);
 	PT_REGS_SET_SYSCALL_RETURN(regs, -ENOSYS);
 
-	/* Do the secure computing check first; failures should be fast. */
-	if (secure_computing() == -1)
+	if (syscall_trace_enter(regs))
 		return;
 
-	if (syscall_trace_enter(regs))
-		goto out;
+	/* Do the seccomp check after ptrace; failures should be fast. */
+	if (secure_computing(NULL) == -1)
+		return;
 
 	/* Update the syscall number after orig_ax has potentially been updated
 	 * with ptrace.
···
 		PT_REGS_SET_SYSCALL_RETURN(regs,
 				EXECUTE_SYSCALL(syscall, regs));
 
-out:
 	syscall_trace_leave(regs);
 }
+20 -86
arch/x86/entry/common.c
···
 }
 
 /*
- * We can return 0 to resume the syscall or anything else to go to phase
- * 2.  If we resume the syscall, we need to put something appropriate in
- * regs->orig_ax.
- *
- * NB: We don't have full pt_regs here, but regs->orig_ax and regs->ax
- * are fully functional.
- *
- * For phase 2's benefit, our return value is:
- *  0:			resume the syscall
- *  1:			go to phase 2; no seccomp phase 2 needed
- *  anything else:	go to phase 2; pass return value to seccomp
+ * Returns the syscall nr to run (which should match regs->orig_ax) or -1
+ * to skip the syscall.
  */
-unsigned long syscall_trace_enter_phase1(struct pt_regs *regs, u32 arch)
+static long syscall_trace_enter(struct pt_regs *regs)
 {
+	u32 arch = in_ia32_syscall() ? AUDIT_ARCH_I386 : AUDIT_ARCH_X86_64;
+
 	struct thread_info *ti = pt_regs_to_thread_info(regs);
 	unsigned long ret = 0;
+	bool emulated = false;
 	u32 work;
 
 	if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
···
 
 	work = ACCESS_ONCE(ti->flags) & _TIF_WORK_SYSCALL_ENTRY;
 
+	if (unlikely(work & _TIF_SYSCALL_EMU))
+		emulated = true;
+
+	if ((emulated || (work & _TIF_SYSCALL_TRACE)) &&
+	    tracehook_report_syscall_entry(regs))
+		return -1L;
+
+	if (emulated)
+		return -1L;
+
 #ifdef CONFIG_SECCOMP
 	/*
-	 * Do seccomp first -- it should minimize exposure of other
-	 * code, and keeping seccomp fast is probably more valuable
-	 * than the rest of this.
+	 * Do seccomp after ptrace, to catch any tracer changes.
 	 */
 	if (work & _TIF_SECCOMP) {
 		struct seccomp_data sd;
···
 			sd.args[5] = regs->bp;
 		}
 
-		BUILD_BUG_ON(SECCOMP_PHASE1_OK != 0);
-		BUILD_BUG_ON(SECCOMP_PHASE1_SKIP != 1);
-
-		ret = seccomp_phase1(&sd);
-		if (ret == SECCOMP_PHASE1_SKIP) {
-			regs->orig_ax = -1;
-			ret = 0;
-		} else if (ret != SECCOMP_PHASE1_OK) {
-			return ret;  /* Go directly to phase 2 */
-		}
-
-		work &= ~_TIF_SECCOMP;
+		ret = __secure_computing(&sd);
+		if (ret == -1)
+			return ret;
 	}
 #endif
-
-	/* Do our best to finish without phase 2. */
-	if (work == 0)
-		return ret;  /* seccomp and/or nohz only (ret == 0 here) */
-
-#ifdef CONFIG_AUDITSYSCALL
-	if (work == _TIF_SYSCALL_AUDIT) {
-		/*
-		 * If there is no more work to be done except auditing,
-		 * then audit in phase 1.  Phase 2 always audits, so, if
-		 * we audit here, then we can't go on to phase 2.
-		 */
-		do_audit_syscall_entry(regs, arch);
-		return 0;
-	}
-#endif
-
-	return 1;  /* Something is enabled that we can't handle in phase 1 */
-}
-
-/* Returns the syscall nr to run (which should match regs->orig_ax). */
-long syscall_trace_enter_phase2(struct pt_regs *regs, u32 arch,
-				unsigned long phase1_result)
-{
-	struct thread_info *ti = pt_regs_to_thread_info(regs);
-	long ret = 0;
-	u32 work = ACCESS_ONCE(ti->flags) & _TIF_WORK_SYSCALL_ENTRY;
-
-	if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
-		BUG_ON(regs != task_pt_regs(current));
-
-#ifdef CONFIG_SECCOMP
-	/*
-	 * Call seccomp_phase2 before running the other hooks so that
-	 * they can see any changes made by a seccomp tracer.
-	 */
-	if (phase1_result > 1 && seccomp_phase2(phase1_result)) {
-		/* seccomp failures shouldn't expose any additional code. */
-		return -1;
-	}
-#endif
-
-	if (unlikely(work & _TIF_SYSCALL_EMU))
-		ret = -1L;
-
-	if ((ret || test_thread_flag(TIF_SYSCALL_TRACE)) &&
-	    tracehook_report_syscall_entry(regs))
-		ret = -1L;
 
 	if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
 		trace_sys_enter(regs, regs->orig_ax);
···
 		do_audit_syscall_entry(regs, arch);
 
 	return ret ?: regs->orig_ax;
-}
-
-long syscall_trace_enter(struct pt_regs *regs)
-{
-	u32 arch = in_ia32_syscall() ? AUDIT_ARCH_I386 : AUDIT_ARCH_X86_64;
-	unsigned long phase1_result = syscall_trace_enter_phase1(regs, arch);
-
-	if (phase1_result == 0)
-		return regs->orig_ax;
-	else
-		return syscall_trace_enter_phase2(regs, arch, phase1_result);
 }
 
 #define EXIT_TO_USERMODE_LOOP_FLAGS \
+1 -1
arch/x86/entry/vsyscall/vsyscall_64.c
···
 	 */
 	regs->orig_ax = syscall_nr;
 	regs->ax = -ENOSYS;
-	tmp = secure_computing();
+	tmp = secure_computing(NULL);
 	if ((!tmp && regs->orig_ax != syscall_nr) || regs->ip != address) {
 		warn_bad_vsyscall(KERN_DEBUG, regs,
 				  "seccomp tried to change syscall nr or ip");
-6
arch/x86/include/asm/ptrace.h
···
 			 int error_code, int si_code);
 
 
-extern unsigned long syscall_trace_enter_phase1(struct pt_regs *, u32 arch);
-extern long syscall_trace_enter_phase2(struct pt_regs *, u32 arch,
-				       unsigned long phase1_result);
-
-extern long syscall_trace_enter(struct pt_regs *);
-
 static inline unsigned long regs_return_value(struct pt_regs *regs)
 {
 	return regs->ax;
+30
drivers/char/tpm/Kconfig
···
 
 if TCG_TPM
 
+config TCG_TIS_CORE
+	tristate
+	---help---
+	TCG TIS TPM core driver. It implements the TPM TCG TIS logic and hooks
+	into the TPM kernel APIs. Physical layers will register against it.
+
 config TCG_TIS
 	tristate "TPM Interface Specification 1.2 Interface / TPM 2.0 FIFO Interface"
 	depends on X86
+	select TCG_TIS_CORE
 	---help---
 	  If you have a TPM security chip that is compliant with the
 	  TCG TIS 1.2 TPM specification (TPM1.2) or the TCG PTP FIFO
 	  specification (TPM2.0) say Yes and it will be accessible from
 	  within Linux. To compile this driver as a module, choose M here;
 	  the module will be called tpm_tis.
+
+config TCG_TIS_SPI
+	tristate "TPM Interface Specification 1.3 Interface / TPM 2.0 FIFO Interface - (SPI)"
+	depends on SPI
+	select TCG_TIS_CORE
+	---help---
+	  If you have a TPM security chip which is connected to a regular,
+	  non-tcg SPI master (i.e. most embedded platforms) that is compliant with the
+	  TCG TIS 1.3 TPM specification (TPM1.2) or the TCG PTP FIFO
+	  specification (TPM2.0) say Yes and it will be accessible from
+	  within Linux. To compile this driver as a module, choose M here;
+	  the module will be called tpm_tis_spi.
 
 config TCG_TIS_I2C_ATMEL
 	tristate "TPM Interface Specification 1.2 Interface (I2C - Atmel)"
···
 	  TCG CRB 2.0 TPM specification say Yes and it will be accessible
 	  from within Linux. To compile this driver as a module, choose
 	  M here; the module will be called tpm_crb.
+
+config TCG_VTPM_PROXY
+	tristate "VTPM Proxy Interface"
+	depends on TCG_TPM
+	select ANON_INODES
+	---help---
+	  This driver proxies for an emulated TPM (vTPM) running in userspace.
+	  A device /dev/vtpmx is provided that creates a device pair
+	  /dev/vtpmX and a server-side file descriptor on which the vTPM
+	  can receive commands.
+
 
 source "drivers/char/tpm/st33zp24/Kconfig"
 endif # TCG_TPM
+3
drivers/char/tpm/Makefile
···
 	tpm-y += tpm_eventlog.o tpm_of.o
 endif
 endif
+obj-$(CONFIG_TCG_TIS_CORE) += tpm_tis_core.o
 obj-$(CONFIG_TCG_TIS) += tpm_tis.o
+obj-$(CONFIG_TCG_TIS_SPI) += tpm_tis_spi.o
 obj-$(CONFIG_TCG_TIS_I2C_ATMEL) += tpm_i2c_atmel.o
 obj-$(CONFIG_TCG_TIS_I2C_INFINEON) += tpm_i2c_infineon.o
 obj-$(CONFIG_TCG_TIS_I2C_NUVOTON) += tpm_i2c_nuvoton.o
···
 obj-$(CONFIG_TCG_TIS_ST33ZP24) += st33zp24/
 obj-$(CONFIG_TCG_XEN) += xen-tpmfront.o
 obj-$(CONFIG_TCG_CRB) += tpm_crb.o
+obj-$(CONFIG_TCG_VTPM_PROXY) += tpm_vtpm_proxy.o
+5 -6
drivers/char/tpm/st33zp24/Kconfig
···
 config TCG_TIS_ST33ZP24
-	tristate "STMicroelectronics TPM Interface Specification 1.2 Interface"
-	depends on GPIOLIB || COMPILE_TEST
+	tristate
 	---help---
 	  STMicroelectronics ST33ZP24 core driver. It implements the core
 	  TPM1.2 logic and hooks into the TPM kernel APIs. Physical layers will
···
 	  tpm_st33zp24.
 
 config TCG_TIS_ST33ZP24_I2C
-	tristate "TPM 1.2 ST33ZP24 I2C support"
-	depends on TCG_TIS_ST33ZP24
+	tristate "STMicroelectronics TPM Interface Specification 1.2 Interface (I2C)"
 	depends on I2C
+	select TCG_TIS_ST33ZP24
 	---help---
 	  This module adds support for the STMicroelectronics TPM security chip
 	  ST33ZP24 with i2c interface.
···
 	  called tpm_st33zp24_i2c.
 
 config TCG_TIS_ST33ZP24_SPI
-	tristate "TPM 1.2 ST33ZP24 SPI support"
-	depends on TCG_TIS_ST33ZP24
+	tristate "STMicroelectronics TPM Interface Specification 1.2 Interface (SPI)"
 	depends on SPI
+	select TCG_TIS_ST33ZP24
 	---help---
 	  This module adds support for the STMicroelectronics TPM security chip
 	  ST33ZP24 with spi interface.
+54 -16
drivers/char/tpm/st33zp24/i2c.c
···
 /*
  * STMicroelectronics TPM I2C Linux driver for TPM ST33ZP24
- * Copyright (C) 2009 - 2015 STMicroelectronics
+ * Copyright (C) 2009 - 2016 STMicroelectronics
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
···
 #include <linux/module.h>
 #include <linux/i2c.h>
 #include <linux/gpio.h>
+#include <linux/gpio/consumer.h>
 #include <linux/of_irq.h>
 #include <linux/of_gpio.h>
+#include <linux/acpi.h>
 #include <linux/tpm.h>
 #include <linux/platform_data/st33zp24.h>
 
+#include "../tpm.h"
 #include "st33zp24.h"
 
 #define TPM_DUMMY_BYTE 0xAA
···
 	.recv = st33zp24_i2c_recv,
 };
 
-#ifdef CONFIG_OF
-static int st33zp24_i2c_of_request_resources(struct st33zp24_i2c_phy *phy)
+static int st33zp24_i2c_acpi_request_resources(struct i2c_client *client)
 {
+	struct tpm_chip *chip = i2c_get_clientdata(client);
+	struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
+	struct st33zp24_i2c_phy *phy = tpm_dev->phy_id;
+	struct gpio_desc *gpiod_lpcpd;
+	struct device *dev = &client->dev;
+
+	/* Get LPCPD GPIO from ACPI */
+	gpiod_lpcpd = devm_gpiod_get_index(dev, "TPM IO LPCPD", 1,
+					   GPIOD_OUT_HIGH);
+	if (IS_ERR(gpiod_lpcpd)) {
+		dev_err(&client->dev,
+			"Failed to retrieve lpcpd-gpios from acpi.\n");
+		phy->io_lpcpd = -1;
+		/*
+		 * lpcpd pin is not specified. This is not an issue as
+		 * power management can be also managed by TPM specific
+		 * commands. So leave with a success status code.
+		 */
+		return 0;
+	}
+
+	phy->io_lpcpd = desc_to_gpio(gpiod_lpcpd);
+
+	return 0;
+}
+
+static int st33zp24_i2c_of_request_resources(struct i2c_client *client)
+{
+	struct tpm_chip *chip = i2c_get_clientdata(client);
+	struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
+	struct st33zp24_i2c_phy *phy = tpm_dev->phy_id;
 	struct device_node *pp;
-	struct i2c_client *client = phy->client;
 	int gpio;
 	int ret;
 
···
 
 	return 0;
 }
-#else
-static int st33zp24_i2c_of_request_resources(struct st33zp24_i2c_phy *phy)
-{
-	return -ENODEV;
-}
-#endif
 
-static int st33zp24_i2c_request_resources(struct i2c_client *client,
-					  struct st33zp24_i2c_phy *phy)
+static int st33zp24_i2c_request_resources(struct i2c_client *client)
 {
+	struct tpm_chip *chip = i2c_get_clientdata(client);
+	struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
+	struct st33zp24_i2c_phy *phy = tpm_dev->phy_id;
 	struct st33zp24_platform_data *pdata;
 	int ret;
 
···
 		return -ENOMEM;
 
 	phy->client = client;
+
 	pdata = client->dev.platform_data;
 	if (!pdata && client->dev.of_node) {
-		ret = st33zp24_i2c_of_request_resources(phy);
+		ret = st33zp24_i2c_of_request_resources(client);
 		if (ret)
 			return ret;
 	} else if (pdata) {
-		ret = st33zp24_i2c_request_resources(client, phy);
+		ret = st33zp24_i2c_request_resources(client);
+		if (ret)
+			return ret;
+	} else if (ACPI_HANDLE(&client->dev)) {
+		ret = st33zp24_i2c_acpi_request_resources(client);
 		if (ret)
 			return ret;
 	}
···
 };
 MODULE_DEVICE_TABLE(i2c, st33zp24_i2c_id);
 
-#ifdef CONFIG_OF
 static const struct of_device_id of_st33zp24_i2c_match[] = {
 	{ .compatible = "st,st33zp24-i2c", },
 	{}
 };
 MODULE_DEVICE_TABLE(of, of_st33zp24_i2c_match);
-#endif
+
+static const struct acpi_device_id st33zp24_i2c_acpi_match[] = {
+	{"SMO3324"},
+	{}
+};
+MODULE_DEVICE_TABLE(acpi, st33zp24_i2c_acpi_match);
 
 static SIMPLE_DEV_PM_OPS(st33zp24_i2c_ops, st33zp24_pm_suspend,
 			 st33zp24_pm_resume);
···
 		.name = TPM_ST33_I2C,
 		.pm = &st33zp24_i2c_ops,
 		.of_match_table = of_match_ptr(of_st33zp24_i2c_match),
+		.acpi_match_table = ACPI_PTR(st33zp24_i2c_acpi_match),
 	},
 	.probe = st33zp24_i2c_probe,
 	.remove = st33zp24_i2c_remove,
+108 -76
drivers/char/tpm/st33zp24/spi.c
···
 /*
  * STMicroelectronics TPM SPI Linux driver for TPM ST33ZP24
- * Copyright (C) 2009 - 2015 STMicroelectronics
+ * Copyright (C) 2009 - 2016 STMicroelectronics
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
···
 #include <linux/module.h>
 #include <linux/spi/spi.h>
 #include <linux/gpio.h>
+#include <linux/gpio/consumer.h>
 #include <linux/of_irq.h>
 #include <linux/of_gpio.h>
+#include <linux/acpi.h>
 #include <linux/tpm.h>
 #include <linux/platform_data/st33zp24.h>
 
+#include "../tpm.h"
 #include "st33zp24.h"
 
 #define TPM_DATA_FIFO 0x24
···
 
 struct st33zp24_spi_phy {
 	struct spi_device *spi_device;
-	struct spi_transfer spi_xfer;
+
 	u8 tx_buf[ST33ZP24_SPI_BUFFER_SIZE];
 	u8 rx_buf[ST33ZP24_SPI_BUFFER_SIZE];
 
···
 static int st33zp24_spi_send(void *phy_id, u8 tpm_register, u8 *tpm_data,
 			     int tpm_size)
 {
-	u8 data = 0;
-	int total_length = 0, nbr_dummy_bytes = 0, ret = 0;
+	int total_length = 0, ret = 0;
 	struct st33zp24_spi_phy *phy = phy_id;
 	struct spi_device *dev = phy->spi_device;
-	u8 *tx_buf = (u8 *)phy->spi_xfer.tx_buf;
-	u8 *rx_buf = phy->spi_xfer.rx_buf;
+	struct spi_transfer spi_xfer = {
+		.tx_buf = phy->tx_buf,
+		.rx_buf = phy->rx_buf,
+	};
 
 	/* Pre-Header */
-	data = TPM_WRITE_DIRECTION | LOCALITY0;
-	memcpy(tx_buf + total_length, &data, sizeof(data));
-	total_length++;
-	data = tpm_register;
-	memcpy(tx_buf + total_length, &data, sizeof(data));
-	total_length++;
+	phy->tx_buf[total_length++] = TPM_WRITE_DIRECTION | LOCALITY0;
+	phy->tx_buf[total_length++] = tpm_register;
 
 	if (tpm_size > 0 && tpm_register == TPM_DATA_FIFO) {
-		tx_buf[total_length++] = tpm_size >> 8;
-		tx_buf[total_length++] = tpm_size;
+		phy->tx_buf[total_length++] = tpm_size >> 8;
+		phy->tx_buf[total_length++] = tpm_size;
 	}
 
-	memcpy(&tx_buf[total_length], tpm_data, tpm_size);
+	memcpy(&phy->tx_buf[total_length], tpm_data, tpm_size);
 	total_length += tpm_size;
 
-	nbr_dummy_bytes = phy->latency;
-	memset(&tx_buf[total_length], TPM_DUMMY_BYTE, nbr_dummy_bytes);
+	memset(&phy->tx_buf[total_length], TPM_DUMMY_BYTE, phy->latency);
 
-	phy->spi_xfer.len = total_length + nbr_dummy_bytes;
+	spi_xfer.len = total_length + phy->latency;
 
-	ret = spi_sync_transfer(dev, &phy->spi_xfer, 1);
+	ret = spi_sync_transfer(dev, &spi_xfer, 1);
 	if (ret == 0)
-		ret = rx_buf[total_length + nbr_dummy_bytes - 1];
+		ret = phy->rx_buf[total_length + phy->latency - 1];
 
 	return st33zp24_status_to_errno(ret);
 } /* st33zp24_spi_send() */
 
 /*
- * read8_recv
+ * st33zp24_spi_read8_recv
  * Recv byte from the TIS register according to the ST33ZP24 SPI protocol.
  * @param: phy_id, the phy description
  * @param: tpm_register, the tpm tis register where the data should be read
···
  * @param: tpm_size, tpm TPM response size to read.
  * @return: should be zero if success else a negative error code.
  */
-static int read8_reg(void *phy_id, u8 tpm_register, u8 *tpm_data, int tpm_size)
+static int st33zp24_spi_read8_reg(void *phy_id, u8 tpm_register, u8 *tpm_data,
+				  int tpm_size)
 {
-	u8 data = 0;
-	int total_length = 0, nbr_dummy_bytes, ret;
+	int total_length = 0, ret;
 	struct st33zp24_spi_phy *phy = phy_id;
 	struct spi_device *dev = phy->spi_device;
-	u8 *tx_buf = (u8 *)phy->spi_xfer.tx_buf;
-	u8 *rx_buf = phy->spi_xfer.rx_buf;
+	struct spi_transfer spi_xfer = {
+		.tx_buf = phy->tx_buf,
+		.rx_buf = phy->rx_buf,
+	};
 
 	/* Pre-Header */
-	data = LOCALITY0;
-	memcpy(tx_buf + total_length, &data, sizeof(data));
-	total_length++;
-	data = tpm_register;
-	memcpy(tx_buf + total_length, &data, sizeof(data));
-	total_length++;
+	phy->tx_buf[total_length++] = LOCALITY0;
+	phy->tx_buf[total_length++] = tpm_register;
 
-	nbr_dummy_bytes = phy->latency;
-	memset(&tx_buf[total_length], TPM_DUMMY_BYTE,
-	       nbr_dummy_bytes + tpm_size);
+	memset(&phy->tx_buf[total_length], TPM_DUMMY_BYTE,
+	       phy->latency + tpm_size);
 
-	phy->spi_xfer.len = total_length + nbr_dummy_bytes + tpm_size;
+	spi_xfer.len = total_length + phy->latency + tpm_size;
 
 	/* header + status byte + size of the data + status byte */
-	ret = spi_sync_transfer(dev, &phy->spi_xfer, 1);
+	ret = spi_sync_transfer(dev, &spi_xfer, 1);
 	if (tpm_size > 0 && ret == 0) {
-		ret = rx_buf[total_length + nbr_dummy_bytes - 1];
+		ret = phy->rx_buf[total_length + phy->latency - 1];
 
-		memcpy(tpm_data, rx_buf + total_length + nbr_dummy_bytes,
+		memcpy(tpm_data, phy->rx_buf + total_length + phy->latency,
 		       tpm_size);
 	}
 
 	return ret;
-} /* read8_reg() */
+} /* st33zp24_spi_read8_reg() */
 
 /*
  * st33zp24_spi_recv
···
 {
 	int ret;
 
-	ret =
read8_reg(phy_id, tpm_register, tpm_data, tpm_size); 206 + ret = st33zp24_spi_read8_reg(phy_id, tpm_register, tpm_data, tpm_size); 203 207 if (!st33zp24_status_to_errno(ret)) 204 208 return tpm_size; 205 209 return ret; 206 210 } /* st33zp24_spi_recv() */ 207 211 208 - static int evaluate_latency(void *phy_id) 212 + static int st33zp24_spi_evaluate_latency(void *phy_id) 209 213 { 210 214 struct st33zp24_spi_phy *phy = phy_id; 211 215 int latency = 1, status = 0; ··· 213 217 214 218 while (!status && latency < MAX_SPI_LATENCY) { 215 219 phy->latency = latency; 216 - status = read8_reg(phy_id, TPM_INTF_CAPABILITY, &data, 1); 220 + status = st33zp24_spi_read8_reg(phy_id, TPM_INTF_CAPABILITY, 221 + &data, 1); 217 222 latency++; 218 223 } 224 + if (status < 0) 225 + return status; 226 + if (latency == MAX_SPI_LATENCY) 227 + return -ENODEV; 228 + 219 229 return latency - 1; 220 230 } /* evaluate_latency() */ 221 231 ··· 230 228 .recv = st33zp24_spi_recv, 231 229 }; 232 230 233 - #ifdef CONFIG_OF 234 - static int tpm_stm_spi_of_request_resources(struct st33zp24_spi_phy *phy) 231 + static int st33zp24_spi_acpi_request_resources(struct spi_device *spi_dev) 235 232 { 233 + struct tpm_chip *chip = spi_get_drvdata(spi_dev); 234 + struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 235 + struct st33zp24_spi_phy *phy = tpm_dev->phy_id; 236 + struct gpio_desc *gpiod_lpcpd; 237 + struct device *dev = &spi_dev->dev; 238 + 239 + /* Get LPCPD GPIO from ACPI */ 240 + gpiod_lpcpd = devm_gpiod_get_index(dev, "TPM IO LPCPD", 1, 241 + GPIOD_OUT_HIGH); 242 + if (IS_ERR(gpiod_lpcpd)) { 243 + dev_err(dev, "Failed to retrieve lpcpd-gpios from acpi.\n"); 244 + phy->io_lpcpd = -1; 245 + /* 246 + * lpcpd pin is not specified. This is not an issue as 247 + * power management can be also managed by TPM specific 248 + * commands. So leave with a success status code. 
249 + */ 250 + return 0; 251 + } 252 + 253 + phy->io_lpcpd = desc_to_gpio(gpiod_lpcpd); 254 + 255 + return 0; 256 + } 257 + 258 + static int st33zp24_spi_of_request_resources(struct spi_device *spi_dev) 259 + { 260 + struct tpm_chip *chip = spi_get_drvdata(spi_dev); 261 + struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 262 + struct st33zp24_spi_phy *phy = tpm_dev->phy_id; 236 263 struct device_node *pp; 237 - struct spi_device *dev = phy->spi_device; 238 264 int gpio; 239 265 int ret; 240 266 241 - pp = dev->dev.of_node; 267 + pp = spi_dev->dev.of_node; 242 268 if (!pp) { 243 - dev_err(&dev->dev, "No platform data\n"); 269 + dev_err(&spi_dev->dev, "No platform data\n"); 244 270 return -ENODEV; 245 271 } 246 272 247 273 /* Get GPIO from device tree */ 248 274 gpio = of_get_named_gpio(pp, "lpcpd-gpios", 0); 249 275 if (gpio < 0) { 250 - dev_err(&dev->dev, 276 + dev_err(&spi_dev->dev, 251 277 "Failed to retrieve lpcpd-gpios from dts.\n"); 252 278 phy->io_lpcpd = -1; 253 279 /* ··· 286 256 return 0; 287 257 } 288 258 /* GPIO request and configuration */ 289 - ret = devm_gpio_request_one(&dev->dev, gpio, 259 + ret = devm_gpio_request_one(&spi_dev->dev, gpio, 290 260 GPIOF_OUT_INIT_HIGH, "TPM IO LPCPD"); 291 261 if (ret) { 292 - dev_err(&dev->dev, "Failed to request lpcpd pin\n"); 262 + dev_err(&spi_dev->dev, "Failed to request lpcpd pin\n"); 293 263 return -ENODEV; 294 264 } 295 265 phy->io_lpcpd = gpio; 296 266 297 267 return 0; 298 268 } 299 - #else 300 - static int tpm_stm_spi_of_request_resources(struct st33zp24_spi_phy *phy) 301 - { 302 - return -ENODEV; 303 - } 304 - #endif 305 269 306 - static int tpm_stm_spi_request_resources(struct spi_device *dev, 307 - struct st33zp24_spi_phy *phy) 270 + static int st33zp24_spi_request_resources(struct spi_device *dev) 308 271 { 272 + struct tpm_chip *chip = spi_get_drvdata(dev); 273 + struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 274 + struct st33zp24_spi_phy *phy = tpm_dev->phy_id; 309 275 struct 
st33zp24_platform_data *pdata; 310 276 int ret; 311 277 ··· 329 303 } 330 304 331 305 /* 332 - * tpm_st33_spi_probe initialize the TPM device 306 + * st33zp24_spi_probe initialize the TPM device 333 307 * @param: dev, the spi_device drescription (TPM SPI description). 334 308 * @return: 0 in case of success. 335 309 * or a negative value describing the error. 336 310 */ 337 - static int 338 - tpm_st33_spi_probe(struct spi_device *dev) 311 + static int st33zp24_spi_probe(struct spi_device *dev) 339 312 { 340 313 int ret; 341 314 struct st33zp24_platform_data *pdata; ··· 353 328 return -ENOMEM; 354 329 355 330 phy->spi_device = dev; 331 + 356 332 pdata = dev->dev.platform_data; 357 333 if (!pdata && dev->dev.of_node) { 358 - ret = tpm_stm_spi_of_request_resources(phy); 334 + ret = st33zp24_spi_of_request_resources(dev); 359 335 if (ret) 360 336 return ret; 361 337 } else if (pdata) { 362 - ret = tpm_stm_spi_request_resources(dev, phy); 338 + ret = st33zp24_spi_request_resources(dev); 339 + if (ret) 340 + return ret; 341 + } else if (ACPI_HANDLE(&dev->dev)) { 342 + ret = st33zp24_spi_acpi_request_resources(dev); 363 343 if (ret) 364 344 return ret; 365 345 } 366 346 367 - phy->spi_xfer.tx_buf = phy->tx_buf; 368 - phy->spi_xfer.rx_buf = phy->rx_buf; 369 - 370 - phy->latency = evaluate_latency(phy); 347 + phy->latency = st33zp24_spi_evaluate_latency(phy); 371 348 if (phy->latency <= 0) 372 349 return -ENODEV; 373 350 ··· 378 351 } 379 352 380 353 /* 381 - * tpm_st33_spi_remove remove the TPM device 354 + * st33zp24_spi_remove remove the TPM device 382 355 * @param: client, the spi_device drescription (TPM SPI description). 383 356 * @return: 0 in case of success. 
384 357 */ 385 - static int tpm_st33_spi_remove(struct spi_device *dev) 358 + static int st33zp24_spi_remove(struct spi_device *dev) 386 359 { 387 360 struct tpm_chip *chip = spi_get_drvdata(dev); 388 361 ··· 395 368 }; 396 369 MODULE_DEVICE_TABLE(spi, st33zp24_spi_id); 397 370 398 - #ifdef CONFIG_OF 399 371 static const struct of_device_id of_st33zp24_spi_match[] = { 400 372 { .compatible = "st,st33zp24-spi", }, 401 373 {} 402 374 }; 403 375 MODULE_DEVICE_TABLE(of, of_st33zp24_spi_match); 404 - #endif 376 + 377 + static const struct acpi_device_id st33zp24_spi_acpi_match[] = { 378 + {"SMO3324"}, 379 + {} 380 + }; 381 + MODULE_DEVICE_TABLE(acpi, st33zp24_spi_acpi_match); 405 382 406 383 static SIMPLE_DEV_PM_OPS(st33zp24_spi_ops, st33zp24_pm_suspend, 407 384 st33zp24_pm_resume); 408 385 409 - static struct spi_driver tpm_st33_spi_driver = { 386 + static struct spi_driver st33zp24_spi_driver = { 410 387 .driver = { 411 388 .name = TPM_ST33_SPI, 412 389 .pm = &st33zp24_spi_ops, 413 390 .of_match_table = of_match_ptr(of_st33zp24_spi_match), 391 + .acpi_match_table = ACPI_PTR(st33zp24_spi_acpi_match), 414 392 }, 415 - .probe = tpm_st33_spi_probe, 416 - .remove = tpm_st33_spi_remove, 393 + .probe = st33zp24_spi_probe, 394 + .remove = st33zp24_spi_remove, 417 395 .id_table = st33zp24_spi_id, 418 396 }; 419 397 420 - module_spi_driver(tpm_st33_spi_driver); 398 + module_spi_driver(st33zp24_spi_driver); 421 399 422 400 MODULE_AUTHOR("TPM support (TPMsupport@list.st.com)"); 423 401 MODULE_DESCRIPTION("STM TPM 1.2 SPI ST33 Driver");
+49 -80
drivers/char/tpm/st33zp24/st33zp24.c
··· 1 1 /* 2 2 * STMicroelectronics TPM Linux driver for TPM ST33ZP24 3 - * Copyright (C) 2009 - 2015 STMicroelectronics 3 + * Copyright (C) 2009 - 2016 STMicroelectronics 4 4 * 5 5 * This program is free software; you can redistribute it and/or modify 6 6 * it under the terms of the GNU General Public License as published by ··· 73 73 TIS_LONG_TIMEOUT = 2000, 74 74 }; 75 75 76 - struct st33zp24_dev { 77 - struct tpm_chip *chip; 78 - void *phy_id; 79 - const struct st33zp24_phy_ops *ops; 80 - u32 intrs; 81 - int io_lpcpd; 82 - }; 83 - 84 76 /* 85 77 * clear_interruption clear the pending interrupt. 86 78 * @param: tpm_dev, the tpm device device. ··· 94 102 */ 95 103 static void st33zp24_cancel(struct tpm_chip *chip) 96 104 { 97 - struct st33zp24_dev *tpm_dev; 105 + struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 98 106 u8 data; 99 - 100 - tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip); 101 107 102 108 data = TPM_STS_COMMAND_READY; 103 109 tpm_dev->ops->send(tpm_dev->phy_id, TPM_STS, &data, 1); ··· 108 118 */ 109 119 static u8 st33zp24_status(struct tpm_chip *chip) 110 120 { 111 - struct st33zp24_dev *tpm_dev; 121 + struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 112 122 u8 data; 113 - 114 - tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip); 115 123 116 124 tpm_dev->ops->recv(tpm_dev->phy_id, TPM_STS, &data, 1); 117 125 return data; ··· 122 134 */ 123 135 static int check_locality(struct tpm_chip *chip) 124 136 { 125 - struct st33zp24_dev *tpm_dev; 137 + struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 126 138 u8 data; 127 139 u8 status; 128 - 129 - tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip); 130 140 131 141 status = tpm_dev->ops->recv(tpm_dev->phy_id, TPM_ACCESS, &data, 1); 132 142 if (status && (data & 133 143 (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) == 134 144 (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) 135 - return chip->vendor.locality; 145 + return tpm_dev->locality; 136 146 137 147 return -EACCES; 138 148 
} /* check_locality() */ ··· 142 156 */ 143 157 static int request_locality(struct tpm_chip *chip) 144 158 { 159 + struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 145 160 unsigned long stop; 146 161 long ret; 147 - struct st33zp24_dev *tpm_dev; 148 162 u8 data; 149 163 150 - if (check_locality(chip) == chip->vendor.locality) 151 - return chip->vendor.locality; 152 - 153 - tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip); 164 + if (check_locality(chip) == tpm_dev->locality) 165 + return tpm_dev->locality; 154 166 155 167 data = TPM_ACCESS_REQUEST_USE; 156 168 ret = tpm_dev->ops->send(tpm_dev->phy_id, TPM_ACCESS, &data, 1); 157 169 if (ret < 0) 158 170 return ret; 159 171 160 - stop = jiffies + chip->vendor.timeout_a; 172 + stop = jiffies + chip->timeout_a; 161 173 162 174 /* Request locality is usually effective after the request */ 163 175 do { 164 176 if (check_locality(chip) >= 0) 165 - return chip->vendor.locality; 177 + return tpm_dev->locality; 166 178 msleep(TPM_TIMEOUT); 167 179 } while (time_before(jiffies, stop)); 168 180 ··· 174 190 */ 175 191 static void release_locality(struct tpm_chip *chip) 176 192 { 177 - struct st33zp24_dev *tpm_dev; 193 + struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 178 194 u8 data; 179 195 180 - tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip); 181 196 data = TPM_ACCESS_ACTIVE_LOCALITY; 182 197 183 198 tpm_dev->ops->send(tpm_dev->phy_id, TPM_ACCESS, &data, 1); ··· 189 206 */ 190 207 static int get_burstcount(struct tpm_chip *chip) 191 208 { 209 + struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 192 210 unsigned long stop; 193 211 int burstcnt, status; 194 - u8 tpm_reg, temp; 195 - struct st33zp24_dev *tpm_dev; 212 + u8 temp; 196 213 197 - tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip); 198 - 199 - stop = jiffies + chip->vendor.timeout_d; 214 + stop = jiffies + chip->timeout_d; 200 215 do { 201 - tpm_reg = TPM_STS + 1; 202 - status = tpm_dev->ops->recv(tpm_dev->phy_id, tpm_reg, &temp, 1); 216 
+ status = tpm_dev->ops->recv(tpm_dev->phy_id, TPM_STS + 1, 217 + &temp, 1); 203 218 if (status < 0) 204 219 return -EBUSY; 205 220 206 - tpm_reg = TPM_STS + 2; 207 221 burstcnt = temp; 208 - status = tpm_dev->ops->recv(tpm_dev->phy_id, tpm_reg, &temp, 1); 222 + status = tpm_dev->ops->recv(tpm_dev->phy_id, TPM_STS + 2, 223 + &temp, 1); 209 224 if (status < 0) 210 225 return -EBUSY; 211 226 ··· 252 271 static int wait_for_stat(struct tpm_chip *chip, u8 mask, unsigned long timeout, 253 272 wait_queue_head_t *queue, bool check_cancel) 254 273 { 274 + struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 255 275 unsigned long stop; 256 276 int ret = 0; 257 277 bool canceled = false; 258 278 bool condition; 259 279 u32 cur_intrs; 260 280 u8 status; 261 - struct st33zp24_dev *tpm_dev; 262 - 263 - tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip); 264 281 265 282 /* check current status */ 266 283 status = st33zp24_status(chip); ··· 267 288 268 289 stop = jiffies + timeout; 269 290 270 - if (chip->vendor.irq) { 291 + if (chip->flags & TPM_CHIP_FLAG_IRQ) { 271 292 cur_intrs = tpm_dev->intrs; 272 293 clear_interruption(tpm_dev); 273 - enable_irq(chip->vendor.irq); 294 + enable_irq(tpm_dev->irq); 274 295 275 296 do { 276 297 if (ret == -ERESTARTSYS && freezing(current)) ··· 293 314 } 294 315 } while (ret == -ERESTARTSYS && freezing(current)); 295 316 296 - disable_irq_nosync(chip->vendor.irq); 317 + disable_irq_nosync(tpm_dev->irq); 297 318 298 319 } else { 299 320 do { ··· 316 337 */ 317 338 static int recv_data(struct tpm_chip *chip, u8 *buf, size_t count) 318 339 { 340 + struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 319 341 int size = 0, burstcnt, len, ret; 320 - struct st33zp24_dev *tpm_dev; 321 - 322 - tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip); 323 342 324 343 while (size < count && 325 344 wait_for_stat(chip, 326 345 TPM_STS_DATA_AVAIL | TPM_STS_VALID, 327 - chip->vendor.timeout_c, 328 - &chip->vendor.read_queue, true) == 0) { 346 + 
chip->timeout_c, 347 + &tpm_dev->read_queue, true) == 0) { 329 348 burstcnt = get_burstcount(chip); 330 349 if (burstcnt < 0) 331 350 return burstcnt; ··· 347 370 static irqreturn_t tpm_ioserirq_handler(int irq, void *dev_id) 348 371 { 349 372 struct tpm_chip *chip = dev_id; 350 - struct st33zp24_dev *tpm_dev; 351 - 352 - tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip); 373 + struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 353 374 354 375 tpm_dev->intrs++; 355 - wake_up_interruptible(&chip->vendor.read_queue); 356 - disable_irq_nosync(chip->vendor.irq); 376 + wake_up_interruptible(&tpm_dev->read_queue); 377 + disable_irq_nosync(tpm_dev->irq); 357 378 358 379 return IRQ_HANDLED; 359 380 } /* tpm_ioserirq_handler() */ ··· 368 393 static int st33zp24_send(struct tpm_chip *chip, unsigned char *buf, 369 394 size_t len) 370 395 { 396 + struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 371 397 u32 status, i, size, ordinal; 372 398 int burstcnt = 0; 373 399 int ret; 374 400 u8 data; 375 - struct st33zp24_dev *tpm_dev; 376 401 377 402 if (!chip) 378 403 return -EBUSY; 379 404 if (len < TPM_HEADER_SIZE) 380 405 return -EBUSY; 381 - 382 - tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip); 383 406 384 407 ret = request_locality(chip); 385 408 if (ret < 0) ··· 387 414 if ((status & TPM_STS_COMMAND_READY) == 0) { 388 415 st33zp24_cancel(chip); 389 416 if (wait_for_stat 390 - (chip, TPM_STS_COMMAND_READY, chip->vendor.timeout_b, 391 - &chip->vendor.read_queue, false) < 0) { 417 + (chip, TPM_STS_COMMAND_READY, chip->timeout_b, 418 + &tpm_dev->read_queue, false) < 0) { 392 419 ret = -ETIME; 393 420 goto out_err; 394 421 } ··· 429 456 if (ret < 0) 430 457 goto out_err; 431 458 432 - if (chip->vendor.irq) { 459 + if (chip->flags & TPM_CHIP_FLAG_IRQ) { 433 460 ordinal = be32_to_cpu(*((__be32 *) (buf + 6))); 434 461 435 462 ret = wait_for_stat(chip, TPM_STS_DATA_AVAIL | TPM_STS_VALID, 436 463 tpm_calc_ordinal_duration(chip, ordinal), 437 - 
&chip->vendor.read_queue, false); 464 + &tpm_dev->read_queue, false); 438 465 if (ret < 0) 439 466 goto out_err; 440 467 } ··· 505 532 } 506 533 507 534 static const struct tpm_class_ops st33zp24_tpm = { 535 + .flags = TPM_OPS_AUTO_STARTUP, 508 536 .send = st33zp24_send, 509 537 .recv = st33zp24_recv, 510 538 .cancel = st33zp24_cancel, ··· 539 565 if (!tpm_dev) 540 566 return -ENOMEM; 541 567 542 - TPM_VPRIV(chip) = tpm_dev; 543 568 tpm_dev->phy_id = phy_id; 544 569 tpm_dev->ops = ops; 570 + dev_set_drvdata(&chip->dev, tpm_dev); 545 571 546 - chip->vendor.timeout_a = msecs_to_jiffies(TIS_SHORT_TIMEOUT); 547 - chip->vendor.timeout_b = msecs_to_jiffies(TIS_LONG_TIMEOUT); 548 - chip->vendor.timeout_c = msecs_to_jiffies(TIS_SHORT_TIMEOUT); 549 - chip->vendor.timeout_d = msecs_to_jiffies(TIS_SHORT_TIMEOUT); 572 + chip->timeout_a = msecs_to_jiffies(TIS_SHORT_TIMEOUT); 573 + chip->timeout_b = msecs_to_jiffies(TIS_LONG_TIMEOUT); 574 + chip->timeout_c = msecs_to_jiffies(TIS_SHORT_TIMEOUT); 575 + chip->timeout_d = msecs_to_jiffies(TIS_SHORT_TIMEOUT); 550 576 551 - chip->vendor.locality = LOCALITY0; 577 + tpm_dev->locality = LOCALITY0; 552 578 553 579 if (irq) { 554 580 /* INTERRUPT Setup */ 555 - init_waitqueue_head(&chip->vendor.read_queue); 581 + init_waitqueue_head(&tpm_dev->read_queue); 556 582 tpm_dev->intrs = 0; 557 583 558 584 if (request_locality(chip) != LOCALITY0) { ··· 585 611 if (ret < 0) 586 612 goto _tpm_clean_answer; 587 613 588 - chip->vendor.irq = irq; 614 + tpm_dev->irq = irq; 615 + chip->flags |= TPM_CHIP_FLAG_IRQ; 589 616 590 - disable_irq_nosync(chip->vendor.irq); 617 + disable_irq_nosync(tpm_dev->irq); 591 618 592 619 tpm_gen_interrupt(chip); 593 620 } 594 - 595 - tpm_get_timeouts(chip); 596 - tpm_do_selftest(chip); 597 621 598 622 return tpm_chip_register(chip); 599 623 _tpm_clean_answer: ··· 622 650 int st33zp24_pm_suspend(struct device *dev) 623 651 { 624 652 struct tpm_chip *chip = dev_get_drvdata(dev); 625 - struct st33zp24_dev *tpm_dev; 626 - int 
ret = 0; 653 + struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 627 654 628 - tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip); 655 + int ret = 0; 629 656 630 657 if (gpio_is_valid(tpm_dev->io_lpcpd)) 631 658 gpio_set_value(tpm_dev->io_lpcpd, 0); ··· 643 672 int st33zp24_pm_resume(struct device *dev) 644 673 { 645 674 struct tpm_chip *chip = dev_get_drvdata(dev); 646 - struct st33zp24_dev *tpm_dev; 675 + struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); 647 676 int ret = 0; 648 - 649 - tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip); 650 677 651 678 if (gpio_is_valid(tpm_dev->io_lpcpd)) { 652 679 gpio_set_value(tpm_dev->io_lpcpd, 1); 653 680 ret = wait_for_stat(chip, 654 - TPM_STS_VALID, chip->vendor.timeout_b, 655 - &chip->vendor.read_queue, false); 681 + TPM_STS_VALID, chip->timeout_b, 682 + &tpm_dev->read_queue, false); 656 683 } else { 657 684 ret = tpm_pm_resume(dev); 658 685 if (!ret)
+13 -1
drivers/char/tpm/st33zp24/st33zp24.h
··· 1 1 /* 2 2 * STMicroelectronics TPM Linux driver for TPM ST33ZP24 3 - * Copyright (C) 2009 - 2015 STMicroelectronics 3 + * Copyright (C) 2009 - 2016 STMicroelectronics 4 4 * 5 5 * This program is free software; you can redistribute it and/or modify it 6 6 * under the terms and conditions of the GNU General Public License, ··· 20 20 21 21 #define TPM_WRITE_DIRECTION 0x80 22 22 #define TPM_BUFSIZE 2048 23 + 24 + struct st33zp24_dev { 25 + struct tpm_chip *chip; 26 + void *phy_id; 27 + const struct st33zp24_phy_ops *ops; 28 + int locality; 29 + int irq; 30 + u32 intrs; 31 + int io_lpcpd; 32 + wait_queue_head_t read_queue; 33 + }; 34 + 23 35 24 36 struct st33zp24_phy_ops { 25 37 int (*send)(void *phy_id, u8 tpm_register, u8 *tpm_data, int tpm_size);
+213 -86
drivers/char/tpm/tpm-chip.c
··· 29 29 #include "tpm.h" 30 30 #include "tpm_eventlog.h" 31 31 32 - static DECLARE_BITMAP(dev_mask, TPM_NUM_DEVICES); 33 - static LIST_HEAD(tpm_chip_list); 34 - static DEFINE_SPINLOCK(driver_lock); 32 + DEFINE_IDR(dev_nums_idr); 33 + static DEFINE_MUTEX(idr_lock); 35 34 36 35 struct class *tpm_class; 37 36 dev_t tpm_devt; 38 37 39 - /* 40 - * tpm_chip_find_get - return tpm_chip for a given chip number 41 - * @chip_num the device number for the chip 38 + /** 39 + * tpm_try_get_ops() - Get a ref to the tpm_chip 40 + * @chip: Chip to ref 41 + * 42 + * The caller must already have some kind of locking to ensure that chip is 43 + * valid. This function will lock the chip so that the ops member can be 44 + * accessed safely. The locking prevents tpm_chip_unregister from 45 + * completing, so it should not be held for long periods. 46 + * 47 + * Returns -ERRNO if the chip could not be got. 42 48 */ 49 + int tpm_try_get_ops(struct tpm_chip *chip) 50 + { 51 + int rc = -EIO; 52 + 53 + get_device(&chip->dev); 54 + 55 + down_read(&chip->ops_sem); 56 + if (!chip->ops) 57 + goto out_lock; 58 + 59 + return 0; 60 + out_lock: 61 + up_read(&chip->ops_sem); 62 + put_device(&chip->dev); 63 + return rc; 64 + } 65 + EXPORT_SYMBOL_GPL(tpm_try_get_ops); 66 + 67 + /** 68 + * tpm_put_ops() - Release a ref to the tpm_chip 69 + * @chip: Chip to put 70 + * 71 + * This is the opposite pair to tpm_try_get_ops(). After this returns chip may 72 + * be kfree'd. 
73 + */ 74 + void tpm_put_ops(struct tpm_chip *chip) 75 + { 76 + up_read(&chip->ops_sem); 77 + put_device(&chip->dev); 78 + } 79 + EXPORT_SYMBOL_GPL(tpm_put_ops); 80 + 81 + /** 82 + * tpm_chip_find_get() - return tpm_chip for a given chip number 83 + * @chip_num: id to find 84 + * 85 + * The return'd chip has been tpm_try_get_ops'd and must be released via 86 + * tpm_put_ops 87 + */ 43 88 struct tpm_chip *tpm_chip_find_get(int chip_num) 44 89 { 45 - struct tpm_chip *pos, *chip = NULL; 90 + struct tpm_chip *chip, *res = NULL; 91 + int chip_prev; 46 92 47 - rcu_read_lock(); 48 - list_for_each_entry_rcu(pos, &tpm_chip_list, list) { 49 - if (chip_num != TPM_ANY_NUM && chip_num != pos->dev_num) 50 - continue; 93 + mutex_lock(&idr_lock); 51 94 52 - if (try_module_get(pos->pdev->driver->owner)) { 53 - chip = pos; 54 - break; 55 - } 95 + if (chip_num == TPM_ANY_NUM) { 96 + chip_num = 0; 97 + do { 98 + chip_prev = chip_num; 99 + chip = idr_get_next(&dev_nums_idr, &chip_num); 100 + if (chip && !tpm_try_get_ops(chip)) { 101 + res = chip; 102 + break; 103 + } 104 + } while (chip_prev != chip_num); 105 + } else { 106 + chip = idr_find_slowpath(&dev_nums_idr, chip_num); 107 + if (chip && !tpm_try_get_ops(chip)) 108 + res = chip; 56 109 } 57 - rcu_read_unlock(); 58 - return chip; 110 + 111 + mutex_unlock(&idr_lock); 112 + 113 + return res; 59 114 } 60 115 61 116 /** ··· 123 68 { 124 69 struct tpm_chip *chip = container_of(dev, struct tpm_chip, dev); 125 70 126 - spin_lock(&driver_lock); 127 - clear_bit(chip->dev_num, dev_mask); 128 - spin_unlock(&driver_lock); 71 + mutex_lock(&idr_lock); 72 + idr_remove(&dev_nums_idr, chip->dev_num); 73 + mutex_unlock(&idr_lock); 74 + 129 75 kfree(chip); 130 76 } 131 77 132 78 /** 133 - * tpmm_chip_alloc() - allocate a new struct tpm_chip instance 134 - * @dev: device to which the chip is associated 79 + * tpm_chip_alloc() - allocate a new struct tpm_chip instance 80 + * @pdev: device to which the chip is associated 81 + * At this point pdev mst 
be initialized, but does not have to 82 + * be registered 135 83 * @ops: struct tpm_class_ops instance 136 84 * 137 85 * Allocates a new struct tpm_chip instance and assigns a free 138 - * device number for it. Caller does not have to worry about 139 - * freeing the allocated resources. When the devices is removed 140 - * devres calls tpmm_chip_remove() to do the job. 86 + * device number for it. Must be paired with put_device(&chip->dev). 141 87 */ 142 - struct tpm_chip *tpmm_chip_alloc(struct device *dev, 143 - const struct tpm_class_ops *ops) 88 + struct tpm_chip *tpm_chip_alloc(struct device *dev, 89 + const struct tpm_class_ops *ops) 144 90 { 145 91 struct tpm_chip *chip; 146 92 int rc; ··· 151 95 return ERR_PTR(-ENOMEM); 152 96 153 97 mutex_init(&chip->tpm_mutex); 154 - INIT_LIST_HEAD(&chip->list); 98 + init_rwsem(&chip->ops_sem); 155 99 156 100 chip->ops = ops; 157 101 158 - spin_lock(&driver_lock); 159 - chip->dev_num = find_first_zero_bit(dev_mask, TPM_NUM_DEVICES); 160 - spin_unlock(&driver_lock); 161 - 162 - if (chip->dev_num >= TPM_NUM_DEVICES) { 102 + mutex_lock(&idr_lock); 103 + rc = idr_alloc(&dev_nums_idr, NULL, 0, TPM_NUM_DEVICES, GFP_KERNEL); 104 + mutex_unlock(&idr_lock); 105 + if (rc < 0) { 163 106 dev_err(dev, "No available tpm device numbers\n"); 164 107 kfree(chip); 165 - return ERR_PTR(-ENOMEM); 108 + return ERR_PTR(rc); 166 109 } 110 + chip->dev_num = rc; 167 111 168 - set_bit(chip->dev_num, dev_mask); 169 - 170 - scnprintf(chip->devname, sizeof(chip->devname), "tpm%d", chip->dev_num); 171 - 172 - chip->pdev = dev; 173 - 174 - dev_set_drvdata(dev, chip); 112 + device_initialize(&chip->dev); 175 113 176 114 chip->dev.class = tpm_class; 177 115 chip->dev.release = tpm_dev_release; 178 - chip->dev.parent = chip->pdev; 179 - #ifdef CONFIG_ACPI 116 + chip->dev.parent = dev; 180 117 chip->dev.groups = chip->groups; 181 - #endif 182 118 183 119 if (chip->dev_num == 0) 184 120 chip->dev.devt = MKDEV(MISC_MAJOR, TPM_MINOR); 185 121 else 186 122 
chip->dev.devt = MKDEV(MAJOR(tpm_devt), chip->dev_num); 187 123 188 - dev_set_name(&chip->dev, "%s", chip->devname); 124 + rc = dev_set_name(&chip->dev, "tpm%d", chip->dev_num); 125 + if (rc) 126 + goto out; 189 127 190 - device_initialize(&chip->dev); 128 + if (!dev) 129 + chip->flags |= TPM_CHIP_FLAG_VIRTUAL; 191 130 192 131 cdev_init(&chip->cdev, &tpm_fops); 193 - chip->cdev.owner = chip->pdev->driver->owner; 132 + chip->cdev.owner = THIS_MODULE; 194 133 chip->cdev.kobj.parent = &chip->dev.kobj; 195 134 196 - rc = devm_add_action(dev, (void (*)(void *)) put_device, &chip->dev); 197 - if (rc) { 198 - put_device(&chip->dev); 135 + return chip; 136 + 137 + out: 138 + put_device(&chip->dev); 139 + return ERR_PTR(rc); 140 + } 141 + EXPORT_SYMBOL_GPL(tpm_chip_alloc); 142 + 143 + /** 144 + * tpmm_chip_alloc() - allocate a new struct tpm_chip instance 145 + * @pdev: parent device to which the chip is associated 146 + * @ops: struct tpm_class_ops instance 147 + * 148 + * Same as tpm_chip_alloc except devm is used to do the put_device 149 + */ 150 + struct tpm_chip *tpmm_chip_alloc(struct device *pdev, 151 + const struct tpm_class_ops *ops) 152 + { 153 + struct tpm_chip *chip; 154 + int rc; 155 + 156 + chip = tpm_chip_alloc(pdev, ops); 157 + if (IS_ERR(chip)) 158 + return chip; 159 + 160 + rc = devm_add_action_or_reset(pdev, 161 + (void (*)(void *)) put_device, 162 + &chip->dev); 163 + if (rc) 199 164 return ERR_PTR(rc); 200 - } 165 + 166 + dev_set_drvdata(pdev, chip); 201 167 202 168 return chip; 203 169 } ··· 233 155 if (rc) { 234 156 dev_err(&chip->dev, 235 157 "unable to cdev_add() %s, major %d, minor %d, err=%d\n", 236 - chip->devname, MAJOR(chip->dev.devt), 158 + dev_name(&chip->dev), MAJOR(chip->dev.devt), 237 159 MINOR(chip->dev.devt), rc); 238 160 239 161 return rc; ··· 243 165 if (rc) { 244 166 dev_err(&chip->dev, 245 167 "unable to device_register() %s, major %d, minor %d, err=%d\n", 246 - chip->devname, MAJOR(chip->dev.devt), 168 + dev_name(&chip->dev), 
MAJOR(chip->dev.devt), 247 169 MINOR(chip->dev.devt), rc); 248 170 249 171 cdev_del(&chip->cdev); 250 172 return rc; 251 173 } 174 + 175 + /* Make the chip available. */ 176 + mutex_lock(&idr_lock); 177 + idr_replace(&dev_nums_idr, chip, chip->dev_num); 178 + mutex_unlock(&idr_lock); 252 179 253 180 return rc; 254 181 } ··· 262 179 { 263 180 cdev_del(&chip->cdev); 264 181 device_del(&chip->dev); 182 + 183 + /* Make the chip unavailable. */ 184 + mutex_lock(&idr_lock); 185 + idr_replace(&dev_nums_idr, NULL, chip->dev_num); 186 + mutex_unlock(&idr_lock); 187 + 188 + /* Make the driver uncallable. */ 189 + down_write(&chip->ops_sem); 190 + if (chip->flags & TPM_CHIP_FLAG_TPM2) 191 + tpm2_shutdown(chip, TPM2_SU_CLEAR); 192 + chip->ops = NULL; 193 + up_write(&chip->ops_sem); 265 194 } 266 195 267 196 static int tpm1_chip_register(struct tpm_chip *chip) 268 197 { 269 - int rc; 270 - 271 198 if (chip->flags & TPM_CHIP_FLAG_TPM2) 272 199 return 0; 273 200 274 - rc = tpm_sysfs_add_device(chip); 275 - if (rc) 276 - return rc; 201 + tpm_sysfs_add_device(chip); 277 202 278 - chip->bios_dir = tpm_bios_log_setup(chip->devname); 203 + chip->bios_dir = tpm_bios_log_setup(dev_name(&chip->dev)); 279 204 280 205 return 0; 281 206 } ··· 295 204 296 205 if (chip->bios_dir) 297 206 tpm_bios_log_teardown(chip->bios_dir); 298 - 299 - tpm_sysfs_del_device(chip); 300 207 } 301 208 209 + static void tpm_del_legacy_sysfs(struct tpm_chip *chip) 210 + { 211 + struct attribute **i; 212 + 213 + if (chip->flags & (TPM_CHIP_FLAG_TPM2 | TPM_CHIP_FLAG_VIRTUAL)) 214 + return; 215 + 216 + sysfs_remove_link(&chip->dev.parent->kobj, "ppi"); 217 + 218 + for (i = chip->groups[0]->attrs; *i != NULL; ++i) 219 + sysfs_remove_link(&chip->dev.parent->kobj, (*i)->name); 220 + } 221 + 222 + /* For compatibility with legacy sysfs paths we provide symlinks from the 223 + * parent dev directory to selected names within the tpm chip directory. Old 224 + * kernel versions created these files directly under the parent. 
225 + */ 226 + static int tpm_add_legacy_sysfs(struct tpm_chip *chip) 227 + { 228 + struct attribute **i; 229 + int rc; 230 + 231 + if (chip->flags & (TPM_CHIP_FLAG_TPM2 | TPM_CHIP_FLAG_VIRTUAL)) 232 + return 0; 233 + 234 + rc = __compat_only_sysfs_link_entry_to_kobj( 235 + &chip->dev.parent->kobj, &chip->dev.kobj, "ppi"); 236 + if (rc && rc != -ENOENT) 237 + return rc; 238 + 239 + /* All the names from tpm-sysfs */ 240 + for (i = chip->groups[0]->attrs; *i != NULL; ++i) { 241 + rc = __compat_only_sysfs_link_entry_to_kobj( 242 + &chip->dev.parent->kobj, &chip->dev.kobj, (*i)->name); 243 + if (rc) { 244 + tpm_del_legacy_sysfs(chip); 245 + return rc; 246 + } 247 + } 248 + 249 + return 0; 250 + } 302 251 /* 303 252 * tpm_chip_register() - create a character device for the TPM chip 304 253 * @chip: TPM chip to use. ··· 354 223 { 355 224 int rc; 356 225 226 + if (chip->ops->flags & TPM_OPS_AUTO_STARTUP) { 227 + if (chip->flags & TPM_CHIP_FLAG_TPM2) 228 + rc = tpm2_auto_startup(chip); 229 + else 230 + rc = tpm1_auto_startup(chip); 231 + if (rc) 232 + return rc; 233 + } 234 + 357 235 rc = tpm1_chip_register(chip); 358 236 if (rc) 359 237 return rc; ··· 370 230 tpm_add_ppi(chip); 371 231 372 232 rc = tpm_add_char_device(chip); 373 - if (rc) 374 - goto out_err; 375 - 376 - /* Make the chip available. 
*/ 377 - spin_lock(&driver_lock); 378 - list_add_tail_rcu(&chip->list, &tpm_chip_list); 379 - spin_unlock(&driver_lock); 233 + if (rc) { 234 + tpm1_chip_unregister(chip); 235 + return rc; 236 + } 380 237 381 238 chip->flags |= TPM_CHIP_FLAG_REGISTERED; 382 239 383 - if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) { 384 - rc = __compat_only_sysfs_link_entry_to_kobj(&chip->pdev->kobj, 385 - &chip->dev.kobj, 386 - "ppi"); 387 - if (rc && rc != -ENOENT) { 388 - tpm_chip_unregister(chip); 389 - return rc; 390 - } 240 + rc = tpm_add_legacy_sysfs(chip); 241 + if (rc) { 242 + tpm_chip_unregister(chip); 243 + return rc; 391 244 } 392 245 393 246 return 0; 394 - out_err: 395 - tpm1_chip_unregister(chip); 396 - return rc; 397 247 } 398 248 EXPORT_SYMBOL_GPL(tpm_chip_register); 399 249 ··· 394 264 * Takes the chip first away from the list of available TPM chips and then 395 265 * cleans up all the resources reserved by tpm_chip_register(). 396 266 * 267 + * Once this function returns the driver call backs in 'op's will not be 268 + * running and will no longer start. 269 + * 397 270 * NOTE: This function should be only called before deinitializing chip 398 271 * resources. 399 272 */ ··· 405 272 if (!(chip->flags & TPM_CHIP_FLAG_REGISTERED)) 406 273 return; 407 274 408 - spin_lock(&driver_lock); 409 - list_del_rcu(&chip->list); 410 - spin_unlock(&driver_lock); 411 - synchronize_rcu(); 412 - 413 - if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) 414 - sysfs_remove_link(&chip->pdev->kobj, "ppi"); 275 + tpm_del_legacy_sysfs(chip); 415 276 416 277 tpm1_chip_unregister(chip); 417 278 tpm_del_char_device(chip);
+11 -4
drivers/char/tpm/tpm-dev.c
···
      * by the check of is_open variable, which is protected
      * by driver_lock. */
     if (test_and_set_bit(0, &chip->is_open)) {
-        dev_dbg(chip->pdev, "Another process owns this TPM\n");
+        dev_dbg(&chip->dev, "Another process owns this TPM\n");
         return -EBUSY;
     }

···
     INIT_WORK(&priv->work, timeout_work);

     file->private_data = priv;
-    get_device(chip->pdev);
     return 0;
 }

···
         return -EFAULT;
     }

-    /* atomic tpm command send and result receive */
+    /* atomic tpm command send and result receive. We only hold the ops
+     * lock during this period so that the tpm can be unregistered even if
+     * the char dev is held open.
+     */
+    if (tpm_try_get_ops(priv->chip)) {
+        mutex_unlock(&priv->buffer_mutex);
+        return -EPIPE;
+    }
     out_size = tpm_transmit(priv->chip, priv->data_buffer,
                             sizeof(priv->data_buffer));
+
+    tpm_put_ops(priv->chip);
     if (out_size < 0) {
         mutex_unlock(&priv->buffer_mutex);
         return out_size;
···
     file->private_data = NULL;
     atomic_set(&priv->data_pending, 0);
     clear_bit(0, &priv->chip->is_open);
-    put_device(priv->chip->pdev);
     kfree(priv);
     return 0;
 }
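The tpm_try_get_ops()/tpm_put_ops() pair added to tpm-dev.c lets the chip be unregistered even while the character device is held open: a write fails with -EPIPE once ops is gone. A single-threaded userspace sketch of that contract (the kernel actually uses a rw_semaphore, chip->ops_sem; every name below is a stand-in):

```c
#include <assert.h>
#include <stddef.h>

struct fake_chip {
    const void *ops;    /* NULL once the chip is unregistered */
    int users;          /* callers currently inside a callback */
};

/* Callers may touch driver callbacks only between a successful
 * try-get and the matching put. */
static int fake_try_get_ops(struct fake_chip *chip)
{
    if (chip->ops == NULL)
        return -1;      /* the kernel returns -EPIPE to userspace */
    chip->users++;
    return 0;
}

static void fake_put_ops(struct fake_chip *chip)
{
    chip->users--;
}

static void fake_chip_unregister(struct fake_chip *chip)
{
    /* In the kernel, down_write(&ops_sem) here also waits for all
     * active users to drain before ops is cleared. */
    chip->ops = NULL;
}
```

After unregistration every subsequent try-get fails, so no new callback can start against a dead driver.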
+81 -51
drivers/char/tpm/tpm-interface.c
···
         duration_idx = tpm_ordinal_duration[ordinal];

     if (duration_idx != TPM_UNDEFINED)
-        duration = chip->vendor.duration[duration_idx];
+        duration = chip->duration[duration_idx];
     if (duration <= 0)
         return 2 * 60 * HZ;
     else
···
     if (count == 0)
         return -ENODATA;
     if (count > bufsiz) {
-        dev_err(chip->pdev,
+        dev_err(&chip->dev,
             "invalid count value %x %zx\n", count, bufsiz);
         return -E2BIG;
     }
···

     rc = chip->ops->send(chip, (u8 *) buf, count);
     if (rc < 0) {
-        dev_err(chip->pdev,
+        dev_err(&chip->dev,
             "tpm_transmit: tpm_send: error %zd\n", rc);
         goto out;
     }

-    if (chip->vendor.irq)
+    if (chip->flags & TPM_CHIP_FLAG_IRQ)
         goto out_recv;

     if (chip->flags & TPM_CHIP_FLAG_TPM2)
···
             goto out_recv;

         if (chip->ops->req_canceled(chip, status)) {
-            dev_err(chip->pdev, "Operation Canceled\n");
+            dev_err(&chip->dev, "Operation Canceled\n");
             rc = -ECANCELED;
             goto out;
         }
···
     } while (time_before(jiffies, stop));

     chip->ops->cancel(chip);
-    dev_err(chip->pdev, "Operation Timed out\n");
+    dev_err(&chip->dev, "Operation Timed out\n");
     rc = -ETIME;
     goto out;

 out_recv:
     rc = chip->ops->recv(chip, (u8 *) buf, bufsiz);
     if (rc < 0)
-        dev_err(chip->pdev,
+        dev_err(&chip->dev,
             "tpm_transmit: tpm_recv: error %zd\n", rc);
 out:
     mutex_unlock(&chip->tpm_mutex);
···

     err = be32_to_cpu(header->return_code);
     if (err != 0 && desc)
-        dev_err(chip->pdev, "A TPM error (%d) occurred %s\n", err,
+        dev_err(&chip->dev, "A TPM error (%d) occurred %s\n", err,
             desc);

     return err;
···
     .ordinal = TPM_ORD_GET_CAP
 };

-ssize_t tpm_getcap(struct device *dev, __be32 subcap_id, cap_t *cap,
+ssize_t tpm_getcap(struct tpm_chip *chip, __be32 subcap_id, cap_t *cap,
            const char *desc)
 {
     struct tpm_cmd_t tpm_cmd;
     int rc;
-    struct tpm_chip *chip = dev_get_drvdata(dev);

     tpm_cmd.header.in = tpm_getcap_header;
     if (subcap_id == CAP_VERSION_1_1 || subcap_id == CAP_VERSION_1_2) {
···

     if (chip->flags & TPM_CHIP_FLAG_TPM2) {
         /* Fixed timeouts for TPM2 */
-        chip->vendor.timeout_a = msecs_to_jiffies(TPM2_TIMEOUT_A);
-        chip->vendor.timeout_b = msecs_to_jiffies(TPM2_TIMEOUT_B);
-        chip->vendor.timeout_c = msecs_to_jiffies(TPM2_TIMEOUT_C);
-        chip->vendor.timeout_d = msecs_to_jiffies(TPM2_TIMEOUT_D);
-        chip->vendor.duration[TPM_SHORT] =
+        chip->timeout_a = msecs_to_jiffies(TPM2_TIMEOUT_A);
+        chip->timeout_b = msecs_to_jiffies(TPM2_TIMEOUT_B);
+        chip->timeout_c = msecs_to_jiffies(TPM2_TIMEOUT_C);
+        chip->timeout_d = msecs_to_jiffies(TPM2_TIMEOUT_D);
+        chip->duration[TPM_SHORT] =
             msecs_to_jiffies(TPM2_DURATION_SHORT);
-        chip->vendor.duration[TPM_MEDIUM] =
+        chip->duration[TPM_MEDIUM] =
             msecs_to_jiffies(TPM2_DURATION_MEDIUM);
-        chip->vendor.duration[TPM_LONG] =
+        chip->duration[TPM_LONG] =
             msecs_to_jiffies(TPM2_DURATION_LONG);
         return 0;
     }
···
     if (rc == TPM_ERR_INVALID_POSTINIT) {
         /* The TPM is not started, we are the first to talk to it.
            Execute a startup command. */
-        dev_info(chip->pdev, "Issuing TPM_STARTUP");
+        dev_info(&chip->dev, "Issuing TPM_STARTUP");
         if (tpm_startup(chip, TPM_ST_CLEAR))
             return rc;

···
                   NULL);
     }
     if (rc) {
-        dev_err(chip->pdev,
+        dev_err(&chip->dev,
             "A TPM error (%zd) occurred attempting to determine the timeouts\n",
             rc);
         goto duration;
···
      * of misreporting.
      */
     if (chip->ops->update_timeouts != NULL)
-        chip->vendor.timeout_adjusted =
+        chip->timeout_adjusted =
             chip->ops->update_timeouts(chip, new_timeout);

-    if (!chip->vendor.timeout_adjusted) {
+    if (!chip->timeout_adjusted) {
         /* Don't overwrite default if value is 0 */
         if (new_timeout[0] != 0 && new_timeout[0] < 1000) {
             int i;
···
             /* timeouts in msec rather usec */
             for (i = 0; i != ARRAY_SIZE(new_timeout); i++)
                 new_timeout[i] *= 1000;
-            chip->vendor.timeout_adjusted = true;
+            chip->timeout_adjusted = true;
         }
     }

     /* Report adjusted timeouts */
-    if (chip->vendor.timeout_adjusted) {
-        dev_info(chip->pdev,
+    if (chip->timeout_adjusted) {
+        dev_info(&chip->dev,
              HW_ERR "Adjusting reported timeouts: A %lu->%luus B %lu->%luus C %lu->%luus D %lu->%luus\n",
              old_timeout[0], new_timeout[0],
              old_timeout[1], new_timeout[1],
···
              old_timeout[3], new_timeout[3]);
     }

-    chip->vendor.timeout_a = usecs_to_jiffies(new_timeout[0]);
-    chip->vendor.timeout_b = usecs_to_jiffies(new_timeout[1]);
-    chip->vendor.timeout_c = usecs_to_jiffies(new_timeout[2]);
-    chip->vendor.timeout_d = usecs_to_jiffies(new_timeout[3]);
+    chip->timeout_a = usecs_to_jiffies(new_timeout[0]);
+    chip->timeout_b = usecs_to_jiffies(new_timeout[1]);
+    chip->timeout_c = usecs_to_jiffies(new_timeout[2]);
+    chip->timeout_d = usecs_to_jiffies(new_timeout[3]);

 duration:
     tpm_cmd.header.in = tpm_getcap_header;
···
         return -EINVAL;

     duration_cap = &tpm_cmd.params.getcap_out.cap.duration;
-    chip->vendor.duration[TPM_SHORT] =
+    chip->duration[TPM_SHORT] =
         usecs_to_jiffies(be32_to_cpu(duration_cap->tpm_short));
-    chip->vendor.duration[TPM_MEDIUM] =
+    chip->duration[TPM_MEDIUM] =
         usecs_to_jiffies(be32_to_cpu(duration_cap->tpm_medium));
-    chip->vendor.duration[TPM_LONG] =
+    chip->duration[TPM_LONG] =
         usecs_to_jiffies(be32_to_cpu(duration_cap->tpm_long));

     /* The Broadcom BCM0102 chipset in a Dell Latitude D820 gets the above
···
      * fix up the resulting too-small TPM_SHORT value to make things work.
      * We also scale the TPM_MEDIUM and -_LONG values by 1000.
      */
-    if (chip->vendor.duration[TPM_SHORT] < (HZ / 100)) {
-        chip->vendor.duration[TPM_SHORT] = HZ;
-        chip->vendor.duration[TPM_MEDIUM] *= 1000;
-        chip->vendor.duration[TPM_LONG] *= 1000;
-        chip->vendor.duration_adjusted = true;
-        dev_info(chip->pdev, "Adjusting TPM timeout parameters.");
+    if (chip->duration[TPM_SHORT] < (HZ / 100)) {
+        chip->duration[TPM_SHORT] = HZ;
+        chip->duration[TPM_MEDIUM] *= 1000;
+        chip->duration[TPM_LONG] *= 1000;
+        chip->duration_adjusted = true;
+        dev_info(&chip->dev, "Adjusting TPM timeout parameters.");
     }
     return 0;
 }
···

     rc = (chip->flags & TPM_CHIP_FLAG_TPM2) != 0;

-    tpm_chip_put(chip);
+    tpm_put_ops(chip);

     return rc;
 }
···
         rc = tpm2_pcr_read(chip, pcr_idx, res_buf);
     else
         rc = tpm_pcr_read_dev(chip, pcr_idx, res_buf);
-    tpm_chip_put(chip);
+    tpm_put_ops(chip);
     return rc;
 }
 EXPORT_SYMBOL_GPL(tpm_pcr_read);
···

     if (chip->flags & TPM_CHIP_FLAG_TPM2) {
         rc = tpm2_pcr_extend(chip, pcr_idx, hash);
-        tpm_chip_put(chip);
+        tpm_put_ops(chip);
         return rc;
     }

···
     rc = tpm_transmit_cmd(chip, &cmd, EXTEND_PCR_RESULT_SIZE,
                   "attempting extend a PCR value");

-    tpm_chip_put(chip);
+    tpm_put_ops(chip);
     return rc;
 }
 EXPORT_SYMBOL_GPL(tpm_pcr_extend);
···
          * around 300ms while the self test is ongoing, keep trying
          * until the self test duration expires. */
         if (rc == -ETIME) {
-            dev_info(chip->pdev, HW_ERR "TPM command timed out during continue self test");
+            dev_info(
+                &chip->dev, HW_ERR
+                "TPM command timed out during continue self test");
             msleep(delay_msec);
             continue;
         }
···

         rc = be32_to_cpu(cmd.header.out.return_code);
         if (rc == TPM_ERR_DISABLED || rc == TPM_ERR_DEACTIVATED) {
-            dev_info(chip->pdev,
+            dev_info(&chip->dev,
                  "TPM is disabled/deactivated (0x%X)\n", rc);
             /* TPM is disabled and/or deactivated; driver can
              * proceed and TPM does handle commands for
···
 }
 EXPORT_SYMBOL_GPL(tpm_do_selftest);

+/**
+ * tpm1_auto_startup - Perform the standard automatic TPM initialization
+ *                     sequence
+ * @chip: TPM chip to use
+ *
+ * Returns 0 on success, < 0 in case of fatal error.
+ */
+int tpm1_auto_startup(struct tpm_chip *chip)
+{
+    int rc;
+
+    rc = tpm_get_timeouts(chip);
+    if (rc)
+        goto out;
+    rc = tpm_do_selftest(chip);
+    if (rc) {
+        dev_err(&chip->dev, "TPM self test failed\n");
+        goto out;
+    }
+
+    return rc;
+out:
+    if (rc > 0)
+        rc = -ENODEV;
+    return rc;
+}
+
 int tpm_send(u32 chip_num, void *cmd, size_t buflen)
 {
     struct tpm_chip *chip;
···

     rc = tpm_transmit_cmd(chip, cmd, buflen, "attempting tpm_cmd");

-    tpm_chip_put(chip);
+    tpm_put_ops(chip);
     return rc;
 }
 EXPORT_SYMBOL_GPL(tpm_send);
···

     stop = jiffies + timeout;

-    if (chip->vendor.irq) {
+    if (chip->flags & TPM_CHIP_FLAG_IRQ) {
 again:
         timeout = stop - jiffies;
         if ((long)timeout <= 0)
···
     }

     if (rc)
-        dev_err(chip->pdev,
+        dev_err(&chip->dev,
             "Error (%d) sending savestate before suspend\n", rc);
     else if (try > 0)
-        dev_warn(chip->pdev, "TPM savestate took %dms\n",
+        dev_warn(&chip->dev, "TPM savestate took %dms\n",
              try * TPM_TIMEOUT_RETRY);

     return rc;
···

     if (chip->flags & TPM_CHIP_FLAG_TPM2) {
         err = tpm2_get_random(chip, out, max);
-        tpm_chip_put(chip);
+        tpm_put_ops(chip);
         return err;
     }

···
         num_bytes -= recd;
     } while (retries-- && total < max);

-    tpm_chip_put(chip);
+    tpm_put_ops(chip);
     return total ? total : -EIO;
 }
 EXPORT_SYMBOL_GPL(tpm_get_random);
···

     rc = tpm2_seal_trusted(chip, payload, options);

-    tpm_chip_put(chip);
+    tpm_put_ops(chip);
     return rc;
 }
 EXPORT_SYMBOL_GPL(tpm_seal_trusted);
···

     rc = tpm2_unseal_trusted(chip, payload, options);

-    tpm_chip_put(chip);
+    tpm_put_ops(chip);
+
     return rc;
 }
 EXPORT_SYMBOL_GPL(tpm_unseal_trusted);
···

 static void __exit tpm_exit(void)
 {
+    idr_destroy(&dev_nums_idr);
     class_destroy(tpm_class);
     unregister_chrdev_region(tpm_devt, TPM_NUM_DEVICES);
 }
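tpm_get_timeouts() keeps the long-standing heuristic for TPMs that misreport timeouts in milliseconds instead of the microseconds the spec requires: when the vendor hook did not already adjust the values, a non-zero timeout A below 1000 triggers a 1000x scale-up of all four values. A minimal sketch of just that heuristic:

```c
#include <stdbool.h>

/* Sketch of the msec-vs-usec fixup in tpm_get_timeouts(): if timeout A
 * is non-zero but implausibly small for microseconds, assume the chip
 * reported milliseconds and scale every value up.  Returns true when an
 * adjustment was applied (mirroring chip->timeout_adjusted). */
static bool adjust_timeouts(unsigned long t[4])
{
    if (t[0] != 0 && t[0] < 1000) {
        for (int i = 0; i < 4; i++)
            t[i] *= 1000;       /* msec -> usec */
        return true;
    }
    return false;
}
```

The kernel then converts the (possibly adjusted) microsecond values to jiffies with usecs_to_jiffies() and logs the change with the HW_ERR prefix.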
+36 -42
drivers/char/tpm/tpm-sysfs.c
···
     int i, rc;
     char *str = buf;

-    struct tpm_chip *chip = dev_get_drvdata(dev);
+    struct tpm_chip *chip = to_tpm_chip(dev);

     tpm_cmd.header.in = tpm_readpubek_header;
     err = tpm_transmit_cmd(chip, &tpm_cmd, READ_PUBEK_RESULT_SIZE,
···
     ssize_t rc;
     int i, j, num_pcrs;
     char *str = buf;
-    struct tpm_chip *chip = dev_get_drvdata(dev);
+    struct tpm_chip *chip = to_tpm_chip(dev);

-    rc = tpm_getcap(dev, TPM_CAP_PROP_PCR, &cap,
+    rc = tpm_getcap(chip, TPM_CAP_PROP_PCR, &cap,
             "attempting to determine the number of PCRS");
     if (rc)
         return 0;
···
     cap_t cap;
     ssize_t rc;

-    rc = tpm_getcap(dev, TPM_CAP_FLAG_PERM, &cap,
-            "attempting to determine the permanent enabled state");
+    rc = tpm_getcap(to_tpm_chip(dev), TPM_CAP_FLAG_PERM, &cap,
+            "attempting to determine the permanent enabled state");
     if (rc)
         return 0;

···
     cap_t cap;
     ssize_t rc;

-    rc = tpm_getcap(dev, TPM_CAP_FLAG_PERM, &cap,
-            "attempting to determine the permanent active state");
+    rc = tpm_getcap(to_tpm_chip(dev), TPM_CAP_FLAG_PERM, &cap,
+            "attempting to determine the permanent active state");
     if (rc)
         return 0;

···
     cap_t cap;
     ssize_t rc;

-    rc = tpm_getcap(dev, TPM_CAP_PROP_OWNER, &cap,
-            "attempting to determine the owner state");
+    rc = tpm_getcap(to_tpm_chip(dev), TPM_CAP_PROP_OWNER, &cap,
+            "attempting to determine the owner state");
     if (rc)
         return 0;

···
     cap_t cap;
     ssize_t rc;

-    rc = tpm_getcap(dev, TPM_CAP_FLAG_VOL, &cap,
-            "attempting to determine the temporary state");
+    rc = tpm_getcap(to_tpm_chip(dev), TPM_CAP_FLAG_VOL, &cap,
+            "attempting to determine the temporary state");
     if (rc)
         return 0;

···
 static ssize_t caps_show(struct device *dev, struct device_attribute *attr,
              char *buf)
 {
+    struct tpm_chip *chip = to_tpm_chip(dev);
     cap_t cap;
     ssize_t rc;
     char *str = buf;

-    rc = tpm_getcap(dev, TPM_CAP_PROP_MANUFACTURER, &cap,
+    rc = tpm_getcap(chip, TPM_CAP_PROP_MANUFACTURER, &cap,
             "attempting to determine the manufacturer");
     if (rc)
         return 0;
···
            be32_to_cpu(cap.manufacturer_id));

     /* Try to get a TPM version 1.2 TPM_CAP_VERSION_INFO */
-    rc = tpm_getcap(dev, CAP_VERSION_1_2, &cap,
-            "attempting to determine the 1.2 version");
+    rc = tpm_getcap(chip, CAP_VERSION_1_2, &cap,
+            "attempting to determine the 1.2 version");
     if (!rc) {
         str += sprintf(str,
                "TCG version: %d.%d\nFirmware version: %d.%d\n",
···
                cap.tpm_version_1_2.revMinor);
     } else {
         /* Otherwise just use TPM_STRUCT_VER */
-        rc = tpm_getcap(dev, CAP_VERSION_1_1, &cap,
+        rc = tpm_getcap(chip, CAP_VERSION_1_1, &cap,
                 "attempting to determine the 1.1 version");
         if (rc)
             return 0;
···
 static ssize_t cancel_store(struct device *dev, struct device_attribute *attr,
                 const char *buf, size_t count)
 {
-    struct tpm_chip *chip = dev_get_drvdata(dev);
+    struct tpm_chip *chip = to_tpm_chip(dev);
     if (chip == NULL)
         return 0;

···
 static ssize_t durations_show(struct device *dev, struct device_attribute *attr,
                   char *buf)
 {
-    struct tpm_chip *chip = dev_get_drvdata(dev);
+    struct tpm_chip *chip = to_tpm_chip(dev);

-    if (chip->vendor.duration[TPM_LONG] == 0)
+    if (chip->duration[TPM_LONG] == 0)
         return 0;

     return sprintf(buf, "%d %d %d [%s]\n",
-               jiffies_to_usecs(chip->vendor.duration[TPM_SHORT]),
-               jiffies_to_usecs(chip->vendor.duration[TPM_MEDIUM]),
-               jiffies_to_usecs(chip->vendor.duration[TPM_LONG]),
-               chip->vendor.duration_adjusted
+               jiffies_to_usecs(chip->duration[TPM_SHORT]),
+               jiffies_to_usecs(chip->duration[TPM_MEDIUM]),
+               jiffies_to_usecs(chip->duration[TPM_LONG]),
+               chip->duration_adjusted
                ? "adjusted" : "original");
 }
 static DEVICE_ATTR_RO(durations);
···
 static ssize_t timeouts_show(struct device *dev, struct device_attribute *attr,
                  char *buf)
 {
-    struct tpm_chip *chip = dev_get_drvdata(dev);
+    struct tpm_chip *chip = to_tpm_chip(dev);

     return sprintf(buf, "%d %d %d %d [%s]\n",
-               jiffies_to_usecs(chip->vendor.timeout_a),
-               jiffies_to_usecs(chip->vendor.timeout_b),
-               jiffies_to_usecs(chip->vendor.timeout_c),
-               jiffies_to_usecs(chip->vendor.timeout_d),
-               chip->vendor.timeout_adjusted
+               jiffies_to_usecs(chip->timeout_a),
+               jiffies_to_usecs(chip->timeout_b),
+               jiffies_to_usecs(chip->timeout_c),
+               jiffies_to_usecs(chip->timeout_d),
+               chip->timeout_adjusted
                ? "adjusted" : "original");
 }
 static DEVICE_ATTR_RO(timeouts);
···
     .attrs = tpm_dev_attrs,
 };

-int tpm_sysfs_add_device(struct tpm_chip *chip)
+void tpm_sysfs_add_device(struct tpm_chip *chip)
 {
-    int err;
-    err = sysfs_create_group(&chip->pdev->kobj,
-                 &tpm_dev_group);
-
-    if (err)
-        dev_err(chip->pdev,
-            "failed to create sysfs attributes, %d\n", err);
-    return err;
-}
-
-void tpm_sysfs_del_device(struct tpm_chip *chip)
-{
-    sysfs_remove_group(&chip->pdev->kobj, &tpm_dev_group);
+    /* The sysfs routines rely on an implicit tpm_try_get_ops, device_del
+     * is called before ops is null'd and the sysfs core synchronizes this
+     * removal so that no callbacks are running or can run again
+     */
+    WARN_ON(chip->groups_cnt != 0);
+    chip->groups[chip->groups_cnt++] = &tpm_dev_group;
 }
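tpm_sysfs_add_device() no longer creates sysfs files itself; it queues the attribute group on chip->groups so the driver core creates and removes the files together with the device, closing the old lifetime races. A toy version of that queue-a-group pattern (fixed-size array, hypothetical names):

```c
#include <stddef.h>

#define MAX_GROUPS 3

struct fake_chip {
    /* NULL-terminated list consumed by the (imaginary) device core
     * at registration time, like struct device.groups. */
    const void *groups[MAX_GROUPS];
    int groups_cnt;
};

/* Queue one attribute group; keep the last slot free as the NULL
 * terminator the device core expects. */
static int add_group(struct fake_chip *chip, const void *grp)
{
    if (chip->groups_cnt >= MAX_GROUPS - 1)
        return -1;
    chip->groups[chip->groups_cnt++] = grp;
    return 0;
}
```

The real chip->groups array is sized 3 for the same reason: device group, optional PPI group, terminator.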
+35 -47
drivers/char/tpm/tpm.h
···
 * License.
 *
 */
+
+#ifndef __TPM_H__
+#define __TPM_H__
+
 #include <linux/module.h>
 #include <linux/delay.h>
 #include <linux/fs.h>
···
 enum tpm_const {
     TPM_MINOR = 224,    /* officially assigned */
     TPM_BUFSIZE = 4096,
-    TPM_NUM_DEVICES = 256,
+    TPM_NUM_DEVICES = 65536,
     TPM_RETRY = 50,     /* 5 seconds */
 };

···
     TPM2_SU_STATE = 0x0001,
 };

-struct tpm_chip;
-
-struct tpm_vendor_specific {
-    void __iomem *iobase;       /* ioremapped address */
-    unsigned long base;     /* TPM base address */
-
-    int irq;
-
-    int region_size;
-    int have_region;
-
-    struct list_head list;
-    int locality;
-    unsigned long timeout_a, timeout_b, timeout_c, timeout_d; /* jiffies */
-    bool timeout_adjusted;
-    unsigned long duration[3]; /* jiffies */
-    bool duration_adjusted;
-    void *priv;
-
-    wait_queue_head_t read_queue;
-    wait_queue_head_t int_queue;
-
-    u16 manufacturer_id;
-};
-
-#define TPM_VPRIV(c)     ((c)->vendor.priv)
-
 #define TPM_VID_INTEL    0x8086
 #define TPM_VID_WINBOND  0x1050
 #define TPM_VID_STM      0x104A
···
 enum tpm_chip_flags {
     TPM_CHIP_FLAG_REGISTERED    = BIT(0),
     TPM_CHIP_FLAG_TPM2      = BIT(1),
+    TPM_CHIP_FLAG_IRQ       = BIT(2),
+    TPM_CHIP_FLAG_VIRTUAL       = BIT(3),
 };

 struct tpm_chip {
-    struct device *pdev;    /* Device stuff */
     struct device dev;
     struct cdev cdev;

+    /* A driver callback under ops cannot be run unless ops_sem is held
+     * (sometimes implicitly, eg for the sysfs code). ops becomes null
+     * when the driver is unregistered, see tpm_try_get_ops.
+     */
+    struct rw_semaphore ops_sem;
     const struct tpm_class_ops *ops;
+
     unsigned int flags;

     int dev_num;        /* /dev/tpm# */
-    char devname[7];
     unsigned long is_open;  /* only one allowed */
-    int time_expired;

     struct mutex tpm_mutex; /* tpm is processing */

-    struct tpm_vendor_specific vendor;
+    unsigned long timeout_a; /* jiffies */
+    unsigned long timeout_b; /* jiffies */
+    unsigned long timeout_c; /* jiffies */
+    unsigned long timeout_d; /* jiffies */
+    bool timeout_adjusted;
+    unsigned long duration[3]; /* jiffies */
+    bool duration_adjusted;

     struct dentry **bios_dir;

-#ifdef CONFIG_ACPI
-    const struct attribute_group *groups[2];
+    const struct attribute_group *groups[3];
     unsigned int groups_cnt;
+#ifdef CONFIG_ACPI
     acpi_handle acpi_dev_handle;
     char ppi_version[TPM_PPI_VERSION_LEN + 1];
 #endif /* CONFIG_ACPI */
-
-    struct list_head list;
 };

 #define to_tpm_chip(d) container_of(d, struct tpm_chip, dev)
-
-static inline void tpm_chip_put(struct tpm_chip *chip)
-{
-    module_put(chip->pdev->driver->owner);
-}

 static inline int tpm_read_index(int base, int index)
 {
···
 extern struct class *tpm_class;
 extern dev_t tpm_devt;
 extern const struct file_operations tpm_fops;
+extern struct idr dev_nums_idr;

-ssize_t tpm_getcap(struct device *, __be32, cap_t *, const char *);
+ssize_t tpm_getcap(struct tpm_chip *chip, __be32 subcap_id, cap_t *cap,
+           const char *desc);
 ssize_t tpm_transmit(struct tpm_chip *chip, const char *buf,
              size_t bufsiz);
 ssize_t tpm_transmit_cmd(struct tpm_chip *chip, void *cmd, int len,
              const char *desc);
 extern int tpm_get_timeouts(struct tpm_chip *);
 extern void tpm_gen_interrupt(struct tpm_chip *);
+int tpm1_auto_startup(struct tpm_chip *chip);
 extern int tpm_do_selftest(struct tpm_chip *);
 extern unsigned long tpm_calc_ordinal_duration(struct tpm_chip *, u32);
 extern int tpm_pm_suspend(struct device *);
···
              wait_queue_head_t *, bool);

 struct tpm_chip *tpm_chip_find_get(int chip_num);
+__must_check int tpm_try_get_ops(struct tpm_chip *chip);
+void tpm_put_ops(struct tpm_chip *chip);
+
-extern struct tpm_chip *tpmm_chip_alloc(struct device *dev,
+extern struct tpm_chip *tpm_chip_alloc(struct device *dev,
+                       const struct tpm_class_ops *ops);
+extern struct tpm_chip *tpmm_chip_alloc(struct device *pdev,
                        const struct tpm_class_ops *ops);
 extern int tpm_chip_register(struct tpm_chip *chip);
 extern void tpm_chip_unregister(struct tpm_chip *chip);

-int tpm_sysfs_add_device(struct tpm_chip *chip);
-void tpm_sysfs_del_device(struct tpm_chip *chip);
+void tpm_sysfs_add_device(struct tpm_chip *chip);

 int tpm_pcr_read_dev(struct tpm_chip *chip, int pcr_idx, u8 *res_buf);

···
 ssize_t tpm2_get_tpm_pt(struct tpm_chip *chip, u32 property_id,
             u32 *value, const char *desc);

-extern int tpm2_startup(struct tpm_chip *chip, u16 startup_type);
+int tpm2_auto_startup(struct tpm_chip *chip);
 extern void tpm2_shutdown(struct tpm_chip *chip, u16 shutdown_type);
 extern unsigned long tpm2_calc_ordinal_duration(struct tpm_chip *, u32);
-extern int tpm2_do_selftest(struct tpm_chip *chip);
 extern int tpm2_gen_interrupt(struct tpm_chip *chip);
 extern int tpm2_probe(struct tpm_chip *chip);
+#endif
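tpm.h now declares dev_nums_idr and raises TPM_NUM_DEVICES to 65536: device numbers come from an IDR instead of a small static table, and tpm_exit() tears the IDR down with idr_destroy(). A toy allocator showing the same alloc/free contract (not the kernel IDR implementation):

```c
/* Toy lowest-free-id allocator, illustrating the contract the TPM core
 * now gets from idr_alloc()/idr_remove() for dev_num.  NUM_DEVICES is
 * kept tiny here; the kernel allows up to 65536. */
#define NUM_DEVICES 8

static unsigned char in_use[NUM_DEVICES];

static int dev_num_alloc(void)
{
    for (int i = 0; i < NUM_DEVICES; i++)
        if (!in_use[i]) {
            in_use[i] = 1;
            return i;       /* lowest free id, like idr_alloc */
        }
    return -1;              /* pool exhausted */
}

static void dev_num_free(int i)
{
    if (i >= 0 && i < NUM_DEVICES)
        in_use[i] = 0;
}
```

Freed numbers are reused, so /dev/tpm0 reappears after the first chip is unregistered and a new one registers.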
+48 -11
drivers/char/tpm/tpm2-cmd.c
···

     rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_FLUSH_CONTEXT);
     if (rc) {
-        dev_warn(chip->pdev, "0x%08x was not flushed, out of memory\n",
+        dev_warn(&chip->dev, "0x%08x was not flushed, out of memory\n",
              handle);
         return;
     }
···

     rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, "flushing context");
     if (rc)
-        dev_warn(chip->pdev, "0x%08x was not flushed, rc=%d\n", handle,
+        dev_warn(&chip->dev, "0x%08x was not flushed, rc=%d\n", handle,
              rc);

     tpm_buf_destroy(&buf);
···

     rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), desc);
     if (!rc)
-        *value = cmd.params.get_tpm_pt_out.value;
+        *value = be32_to_cpu(cmd.params.get_tpm_pt_out.value);

     return rc;
 }
···
 * returned it remarks a POSIX error code. If a positive number is returned
 * it remarks a TPM error.
 */
-int tpm2_startup(struct tpm_chip *chip, u16 startup_type)
+static int tpm2_startup(struct tpm_chip *chip, u16 startup_type)
 {
     struct tpm2_cmd cmd;

···
     return tpm_transmit_cmd(chip, &cmd, sizeof(cmd),
                 "attempting to start the TPM");
 }
-EXPORT_SYMBOL_GPL(tpm2_startup);

 #define TPM2_SHUTDOWN_IN_SIZE \
     (sizeof(struct tpm_input_header) + \
···
      * except print the error code on a system failure.
      */
     if (rc < 0)
-        dev_warn(chip->pdev, "transmit returned %d while stopping the TPM",
+        dev_warn(&chip->dev, "transmit returned %d while stopping the TPM",
              rc);
 }
-EXPORT_SYMBOL_GPL(tpm2_shutdown);

 /*
 * tpm2_calc_ordinal_duration() - maximum duration for a command
···
         index = tpm2_ordinal_duration[ordinal - TPM2_CC_FIRST];

     if (index != TPM_UNDEFINED)
-        duration = chip->vendor.duration[index];
+        duration = chip->duration[index];

     if (duration <= 0)
         duration = 2 * 60 * HZ;
···
      * immediately. This is a workaround for that.
      */
     if (rc == TPM2_RC_TESTING) {
-        dev_warn(chip->pdev, "Got RC_TESTING, ignoring\n");
+        dev_warn(&chip->dev, "Got RC_TESTING, ignoring\n");
         rc = 0;
     }

···
 * returned it remarks a POSIX error code. If a positive number is returned
 * it remarks a TPM error.
 */
-int tpm2_do_selftest(struct tpm_chip *chip)
+static int tpm2_do_selftest(struct tpm_chip *chip)
 {
     int rc;
     unsigned int loops;
···

     return rc;
 }
-EXPORT_SYMBOL_GPL(tpm2_do_selftest);

 /**
 * tpm2_gen_interrupt() - generate an interrupt
···
     return 0;
 }
 EXPORT_SYMBOL_GPL(tpm2_probe);
+
+/**
+ * tpm2_auto_startup - Perform the standard automatic TPM initialization
+ *                     sequence
+ * @chip: TPM chip to use
+ *
+ * Returns 0 on success, < 0 in case of fatal error.
+ */
+int tpm2_auto_startup(struct tpm_chip *chip)
+{
+    int rc;
+
+    rc = tpm_get_timeouts(chip);
+    if (rc)
+        goto out;
+
+    rc = tpm2_do_selftest(chip);
+    if (rc != TPM2_RC_INITIALIZE) {
+        dev_err(&chip->dev, "TPM self test failed\n");
+        goto out;
+    }
+
+    if (rc == TPM2_RC_INITIALIZE) {
+        rc = tpm2_startup(chip, TPM2_SU_CLEAR);
+        if (rc)
+            goto out;
+
+        rc = tpm2_do_selftest(chip);
+        if (rc) {
+            dev_err(&chip->dev, "TPM self test failed\n");
+            goto out;
+        }
+    }
+
+    return rc;
+out:
+    if (rc > 0)
+        rc = -ENODEV;
+    return rc;
+}
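tpm2_auto_startup() above runs a self test first; a TPM that has not been started answers TPM2_RC_INITIALIZE, in which case TPM2_Startup(SU_CLEAR) is issued and the self test retried, with remaining positive TPM error codes folded to -ENODEV. A simplified sketch of that sequence against a fake TPM (the fake_* helpers are invented for illustration and the error-branch details of the real function are not reproduced):

```c
#define TPM2_RC_INITIALIZE 0x0100
#define ENODEV_ERR (-19)

/* Fake TPM: the self test reports RC_INITIALIZE until startup ran. */
struct fake_tpm { int started; };

static int fake_selftest(struct fake_tpm *t)
{
    return t->started ? 0 : TPM2_RC_INITIALIZE;
}

static int fake_startup(struct fake_tpm *t)
{
    t->started = 1;       /* TPM2_Startup(TPM2_SU_CLEAR) */
    return 0;
}

/* Simplified auto-startup flow: self test, startup on RC_INITIALIZE,
 * retry self test, fold positive TPM errors to -ENODEV. */
static int auto_startup(struct fake_tpm *t)
{
    int rc = fake_selftest(t);

    if (rc == TPM2_RC_INITIALIZE) {
        rc = fake_startup(t);
        if (rc)
            return rc;
        rc = fake_selftest(t);
    }
    if (rc > 0)
        rc = ENODEV_ERR;
    return rc;
}
```

Drivers opt into this sequence by setting TPM_OPS_AUTO_STARTUP in their tpm_class_ops flags, as tpm_crb does below.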
+39 -24
drivers/char/tpm/tpm_atmel.c
···

 static int tpm_atml_recv(struct tpm_chip *chip, u8 *buf, size_t count)
 {
+    struct tpm_atmel_priv *priv = dev_get_drvdata(&chip->dev);
     u8 status, *hdr = buf;
     u32 size;
     int i;
···
         return -EIO;

     for (i = 0; i < 6; i++) {
-        status = ioread8(chip->vendor.iobase + 1);
+        status = ioread8(priv->iobase + 1);
         if ((status & ATML_STATUS_DATA_AVAIL) == 0) {
-            dev_err(chip->pdev, "error reading header\n");
+            dev_err(&chip->dev, "error reading header\n");
             return -EIO;
         }
-        *buf++ = ioread8(chip->vendor.iobase);
+        *buf++ = ioread8(priv->iobase);
     }

     /* size of the data received */
···
     size = be32_to_cpu(*native_size);

     if (count < size) {
-        dev_err(chip->pdev,
+        dev_err(&chip->dev,
             "Recv size(%d) less than available space\n", size);
         for (; i < size; i++) { /* clear the waiting data anyway */
-            status = ioread8(chip->vendor.iobase + 1);
+            status = ioread8(priv->iobase + 1);
             if ((status & ATML_STATUS_DATA_AVAIL) == 0) {
-                dev_err(chip->pdev, "error reading data\n");
+                dev_err(&chip->dev, "error reading data\n");
                 return -EIO;
             }
         }
···

     /* read all the data available */
     for (; i < size; i++) {
-        status = ioread8(chip->vendor.iobase + 1);
+        status = ioread8(priv->iobase + 1);
         if ((status & ATML_STATUS_DATA_AVAIL) == 0) {
-            dev_err(chip->pdev, "error reading data\n");
+            dev_err(&chip->dev, "error reading data\n");
             return -EIO;
         }
-        *buf++ = ioread8(chip->vendor.iobase);
+        *buf++ = ioread8(priv->iobase);
     }

     /* make sure data available is gone */
-    status = ioread8(chip->vendor.iobase + 1);
+    status = ioread8(priv->iobase + 1);

     if (status & ATML_STATUS_DATA_AVAIL) {
-        dev_err(chip->pdev, "data available is stuck\n");
+        dev_err(&chip->dev, "data available is stuck\n");
         return -EIO;
     }

···
 static int tpm_atml_send(struct tpm_chip *chip, u8 *buf, size_t count)
 {
+    struct tpm_atmel_priv *priv = dev_get_drvdata(&chip->dev);
     int i;

-    dev_dbg(chip->pdev, "tpm_atml_send:\n");
+    dev_dbg(&chip->dev, "tpm_atml_send:\n");
     for (i = 0; i < count; i++) {
-        dev_dbg(chip->pdev, "%d 0x%x(%d)\n", i, buf[i], buf[i]);
-        iowrite8(buf[i], chip->vendor.iobase);
+        dev_dbg(&chip->dev, "%d 0x%x(%d)\n", i, buf[i], buf[i]);
+        iowrite8(buf[i], priv->iobase);
     }

     return count;
···

 static void tpm_atml_cancel(struct tpm_chip *chip)
 {
-    iowrite8(ATML_STATUS_ABORT, chip->vendor.iobase + 1);
+    struct tpm_atmel_priv *priv = dev_get_drvdata(&chip->dev);
+
+    iowrite8(ATML_STATUS_ABORT, priv->iobase + 1);
 }

 static u8 tpm_atml_status(struct tpm_chip *chip)
 {
-    return ioread8(chip->vendor.iobase + 1);
+    struct tpm_atmel_priv *priv = dev_get_drvdata(&chip->dev);
+
+    return ioread8(priv->iobase + 1);
 }

 static bool tpm_atml_req_canceled(struct tpm_chip *chip, u8 status)
···
 static void atml_plat_remove(void)
 {
     struct tpm_chip *chip = dev_get_drvdata(&pdev->dev);
+    struct tpm_atmel_priv *priv = dev_get_drvdata(&chip->dev);

     if (chip) {
         tpm_chip_unregister(chip);
-        if (chip->vendor.have_region)
-            atmel_release_region(chip->vendor.base,
-                         chip->vendor.region_size);
-        atmel_put_base_addr(chip->vendor.iobase);
+        if (priv->have_region)
+            atmel_release_region(priv->base, priv->region_size);
+        atmel_put_base_addr(priv->iobase);
         platform_device_unregister(pdev);
     }
 }
···
     int have_region, region_size;
     unsigned long base;
     struct tpm_chip *chip;
+    struct tpm_atmel_priv *priv;

     rc = platform_driver_register(&atml_drv);
     if (rc)
···
         goto err_rel_reg;
     }

+    priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+    if (!priv) {
+        rc = -ENOMEM;
+        goto err_unreg_dev;
+    }
+
+    priv->iobase = iobase;
+    priv->base = base;
+    priv->have_region = have_region;
+    priv->region_size = region_size;
+
     chip = tpmm_chip_alloc(&pdev->dev, &tpm_atmel);
     if (IS_ERR(chip)) {
         rc = PTR_ERR(chip);
         goto err_unreg_dev;
     }

-    chip->vendor.iobase = iobase;
-    chip->vendor.base = base;
-    chip->vendor.have_region = have_region;
-    chip->vendor.region_size = region_size;
+    dev_set_drvdata(&chip->dev, priv);

     rc = tpm_chip_register(chip);
     if (rc)
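The Atmel conversion moves the old chip->vendor fields into a driver-owned tpm_atmel_priv that is attached with dev_set_drvdata() and fetched back in each callback, the same pattern the CRB driver adopts below. A minimal userspace model of that set/get contract (fake_device and the sketch struct are illustrative only):

```c
#include <stddef.h>

/* Stand-in for struct device's driver_data slot. */
struct fake_device {
    void *driver_data;
};

static void fake_set_drvdata(struct fake_device *dev, void *data)
{
    dev->driver_data = data;
}

static void *fake_get_drvdata(struct fake_device *dev)
{
    return dev->driver_data;
}

/* Sketch of the driver-private state that used to live in
 * chip->vendor. */
struct atmel_priv_sketch {
    unsigned long base;
};
```

The win is that the core struct tpm_chip no longer carries one driver's bookkeeping; each driver owns exactly the state it needs.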
+12 -4
drivers/char/tpm/tpm_atmel.h
··· 22 22 * 23 23 */ 24 24 25 + struct tpm_atmel_priv { 26 + int region_size; 27 + int have_region; 28 + unsigned long base; 29 + void __iomem *iobase; 30 + }; 31 + 25 32 #ifdef CONFIG_PPC64 26 33 27 34 #include <asm/prom.h> 28 35 29 - #define atmel_getb(chip, offset) readb(chip->vendor->iobase + offset); 30 - #define atmel_putb(val, chip, offset) writeb(val, chip->vendor->iobase + offset) 36 + #define atmel_getb(priv, offset) readb(priv->iobase + offset) 37 + #define atmel_putb(val, priv, offset) writeb(val, priv->iobase + offset) 31 38 #define atmel_request_region request_mem_region 32 39 #define atmel_release_region release_mem_region 33 40 ··· 85 78 return ioremap(*base, *region_size); 86 79 } 87 80 #else 88 - #define atmel_getb(chip, offset) inb(chip->vendor->base + offset) 89 - #define atmel_putb(val, chip, offset) outb(val, chip->vendor->base + offset) 81 + #define atmel_getb(chip, offset) inb(atmel_get_priv(chip)->base + offset) 82 + #define atmel_putb(val, chip, offset) \ 83 + outb(val, atmel_get_priv(chip)->base + offset) 90 84 #define atmel_request_region request_region 91 85 #define atmel_release_region release_region 92 86 /* Atmel definitions */
+47 -38
drivers/char/tpm/tpm_crb.c
··· 77 77 78 78 struct crb_priv { 79 79 unsigned int flags; 80 - struct resource res; 81 80 void __iomem *iobase; 82 81 struct crb_control_area __iomem *cca; 83 82 u8 __iomem *cmd; ··· 87 88 88 89 static u8 crb_status(struct tpm_chip *chip) 89 90 { 90 - struct crb_priv *priv = chip->vendor.priv; 91 + struct crb_priv *priv = dev_get_drvdata(&chip->dev); 91 92 u8 sts = 0; 92 93 93 94 if ((ioread32(&priv->cca->start) & CRB_START_INVOKE) != ··· 99 100 100 101 static int crb_recv(struct tpm_chip *chip, u8 *buf, size_t count) 101 102 { 102 - struct crb_priv *priv = chip->vendor.priv; 103 + struct crb_priv *priv = dev_get_drvdata(&chip->dev); 103 104 unsigned int expected; 104 105 105 106 /* sanity check */ ··· 139 140 140 141 static int crb_send(struct tpm_chip *chip, u8 *buf, size_t len) 141 142 { 142 - struct crb_priv *priv = chip->vendor.priv; 143 + struct crb_priv *priv = dev_get_drvdata(&chip->dev); 143 144 int rc = 0; 144 145 145 146 if (len > ioread32(&priv->cca->cmd_size)) { ··· 166 167 167 168 static void crb_cancel(struct tpm_chip *chip) 168 169 { 169 - struct crb_priv *priv = chip->vendor.priv; 170 + struct crb_priv *priv = dev_get_drvdata(&chip->dev); 170 171 171 172 iowrite32(cpu_to_le32(CRB_CANCEL_INVOKE), &priv->cca->cancel); 172 173 ··· 181 182 182 183 static bool crb_req_canceled(struct tpm_chip *chip, u8 status) 183 184 { 184 - struct crb_priv *priv = chip->vendor.priv; 185 + struct crb_priv *priv = dev_get_drvdata(&chip->dev); 185 186 u32 cancel = ioread32(&priv->cca->cancel); 186 187 187 188 return (cancel & CRB_CANCEL_INVOKE) == CRB_CANCEL_INVOKE; 188 189 } 189 190 190 191 static const struct tpm_class_ops tpm_crb = { 192 + .flags = TPM_OPS_AUTO_STARTUP, 191 193 .status = crb_status, 192 194 .recv = crb_recv, 193 195 .send = crb_send, ··· 201 201 static int crb_init(struct acpi_device *device, struct crb_priv *priv) 202 202 { 203 203 struct tpm_chip *chip; 204 - int rc; 205 204 206 205 chip = tpmm_chip_alloc(&device->dev, &tpm_crb); 207 206 if 
(IS_ERR(chip)) 208 207 return PTR_ERR(chip); 209 208 210 - chip->vendor.priv = priv; 209 + dev_set_drvdata(&chip->dev, priv); 211 210 chip->acpi_dev_handle = device->handle; 212 211 chip->flags = TPM_CHIP_FLAG_TPM2; 213 - 214 - rc = tpm_get_timeouts(chip); 215 - if (rc) 216 - return rc; 217 - 218 - rc = tpm2_do_selftest(chip); 219 - if (rc) 220 - return rc; 221 212 222 213 return tpm_chip_register(chip); 223 214 } 224 215 225 216 static int crb_check_resource(struct acpi_resource *ares, void *data) 226 217 { 227 - struct crb_priv *priv = data; 218 + struct resource *io_res = data; 228 219 struct resource res; 229 220 230 221 if (acpi_dev_resource_memory(ares, &res)) { 231 - priv->res = res; 232 - priv->res.name = NULL; 222 + *io_res = res; 223 + io_res->name = NULL; 233 224 } 234 225 235 226 return 1; 236 227 } 237 228 238 229 static void __iomem *crb_map_res(struct device *dev, struct crb_priv *priv, 239 - u64 start, u32 size) 230 + struct resource *io_res, u64 start, u32 size) 240 231 { 241 232 struct resource new_res = { 242 233 .start = start, ··· 237 246 238 247 /* Detect a 64 bit address on a 32 bit system */ 239 248 if (start != new_res.start) 240 - return ERR_PTR(-EINVAL); 249 + return (void __iomem *) ERR_PTR(-EINVAL); 241 250 242 - if (!resource_contains(&priv->res, &new_res)) 251 + if (!resource_contains(io_res, &new_res)) 243 252 return devm_ioremap_resource(dev, &new_res); 244 253 245 - return priv->iobase + (new_res.start - priv->res.start); 254 + return priv->iobase + (new_res.start - io_res->start); 246 255 } 247 256 248 257 static int crb_map_io(struct acpi_device *device, struct crb_priv *priv, 249 258 struct acpi_table_tpm2 *buf) 250 259 { 251 260 struct list_head resources; 261 + struct resource io_res; 252 262 struct device *dev = &device->dev; 253 - u64 pa; 263 + u64 cmd_pa; 264 + u32 cmd_size; 265 + u64 rsp_pa; 266 + u32 rsp_size; 254 267 int ret; 255 268 256 269 INIT_LIST_HEAD(&resources); 257 270 ret = acpi_dev_get_resources(device, 
&resources, crb_check_resource, 258 - priv); 271 + &io_res); 259 272 if (ret < 0) 260 273 return ret; 261 274 acpi_dev_free_resource_list(&resources); 262 275 263 - if (resource_type(&priv->res) != IORESOURCE_MEM) { 276 + if (resource_type(&io_res) != IORESOURCE_MEM) { 264 277 dev_err(dev, 265 278 FW_BUG "TPM2 ACPI table does not define a memory resource\n"); 266 279 return -EINVAL; 267 280 } 268 281 269 - priv->iobase = devm_ioremap_resource(dev, &priv->res); 282 + priv->iobase = devm_ioremap_resource(dev, &io_res); 270 283 if (IS_ERR(priv->iobase)) 271 284 return PTR_ERR(priv->iobase); 272 285 273 - priv->cca = crb_map_res(dev, priv, buf->control_address, 0x1000); 286 + priv->cca = crb_map_res(dev, priv, &io_res, buf->control_address, 287 + sizeof(struct crb_control_area)); 274 288 if (IS_ERR(priv->cca)) 275 289 return PTR_ERR(priv->cca); 276 290 277 - pa = ((u64) ioread32(&priv->cca->cmd_pa_high) << 32) | 278 - (u64) ioread32(&priv->cca->cmd_pa_low); 279 - priv->cmd = crb_map_res(dev, priv, pa, ioread32(&priv->cca->cmd_size)); 291 + cmd_pa = ((u64) ioread32(&priv->cca->cmd_pa_high) << 32) | 292 + (u64) ioread32(&priv->cca->cmd_pa_low); 293 + cmd_size = ioread32(&priv->cca->cmd_size); 294 + priv->cmd = crb_map_res(dev, priv, &io_res, cmd_pa, cmd_size); 280 295 if (IS_ERR(priv->cmd)) 281 296 return PTR_ERR(priv->cmd); 282 297 283 - memcpy_fromio(&pa, &priv->cca->rsp_pa, 8); 284 - pa = le64_to_cpu(pa); 285 - priv->rsp = crb_map_res(dev, priv, pa, ioread32(&priv->cca->rsp_size)); 286 - return PTR_ERR_OR_ZERO(priv->rsp); 298 + memcpy_fromio(&rsp_pa, &priv->cca->rsp_pa, 8); 299 + rsp_pa = le64_to_cpu(rsp_pa); 300 + rsp_size = ioread32(&priv->cca->rsp_size); 301 + 302 + if (cmd_pa != rsp_pa) { 303 + priv->rsp = crb_map_res(dev, priv, &io_res, rsp_pa, rsp_size); 304 + return PTR_ERR_OR_ZERO(priv->rsp); 305 + } 306 + 307 + /* According to the PTP specification, overlapping command and response 308 + * buffer sizes must be identical. 
309 + */ 310 + if (cmd_size != rsp_size) { 311 + dev_err(dev, FW_BUG "overlapping command and response buffer sizes are not identical"); 312 + return -EINVAL; 313 + } 314 + 315 + priv->rsp = priv->cmd; 316 + return 0; 287 317 } 288 318 289 319 static int crb_acpi_add(struct acpi_device *device) ··· 355 343 { 356 344 struct device *dev = &device->dev; 357 345 struct tpm_chip *chip = dev_get_drvdata(dev); 358 - 359 - if (chip->flags & TPM_CHIP_FLAG_TPM2) 360 - tpm2_shutdown(chip, TPM2_SU_CLEAR); 361 346 362 347 tpm_chip_unregister(chip); 363 348
+1 -1
drivers/char/tpm/tpm_eventlog.c
··· 403 403 return 0; 404 404 } 405 405 406 - struct dentry **tpm_bios_log_setup(char *name) 406 + struct dentry **tpm_bios_log_setup(const char *name) 407 407 { 408 408 struct dentry **ret = NULL, *tpm_dir, *bin_file, *ascii_file; 409 409
+2 -2
drivers/char/tpm/tpm_eventlog.h
··· 77 77 78 78 #if defined(CONFIG_TCG_IBMVTPM) || defined(CONFIG_TCG_IBMVTPM_MODULE) || \ 79 79 defined(CONFIG_ACPI) 80 - extern struct dentry **tpm_bios_log_setup(char *); 80 + extern struct dentry **tpm_bios_log_setup(const char *); 81 81 extern void tpm_bios_log_teardown(struct dentry **); 82 82 #else 83 - static inline struct dentry **tpm_bios_log_setup(char *name) 83 + static inline struct dentry **tpm_bios_log_setup(const char *name) 84 84 { 85 85 return NULL; 86 86 }
+21 -24
drivers/char/tpm/tpm_i2c_atmel.c
··· 51 51 52 52 static int i2c_atmel_send(struct tpm_chip *chip, u8 *buf, size_t len) 53 53 { 54 - struct priv_data *priv = chip->vendor.priv; 55 - struct i2c_client *client = to_i2c_client(chip->pdev); 54 + struct priv_data *priv = dev_get_drvdata(&chip->dev); 55 + struct i2c_client *client = to_i2c_client(chip->dev.parent); 56 56 s32 status; 57 57 58 58 priv->len = 0; ··· 62 62 63 63 status = i2c_master_send(client, buf, len); 64 64 65 - dev_dbg(chip->pdev, 65 + dev_dbg(&chip->dev, 66 66 "%s(buf=%*ph len=%0zx) -> sts=%d\n", __func__, 67 67 (int)min_t(size_t, 64, len), buf, len, status); 68 68 return status; ··· 70 70 71 71 static int i2c_atmel_recv(struct tpm_chip *chip, u8 *buf, size_t count) 72 72 { 73 - struct priv_data *priv = chip->vendor.priv; 74 - struct i2c_client *client = to_i2c_client(chip->pdev); 73 + struct priv_data *priv = dev_get_drvdata(&chip->dev); 74 + struct i2c_client *client = to_i2c_client(chip->dev.parent); 75 75 struct tpm_output_header *hdr = 76 76 (struct tpm_output_header *)priv->buffer; 77 77 u32 expected_len; ··· 88 88 return -ENOMEM; 89 89 90 90 if (priv->len >= expected_len) { 91 - dev_dbg(chip->pdev, 91 + dev_dbg(&chip->dev, 92 92 "%s early(buf=%*ph count=%0zx) -> ret=%d\n", __func__, 93 93 (int)min_t(size_t, 64, expected_len), buf, count, 94 94 expected_len); ··· 97 97 } 98 98 99 99 rc = i2c_master_recv(client, buf, expected_len); 100 - dev_dbg(chip->pdev, 100 + dev_dbg(&chip->dev, 101 101 "%s reread(buf=%*ph count=%0zx) -> ret=%d\n", __func__, 102 102 (int)min_t(size_t, 64, expected_len), buf, count, 103 103 expected_len); ··· 106 106 107 107 static void i2c_atmel_cancel(struct tpm_chip *chip) 108 108 { 109 - dev_err(chip->pdev, "TPM operation cancellation was requested, but is not supported"); 109 + dev_err(&chip->dev, "TPM operation cancellation was requested, but is not supported"); 110 110 } 111 111 112 112 static u8 i2c_atmel_read_status(struct tpm_chip *chip) 113 113 { 114 - struct priv_data *priv = chip->vendor.priv; 115 
- struct i2c_client *client = to_i2c_client(chip->pdev); 114 + struct priv_data *priv = dev_get_drvdata(&chip->dev); 115 + struct i2c_client *client = to_i2c_client(chip->dev.parent); 116 116 int rc; 117 117 118 118 /* The TPM fails the I2C read until it is ready, so we do the entire ··· 125 125 /* Once the TPM has completed the command the command remains readable 126 126 * until another command is issued. */ 127 127 rc = i2c_master_recv(client, priv->buffer, sizeof(priv->buffer)); 128 - dev_dbg(chip->pdev, 128 + dev_dbg(&chip->dev, 129 129 "%s: sts=%d", __func__, rc); 130 130 if (rc <= 0) 131 131 return 0; ··· 141 141 } 142 142 143 143 static const struct tpm_class_ops i2c_atmel = { 144 + .flags = TPM_OPS_AUTO_STARTUP, 144 145 .status = i2c_atmel_read_status, 145 146 .recv = i2c_atmel_recv, 146 147 .send = i2c_atmel_send, ··· 156 155 { 157 156 struct tpm_chip *chip; 158 157 struct device *dev = &client->dev; 158 + struct priv_data *priv; 159 159 160 160 if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) 161 161 return -ENODEV; ··· 165 163 if (IS_ERR(chip)) 166 164 return PTR_ERR(chip); 167 165 168 - chip->vendor.priv = devm_kzalloc(dev, sizeof(struct priv_data), 169 - GFP_KERNEL); 170 - if (!chip->vendor.priv) 166 + priv = devm_kzalloc(dev, sizeof(struct priv_data), GFP_KERNEL); 167 + if (!priv) 171 168 return -ENOMEM; 172 169 173 170 /* Default timeouts */ 174 - chip->vendor.timeout_a = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT); 175 - chip->vendor.timeout_b = msecs_to_jiffies(TPM_I2C_LONG_TIMEOUT); 176 - chip->vendor.timeout_c = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT); 177 - chip->vendor.timeout_d = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT); 178 - chip->vendor.irq = 0; 171 + chip->timeout_a = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT); 172 + chip->timeout_b = msecs_to_jiffies(TPM_I2C_LONG_TIMEOUT); 173 + chip->timeout_c = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT); 174 + chip->timeout_d = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT); 175 + 176 + 
dev_set_drvdata(&chip->dev, priv); 179 177 180 178 /* There is no known way to probe for this device, and all version 181 179 * information seems to be read via TPM commands. Thus we rely on the 182 180 * TPM startup process in the common code to detect the device. */ 183 - if (tpm_get_timeouts(chip)) 184 - return -ENODEV; 185 - 186 - if (tpm_do_selftest(chip)) 187 - return -ENODEV; 188 181 189 182 return tpm_chip_register(chip); 190 183 }
+27 -32
drivers/char/tpm/tpm_i2c_infineon.c
··· 66 66 /* Structure to store I2C TPM specific stuff */ 67 67 struct tpm_inf_dev { 68 68 struct i2c_client *client; 69 + int locality; 69 70 u8 buf[TPM_BUFSIZE + sizeof(u8)]; /* max. buffer size + addr */ 70 71 struct tpm_chip *chip; 71 72 enum i2c_chip_type chip_type; ··· 289 288 290 289 if ((buf & (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) == 291 290 (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) { 292 - chip->vendor.locality = loc; 291 + tpm_dev.locality = loc; 293 292 return loc; 294 293 } 295 294 ··· 321 320 iic_tpm_write(TPM_ACCESS(loc), &buf, 1); 322 321 323 322 /* wait for burstcount */ 324 - stop = jiffies + chip->vendor.timeout_a; 323 + stop = jiffies + chip->timeout_a; 325 324 do { 326 325 if (check_locality(chip, loc) >= 0) 327 326 return loc; ··· 338 337 u8 i = 0; 339 338 340 339 do { 341 - if (iic_tpm_read(TPM_STS(chip->vendor.locality), &buf, 1) < 0) 340 + if (iic_tpm_read(TPM_STS(tpm_dev.locality), &buf, 1) < 0) 342 341 return 0; 343 342 344 343 i++; ··· 352 351 { 353 352 /* this causes the current command to be aborted */ 354 353 u8 buf = TPM_STS_COMMAND_READY; 355 - iic_tpm_write_long(TPM_STS(chip->vendor.locality), &buf, 1); 354 + iic_tpm_write_long(TPM_STS(tpm_dev.locality), &buf, 1); 356 355 } 357 356 358 357 static ssize_t get_burstcount(struct tpm_chip *chip) ··· 363 362 364 363 /* wait for burstcount */ 365 364 /* which timeout value, spec has 2 answers (c & d) */ 366 - stop = jiffies + chip->vendor.timeout_d; 365 + stop = jiffies + chip->timeout_d; 367 366 do { 368 367 /* Note: STS is little endian */ 369 - if (iic_tpm_read(TPM_STS(chip->vendor.locality)+1, buf, 3) < 0) 368 + if (iic_tpm_read(TPM_STS(tpm_dev.locality)+1, buf, 3) < 0) 370 369 burstcnt = 0; 371 370 else 372 371 burstcnt = (buf[2] << 16) + (buf[1] << 8) + buf[0]; ··· 420 419 if (burstcnt > (count - size)) 421 420 burstcnt = count - size; 422 421 423 - rc = iic_tpm_read(TPM_DATA_FIFO(chip->vendor.locality), 422 + rc = iic_tpm_read(TPM_DATA_FIFO(tpm_dev.locality), 424 
423 &(buf[size]), burstcnt); 425 424 if (rc == 0) 426 425 size += burstcnt; ··· 447 446 /* read first 10 bytes, including tag, paramsize, and result */ 448 447 size = recv_data(chip, buf, TPM_HEADER_SIZE); 449 448 if (size < TPM_HEADER_SIZE) { 450 - dev_err(chip->pdev, "Unable to read header\n"); 449 + dev_err(&chip->dev, "Unable to read header\n"); 451 450 goto out; 452 451 } 453 452 ··· 460 459 size += recv_data(chip, &buf[TPM_HEADER_SIZE], 461 460 expected - TPM_HEADER_SIZE); 462 461 if (size < expected) { 463 - dev_err(chip->pdev, "Unable to read remainder of result\n"); 462 + dev_err(&chip->dev, "Unable to read remainder of result\n"); 464 463 size = -ETIME; 465 464 goto out; 466 465 } 467 466 468 - wait_for_stat(chip, TPM_STS_VALID, chip->vendor.timeout_c, &status); 467 + wait_for_stat(chip, TPM_STS_VALID, chip->timeout_c, &status); 469 468 if (status & TPM_STS_DATA_AVAIL) { /* retry? */ 470 - dev_err(chip->pdev, "Error left over data\n"); 469 + dev_err(&chip->dev, "Error left over data\n"); 471 470 size = -EIO; 472 471 goto out; 473 472 } ··· 478 477 * so we sleep rather than keeping the bus busy 479 478 */ 480 479 usleep_range(SLEEP_DURATION_RESET_LOW, SLEEP_DURATION_RESET_HI); 481 - release_locality(chip, chip->vendor.locality, 0); 480 + release_locality(chip, tpm_dev.locality, 0); 482 481 return size; 483 482 } 484 483 ··· 501 500 tpm_tis_i2c_ready(chip); 502 501 if (wait_for_stat 503 502 (chip, TPM_STS_COMMAND_READY, 504 - chip->vendor.timeout_b, &status) < 0) { 503 + chip->timeout_b, &status) < 0) { 505 504 rc = -ETIME; 506 505 goto out_err; 507 506 } ··· 517 516 if (burstcnt > (len - 1 - count)) 518 517 burstcnt = len - 1 - count; 519 518 520 - rc = iic_tpm_write(TPM_DATA_FIFO(chip->vendor.locality), 519 + rc = iic_tpm_write(TPM_DATA_FIFO(tpm_dev.locality), 521 520 &(buf[count]), burstcnt); 522 521 if (rc == 0) 523 522 count += burstcnt; ··· 531 530 } 532 531 533 532 wait_for_stat(chip, TPM_STS_VALID, 534 - chip->vendor.timeout_c, &status); 533 + 
chip->timeout_c, &status); 535 534 536 535 if ((status & TPM_STS_DATA_EXPECT) == 0) { 537 536 rc = -EIO; ··· 540 539 } 541 540 542 541 /* write last byte */ 543 - iic_tpm_write(TPM_DATA_FIFO(chip->vendor.locality), &(buf[count]), 1); 544 - wait_for_stat(chip, TPM_STS_VALID, chip->vendor.timeout_c, &status); 542 + iic_tpm_write(TPM_DATA_FIFO(tpm_dev.locality), &(buf[count]), 1); 543 + wait_for_stat(chip, TPM_STS_VALID, chip->timeout_c, &status); 545 544 if ((status & TPM_STS_DATA_EXPECT) != 0) { 546 545 rc = -EIO; 547 546 goto out_err; 548 547 } 549 548 550 549 /* go and do it */ 551 - iic_tpm_write(TPM_STS(chip->vendor.locality), &sts, 1); 550 + iic_tpm_write(TPM_STS(tpm_dev.locality), &sts, 1); 552 551 553 552 return len; 554 553 out_err: ··· 557 556 * so we sleep rather than keeping the bus busy 558 557 */ 559 558 usleep_range(SLEEP_DURATION_RESET_LOW, SLEEP_DURATION_RESET_HI); 560 - release_locality(chip, chip->vendor.locality, 0); 559 + release_locality(chip, tpm_dev.locality, 0); 561 560 return rc; 562 561 } 563 562 ··· 567 566 } 568 567 569 568 static const struct tpm_class_ops tpm_tis_i2c = { 569 + .flags = TPM_OPS_AUTO_STARTUP, 570 570 .status = tpm_tis_i2c_status, 571 571 .recv = tpm_tis_i2c_recv, 572 572 .send = tpm_tis_i2c_send, ··· 587 585 if (IS_ERR(chip)) 588 586 return PTR_ERR(chip); 589 587 590 - /* Disable interrupts */ 591 - chip->vendor.irq = 0; 592 - 593 588 /* Default timeouts */ 594 - chip->vendor.timeout_a = msecs_to_jiffies(TIS_SHORT_TIMEOUT); 595 - chip->vendor.timeout_b = msecs_to_jiffies(TIS_LONG_TIMEOUT); 596 - chip->vendor.timeout_c = msecs_to_jiffies(TIS_SHORT_TIMEOUT); 597 - chip->vendor.timeout_d = msecs_to_jiffies(TIS_SHORT_TIMEOUT); 589 + chip->timeout_a = msecs_to_jiffies(TIS_SHORT_TIMEOUT); 590 + chip->timeout_b = msecs_to_jiffies(TIS_LONG_TIMEOUT); 591 + chip->timeout_c = msecs_to_jiffies(TIS_SHORT_TIMEOUT); 592 + chip->timeout_d = msecs_to_jiffies(TIS_SHORT_TIMEOUT); 598 593 599 594 if (request_locality(chip, 0) != 0) { 600 595 
dev_err(dev, "could not request locality\n"); ··· 618 619 619 620 dev_info(dev, "1.2 TPM (device-id 0x%X)\n", vendor >> 16); 620 621 621 - INIT_LIST_HEAD(&chip->vendor.list); 622 622 tpm_dev.chip = chip; 623 - 624 - tpm_get_timeouts(chip); 625 - tpm_do_selftest(chip); 626 623 627 624 return tpm_chip_register(chip); 628 625 out_release: 629 - release_locality(chip, chip->vendor.locality, 1); 626 + release_locality(chip, tpm_dev.locality, 1); 630 627 tpm_dev.client = NULL; 631 628 out_err: 632 629 return rc; ··· 694 699 struct tpm_chip *chip = tpm_dev.chip; 695 700 696 701 tpm_chip_unregister(chip); 697 - release_locality(chip, chip->vendor.locality, 1); 702 + release_locality(chip, tpm_dev.locality, 1); 698 703 tpm_dev.client = NULL; 699 704 700 705 return 0;
+73 -58
drivers/char/tpm/tpm_i2c_nuvoton.c
··· 1 - /****************************************************************************** 2 - * Nuvoton TPM I2C Device Driver Interface for WPCT301/NPCT501, 1 + /****************************************************************************** 2 + * Nuvoton TPM I2C Device Driver Interface for WPCT301/NPCT501/NPCT6XX, 3 3 * based on the TCG TPM Interface Spec version 1.2. 4 4 * Specifications at www.trustedcomputinggroup.org 5 5 * ··· 31 31 #include <linux/interrupt.h> 32 32 #include <linux/wait.h> 33 33 #include <linux/i2c.h> 34 + #include <linux/of_device.h> 34 35 #include "tpm.h" 35 36 36 37 /* I2C interface offsets */ ··· 53 52 #define TPM_I2C_RETRY_DELAY_SHORT 2 /* msec */ 54 53 #define TPM_I2C_RETRY_DELAY_LONG 10 /* msec */ 55 54 56 - #define I2C_DRIVER_NAME "tpm_i2c_nuvoton" 55 + #define OF_IS_TPM2 ((void *)1) 56 + #define I2C_IS_TPM2 1 57 57 58 58 struct priv_data { 59 + int irq; 59 60 unsigned int intrs; 61 + wait_queue_head_t read_queue; 60 62 }; 61 63 62 64 static s32 i2c_nuvoton_read_buf(struct i2c_client *client, u8 offset, u8 size, ··· 100 96 /* read TPM_STS register */ 101 97 static u8 i2c_nuvoton_read_status(struct tpm_chip *chip) 102 98 { 103 - struct i2c_client *client = to_i2c_client(chip->pdev); 99 + struct i2c_client *client = to_i2c_client(chip->dev.parent); 104 100 s32 status; 105 101 u8 data; 106 102 107 103 status = i2c_nuvoton_read_buf(client, TPM_STS, 1, &data); 108 104 if (status <= 0) { 109 - dev_err(chip->pdev, "%s() error return %d\n", __func__, 105 + dev_err(&chip->dev, "%s() error return %d\n", __func__, 110 106 status); 111 107 data = TPM_STS_ERR_VAL; 112 108 } ··· 131 127 /* write commandReady to TPM_STS register */ 132 128 static void i2c_nuvoton_ready(struct tpm_chip *chip) 133 129 { 134 - struct i2c_client *client = to_i2c_client(chip->pdev); 130 + struct i2c_client *client = to_i2c_client(chip->dev.parent); 135 131 s32 status; 136 132 137 133 /* this causes the current command to be aborted */ 138 134 status = 
i2c_nuvoton_write_status(client, TPM_STS_COMMAND_READY); 139 135 if (status < 0) 140 - dev_err(chip->pdev, 136 + dev_err(&chip->dev, 141 137 "%s() fail to write TPM_STS.commandReady\n", __func__); 142 138 } 143 139 ··· 146 142 static int i2c_nuvoton_get_burstcount(struct i2c_client *client, 147 143 struct tpm_chip *chip) 148 144 { 149 - unsigned long stop = jiffies + chip->vendor.timeout_d; 145 + unsigned long stop = jiffies + chip->timeout_d; 150 146 s32 status; 151 147 int burst_count = -1; 152 148 u8 data; ··· 167 163 } 168 164 169 165 /* 170 - * WPCT301/NPCT501 SINT# supports only dataAvail 166 + * WPCT301/NPCT501/NPCT6XX SINT# supports only dataAvail 171 167 * any call to this function which is not waiting for dataAvail will 172 168 * set queue to NULL to avoid waiting for interrupt 173 169 */ ··· 180 176 static int i2c_nuvoton_wait_for_stat(struct tpm_chip *chip, u8 mask, u8 value, 181 177 u32 timeout, wait_queue_head_t *queue) 182 178 { 183 - if (chip->vendor.irq && queue) { 179 + if ((chip->flags & TPM_CHIP_FLAG_IRQ) && queue) { 184 180 s32 rc; 185 - struct priv_data *priv = chip->vendor.priv; 181 + struct priv_data *priv = dev_get_drvdata(&chip->dev); 186 182 unsigned int cur_intrs = priv->intrs; 187 183 188 - enable_irq(chip->vendor.irq); 184 + enable_irq(priv->irq); 189 185 rc = wait_event_interruptible_timeout(*queue, 190 186 cur_intrs != priv->intrs, 191 187 timeout); ··· 216 212 return 0; 217 213 } while (time_before(jiffies, stop)); 218 214 } 219 - dev_err(chip->pdev, "%s(%02x, %02x) -> timeout\n", __func__, mask, 215 + dev_err(&chip->dev, "%s(%02x, %02x) -> timeout\n", __func__, mask, 220 216 value); 221 217 return -ETIMEDOUT; 222 218 } ··· 235 231 static int i2c_nuvoton_recv_data(struct i2c_client *client, 236 232 struct tpm_chip *chip, u8 *buf, size_t count) 237 233 { 234 + struct priv_data *priv = dev_get_drvdata(&chip->dev); 238 235 s32 rc; 239 236 int burst_count, bytes2read, size = 0; 240 237 241 238 while (size < count && 242 239 
i2c_nuvoton_wait_for_data_avail(chip, 243 - chip->vendor.timeout_c, 244 - &chip->vendor.read_queue) == 0) { 240 + chip->timeout_c, 241 + &priv->read_queue) == 0) { 245 242 burst_count = i2c_nuvoton_get_burstcount(client, chip); 246 243 if (burst_count < 0) { 247 - dev_err(chip->pdev, 244 + dev_err(&chip->dev, 248 245 "%s() fail to read burstCount=%d\n", __func__, 249 246 burst_count); 250 247 return -EIO; ··· 254 249 rc = i2c_nuvoton_read_buf(client, TPM_DATA_FIFO_R, 255 250 bytes2read, &buf[size]); 256 251 if (rc < 0) { 257 - dev_err(chip->pdev, 252 + dev_err(&chip->dev, 258 253 "%s() fail on i2c_nuvoton_read_buf()=%d\n", 259 254 __func__, rc); 260 255 return -EIO; 261 256 } 262 - dev_dbg(chip->pdev, "%s(%d):", __func__, bytes2read); 257 + dev_dbg(&chip->dev, "%s(%d):", __func__, bytes2read); 263 258 size += bytes2read; 264 259 } 265 260 ··· 269 264 /* Read TPM command results */ 270 265 static int i2c_nuvoton_recv(struct tpm_chip *chip, u8 *buf, size_t count) 271 266 { 272 - struct device *dev = chip->pdev; 267 + struct priv_data *priv = dev_get_drvdata(&chip->dev); 268 + struct device *dev = chip->dev.parent; 273 269 struct i2c_client *client = to_i2c_client(dev); 274 270 s32 rc; 275 271 int expected, status, burst_count, retries, size = 0; ··· 291 285 * tag, paramsize, and result 292 286 */ 293 287 status = i2c_nuvoton_wait_for_data_avail( 294 - chip, chip->vendor.timeout_c, &chip->vendor.read_queue); 288 + chip, chip->timeout_c, &priv->read_queue); 295 289 if (status != 0) { 296 290 dev_err(dev, "%s() timeout on dataAvail\n", __func__); 297 291 size = -ETIMEDOUT; ··· 331 325 } 332 326 if (i2c_nuvoton_wait_for_stat( 333 327 chip, TPM_STS_VALID | TPM_STS_DATA_AVAIL, 334 - TPM_STS_VALID, chip->vendor.timeout_c, 328 + TPM_STS_VALID, chip->timeout_c, 335 329 NULL)) { 336 330 dev_err(dev, "%s() error left over data\n", __func__); 337 331 size = -ETIMEDOUT; ··· 340 334 break; 341 335 } 342 336 i2c_nuvoton_ready(chip); 343 - dev_dbg(chip->pdev, "%s() -> %d\n", 
__func__, size); 337 + dev_dbg(&chip->dev, "%s() -> %d\n", __func__, size); 344 338 return size; 345 339 } 346 340 ··· 353 347 */ 354 348 static int i2c_nuvoton_send(struct tpm_chip *chip, u8 *buf, size_t len) 355 349 { 356 - struct device *dev = chip->pdev; 350 + struct priv_data *priv = dev_get_drvdata(&chip->dev); 351 + struct device *dev = chip->dev.parent; 357 352 struct i2c_client *client = to_i2c_client(dev); 358 353 u32 ordinal; 359 354 size_t count = 0; ··· 364 357 i2c_nuvoton_ready(chip); 365 358 if (i2c_nuvoton_wait_for_stat(chip, TPM_STS_COMMAND_READY, 366 359 TPM_STS_COMMAND_READY, 367 - chip->vendor.timeout_b, NULL)) { 360 + chip->timeout_b, NULL)) { 368 361 dev_err(dev, "%s() timeout on commandReady\n", 369 362 __func__); 370 363 rc = -EIO; ··· 396 389 TPM_STS_EXPECT, 397 390 TPM_STS_VALID | 398 391 TPM_STS_EXPECT, 399 - chip->vendor.timeout_c, 392 + chip->timeout_c, 400 393 NULL); 401 394 if (rc < 0) { 402 395 dev_err(dev, "%s() timeout on Expect\n", ··· 421 414 rc = i2c_nuvoton_wait_for_stat(chip, 422 415 TPM_STS_VALID | TPM_STS_EXPECT, 423 416 TPM_STS_VALID, 424 - chip->vendor.timeout_c, NULL); 417 + chip->timeout_c, NULL); 425 418 if (rc) { 426 419 dev_err(dev, "%s() timeout on Expect to clear\n", 427 420 __func__); ··· 446 439 rc = i2c_nuvoton_wait_for_data_avail(chip, 447 440 tpm_calc_ordinal_duration(chip, 448 441 ordinal), 449 - &chip->vendor.read_queue); 442 + &priv->read_queue); 450 443 if (rc) { 451 444 dev_err(dev, "%s() timeout command duration\n", __func__); 452 445 i2c_nuvoton_ready(chip); ··· 463 456 } 464 457 465 458 static const struct tpm_class_ops tpm_i2c = { 459 + .flags = TPM_OPS_AUTO_STARTUP, 466 460 .status = i2c_nuvoton_read_status, 467 461 .recv = i2c_nuvoton_recv, 468 462 .send = i2c_nuvoton_send, ··· 481 473 static irqreturn_t i2c_nuvoton_int_handler(int dummy, void *dev_id) 482 474 { 483 475 struct tpm_chip *chip = dev_id; 484 - struct priv_data *priv = chip->vendor.priv; 476 + struct priv_data *priv = 
dev_get_drvdata(&chip->dev); 485 477 486 478 priv->intrs++; 487 - wake_up(&chip->vendor.read_queue); 488 - disable_irq_nosync(chip->vendor.irq); 479 + wake_up(&priv->read_queue); 480 + disable_irq_nosync(priv->irq); 489 481 return IRQ_HANDLED; 490 482 } 491 483 ··· 529 521 int rc; 530 522 struct tpm_chip *chip; 531 523 struct device *dev = &client->dev; 524 + struct priv_data *priv; 532 525 u32 vid = 0; 533 526 534 527 rc = get_vid(client, &vid); ··· 543 534 if (IS_ERR(chip)) 544 535 return PTR_ERR(chip); 545 536 546 - chip->vendor.priv = devm_kzalloc(dev, sizeof(struct priv_data), 547 - GFP_KERNEL); 548 - if (!chip->vendor.priv) 537 + priv = devm_kzalloc(dev, sizeof(struct priv_data), GFP_KERNEL); 538 + if (!priv) 549 539 return -ENOMEM; 550 540 551 - init_waitqueue_head(&chip->vendor.read_queue); 552 - init_waitqueue_head(&chip->vendor.int_queue); 541 + if (dev->of_node) { 542 + const struct of_device_id *of_id; 543 + 544 + of_id = of_match_device(dev->driver->of_match_table, dev); 545 + if (of_id && of_id->data == OF_IS_TPM2) 546 + chip->flags |= TPM_CHIP_FLAG_TPM2; 547 + } else 548 + if (id->driver_data == I2C_IS_TPM2) 549 + chip->flags |= TPM_CHIP_FLAG_TPM2; 550 + 551 + init_waitqueue_head(&priv->read_queue); 553 552 554 553 /* Default timeouts */ 555 - chip->vendor.timeout_a = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT); 556 - chip->vendor.timeout_b = msecs_to_jiffies(TPM_I2C_LONG_TIMEOUT); 557 - chip->vendor.timeout_c = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT); 558 - chip->vendor.timeout_d = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT); 554 + chip->timeout_a = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT); 555 + chip->timeout_b = msecs_to_jiffies(TPM_I2C_LONG_TIMEOUT); 556 + chip->timeout_c = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT); 557 + chip->timeout_d = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT); 558 + 559 + dev_set_drvdata(&chip->dev, priv); 559 560 560 561 /* 561 562 * I2C intfcaps (interrupt capabilitieis) in the chip are hard coded to: 562 563 * TPM_INTF_INT_LEVEL_LOW | 
TPM_INTF_DATA_AVAIL_INT 563 564 * The IRQ should be set in the i2c_board_info (which is done 564 565 * automatically in of_i2c_register_devices, for device tree users */ 565 - chip->vendor.irq = client->irq; 566 - 567 - if (chip->vendor.irq) { 568 - dev_dbg(dev, "%s() chip-vendor.irq\n", __func__); 569 - rc = devm_request_irq(dev, chip->vendor.irq, 566 + priv->irq = client->irq; 567 + if (client->irq) { 568 + dev_dbg(dev, "%s() priv->irq\n", __func__); 569 + rc = devm_request_irq(dev, client->irq, 570 570 i2c_nuvoton_int_handler, 571 571 IRQF_TRIGGER_LOW, 572 - chip->devname, 572 + dev_name(&chip->dev), 573 573 chip); 574 574 if (rc) { 575 575 dev_err(dev, "%s() Unable to request irq: %d for use\n", 576 - __func__, chip->vendor.irq); 577 - chip->vendor.irq = 0; 576 + __func__, priv->irq); 577 + priv->irq = 0; 578 578 } else { 579 + chip->flags |= TPM_CHIP_FLAG_IRQ; 579 580 /* Clear any pending interrupt */ 580 581 i2c_nuvoton_ready(chip); 581 582 /* - wait for TPM_STS==0xA0 (stsValid, commandReady) */ 582 583 rc = i2c_nuvoton_wait_for_stat(chip, 583 584 TPM_STS_COMMAND_READY, 584 585 TPM_STS_COMMAND_READY, 585 - chip->vendor.timeout_b, 586 + chip->timeout_b, 586 587 NULL); 587 588 if (rc == 0) { 588 589 /* ··· 620 601 } 621 602 } 622 603 623 - if (tpm_get_timeouts(chip)) 624 - return -ENODEV; 625 - 626 - if (tpm_do_selftest(chip)) 627 - return -ENODEV; 628 - 629 604 return tpm_chip_register(chip); 630 605 } 631 606 632 607 static int i2c_nuvoton_remove(struct i2c_client *client) 633 608 { 634 - struct device *dev = &(client->dev); 635 - struct tpm_chip *chip = dev_get_drvdata(dev); 609 + struct tpm_chip *chip = i2c_get_clientdata(client); 610 + 636 611 tpm_chip_unregister(chip); 637 612 return 0; 638 613 } 639 614 640 615 static const struct i2c_device_id i2c_nuvoton_id[] = { 641 - {I2C_DRIVER_NAME, 0}, 616 + {"tpm_i2c_nuvoton"}, 617 + {"tpm2_i2c_nuvoton", .driver_data = I2C_IS_TPM2}, 642 618 {} 643 619 }; 644 620 MODULE_DEVICE_TABLE(i2c, i2c_nuvoton_id); ··· 642 
628 static const struct of_device_id i2c_nuvoton_of_match[] = { 643 629 {.compatible = "nuvoton,npct501"}, 644 630 {.compatible = "winbond,wpct301"}, 631 + {.compatible = "nuvoton,npct601", .data = OF_IS_TPM2}, 645 632 {}, 646 633 }; 647 634 MODULE_DEVICE_TABLE(of, i2c_nuvoton_of_match); ··· 655 640 .probe = i2c_nuvoton_probe, 656 641 .remove = i2c_nuvoton_remove, 657 642 .driver = { 658 - .name = I2C_DRIVER_NAME, 643 + .name = "tpm_i2c_nuvoton", 659 644 .pm = &i2c_nuvoton_pm_ops, 660 645 .of_match_table = of_match_ptr(i2c_nuvoton_of_match), 661 646 },
+11 -27
drivers/char/tpm/tpm_ibmvtpm.c
··· 54 54 } 55 55 56 56 /** 57 - * ibmvtpm_get_data - Retrieve ibm vtpm data 58 - * @dev: device struct 59 - * 60 - * Return value: 61 - * vtpm device struct 62 - */ 63 - static struct ibmvtpm_dev *ibmvtpm_get_data(const struct device *dev) 64 - { 65 - struct tpm_chip *chip = dev_get_drvdata(dev); 66 - if (chip) 67 - return (struct ibmvtpm_dev *)TPM_VPRIV(chip); 68 - return NULL; 69 - } 70 - 71 - /** 72 57 * tpm_ibmvtpm_recv - Receive data after send 73 58 * @chip: tpm chip struct 74 59 * @buf: buffer to read ··· 64 79 */ 65 80 static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count) 66 81 { 67 - struct ibmvtpm_dev *ibmvtpm; 82 + struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev); 68 83 u16 len; 69 84 int sig; 70 - 71 - ibmvtpm = (struct ibmvtpm_dev *)TPM_VPRIV(chip); 72 85 73 86 if (!ibmvtpm->rtce_buf) { 74 87 dev_err(ibmvtpm->dev, "ibmvtpm device is not ready\n"); ··· 105 122 */ 106 123 static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count) 107 124 { 108 - struct ibmvtpm_dev *ibmvtpm; 125 + struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev); 109 126 struct ibmvtpm_crq crq; 110 127 __be64 *word = (__be64 *)&crq; 111 128 int rc, sig; 112 - 113 - ibmvtpm = (struct ibmvtpm_dev *)TPM_VPRIV(chip); 114 129 115 130 if (!ibmvtpm->rtce_buf) { 116 131 dev_err(ibmvtpm->dev, "ibmvtpm device is not ready\n"); ··· 270 289 */ 271 290 static int tpm_ibmvtpm_remove(struct vio_dev *vdev) 272 291 { 273 - struct ibmvtpm_dev *ibmvtpm = ibmvtpm_get_data(&vdev->dev); 274 - struct tpm_chip *chip = dev_get_drvdata(ibmvtpm->dev); 292 + struct tpm_chip *chip = dev_get_drvdata(&vdev->dev); 293 + struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev); 275 294 int rc = 0; 276 295 277 296 tpm_chip_unregister(chip); ··· 308 327 */ 309 328 static unsigned long tpm_ibmvtpm_get_desired_dma(struct vio_dev *vdev) 310 329 { 311 - struct ibmvtpm_dev *ibmvtpm = ibmvtpm_get_data(&vdev->dev); 330 + struct tpm_chip *chip = dev_get_drvdata(&vdev->dev); 
331 + struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev); 312 332 313 333 /* ibmvtpm initializes at probe time, so the data we are 314 334 * asking for may not be set yet. Estimate that 4K required ··· 330 348 */ 331 349 static int tpm_ibmvtpm_suspend(struct device *dev) 332 350 { 333 - struct ibmvtpm_dev *ibmvtpm = ibmvtpm_get_data(dev); 351 + struct tpm_chip *chip = dev_get_drvdata(dev); 352 + struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev); 334 353 struct ibmvtpm_crq crq; 335 354 u64 *buf = (u64 *) &crq; 336 355 int rc = 0; ··· 383 400 */ 384 401 static int tpm_ibmvtpm_resume(struct device *dev) 385 402 { 386 - struct ibmvtpm_dev *ibmvtpm = ibmvtpm_get_data(dev); 403 + struct tpm_chip *chip = dev_get_drvdata(dev); 404 + struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev); 387 405 int rc = 0; 388 406 389 407 do { ··· 627 643 628 644 crq_q->index = 0; 629 645 630 - TPM_VPRIV(chip) = (void *)ibmvtpm; 646 + dev_set_drvdata(&chip->dev, ibmvtpm); 631 647 632 648 spin_lock_init(&ibmvtpm->rtce_lock); 633 649
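The ibmvtpm hunks drop the `TPM_VPRIV()` accessor and the `ibmvtpm_get_data()` helper in favor of plain `dev_set_drvdata()`/`dev_get_drvdata()` on the chip's embedded `struct device`. A userspace sketch of that pattern, with `struct device` and the drvdata helpers mocked (the real kernel versions live in the driver core); the field names follow the diff:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal mocks of the kernel structures involved. */
struct device {
	void *driver_data;
};

struct tpm_chip {
	struct device dev;	/* chip->dev, as dereferenced in the hunks */
};

static void dev_set_drvdata(struct device *dev, void *data)
{
	dev->driver_data = data;
}

static void *dev_get_drvdata(const struct device *dev)
{
	return dev->driver_data;
}

/* Driver-private state, stashed on the chip at probe time. */
struct ibmvtpm_dev {
	unsigned int rtce_size;
};
```

With this, any callback handed a `struct tpm_chip *` recovers its private state in one line, `dev_get_drvdata(&chip->dev)`, instead of going through a driver-local getter.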
+11 -11
drivers/char/tpm/tpm_infineon.c
··· 195 195 } 196 196 if (i == TPM_MAX_TRIES) { /* timeout occurs */ 197 197 if (wait_for_bit == STAT_XFE) 198 - dev_err(chip->pdev, "Timeout in wait(STAT_XFE)\n"); 198 + dev_err(&chip->dev, "Timeout in wait(STAT_XFE)\n"); 199 199 if (wait_for_bit == STAT_RDA) 200 - dev_err(chip->pdev, "Timeout in wait(STAT_RDA)\n"); 200 + dev_err(&chip->dev, "Timeout in wait(STAT_RDA)\n"); 201 201 return -EIO; 202 202 } 203 203 return 0; ··· 220 220 static void tpm_wtx(struct tpm_chip *chip) 221 221 { 222 222 number_of_wtx++; 223 - dev_info(chip->pdev, "Granting WTX (%02d / %02d)\n", 223 + dev_info(&chip->dev, "Granting WTX (%02d / %02d)\n", 224 224 number_of_wtx, TPM_MAX_WTX_PACKAGES); 225 225 wait_and_send(chip, TPM_VL_VER); 226 226 wait_and_send(chip, TPM_CTRL_WTX); ··· 231 231 232 232 static void tpm_wtx_abort(struct tpm_chip *chip) 233 233 { 234 - dev_info(chip->pdev, "Aborting WTX\n"); 234 + dev_info(&chip->dev, "Aborting WTX\n"); 235 235 wait_and_send(chip, TPM_VL_VER); 236 236 wait_and_send(chip, TPM_CTRL_WTX_ABORT); 237 237 wait_and_send(chip, 0x00); ··· 257 257 } 258 258 259 259 if (buf[0] != TPM_VL_VER) { 260 - dev_err(chip->pdev, 260 + dev_err(&chip->dev, 261 261 "Wrong transport protocol implementation!\n"); 262 262 return -EIO; 263 263 } ··· 272 272 } 273 273 274 274 if ((size == 0x6D00) && (buf[1] == 0x80)) { 275 - dev_err(chip->pdev, "Error handling on vendor layer!\n"); 275 + dev_err(&chip->dev, "Error handling on vendor layer!\n"); 276 276 return -EIO; 277 277 } 278 278 ··· 284 284 } 285 285 286 286 if (buf[1] == TPM_CTRL_WTX) { 287 - dev_info(chip->pdev, "WTX-package received\n"); 287 + dev_info(&chip->dev, "WTX-package received\n"); 288 288 if (number_of_wtx < TPM_MAX_WTX_PACKAGES) { 289 289 tpm_wtx(chip); 290 290 goto recv_begin; ··· 295 295 } 296 296 297 297 if (buf[1] == TPM_CTRL_WTX_ABORT_ACK) { 298 - dev_info(chip->pdev, "WTX-abort acknowledged\n"); 298 + dev_info(&chip->dev, "WTX-abort acknowledged\n"); 299 299 return size; 300 300 } 301 301 302 302 if 
(buf[1] == TPM_CTRL_ERROR) { 303 - dev_err(chip->pdev, "ERROR-package received:\n"); 303 + dev_err(&chip->dev, "ERROR-package received:\n"); 304 304 if (buf[4] == TPM_INF_NAK) 305 - dev_err(chip->pdev, 305 + dev_err(&chip->dev, 306 306 "-> Negative acknowledgement" 307 307 " - retransmit command!\n"); 308 308 return -EIO; ··· 321 321 322 322 ret = empty_fifo(chip, 1); 323 323 if (ret) { 324 - dev_err(chip->pdev, "Timeout while clearing FIFO\n"); 324 + dev_err(&chip->dev, "Timeout while clearing FIFO\n"); 325 325 return -EIO; 326 326 } 327 327
+55 -29
drivers/char/tpm/tpm_nsc.c
··· 64 64 NSC_COMMAND_EOC = 0x03, 65 65 NSC_COMMAND_CANCEL = 0x22 66 66 }; 67 + 68 + struct tpm_nsc_priv { 69 + unsigned long base; 70 + }; 71 + 67 72 /* 68 73 * Wait for a certain status to appear 69 74 */ 70 75 static int wait_for_stat(struct tpm_chip *chip, u8 mask, u8 val, u8 * data) 71 76 { 77 + struct tpm_nsc_priv *priv = dev_get_drvdata(&chip->dev); 72 78 unsigned long stop; 73 79 74 80 /* status immediately available check */ 75 - *data = inb(chip->vendor.base + NSC_STATUS); 81 + *data = inb(priv->base + NSC_STATUS); 76 82 if ((*data & mask) == val) 77 83 return 0; 78 84 ··· 86 80 stop = jiffies + 10 * HZ; 87 81 do { 88 82 msleep(TPM_TIMEOUT); 89 - *data = inb(chip->vendor.base + 1); 83 + *data = inb(priv->base + 1); 90 84 if ((*data & mask) == val) 91 85 return 0; 92 86 } ··· 97 91 98 92 static int nsc_wait_for_ready(struct tpm_chip *chip) 99 93 { 94 + struct tpm_nsc_priv *priv = dev_get_drvdata(&chip->dev); 100 95 int status; 101 96 unsigned long stop; 102 97 103 98 /* status immediately available check */ 104 - status = inb(chip->vendor.base + NSC_STATUS); 99 + status = inb(priv->base + NSC_STATUS); 105 100 if (status & NSC_STATUS_OBF) 106 - status = inb(chip->vendor.base + NSC_DATA); 101 + status = inb(priv->base + NSC_DATA); 107 102 if (status & NSC_STATUS_RDY) 108 103 return 0; 109 104 ··· 112 105 stop = jiffies + 100; 113 106 do { 114 107 msleep(TPM_TIMEOUT); 115 - status = inb(chip->vendor.base + NSC_STATUS); 108 + status = inb(priv->base + NSC_STATUS); 116 109 if (status & NSC_STATUS_OBF) 117 - status = inb(chip->vendor.base + NSC_DATA); 110 + status = inb(priv->base + NSC_DATA); 118 111 if (status & NSC_STATUS_RDY) 119 112 return 0; 120 113 } 121 114 while (time_before(jiffies, stop)); 122 115 123 - dev_info(chip->pdev, "wait for ready failed\n"); 116 + dev_info(&chip->dev, "wait for ready failed\n"); 124 117 return -EBUSY; 125 118 } 126 119 127 120 128 121 static int tpm_nsc_recv(struct tpm_chip *chip, u8 * buf, size_t count) 129 122 { 123 + 
struct tpm_nsc_priv *priv = dev_get_drvdata(&chip->dev); 130 124 u8 *buffer = buf; 131 125 u8 data, *p; 132 126 u32 size; ··· 137 129 return -EIO; 138 130 139 131 if (wait_for_stat(chip, NSC_STATUS_F0, NSC_STATUS_F0, &data) < 0) { 140 - dev_err(chip->pdev, "F0 timeout\n"); 132 + dev_err(&chip->dev, "F0 timeout\n"); 141 133 return -EIO; 142 134 } 143 - if ((data = 144 - inb(chip->vendor.base + NSC_DATA)) != NSC_COMMAND_NORMAL) { 145 - dev_err(chip->pdev, "not in normal mode (0x%x)\n", 135 + 136 + data = inb(priv->base + NSC_DATA); 137 + if (data != NSC_COMMAND_NORMAL) { 138 + dev_err(&chip->dev, "not in normal mode (0x%x)\n", 146 139 data); 147 140 return -EIO; 148 141 } ··· 152 143 for (p = buffer; p < &buffer[count]; p++) { 153 144 if (wait_for_stat 154 145 (chip, NSC_STATUS_OBF, NSC_STATUS_OBF, &data) < 0) { 155 - dev_err(chip->pdev, 146 + dev_err(&chip->dev, 156 147 "OBF timeout (while reading data)\n"); 157 148 return -EIO; 158 149 } 159 150 if (data & NSC_STATUS_F0) 160 151 break; 161 - *p = inb(chip->vendor.base + NSC_DATA); 152 + *p = inb(priv->base + NSC_DATA); 162 153 } 163 154 164 155 if ((data & NSC_STATUS_F0) == 0 && 165 156 (wait_for_stat(chip, NSC_STATUS_F0, NSC_STATUS_F0, &data) < 0)) { 166 - dev_err(chip->pdev, "F0 not set\n"); 157 + dev_err(&chip->dev, "F0 not set\n"); 167 158 return -EIO; 168 159 } 169 - if ((data = inb(chip->vendor.base + NSC_DATA)) != NSC_COMMAND_EOC) { 170 - dev_err(chip->pdev, 160 + 161 + data = inb(priv->base + NSC_DATA); 162 + if (data != NSC_COMMAND_EOC) { 163 + dev_err(&chip->dev, 171 164 "expected end of command(0x%x)\n", data); 172 165 return -EIO; 173 166 } ··· 185 174 186 175 static int tpm_nsc_send(struct tpm_chip *chip, u8 * buf, size_t count) 187 176 { 177 + struct tpm_nsc_priv *priv = dev_get_drvdata(&chip->dev); 188 178 u8 data; 189 179 int i; 190 180 ··· 195 183 * fix it. Not sure why this is needed, we followed the flow 196 184 * chart in the manual to the letter. 
197 185 */ 198 - outb(NSC_COMMAND_CANCEL, chip->vendor.base + NSC_COMMAND); 186 + outb(NSC_COMMAND_CANCEL, priv->base + NSC_COMMAND); 199 187 200 188 if (nsc_wait_for_ready(chip) != 0) 201 189 return -EIO; 202 190 203 191 if (wait_for_stat(chip, NSC_STATUS_IBF, 0, &data) < 0) { 204 - dev_err(chip->pdev, "IBF timeout\n"); 192 + dev_err(&chip->dev, "IBF timeout\n"); 205 193 return -EIO; 206 194 } 207 195 208 - outb(NSC_COMMAND_NORMAL, chip->vendor.base + NSC_COMMAND); 196 + outb(NSC_COMMAND_NORMAL, priv->base + NSC_COMMAND); 209 197 if (wait_for_stat(chip, NSC_STATUS_IBR, NSC_STATUS_IBR, &data) < 0) { 210 - dev_err(chip->pdev, "IBR timeout\n"); 198 + dev_err(&chip->dev, "IBR timeout\n"); 211 199 return -EIO; 212 200 } 213 201 214 202 for (i = 0; i < count; i++) { 215 203 if (wait_for_stat(chip, NSC_STATUS_IBF, 0, &data) < 0) { 216 - dev_err(chip->pdev, 204 + dev_err(&chip->dev, 217 205 "IBF timeout (while writing data)\n"); 218 206 return -EIO; 219 207 } 220 - outb(buf[i], chip->vendor.base + NSC_DATA); 208 + outb(buf[i], priv->base + NSC_DATA); 221 209 } 222 210 223 211 if (wait_for_stat(chip, NSC_STATUS_IBF, 0, &data) < 0) { 224 - dev_err(chip->pdev, "IBF timeout\n"); 212 + dev_err(&chip->dev, "IBF timeout\n"); 225 213 return -EIO; 226 214 } 227 - outb(NSC_COMMAND_EOC, chip->vendor.base + NSC_COMMAND); 215 + outb(NSC_COMMAND_EOC, priv->base + NSC_COMMAND); 228 216 229 217 return count; 230 218 } 231 219 232 220 static void tpm_nsc_cancel(struct tpm_chip *chip) 233 221 { 234 - outb(NSC_COMMAND_CANCEL, chip->vendor.base + NSC_COMMAND); 222 + struct tpm_nsc_priv *priv = dev_get_drvdata(&chip->dev); 223 + 224 + outb(NSC_COMMAND_CANCEL, priv->base + NSC_COMMAND); 235 225 } 236 226 237 227 static u8 tpm_nsc_status(struct tpm_chip *chip) 238 228 { 239 - return inb(chip->vendor.base + NSC_STATUS); 229 + struct tpm_nsc_priv *priv = dev_get_drvdata(&chip->dev); 230 + 231 + return inb(priv->base + NSC_STATUS); 240 232 } 241 233 242 234 static bool tpm_nsc_req_canceled(struct 
tpm_chip *chip, u8 status) ··· 263 247 static void tpm_nsc_remove(struct device *dev) 264 248 { 265 249 struct tpm_chip *chip = dev_get_drvdata(dev); 250 + struct tpm_nsc_priv *priv = dev_get_drvdata(&chip->dev); 266 251 267 252 tpm_chip_unregister(chip); 268 - release_region(chip->vendor.base, 2); 253 + release_region(priv->base, 2); 269 254 } 270 255 271 256 static SIMPLE_DEV_PM_OPS(tpm_nsc_pm, tpm_pm_suspend, tpm_pm_resume); ··· 285 268 int nscAddrBase = TPM_ADDR; 286 269 struct tpm_chip *chip; 287 270 unsigned long base; 271 + struct tpm_nsc_priv *priv; 288 272 289 273 /* verify that it is a National part (SID) */ 290 274 if (tpm_read_index(TPM_ADDR, NSC_SID_INDEX) != 0xEF) { ··· 319 301 if ((rc = platform_device_add(pdev)) < 0) 320 302 goto err_put_dev; 321 303 304 + priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); 305 + if (!priv) { 306 + rc = -ENOMEM; 307 + goto err_del_dev; 308 + } 309 + 310 + priv->base = base; 311 + 322 312 if (request_region(base, 2, "tpm_nsc0") == NULL ) { 323 313 rc = -EBUSY; 324 314 goto err_del_dev; ··· 337 311 rc = -ENODEV; 338 312 goto err_rel_reg; 339 313 } 314 + 315 + dev_set_drvdata(&chip->dev, priv); 340 316 341 317 rc = tpm_chip_register(chip); 342 318 if (rc) ··· 376 348 dev_info(&pdev->dev, 377 349 "NSC TPM revision %d\n", 378 350 tpm_read_index(nscAddrBase, 0x27) & 0x1F); 379 - 380 - chip->vendor.base = base; 381 351 382 352 return 0; 383 353
+74 -775
drivers/char/tpm/tpm_tis.c
··· 29 29 #include <linux/acpi.h> 30 30 #include <linux/freezer.h> 31 31 #include "tpm.h" 32 - 33 - enum tis_access { 34 - TPM_ACCESS_VALID = 0x80, 35 - TPM_ACCESS_ACTIVE_LOCALITY = 0x20, 36 - TPM_ACCESS_REQUEST_PENDING = 0x04, 37 - TPM_ACCESS_REQUEST_USE = 0x02, 38 - }; 39 - 40 - enum tis_status { 41 - TPM_STS_VALID = 0x80, 42 - TPM_STS_COMMAND_READY = 0x40, 43 - TPM_STS_GO = 0x20, 44 - TPM_STS_DATA_AVAIL = 0x10, 45 - TPM_STS_DATA_EXPECT = 0x08, 46 - }; 47 - 48 - enum tis_int_flags { 49 - TPM_GLOBAL_INT_ENABLE = 0x80000000, 50 - TPM_INTF_BURST_COUNT_STATIC = 0x100, 51 - TPM_INTF_CMD_READY_INT = 0x080, 52 - TPM_INTF_INT_EDGE_FALLING = 0x040, 53 - TPM_INTF_INT_EDGE_RISING = 0x020, 54 - TPM_INTF_INT_LEVEL_LOW = 0x010, 55 - TPM_INTF_INT_LEVEL_HIGH = 0x008, 56 - TPM_INTF_LOCALITY_CHANGE_INT = 0x004, 57 - TPM_INTF_STS_VALID_INT = 0x002, 58 - TPM_INTF_DATA_AVAIL_INT = 0x001, 59 - }; 60 - 61 - enum tis_defaults { 62 - TIS_MEM_LEN = 0x5000, 63 - TIS_SHORT_TIMEOUT = 750, /* ms */ 64 - TIS_LONG_TIMEOUT = 2000, /* 2 sec */ 65 - }; 32 + #include "tpm_tis_core.h" 66 33 67 34 struct tpm_info { 68 35 struct resource res; ··· 40 73 int irq; 41 74 }; 42 75 43 - /* Some timeout values are needed before it is known whether the chip is 44 - * TPM 1.0 or TPM 2.0. 
45 - */ 46 - #define TIS_TIMEOUT_A_MAX max(TIS_SHORT_TIMEOUT, TPM2_TIMEOUT_A) 47 - #define TIS_TIMEOUT_B_MAX max(TIS_LONG_TIMEOUT, TPM2_TIMEOUT_B) 48 - #define TIS_TIMEOUT_C_MAX max(TIS_SHORT_TIMEOUT, TPM2_TIMEOUT_C) 49 - #define TIS_TIMEOUT_D_MAX max(TIS_SHORT_TIMEOUT, TPM2_TIMEOUT_D) 50 - 51 - #define TPM_ACCESS(l) (0x0000 | ((l) << 12)) 52 - #define TPM_INT_ENABLE(l) (0x0008 | ((l) << 12)) 53 - #define TPM_INT_VECTOR(l) (0x000C | ((l) << 12)) 54 - #define TPM_INT_STATUS(l) (0x0010 | ((l) << 12)) 55 - #define TPM_INTF_CAPS(l) (0x0014 | ((l) << 12)) 56 - #define TPM_STS(l) (0x0018 | ((l) << 12)) 57 - #define TPM_STS3(l) (0x001b | ((l) << 12)) 58 - #define TPM_DATA_FIFO(l) (0x0024 | ((l) << 12)) 59 - 60 - #define TPM_DID_VID(l) (0x0F00 | ((l) << 12)) 61 - #define TPM_RID(l) (0x0F04 | ((l) << 12)) 62 - 63 - struct priv_data { 64 - bool irq_tested; 76 + struct tpm_tis_tcg_phy { 77 + struct tpm_tis_data priv; 78 + void __iomem *iobase; 65 79 }; 80 + 81 + static inline struct tpm_tis_tcg_phy *to_tpm_tis_tcg_phy(struct tpm_tis_data *data) 82 + { 83 + return container_of(data, struct tpm_tis_tcg_phy, priv); 84 + } 85 + 86 + static bool interrupts = true; 87 + module_param(interrupts, bool, 0444); 88 + MODULE_PARM_DESC(interrupts, "Enable interrupts"); 89 + 90 + static bool itpm; 91 + module_param(itpm, bool, 0444); 92 + MODULE_PARM_DESC(itpm, "Force iTPM workarounds (found on some Lenovo laptops)"); 93 + 94 + static bool force; 95 + #ifdef CONFIG_X86 96 + module_param(force, bool, 0444); 97 + MODULE_PARM_DESC(force, "Force device probe rather than using ACPI entry"); 98 + #endif 66 99 67 100 #if defined(CONFIG_PNP) && defined(CONFIG_ACPI) 68 101 static int has_hid(struct acpi_device *dev, const char *hid) ··· 87 120 } 88 121 #endif 89 122 90 - /* Before we attempt to access the TPM we must see that the valid bit is set. 
91 - * The specification says that this bit is 0 at reset and remains 0 until the 92 - * 'TPM has gone through its self test and initialization and has established 93 - * correct values in the other bits.' */ 94 - static int wait_startup(struct tpm_chip *chip, int l) 123 + static int tpm_tcg_read_bytes(struct tpm_tis_data *data, u32 addr, u16 len, 124 + u8 *result) 95 125 { 96 - unsigned long stop = jiffies + chip->vendor.timeout_a; 97 - do { 98 - if (ioread8(chip->vendor.iobase + TPM_ACCESS(l)) & 99 - TPM_ACCESS_VALID) 100 - return 0; 101 - msleep(TPM_TIMEOUT); 102 - } while (time_before(jiffies, stop)); 103 - return -1; 104 - } 126 + struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data); 105 127 106 - static int check_locality(struct tpm_chip *chip, int l) 107 - { 108 - if ((ioread8(chip->vendor.iobase + TPM_ACCESS(l)) & 109 - (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) == 110 - (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) 111 - return chip->vendor.locality = l; 112 - 113 - return -1; 114 - } 115 - 116 - static void release_locality(struct tpm_chip *chip, int l, int force) 117 - { 118 - if (force || (ioread8(chip->vendor.iobase + TPM_ACCESS(l)) & 119 - (TPM_ACCESS_REQUEST_PENDING | TPM_ACCESS_VALID)) == 120 - (TPM_ACCESS_REQUEST_PENDING | TPM_ACCESS_VALID)) 121 - iowrite8(TPM_ACCESS_ACTIVE_LOCALITY, 122 - chip->vendor.iobase + TPM_ACCESS(l)); 123 - } 124 - 125 - static int request_locality(struct tpm_chip *chip, int l) 126 - { 127 - unsigned long stop, timeout; 128 - long rc; 129 - 130 - if (check_locality(chip, l) >= 0) 131 - return l; 132 - 133 - iowrite8(TPM_ACCESS_REQUEST_USE, 134 - chip->vendor.iobase + TPM_ACCESS(l)); 135 - 136 - stop = jiffies + chip->vendor.timeout_a; 137 - 138 - if (chip->vendor.irq) { 139 - again: 140 - timeout = stop - jiffies; 141 - if ((long)timeout <= 0) 142 - return -1; 143 - rc = wait_event_interruptible_timeout(chip->vendor.int_queue, 144 - (check_locality 145 - (chip, l) >= 0), 146 - timeout); 147 - if (rc > 0) 148 - 
return l; 149 - if (rc == -ERESTARTSYS && freezing(current)) { 150 - clear_thread_flag(TIF_SIGPENDING); 151 - goto again; 152 - } 153 - } else { 154 - /* wait for burstcount */ 155 - do { 156 - if (check_locality(chip, l) >= 0) 157 - return l; 158 - msleep(TPM_TIMEOUT); 159 - } 160 - while (time_before(jiffies, stop)); 161 - } 162 - return -1; 163 - } 164 - 165 - static u8 tpm_tis_status(struct tpm_chip *chip) 166 - { 167 - return ioread8(chip->vendor.iobase + 168 - TPM_STS(chip->vendor.locality)); 169 - } 170 - 171 - static void tpm_tis_ready(struct tpm_chip *chip) 172 - { 173 - /* this causes the current command to be aborted */ 174 - iowrite8(TPM_STS_COMMAND_READY, 175 - chip->vendor.iobase + TPM_STS(chip->vendor.locality)); 176 - } 177 - 178 - static int get_burstcount(struct tpm_chip *chip) 179 - { 180 - unsigned long stop; 181 - int burstcnt; 182 - 183 - /* wait for burstcount */ 184 - /* which timeout value, spec has 2 answers (c & d) */ 185 - stop = jiffies + chip->vendor.timeout_d; 186 - do { 187 - burstcnt = ioread8(chip->vendor.iobase + 188 - TPM_STS(chip->vendor.locality) + 1); 189 - burstcnt += ioread8(chip->vendor.iobase + 190 - TPM_STS(chip->vendor.locality) + 191 - 2) << 8; 192 - if (burstcnt) 193 - return burstcnt; 194 - msleep(TPM_TIMEOUT); 195 - } while (time_before(jiffies, stop)); 196 - return -EBUSY; 197 - } 198 - 199 - static int recv_data(struct tpm_chip *chip, u8 *buf, size_t count) 200 - { 201 - int size = 0, burstcnt; 202 - while (size < count && 203 - wait_for_tpm_stat(chip, 204 - TPM_STS_DATA_AVAIL | TPM_STS_VALID, 205 - chip->vendor.timeout_c, 206 - &chip->vendor.read_queue, true) 207 - == 0) { 208 - burstcnt = get_burstcount(chip); 209 - for (; burstcnt > 0 && size < count; burstcnt--) 210 - buf[size++] = ioread8(chip->vendor.iobase + 211 - TPM_DATA_FIFO(chip->vendor. 
212 - locality)); 213 - } 214 - return size; 215 - } 216 - 217 - static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count) 218 - { 219 - int size = 0; 220 - int expected, status; 221 - 222 - if (count < TPM_HEADER_SIZE) { 223 - size = -EIO; 224 - goto out; 225 - } 226 - 227 - /* read first 10 bytes, including tag, paramsize, and result */ 228 - if ((size = 229 - recv_data(chip, buf, TPM_HEADER_SIZE)) < TPM_HEADER_SIZE) { 230 - dev_err(chip->pdev, "Unable to read header\n"); 231 - goto out; 232 - } 233 - 234 - expected = be32_to_cpu(*(__be32 *) (buf + 2)); 235 - if (expected > count) { 236 - size = -EIO; 237 - goto out; 238 - } 239 - 240 - if ((size += 241 - recv_data(chip, &buf[TPM_HEADER_SIZE], 242 - expected - TPM_HEADER_SIZE)) < expected) { 243 - dev_err(chip->pdev, "Unable to read remainder of result\n"); 244 - size = -ETIME; 245 - goto out; 246 - } 247 - 248 - wait_for_tpm_stat(chip, TPM_STS_VALID, chip->vendor.timeout_c, 249 - &chip->vendor.int_queue, false); 250 - status = tpm_tis_status(chip); 251 - if (status & TPM_STS_DATA_AVAIL) { /* retry? 
*/ 252 - dev_err(chip->pdev, "Error left over data\n"); 253 - size = -EIO; 254 - goto out; 255 - } 256 - 257 - out: 258 - tpm_tis_ready(chip); 259 - release_locality(chip, chip->vendor.locality, 0); 260 - return size; 261 - } 262 - 263 - static bool itpm; 264 - module_param(itpm, bool, 0444); 265 - MODULE_PARM_DESC(itpm, "Force iTPM workarounds (found on some Lenovo laptops)"); 266 - 267 - /* 268 - * If interrupts are used (signaled by an irq set in the vendor structure) 269 - * tpm.c can skip polling for the data to be available as the interrupt is 270 - * waited for here 271 - */ 272 - static int tpm_tis_send_data(struct tpm_chip *chip, u8 *buf, size_t len) 273 - { 274 - int rc, status, burstcnt; 275 - size_t count = 0; 276 - 277 - if (request_locality(chip, 0) < 0) 278 - return -EBUSY; 279 - 280 - status = tpm_tis_status(chip); 281 - if ((status & TPM_STS_COMMAND_READY) == 0) { 282 - tpm_tis_ready(chip); 283 - if (wait_for_tpm_stat 284 - (chip, TPM_STS_COMMAND_READY, chip->vendor.timeout_b, 285 - &chip->vendor.int_queue, false) < 0) { 286 - rc = -ETIME; 287 - goto out_err; 288 - } 289 - } 290 - 291 - while (count < len - 1) { 292 - burstcnt = get_burstcount(chip); 293 - for (; burstcnt > 0 && count < len - 1; burstcnt--) { 294 - iowrite8(buf[count], chip->vendor.iobase + 295 - TPM_DATA_FIFO(chip->vendor.locality)); 296 - count++; 297 - } 298 - 299 - wait_for_tpm_stat(chip, TPM_STS_VALID, chip->vendor.timeout_c, 300 - &chip->vendor.int_queue, false); 301 - status = tpm_tis_status(chip); 302 - if (!itpm && (status & TPM_STS_DATA_EXPECT) == 0) { 303 - rc = -EIO; 304 - goto out_err; 305 - } 306 - } 307 - 308 - /* write last byte */ 309 - iowrite8(buf[count], 310 - chip->vendor.iobase + TPM_DATA_FIFO(chip->vendor.locality)); 311 - wait_for_tpm_stat(chip, TPM_STS_VALID, chip->vendor.timeout_c, 312 - &chip->vendor.int_queue, false); 313 - status = tpm_tis_status(chip); 314 - if ((status & TPM_STS_DATA_EXPECT) != 0) { 315 - rc = -EIO; 316 - goto out_err; 317 - } 318 - 
319 - return 0; 320 - 321 - out_err: 322 - tpm_tis_ready(chip); 323 - release_locality(chip, chip->vendor.locality, 0); 324 - return rc; 325 - } 326 - 327 - static void disable_interrupts(struct tpm_chip *chip) 328 - { 329 - u32 intmask; 330 - 331 - intmask = 332 - ioread32(chip->vendor.iobase + 333 - TPM_INT_ENABLE(chip->vendor.locality)); 334 - intmask &= ~TPM_GLOBAL_INT_ENABLE; 335 - iowrite32(intmask, 336 - chip->vendor.iobase + 337 - TPM_INT_ENABLE(chip->vendor.locality)); 338 - devm_free_irq(chip->pdev, chip->vendor.irq, chip); 339 - chip->vendor.irq = 0; 340 - } 341 - 342 - /* 343 - * If interrupts are used (signaled by an irq set in the vendor structure) 344 - * tpm.c can skip polling for the data to be available as the interrupt is 345 - * waited for here 346 - */ 347 - static int tpm_tis_send_main(struct tpm_chip *chip, u8 *buf, size_t len) 348 - { 349 - int rc; 350 - u32 ordinal; 351 - unsigned long dur; 352 - 353 - rc = tpm_tis_send_data(chip, buf, len); 354 - if (rc < 0) 355 - return rc; 356 - 357 - /* go and do it */ 358 - iowrite8(TPM_STS_GO, 359 - chip->vendor.iobase + TPM_STS(chip->vendor.locality)); 360 - 361 - if (chip->vendor.irq) { 362 - ordinal = be32_to_cpu(*((__be32 *) (buf + 6))); 363 - 364 - if (chip->flags & TPM_CHIP_FLAG_TPM2) 365 - dur = tpm2_calc_ordinal_duration(chip, ordinal); 366 - else 367 - dur = tpm_calc_ordinal_duration(chip, ordinal); 368 - 369 - if (wait_for_tpm_stat 370 - (chip, TPM_STS_DATA_AVAIL | TPM_STS_VALID, dur, 371 - &chip->vendor.read_queue, false) < 0) { 372 - rc = -ETIME; 373 - goto out_err; 374 - } 375 - } 376 - return len; 377 - out_err: 378 - tpm_tis_ready(chip); 379 - release_locality(chip, chip->vendor.locality, 0); 380 - return rc; 381 - } 382 - 383 - static int tpm_tis_send(struct tpm_chip *chip, u8 *buf, size_t len) 384 - { 385 - int rc, irq; 386 - struct priv_data *priv = chip->vendor.priv; 387 - 388 - if (!chip->vendor.irq || priv->irq_tested) 389 - return tpm_tis_send_main(chip, buf, len); 390 - 391 - /* 
Verify receipt of the expected IRQ */ 392 - irq = chip->vendor.irq; 393 - chip->vendor.irq = 0; 394 - rc = tpm_tis_send_main(chip, buf, len); 395 - chip->vendor.irq = irq; 396 - if (!priv->irq_tested) 397 - msleep(1); 398 - if (!priv->irq_tested) 399 - disable_interrupts(chip); 400 - priv->irq_tested = true; 401 - return rc; 402 - } 403 - 404 - struct tis_vendor_timeout_override { 405 - u32 did_vid; 406 - unsigned long timeout_us[4]; 407 - }; 408 - 409 - static const struct tis_vendor_timeout_override vendor_timeout_overrides[] = { 410 - /* Atmel 3204 */ 411 - { 0x32041114, { (TIS_SHORT_TIMEOUT*1000), (TIS_LONG_TIMEOUT*1000), 412 - (TIS_SHORT_TIMEOUT*1000), (TIS_SHORT_TIMEOUT*1000) } }, 413 - }; 414 - 415 - static bool tpm_tis_update_timeouts(struct tpm_chip *chip, 416 - unsigned long *timeout_cap) 417 - { 418 - int i; 419 - u32 did_vid; 420 - 421 - did_vid = ioread32(chip->vendor.iobase + TPM_DID_VID(0)); 422 - 423 - for (i = 0; i != ARRAY_SIZE(vendor_timeout_overrides); i++) { 424 - if (vendor_timeout_overrides[i].did_vid != did_vid) 425 - continue; 426 - memcpy(timeout_cap, vendor_timeout_overrides[i].timeout_us, 427 - sizeof(vendor_timeout_overrides[i].timeout_us)); 428 - return true; 429 - } 430 - 431 - return false; 432 - } 433 - 434 - /* 435 - * Early probing for iTPM with STS_DATA_EXPECT flaw. 436 - * Try sending command without itpm flag set and if that 437 - * fails, repeat with itpm flag set. 
438 - */ 439 - static int probe_itpm(struct tpm_chip *chip) 440 - { 441 - int rc = 0; 442 - u8 cmd_getticks[] = { 443 - 0x00, 0xc1, 0x00, 0x00, 0x00, 0x0a, 444 - 0x00, 0x00, 0x00, 0xf1 445 - }; 446 - size_t len = sizeof(cmd_getticks); 447 - bool rem_itpm = itpm; 448 - u16 vendor = ioread16(chip->vendor.iobase + TPM_DID_VID(0)); 449 - 450 - /* probe only iTPMS */ 451 - if (vendor != TPM_VID_INTEL) 452 - return 0; 453 - 454 - itpm = false; 455 - 456 - rc = tpm_tis_send_data(chip, cmd_getticks, len); 457 - if (rc == 0) 458 - goto out; 459 - 460 - tpm_tis_ready(chip); 461 - release_locality(chip, chip->vendor.locality, 0); 462 - 463 - itpm = true; 464 - 465 - rc = tpm_tis_send_data(chip, cmd_getticks, len); 466 - if (rc == 0) { 467 - dev_info(chip->pdev, "Detected an iTPM.\n"); 468 - rc = 1; 469 - } else 470 - rc = -EFAULT; 471 - 472 - out: 473 - itpm = rem_itpm; 474 - tpm_tis_ready(chip); 475 - release_locality(chip, chip->vendor.locality, 0); 476 - 477 - return rc; 478 - } 479 - 480 - static bool tpm_tis_req_canceled(struct tpm_chip *chip, u8 status) 481 - { 482 - switch (chip->vendor.manufacturer_id) { 483 - case TPM_VID_WINBOND: 484 - return ((status == TPM_STS_VALID) || 485 - (status == (TPM_STS_VALID | TPM_STS_COMMAND_READY))); 486 - case TPM_VID_STM: 487 - return (status == (TPM_STS_VALID | TPM_STS_COMMAND_READY)); 488 - default: 489 - return (status == TPM_STS_COMMAND_READY); 490 - } 491 - } 492 - 493 - static const struct tpm_class_ops tpm_tis = { 494 - .status = tpm_tis_status, 495 - .recv = tpm_tis_recv, 496 - .send = tpm_tis_send, 497 - .cancel = tpm_tis_ready, 498 - .update_timeouts = tpm_tis_update_timeouts, 499 - .req_complete_mask = TPM_STS_DATA_AVAIL | TPM_STS_VALID, 500 - .req_complete_val = TPM_STS_DATA_AVAIL | TPM_STS_VALID, 501 - .req_canceled = tpm_tis_req_canceled, 502 - }; 503 - 504 - static irqreturn_t tis_int_handler(int dummy, void *dev_id) 505 - { 506 - struct tpm_chip *chip = dev_id; 507 - u32 interrupt; 508 - int i; 509 - 510 - interrupt = 
ioread32(chip->vendor.iobase + 511 - TPM_INT_STATUS(chip->vendor.locality)); 512 - 513 - if (interrupt == 0) 514 - return IRQ_NONE; 515 - 516 - ((struct priv_data *)chip->vendor.priv)->irq_tested = true; 517 - if (interrupt & TPM_INTF_DATA_AVAIL_INT) 518 - wake_up_interruptible(&chip->vendor.read_queue); 519 - if (interrupt & TPM_INTF_LOCALITY_CHANGE_INT) 520 - for (i = 0; i < 5; i++) 521 - if (check_locality(chip, i) >= 0) 522 - break; 523 - if (interrupt & 524 - (TPM_INTF_LOCALITY_CHANGE_INT | TPM_INTF_STS_VALID_INT | 525 - TPM_INTF_CMD_READY_INT)) 526 - wake_up_interruptible(&chip->vendor.int_queue); 527 - 528 - /* Clear interrupts handled with TPM_EOI */ 529 - iowrite32(interrupt, 530 - chip->vendor.iobase + 531 - TPM_INT_STATUS(chip->vendor.locality)); 532 - ioread32(chip->vendor.iobase + TPM_INT_STATUS(chip->vendor.locality)); 533 - return IRQ_HANDLED; 534 - } 535 - 536 - /* Register the IRQ and issue a command that will cause an interrupt. If an 537 - * irq is seen then leave the chip setup for IRQ operation, otherwise reverse 538 - * everything and leave in polling mode. Returns 0 on success. 
539 - */ 540 - static int tpm_tis_probe_irq_single(struct tpm_chip *chip, u32 intmask, 541 - int flags, int irq) 542 - { 543 - struct priv_data *priv = chip->vendor.priv; 544 - u8 original_int_vec; 545 - 546 - if (devm_request_irq(chip->pdev, irq, tis_int_handler, flags, 547 - chip->devname, chip) != 0) { 548 - dev_info(chip->pdev, "Unable to request irq: %d for probe\n", 549 - irq); 550 - return -1; 551 - } 552 - chip->vendor.irq = irq; 553 - 554 - original_int_vec = ioread8(chip->vendor.iobase + 555 - TPM_INT_VECTOR(chip->vendor.locality)); 556 - iowrite8(irq, 557 - chip->vendor.iobase + TPM_INT_VECTOR(chip->vendor.locality)); 558 - 559 - /* Clear all existing */ 560 - iowrite32(ioread32(chip->vendor.iobase + 561 - TPM_INT_STATUS(chip->vendor.locality)), 562 - chip->vendor.iobase + TPM_INT_STATUS(chip->vendor.locality)); 563 - 564 - /* Turn on */ 565 - iowrite32(intmask | TPM_GLOBAL_INT_ENABLE, 566 - chip->vendor.iobase + TPM_INT_ENABLE(chip->vendor.locality)); 567 - 568 - priv->irq_tested = false; 569 - 570 - /* Generate an interrupt by having the core call through to 571 - * tpm_tis_send 572 - */ 573 - if (chip->flags & TPM_CHIP_FLAG_TPM2) 574 - tpm2_gen_interrupt(chip); 575 - else 576 - tpm_gen_interrupt(chip); 577 - 578 - /* tpm_tis_send will either confirm the interrupt is working or it 579 - * will call disable_irq which undoes all of the above. 580 - */ 581 - if (!chip->vendor.irq) { 582 - iowrite8(original_int_vec, 583 - chip->vendor.iobase + 584 - TPM_INT_VECTOR(chip->vendor.locality)); 585 - return 1; 586 - } 587 - 128 + while (len--) 129 + *result++ = ioread8(phy->iobase + addr); 588 130 return 0; 589 131 } 590 132 591 - /* Try to find the IRQ the TPM is using. This is for legacy x86 systems that 592 - * do not have ACPI/etc. We typically expect the interrupt to be declared if 593 - * present. 
594 - */ 595 - static void tpm_tis_probe_irq(struct tpm_chip *chip, u32 intmask) 133 + static int tpm_tcg_write_bytes(struct tpm_tis_data *data, u32 addr, u16 len, 134 + u8 *value) 596 135 { 597 - u8 original_int_vec; 598 - int i; 136 + struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data); 599 137 600 - original_int_vec = ioread8(chip->vendor.iobase + 601 - TPM_INT_VECTOR(chip->vendor.locality)); 602 - 603 - if (!original_int_vec) { 604 - if (IS_ENABLED(CONFIG_X86)) 605 - for (i = 3; i <= 15; i++) 606 - if (!tpm_tis_probe_irq_single(chip, intmask, 0, 607 - i)) 608 - return; 609 - } else if (!tpm_tis_probe_irq_single(chip, intmask, 0, 610 - original_int_vec)) 611 - return; 138 + while (len--) 139 + iowrite8(*value++, phy->iobase + addr); 140 + return 0; 612 141 } 613 142 614 - static bool interrupts = true; 615 - module_param(interrupts, bool, 0444); 616 - MODULE_PARM_DESC(interrupts, "Enable interrupts"); 617 - 618 - static void tpm_tis_remove(struct tpm_chip *chip) 143 + static int tpm_tcg_read16(struct tpm_tis_data *data, u32 addr, u16 *result) 619 144 { 620 - if (chip->flags & TPM_CHIP_FLAG_TPM2) 621 - tpm2_shutdown(chip, TPM2_SU_CLEAR); 145 + struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data); 622 146 623 - iowrite32(~TPM_GLOBAL_INT_ENABLE & 624 - ioread32(chip->vendor.iobase + 625 - TPM_INT_ENABLE(chip->vendor. 
626 - locality)), 627 - chip->vendor.iobase + 628 - TPM_INT_ENABLE(chip->vendor.locality)); 629 - release_locality(chip, chip->vendor.locality, 1); 147 + *result = ioread16(phy->iobase + addr); 148 + return 0; 630 149 } 150 + 151 + static int tpm_tcg_read32(struct tpm_tis_data *data, u32 addr, u32 *result) 152 + { 153 + struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data); 154 + 155 + *result = ioread32(phy->iobase + addr); 156 + return 0; 157 + } 158 + 159 + static int tpm_tcg_write32(struct tpm_tis_data *data, u32 addr, u32 value) 160 + { 161 + struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data); 162 + 163 + iowrite32(value, phy->iobase + addr); 164 + return 0; 165 + } 166 + 167 + static const struct tpm_tis_phy_ops tpm_tcg = { 168 + .read_bytes = tpm_tcg_read_bytes, 169 + .write_bytes = tpm_tcg_write_bytes, 170 + .read16 = tpm_tcg_read16, 171 + .read32 = tpm_tcg_read32, 172 + .write32 = tpm_tcg_write32, 173 + }; 631 174 632 175 static int tpm_tis_init(struct device *dev, struct tpm_info *tpm_info, 633 176 acpi_handle acpi_dev_handle) 634 177 { 635 - u32 vendor, intfcaps, intmask; 636 - int rc, probe; 637 - struct tpm_chip *chip; 638 - struct priv_data *priv; 178 + struct tpm_tis_tcg_phy *phy; 179 + int irq = -1; 639 180 640 - priv = devm_kzalloc(dev, sizeof(struct priv_data), GFP_KERNEL); 641 - if (priv == NULL) 181 + phy = devm_kzalloc(dev, sizeof(struct tpm_tis_tcg_phy), GFP_KERNEL); 182 + if (phy == NULL) 642 183 return -ENOMEM; 643 184 644 - chip = tpmm_chip_alloc(dev, &tpm_tis); 645 - if (IS_ERR(chip)) 646 - return PTR_ERR(chip); 185 + phy->iobase = devm_ioremap_resource(dev, &tpm_info->res); 186 + if (IS_ERR(phy->iobase)) 187 + return PTR_ERR(phy->iobase); 647 188 648 - chip->vendor.priv = priv; 649 - #ifdef CONFIG_ACPI 650 - chip->acpi_dev_handle = acpi_dev_handle; 651 - #endif 652 - 653 - chip->vendor.iobase = devm_ioremap_resource(dev, &tpm_info->res); 654 - if (IS_ERR(chip->vendor.iobase)) 655 - return PTR_ERR(chip->vendor.iobase); 656 - 657 - /* 
Maximum timeouts */ 658 - chip->vendor.timeout_a = TIS_TIMEOUT_A_MAX; 659 - chip->vendor.timeout_b = TIS_TIMEOUT_B_MAX; 660 - chip->vendor.timeout_c = TIS_TIMEOUT_C_MAX; 661 - chip->vendor.timeout_d = TIS_TIMEOUT_D_MAX; 662 - 663 - if (wait_startup(chip, 0) != 0) { 664 - rc = -ENODEV; 665 - goto out_err; 666 - } 667 - 668 - /* Take control of the TPM's interrupt hardware and shut it off */ 669 - intmask = ioread32(chip->vendor.iobase + 670 - TPM_INT_ENABLE(chip->vendor.locality)); 671 - intmask |= TPM_INTF_CMD_READY_INT | TPM_INTF_LOCALITY_CHANGE_INT | 672 - TPM_INTF_DATA_AVAIL_INT | TPM_INTF_STS_VALID_INT; 673 - intmask &= ~TPM_GLOBAL_INT_ENABLE; 674 - iowrite32(intmask, 675 - chip->vendor.iobase + TPM_INT_ENABLE(chip->vendor.locality)); 676 - 677 - if (request_locality(chip, 0) != 0) { 678 - rc = -ENODEV; 679 - goto out_err; 680 - } 681 - 682 - rc = tpm2_probe(chip); 683 - if (rc) 684 - goto out_err; 685 - 686 - vendor = ioread32(chip->vendor.iobase + TPM_DID_VID(0)); 687 - chip->vendor.manufacturer_id = vendor; 688 - 689 - dev_info(dev, "%s TPM (device-id 0x%X, rev-id %d)\n", 690 - (chip->flags & TPM_CHIP_FLAG_TPM2) ? 
"2.0" : "1.2", 691 - vendor >> 16, ioread8(chip->vendor.iobase + TPM_RID(0))); 692 - 693 - if (!itpm) { 694 - probe = probe_itpm(chip); 695 - if (probe < 0) { 696 - rc = -ENODEV; 697 - goto out_err; 698 - } 699 - itpm = !!probe; 700 - } 189 + if (interrupts) 190 + irq = tpm_info->irq; 701 191 702 192 if (itpm) 703 - dev_info(dev, "Intel iTPM workaround enabled\n"); 193 + phy->priv.flags |= TPM_TIS_ITPM_POSSIBLE; 704 194 705 - 706 - /* Figure out the capabilities */ 707 - intfcaps = 708 - ioread32(chip->vendor.iobase + 709 - TPM_INTF_CAPS(chip->vendor.locality)); 710 - dev_dbg(dev, "TPM interface capabilities (0x%x):\n", 711 - intfcaps); 712 - if (intfcaps & TPM_INTF_BURST_COUNT_STATIC) 713 - dev_dbg(dev, "\tBurst Count Static\n"); 714 - if (intfcaps & TPM_INTF_CMD_READY_INT) 715 - dev_dbg(dev, "\tCommand Ready Int Support\n"); 716 - if (intfcaps & TPM_INTF_INT_EDGE_FALLING) 717 - dev_dbg(dev, "\tInterrupt Edge Falling\n"); 718 - if (intfcaps & TPM_INTF_INT_EDGE_RISING) 719 - dev_dbg(dev, "\tInterrupt Edge Rising\n"); 720 - if (intfcaps & TPM_INTF_INT_LEVEL_LOW) 721 - dev_dbg(dev, "\tInterrupt Level Low\n"); 722 - if (intfcaps & TPM_INTF_INT_LEVEL_HIGH) 723 - dev_dbg(dev, "\tInterrupt Level High\n"); 724 - if (intfcaps & TPM_INTF_LOCALITY_CHANGE_INT) 725 - dev_dbg(dev, "\tLocality Change Int Support\n"); 726 - if (intfcaps & TPM_INTF_STS_VALID_INT) 727 - dev_dbg(dev, "\tSts Valid Int Support\n"); 728 - if (intfcaps & TPM_INTF_DATA_AVAIL_INT) 729 - dev_dbg(dev, "\tData Avail Int Support\n"); 730 - 731 - /* Very early on issue a command to the TPM in polling mode to make 732 - * sure it works. May as well use that command to set the proper 733 - * timeouts for the driver. 
734 - */ 735 - if (tpm_get_timeouts(chip)) { 736 - dev_err(dev, "Could not get TPM timeouts and durations\n"); 737 - rc = -ENODEV; 738 - goto out_err; 739 - } 740 - 741 - /* INTERRUPT Setup */ 742 - init_waitqueue_head(&chip->vendor.read_queue); 743 - init_waitqueue_head(&chip->vendor.int_queue); 744 - if (interrupts && tpm_info->irq != -1) { 745 - if (tpm_info->irq) { 746 - tpm_tis_probe_irq_single(chip, intmask, IRQF_SHARED, 747 - tpm_info->irq); 748 - if (!chip->vendor.irq) 749 - dev_err(chip->pdev, FW_BUG 750 - "TPM interrupt not working, polling instead\n"); 751 - } else 752 - tpm_tis_probe_irq(chip, intmask); 753 - } 754 - 755 - if (chip->flags & TPM_CHIP_FLAG_TPM2) { 756 - rc = tpm2_do_selftest(chip); 757 - if (rc == TPM2_RC_INITIALIZE) { 758 - dev_warn(dev, "Firmware has not started TPM\n"); 759 - rc = tpm2_startup(chip, TPM2_SU_CLEAR); 760 - if (!rc) 761 - rc = tpm2_do_selftest(chip); 762 - } 763 - 764 - if (rc) { 765 - dev_err(dev, "TPM self test failed\n"); 766 - if (rc > 0) 767 - rc = -ENODEV; 768 - goto out_err; 769 - } 770 - } else { 771 - if (tpm_do_selftest(chip)) { 772 - dev_err(dev, "TPM self test failed\n"); 773 - rc = -ENODEV; 774 - goto out_err; 775 - } 776 - } 777 - 778 - return tpm_chip_register(chip); 779 - out_err: 780 - tpm_tis_remove(chip); 781 - return rc; 195 + return tpm_tis_core_init(dev, &phy->priv, irq, &tpm_tcg, 196 + acpi_dev_handle); 782 197 } 783 - 784 - #ifdef CONFIG_PM_SLEEP 785 - static void tpm_tis_reenable_interrupts(struct tpm_chip *chip) 786 - { 787 - u32 intmask; 788 - 789 - /* reenable interrupts that device may have lost or 790 - BIOS/firmware may have disabled */ 791 - iowrite8(chip->vendor.irq, chip->vendor.iobase + 792 - TPM_INT_VECTOR(chip->vendor.locality)); 793 - 794 - intmask = 795 - ioread32(chip->vendor.iobase + 796 - TPM_INT_ENABLE(chip->vendor.locality)); 797 - 798 - intmask |= TPM_INTF_CMD_READY_INT 799 - | TPM_INTF_LOCALITY_CHANGE_INT | TPM_INTF_DATA_AVAIL_INT 800 - | TPM_INTF_STS_VALID_INT | 
TPM_GLOBAL_INT_ENABLE; 801 - 802 - iowrite32(intmask, 803 - chip->vendor.iobase + TPM_INT_ENABLE(chip->vendor.locality)); 804 - } 805 - 806 - static int tpm_tis_resume(struct device *dev) 807 - { 808 - struct tpm_chip *chip = dev_get_drvdata(dev); 809 - int ret; 810 - 811 - if (chip->vendor.irq) 812 - tpm_tis_reenable_interrupts(chip); 813 - 814 - ret = tpm_pm_resume(dev); 815 - if (ret) 816 - return ret; 817 - 818 - /* TPM 1.2 requires self-test on resume. This function actually returns 819 - * an error code but for unknown reason it isn't handled. 820 - */ 821 - if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) 822 - tpm_do_selftest(chip); 823 - 824 - return 0; 825 - } 826 - #endif 827 198 828 199 static SIMPLE_DEV_PM_OPS(tpm_tis_pm, tpm_pm_suspend, tpm_tis_resume); 829 200 ··· 362 1057 .pm = &tpm_tis_pm, 363 1058 }, 364 1059 }; 365 - 366 - static bool force; 367 - #ifdef CONFIG_X86 368 - module_param(force, bool, 0444); 369 - MODULE_PARM_DESC(force, "Force device probe rather than using ACPI entry"); 370 - #endif 371 1060 372 1061 static int tpm_tis_force_device(void) 373 1062 {
+835
drivers/char/tpm/tpm_tis_core.c
··· 1 + /* 2 + * Copyright (C) 2005, 2006 IBM Corporation 3 + * Copyright (C) 2014, 2015 Intel Corporation 4 + * 5 + * Authors: 6 + * Leendert van Doorn <leendert@watson.ibm.com> 7 + * Kylene Hall <kjhall@us.ibm.com> 8 + * 9 + * Maintained by: <tpmdd-devel@lists.sourceforge.net> 10 + * 11 + * Device driver for TCG/TCPA TPM (trusted platform module). 12 + * Specifications at www.trustedcomputinggroup.org 13 + * 14 + * This device driver implements the TPM interface as defined in 15 + * the TCG TPM Interface Spec version 1.2, revision 1.0. 16 + * 17 + * This program is free software; you can redistribute it and/or 18 + * modify it under the terms of the GNU General Public License as 19 + * published by the Free Software Foundation, version 2 of the 20 + * License. 21 + */ 22 + #include <linux/init.h> 23 + #include <linux/module.h> 24 + #include <linux/moduleparam.h> 25 + #include <linux/pnp.h> 26 + #include <linux/slab.h> 27 + #include <linux/interrupt.h> 28 + #include <linux/wait.h> 29 + #include <linux/acpi.h> 30 + #include <linux/freezer.h> 31 + #include "tpm.h" 32 + #include "tpm_tis_core.h" 33 + 34 + /* Before we attempt to access the TPM we must see that the valid bit is set. 35 + * The specification says that this bit is 0 at reset and remains 0 until the 36 + * 'TPM has gone through its self test and initialization and has established 37 + * correct values in the other bits.' 
38 + */ 39 + static int wait_startup(struct tpm_chip *chip, int l) 40 + { 41 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 42 + unsigned long stop = jiffies + chip->timeout_a; 43 + 44 + do { 45 + int rc; 46 + u8 access; 47 + 48 + rc = tpm_tis_read8(priv, TPM_ACCESS(l), &access); 49 + if (rc < 0) 50 + return rc; 51 + 52 + if (access & TPM_ACCESS_VALID) 53 + return 0; 54 + msleep(TPM_TIMEOUT); 55 + } while (time_before(jiffies, stop)); 56 + return -1; 57 + } 58 + 59 + static int check_locality(struct tpm_chip *chip, int l) 60 + { 61 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 62 + int rc; 63 + u8 access; 64 + 65 + rc = tpm_tis_read8(priv, TPM_ACCESS(l), &access); 66 + if (rc < 0) 67 + return rc; 68 + 69 + if ((access & (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) == 70 + (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) 71 + return priv->locality = l; 72 + 73 + return -1; 74 + } 75 + 76 + static void release_locality(struct tpm_chip *chip, int l, int force) 77 + { 78 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 79 + int rc; 80 + u8 access; 81 + 82 + rc = tpm_tis_read8(priv, TPM_ACCESS(l), &access); 83 + if (rc < 0) 84 + return; 85 + 86 + if (force || (access & 87 + (TPM_ACCESS_REQUEST_PENDING | TPM_ACCESS_VALID)) == 88 + (TPM_ACCESS_REQUEST_PENDING | TPM_ACCESS_VALID)) 89 + tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_ACTIVE_LOCALITY); 90 + 91 + } 92 + 93 + static int request_locality(struct tpm_chip *chip, int l) 94 + { 95 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 96 + unsigned long stop, timeout; 97 + long rc; 98 + 99 + if (check_locality(chip, l) >= 0) 100 + return l; 101 + 102 + rc = tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_REQUEST_USE); 103 + if (rc < 0) 104 + return rc; 105 + 106 + stop = jiffies + chip->timeout_a; 107 + 108 + if (chip->flags & TPM_CHIP_FLAG_IRQ) { 109 + again: 110 + timeout = stop - jiffies; 111 + if ((long)timeout <= 0) 112 + return -1; 113 + rc = 
wait_event_interruptible_timeout(priv->int_queue, 114 + (check_locality 115 + (chip, l) >= 0), 116 + timeout); 117 + if (rc > 0) 118 + return l; 119 + if (rc == -ERESTARTSYS && freezing(current)) { 120 + clear_thread_flag(TIF_SIGPENDING); 121 + goto again; 122 + } 123 + } else { 124 + /* wait for burstcount */ 125 + do { 126 + if (check_locality(chip, l) >= 0) 127 + return l; 128 + msleep(TPM_TIMEOUT); 129 + } while (time_before(jiffies, stop)); 130 + } 131 + return -1; 132 + } 133 + 134 + static u8 tpm_tis_status(struct tpm_chip *chip) 135 + { 136 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 137 + int rc; 138 + u8 status; 139 + 140 + rc = tpm_tis_read8(priv, TPM_STS(priv->locality), &status); 141 + if (rc < 0) 142 + return 0; 143 + 144 + return status; 145 + } 146 + 147 + static void tpm_tis_ready(struct tpm_chip *chip) 148 + { 149 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 150 + 151 + /* this causes the current command to be aborted */ 152 + tpm_tis_write8(priv, TPM_STS(priv->locality), TPM_STS_COMMAND_READY); 153 + } 154 + 155 + static int get_burstcount(struct tpm_chip *chip) 156 + { 157 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 158 + unsigned long stop; 159 + int burstcnt, rc; 160 + u32 value; 161 + 162 + /* wait for burstcount */ 163 + /* which timeout value, spec has 2 answers (c & d) */ 164 + stop = jiffies + chip->timeout_d; 165 + do { 166 + rc = tpm_tis_read32(priv, TPM_STS(priv->locality), &value); 167 + if (rc < 0) 168 + return rc; 169 + 170 + burstcnt = (value >> 8) & 0xFFFF; 171 + if (burstcnt) 172 + return burstcnt; 173 + msleep(TPM_TIMEOUT); 174 + } while (time_before(jiffies, stop)); 175 + return -EBUSY; 176 + } 177 + 178 + static int recv_data(struct tpm_chip *chip, u8 *buf, size_t count) 179 + { 180 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 181 + int size = 0, burstcnt, rc; 182 + 183 + while (size < count && 184 + wait_for_tpm_stat(chip, 185 + TPM_STS_DATA_AVAIL | TPM_STS_VALID, 186 
+ chip->timeout_c, 187 + &priv->read_queue, true) == 0) { 188 + burstcnt = min_t(int, get_burstcount(chip), count - size); 189 + 190 + rc = tpm_tis_read_bytes(priv, TPM_DATA_FIFO(priv->locality), 191 + burstcnt, buf + size); 192 + if (rc < 0) 193 + return rc; 194 + 195 + size += burstcnt; 196 + } 197 + return size; 198 + } 199 + 200 + static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count) 201 + { 202 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 203 + int size = 0; 204 + int expected, status; 205 + 206 + if (count < TPM_HEADER_SIZE) { 207 + size = -EIO; 208 + goto out; 209 + } 210 + 211 + size = recv_data(chip, buf, TPM_HEADER_SIZE); 212 + /* read first 10 bytes, including tag, paramsize, and result */ 213 + if (size < TPM_HEADER_SIZE) { 214 + dev_err(&chip->dev, "Unable to read header\n"); 215 + goto out; 216 + } 217 + 218 + expected = be32_to_cpu(*(__be32 *) (buf + 2)); 219 + if (expected > count) { 220 + size = -EIO; 221 + goto out; 222 + } 223 + 224 + size += recv_data(chip, &buf[TPM_HEADER_SIZE], 225 + expected - TPM_HEADER_SIZE); 226 + if (size < expected) { 227 + dev_err(&chip->dev, "Unable to read remainder of result\n"); 228 + size = -ETIME; 229 + goto out; 230 + } 231 + 232 + wait_for_tpm_stat(chip, TPM_STS_VALID, chip->timeout_c, 233 + &priv->int_queue, false); 234 + status = tpm_tis_status(chip); 235 + if (status & TPM_STS_DATA_AVAIL) { /* retry? 
*/ 236 + dev_err(&chip->dev, "Error left over data\n"); 237 + size = -EIO; 238 + goto out; 239 + } 240 + 241 + out: 242 + tpm_tis_ready(chip); 243 + release_locality(chip, priv->locality, 0); 244 + return size; 245 + } 246 + 247 + /* 248 + * If interrupts are used (signaled by an irq set in the vendor structure) 249 + * tpm.c can skip polling for the data to be available as the interrupt is 250 + * waited for here 251 + */ 252 + static int tpm_tis_send_data(struct tpm_chip *chip, u8 *buf, size_t len) 253 + { 254 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 255 + int rc, status, burstcnt; 256 + size_t count = 0; 257 + bool itpm = priv->flags & TPM_TIS_ITPM_POSSIBLE; 258 + 259 + if (request_locality(chip, 0) < 0) 260 + return -EBUSY; 261 + 262 + status = tpm_tis_status(chip); 263 + if ((status & TPM_STS_COMMAND_READY) == 0) { 264 + tpm_tis_ready(chip); 265 + if (wait_for_tpm_stat 266 + (chip, TPM_STS_COMMAND_READY, chip->timeout_b, 267 + &priv->int_queue, false) < 0) { 268 + rc = -ETIME; 269 + goto out_err; 270 + } 271 + } 272 + 273 + while (count < len - 1) { 274 + burstcnt = min_t(int, get_burstcount(chip), len - count - 1); 275 + rc = tpm_tis_write_bytes(priv, TPM_DATA_FIFO(priv->locality), 276 + burstcnt, buf + count); 277 + if (rc < 0) 278 + goto out_err; 279 + 280 + count += burstcnt; 281 + 282 + wait_for_tpm_stat(chip, TPM_STS_VALID, chip->timeout_c, 283 + &priv->int_queue, false); 284 + status = tpm_tis_status(chip); 285 + if (!itpm && (status & TPM_STS_DATA_EXPECT) == 0) { 286 + rc = -EIO; 287 + goto out_err; 288 + } 289 + } 290 + 291 + /* write last byte */ 292 + rc = tpm_tis_write8(priv, TPM_DATA_FIFO(priv->locality), buf[count]); 293 + if (rc < 0) 294 + goto out_err; 295 + 296 + wait_for_tpm_stat(chip, TPM_STS_VALID, chip->timeout_c, 297 + &priv->int_queue, false); 298 + status = tpm_tis_status(chip); 299 + if (!itpm && (status & TPM_STS_DATA_EXPECT) != 0) { 300 + rc = -EIO; 301 + goto out_err; 302 + } 303 + 304 + return 0; 305 + 306 + 
out_err: 307 + tpm_tis_ready(chip); 308 + release_locality(chip, priv->locality, 0); 309 + return rc; 310 + } 311 + 312 + static void disable_interrupts(struct tpm_chip *chip) 313 + { 314 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 315 + u32 intmask; 316 + int rc; 317 + 318 + rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask); 319 + if (rc < 0) 320 + intmask = 0; 321 + 322 + intmask &= ~TPM_GLOBAL_INT_ENABLE; 323 + rc = tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask); 324 + 325 + devm_free_irq(chip->dev.parent, priv->irq, chip); 326 + priv->irq = 0; 327 + chip->flags &= ~TPM_CHIP_FLAG_IRQ; 328 + } 329 + 330 + /* 331 + * If interrupts are used (signaled by an irq set in the vendor structure) 332 + * tpm.c can skip polling for the data to be available as the interrupt is 333 + * waited for here 334 + */ 335 + static int tpm_tis_send_main(struct tpm_chip *chip, u8 *buf, size_t len) 336 + { 337 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 338 + int rc; 339 + u32 ordinal; 340 + unsigned long dur; 341 + 342 + rc = tpm_tis_send_data(chip, buf, len); 343 + if (rc < 0) 344 + return rc; 345 + 346 + /* go and do it */ 347 + rc = tpm_tis_write8(priv, TPM_STS(priv->locality), TPM_STS_GO); 348 + if (rc < 0) 349 + goto out_err; 350 + 351 + if (chip->flags & TPM_CHIP_FLAG_IRQ) { 352 + ordinal = be32_to_cpu(*((__be32 *) (buf + 6))); 353 + 354 + if (chip->flags & TPM_CHIP_FLAG_TPM2) 355 + dur = tpm2_calc_ordinal_duration(chip, ordinal); 356 + else 357 + dur = tpm_calc_ordinal_duration(chip, ordinal); 358 + 359 + if (wait_for_tpm_stat 360 + (chip, TPM_STS_DATA_AVAIL | TPM_STS_VALID, dur, 361 + &priv->read_queue, false) < 0) { 362 + rc = -ETIME; 363 + goto out_err; 364 + } 365 + } 366 + return len; 367 + out_err: 368 + tpm_tis_ready(chip); 369 + release_locality(chip, priv->locality, 0); 370 + return rc; 371 + } 372 + 373 + static int tpm_tis_send(struct tpm_chip *chip, u8 *buf, size_t len) 374 + { 375 + int rc, irq; 376 + 
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 377 + 378 + if (!(chip->flags & TPM_CHIP_FLAG_IRQ) || priv->irq_tested) 379 + return tpm_tis_send_main(chip, buf, len); 380 + 381 + /* Verify receipt of the expected IRQ */ 382 + irq = priv->irq; 383 + priv->irq = 0; 384 + chip->flags &= ~TPM_CHIP_FLAG_IRQ; 385 + rc = tpm_tis_send_main(chip, buf, len); 386 + priv->irq = irq; 387 + chip->flags |= TPM_CHIP_FLAG_IRQ; 388 + if (!priv->irq_tested) 389 + msleep(1); 390 + if (!priv->irq_tested) 391 + disable_interrupts(chip); 392 + priv->irq_tested = true; 393 + return rc; 394 + } 395 + 396 + struct tis_vendor_timeout_override { 397 + u32 did_vid; 398 + unsigned long timeout_us[4]; 399 + }; 400 + 401 + static const struct tis_vendor_timeout_override vendor_timeout_overrides[] = { 402 + /* Atmel 3204 */ 403 + { 0x32041114, { (TIS_SHORT_TIMEOUT*1000), (TIS_LONG_TIMEOUT*1000), 404 + (TIS_SHORT_TIMEOUT*1000), (TIS_SHORT_TIMEOUT*1000) } }, 405 + }; 406 + 407 + static bool tpm_tis_update_timeouts(struct tpm_chip *chip, 408 + unsigned long *timeout_cap) 409 + { 410 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 411 + int i, rc; 412 + u32 did_vid; 413 + 414 + rc = tpm_tis_read32(priv, TPM_DID_VID(0), &did_vid); 415 + if (rc < 0) 416 + return rc; 417 + 418 + for (i = 0; i != ARRAY_SIZE(vendor_timeout_overrides); i++) { 419 + if (vendor_timeout_overrides[i].did_vid != did_vid) 420 + continue; 421 + memcpy(timeout_cap, vendor_timeout_overrides[i].timeout_us, 422 + sizeof(vendor_timeout_overrides[i].timeout_us)); 423 + return true; 424 + } 425 + 426 + return false; 427 + } 428 + 429 + /* 430 + * Early probing for iTPM with STS_DATA_EXPECT flaw. 431 + * Try sending command without itpm flag set and if that 432 + * fails, repeat with itpm flag set. 
433 + */ 434 + static int probe_itpm(struct tpm_chip *chip) 435 + { 436 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 437 + int rc = 0; 438 + u8 cmd_getticks[] = { 439 + 0x00, 0xc1, 0x00, 0x00, 0x00, 0x0a, 440 + 0x00, 0x00, 0x00, 0xf1 441 + }; 442 + size_t len = sizeof(cmd_getticks); 443 + bool itpm; 444 + u16 vendor; 445 + 446 + rc = tpm_tis_read16(priv, TPM_DID_VID(0), &vendor); 447 + if (rc < 0) 448 + return rc; 449 + 450 + /* probe only iTPMS */ 451 + if (vendor != TPM_VID_INTEL) 452 + return 0; 453 + 454 + itpm = false; 455 + 456 + rc = tpm_tis_send_data(chip, cmd_getticks, len); 457 + if (rc == 0) 458 + goto out; 459 + 460 + tpm_tis_ready(chip); 461 + release_locality(chip, priv->locality, 0); 462 + 463 + itpm = true; 464 + 465 + rc = tpm_tis_send_data(chip, cmd_getticks, len); 466 + if (rc == 0) { 467 + dev_info(&chip->dev, "Detected an iTPM.\n"); 468 + rc = 1; 469 + } else 470 + rc = -EFAULT; 471 + 472 + out: 473 + tpm_tis_ready(chip); 474 + release_locality(chip, priv->locality, 0); 475 + 476 + return rc; 477 + } 478 + 479 + static bool tpm_tis_req_canceled(struct tpm_chip *chip, u8 status) 480 + { 481 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 482 + 483 + switch (priv->manufacturer_id) { 484 + case TPM_VID_WINBOND: 485 + return ((status == TPM_STS_VALID) || 486 + (status == (TPM_STS_VALID | TPM_STS_COMMAND_READY))); 487 + case TPM_VID_STM: 488 + return (status == (TPM_STS_VALID | TPM_STS_COMMAND_READY)); 489 + default: 490 + return (status == TPM_STS_COMMAND_READY); 491 + } 492 + } 493 + 494 + static irqreturn_t tis_int_handler(int dummy, void *dev_id) 495 + { 496 + struct tpm_chip *chip = dev_id; 497 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 498 + u32 interrupt; 499 + int i, rc; 500 + 501 + rc = tpm_tis_read32(priv, TPM_INT_STATUS(priv->locality), &interrupt); 502 + if (rc < 0) 503 + return IRQ_NONE; 504 + 505 + if (interrupt == 0) 506 + return IRQ_NONE; 507 + 508 + priv->irq_tested = true; 509 + if 
(interrupt & TPM_INTF_DATA_AVAIL_INT) 510 + wake_up_interruptible(&priv->read_queue); 511 + if (interrupt & TPM_INTF_LOCALITY_CHANGE_INT) 512 + for (i = 0; i < 5; i++) 513 + if (check_locality(chip, i) >= 0) 514 + break; 515 + if (interrupt & 516 + (TPM_INTF_LOCALITY_CHANGE_INT | TPM_INTF_STS_VALID_INT | 517 + TPM_INTF_CMD_READY_INT)) 518 + wake_up_interruptible(&priv->int_queue); 519 + 520 + /* Clear interrupts handled with TPM_EOI */ 521 + rc = tpm_tis_write32(priv, TPM_INT_STATUS(priv->locality), interrupt); 522 + if (rc < 0) 523 + return IRQ_NONE; 524 + 525 + tpm_tis_read32(priv, TPM_INT_STATUS(priv->locality), &interrupt); 526 + return IRQ_HANDLED; 527 + } 528 + 529 + /* Register the IRQ and issue a command that will cause an interrupt. If an 530 + * irq is seen then leave the chip setup for IRQ operation, otherwise reverse 531 + * everything and leave in polling mode. Returns 0 on success. 532 + */ 533 + static int tpm_tis_probe_irq_single(struct tpm_chip *chip, u32 intmask, 534 + int flags, int irq) 535 + { 536 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 537 + u8 original_int_vec; 538 + int rc; 539 + u32 int_status; 540 + 541 + if (devm_request_irq(chip->dev.parent, irq, tis_int_handler, flags, 542 + dev_name(&chip->dev), chip) != 0) { 543 + dev_info(&chip->dev, "Unable to request irq: %d for probe\n", 544 + irq); 545 + return -1; 546 + } 547 + priv->irq = irq; 548 + 549 + rc = tpm_tis_read8(priv, TPM_INT_VECTOR(priv->locality), 550 + &original_int_vec); 551 + if (rc < 0) 552 + return rc; 553 + 554 + rc = tpm_tis_write8(priv, TPM_INT_VECTOR(priv->locality), irq); 555 + if (rc < 0) 556 + return rc; 557 + 558 + rc = tpm_tis_read32(priv, TPM_INT_STATUS(priv->locality), &int_status); 559 + if (rc < 0) 560 + return rc; 561 + 562 + /* Clear all existing */ 563 + rc = tpm_tis_write32(priv, TPM_INT_STATUS(priv->locality), int_status); 564 + if (rc < 0) 565 + return rc; 566 + 567 + /* Turn on */ 568 + rc = tpm_tis_write32(priv, 
TPM_INT_ENABLE(priv->locality), 569 + intmask | TPM_GLOBAL_INT_ENABLE); 570 + if (rc < 0) 571 + return rc; 572 + 573 + priv->irq_tested = false; 574 + 575 + /* Generate an interrupt by having the core call through to 576 + * tpm_tis_send 577 + */ 578 + if (chip->flags & TPM_CHIP_FLAG_TPM2) 579 + tpm2_gen_interrupt(chip); 580 + else 581 + tpm_gen_interrupt(chip); 582 + 583 + /* tpm_tis_send will either confirm the interrupt is working or it 584 + * will call disable_irq which undoes all of the above. 585 + */ 586 + if (!(chip->flags & TPM_CHIP_FLAG_IRQ)) { 587 + rc = tpm_tis_write8(priv, original_int_vec, 588 + TPM_INT_VECTOR(priv->locality)); 589 + if (rc < 0) 590 + return rc; 591 + 592 + return 1; 593 + } 594 + 595 + return 0; 596 + } 597 + 598 + /* Try to find the IRQ the TPM is using. This is for legacy x86 systems that 599 + * do not have ACPI/etc. We typically expect the interrupt to be declared if 600 + * present. 601 + */ 602 + static void tpm_tis_probe_irq(struct tpm_chip *chip, u32 intmask) 603 + { 604 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 605 + u8 original_int_vec; 606 + int i, rc; 607 + 608 + rc = tpm_tis_read8(priv, TPM_INT_VECTOR(priv->locality), 609 + &original_int_vec); 610 + if (rc < 0) 611 + return; 612 + 613 + if (!original_int_vec) { 614 + if (IS_ENABLED(CONFIG_X86)) 615 + for (i = 3; i <= 15; i++) 616 + if (!tpm_tis_probe_irq_single(chip, intmask, 0, 617 + i)) 618 + return; 619 + } else if (!tpm_tis_probe_irq_single(chip, intmask, 0, 620 + original_int_vec)) 621 + return; 622 + } 623 + 624 + void tpm_tis_remove(struct tpm_chip *chip) 625 + { 626 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 627 + u32 reg = TPM_INT_ENABLE(priv->locality); 628 + u32 interrupt; 629 + int rc; 630 + 631 + rc = tpm_tis_read32(priv, reg, &interrupt); 632 + if (rc < 0) 633 + interrupt = 0; 634 + 635 + tpm_tis_write32(priv, reg, ~TPM_GLOBAL_INT_ENABLE & interrupt); 636 + release_locality(chip, priv->locality, 1); 637 + } 638 + 
EXPORT_SYMBOL_GPL(tpm_tis_remove); 639 + 640 + static const struct tpm_class_ops tpm_tis = { 641 + .flags = TPM_OPS_AUTO_STARTUP, 642 + .status = tpm_tis_status, 643 + .recv = tpm_tis_recv, 644 + .send = tpm_tis_send, 645 + .cancel = tpm_tis_ready, 646 + .update_timeouts = tpm_tis_update_timeouts, 647 + .req_complete_mask = TPM_STS_DATA_AVAIL | TPM_STS_VALID, 648 + .req_complete_val = TPM_STS_DATA_AVAIL | TPM_STS_VALID, 649 + .req_canceled = tpm_tis_req_canceled, 650 + }; 651 + 652 + int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq, 653 + const struct tpm_tis_phy_ops *phy_ops, 654 + acpi_handle acpi_dev_handle) 655 + { 656 + u32 vendor, intfcaps, intmask; 657 + u8 rid; 658 + int rc, probe; 659 + struct tpm_chip *chip; 660 + 661 + chip = tpmm_chip_alloc(dev, &tpm_tis); 662 + if (IS_ERR(chip)) 663 + return PTR_ERR(chip); 664 + 665 + #ifdef CONFIG_ACPI 666 + chip->acpi_dev_handle = acpi_dev_handle; 667 + #endif 668 + 669 + /* Maximum timeouts */ 670 + chip->timeout_a = msecs_to_jiffies(TIS_TIMEOUT_A_MAX); 671 + chip->timeout_b = msecs_to_jiffies(TIS_TIMEOUT_B_MAX); 672 + chip->timeout_c = msecs_to_jiffies(TIS_TIMEOUT_C_MAX); 673 + chip->timeout_d = msecs_to_jiffies(TIS_TIMEOUT_D_MAX); 674 + priv->phy_ops = phy_ops; 675 + dev_set_drvdata(&chip->dev, priv); 676 + 677 + if (wait_startup(chip, 0) != 0) { 678 + rc = -ENODEV; 679 + goto out_err; 680 + } 681 + 682 + /* Take control of the TPM's interrupt hardware and shut it off */ 683 + rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask); 684 + if (rc < 0) 685 + goto out_err; 686 + 687 + intmask |= TPM_INTF_CMD_READY_INT | TPM_INTF_LOCALITY_CHANGE_INT | 688 + TPM_INTF_DATA_AVAIL_INT | TPM_INTF_STS_VALID_INT; 689 + intmask &= ~TPM_GLOBAL_INT_ENABLE; 690 + tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask); 691 + 692 + if (request_locality(chip, 0) != 0) { 693 + rc = -ENODEV; 694 + goto out_err; 695 + } 696 + 697 + rc = tpm2_probe(chip); 698 + if (rc) 699 + goto out_err; 
700 + 701 + rc = tpm_tis_read32(priv, TPM_DID_VID(0), &vendor); 702 + if (rc < 0) 703 + goto out_err; 704 + 705 + priv->manufacturer_id = vendor; 706 + 707 + rc = tpm_tis_read8(priv, TPM_RID(0), &rid); 708 + if (rc < 0) 709 + goto out_err; 710 + 711 + dev_info(dev, "%s TPM (device-id 0x%X, rev-id %d)\n", 712 + (chip->flags & TPM_CHIP_FLAG_TPM2) ? "2.0" : "1.2", 713 + vendor >> 16, rid); 714 + 715 + if (!(priv->flags & TPM_TIS_ITPM_POSSIBLE)) { 716 + probe = probe_itpm(chip); 717 + if (probe < 0) { 718 + rc = -ENODEV; 719 + goto out_err; 720 + } 721 + 722 + if (!!probe) 723 + priv->flags |= TPM_TIS_ITPM_POSSIBLE; 724 + } 725 + 726 + /* Figure out the capabilities */ 727 + rc = tpm_tis_read32(priv, TPM_INTF_CAPS(priv->locality), &intfcaps); 728 + if (rc < 0) 729 + goto out_err; 730 + 731 + dev_dbg(dev, "TPM interface capabilities (0x%x):\n", 732 + intfcaps); 733 + if (intfcaps & TPM_INTF_BURST_COUNT_STATIC) 734 + dev_dbg(dev, "\tBurst Count Static\n"); 735 + if (intfcaps & TPM_INTF_CMD_READY_INT) 736 + dev_dbg(dev, "\tCommand Ready Int Support\n"); 737 + if (intfcaps & TPM_INTF_INT_EDGE_FALLING) 738 + dev_dbg(dev, "\tInterrupt Edge Falling\n"); 739 + if (intfcaps & TPM_INTF_INT_EDGE_RISING) 740 + dev_dbg(dev, "\tInterrupt Edge Rising\n"); 741 + if (intfcaps & TPM_INTF_INT_LEVEL_LOW) 742 + dev_dbg(dev, "\tInterrupt Level Low\n"); 743 + if (intfcaps & TPM_INTF_INT_LEVEL_HIGH) 744 + dev_dbg(dev, "\tInterrupt Level High\n"); 745 + if (intfcaps & TPM_INTF_LOCALITY_CHANGE_INT) 746 + dev_dbg(dev, "\tLocality Change Int Support\n"); 747 + if (intfcaps & TPM_INTF_STS_VALID_INT) 748 + dev_dbg(dev, "\tSts Valid Int Support\n"); 749 + if (intfcaps & TPM_INTF_DATA_AVAIL_INT) 750 + dev_dbg(dev, "\tData Avail Int Support\n"); 751 + 752 + /* Very early on issue a command to the TPM in polling mode to make 753 + * sure it works. May as well use that command to set the proper 754 + * timeouts for the driver. 
755 + */ 756 + if (tpm_get_timeouts(chip)) { 757 + dev_err(dev, "Could not get TPM timeouts and durations\n"); 758 + rc = -ENODEV; 759 + goto out_err; 760 + } 761 + 762 + /* INTERRUPT Setup */ 763 + init_waitqueue_head(&priv->read_queue); 764 + init_waitqueue_head(&priv->int_queue); 765 + if (irq != -1) { 766 + if (irq) { 767 + tpm_tis_probe_irq_single(chip, intmask, IRQF_SHARED, 768 + irq); 769 + if (!(chip->flags & TPM_CHIP_FLAG_IRQ)) 770 + dev_err(&chip->dev, FW_BUG 771 + "TPM interrupt not working, polling instead\n"); 772 + } else { 773 + tpm_tis_probe_irq(chip, intmask); 774 + } 775 + } 776 + 777 + return tpm_chip_register(chip); 778 + out_err: 779 + tpm_tis_remove(chip); 780 + return rc; 781 + } 782 + EXPORT_SYMBOL_GPL(tpm_tis_core_init); 783 + 784 + #ifdef CONFIG_PM_SLEEP 785 + static void tpm_tis_reenable_interrupts(struct tpm_chip *chip) 786 + { 787 + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); 788 + u32 intmask; 789 + int rc; 790 + 791 + /* reenable interrupts that device may have lost or 792 + * BIOS/firmware may have disabled 793 + */ 794 + rc = tpm_tis_write8(priv, TPM_INT_VECTOR(priv->locality), priv->irq); 795 + if (rc < 0) 796 + return; 797 + 798 + rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask); 799 + if (rc < 0) 800 + return; 801 + 802 + intmask |= TPM_INTF_CMD_READY_INT 803 + | TPM_INTF_LOCALITY_CHANGE_INT | TPM_INTF_DATA_AVAIL_INT 804 + | TPM_INTF_STS_VALID_INT | TPM_GLOBAL_INT_ENABLE; 805 + 806 + tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask); 807 + } 808 + 809 + int tpm_tis_resume(struct device *dev) 810 + { 811 + struct tpm_chip *chip = dev_get_drvdata(dev); 812 + int ret; 813 + 814 + if (chip->flags & TPM_CHIP_FLAG_IRQ) 815 + tpm_tis_reenable_interrupts(chip); 816 + 817 + ret = tpm_pm_resume(dev); 818 + if (ret) 819 + return ret; 820 + 821 + /* TPM 1.2 requires self-test on resume. This function actually returns 822 + * an error code but for unknown reason it isn't handled. 
823 + */ 824 + if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) 825 + tpm_do_selftest(chip); 826 + 827 + return 0; 828 + } 829 + EXPORT_SYMBOL_GPL(tpm_tis_resume); 830 + #endif 831 + 832 + MODULE_AUTHOR("Leendert van Doorn (leendert@watson.ibm.com)"); 833 + MODULE_DESCRIPTION("TPM Driver"); 834 + MODULE_VERSION("2.0"); 835 + MODULE_LICENSE("GPL");
+156
drivers/char/tpm/tpm_tis_core.h
··· 1 + /* 2 + * Copyright (C) 2005, 2006 IBM Corporation 3 + * Copyright (C) 2014, 2015 Intel Corporation 4 + * 5 + * Authors: 6 + * Leendert van Doorn <leendert@watson.ibm.com> 7 + * Kylene Hall <kjhall@us.ibm.com> 8 + * 9 + * Maintained by: <tpmdd-devel@lists.sourceforge.net> 10 + * 11 + * Device driver for TCG/TCPA TPM (trusted platform module). 12 + * Specifications at www.trustedcomputinggroup.org 13 + * 14 + * This device driver implements the TPM interface as defined in 15 + * the TCG TPM Interface Spec version 1.2, revision 1.0. 16 + * 17 + * This program is free software; you can redistribute it and/or 18 + * modify it under the terms of the GNU General Public License as 19 + * published by the Free Software Foundation, version 2 of the 20 + * License. 21 + */ 22 + 23 + #ifndef __TPM_TIS_CORE_H__ 24 + #define __TPM_TIS_CORE_H__ 25 + 26 + #include "tpm.h" 27 + 28 + enum tis_access { 29 + TPM_ACCESS_VALID = 0x80, 30 + TPM_ACCESS_ACTIVE_LOCALITY = 0x20, 31 + TPM_ACCESS_REQUEST_PENDING = 0x04, 32 + TPM_ACCESS_REQUEST_USE = 0x02, 33 + }; 34 + 35 + enum tis_status { 36 + TPM_STS_VALID = 0x80, 37 + TPM_STS_COMMAND_READY = 0x40, 38 + TPM_STS_GO = 0x20, 39 + TPM_STS_DATA_AVAIL = 0x10, 40 + TPM_STS_DATA_EXPECT = 0x08, 41 + }; 42 + 43 + enum tis_int_flags { 44 + TPM_GLOBAL_INT_ENABLE = 0x80000000, 45 + TPM_INTF_BURST_COUNT_STATIC = 0x100, 46 + TPM_INTF_CMD_READY_INT = 0x080, 47 + TPM_INTF_INT_EDGE_FALLING = 0x040, 48 + TPM_INTF_INT_EDGE_RISING = 0x020, 49 + TPM_INTF_INT_LEVEL_LOW = 0x010, 50 + TPM_INTF_INT_LEVEL_HIGH = 0x008, 51 + TPM_INTF_LOCALITY_CHANGE_INT = 0x004, 52 + TPM_INTF_STS_VALID_INT = 0x002, 53 + TPM_INTF_DATA_AVAIL_INT = 0x001, 54 + }; 55 + 56 + enum tis_defaults { 57 + TIS_MEM_LEN = 0x5000, 58 + TIS_SHORT_TIMEOUT = 750, /* ms */ 59 + TIS_LONG_TIMEOUT = 2000, /* 2 sec */ 60 + }; 61 + 62 + /* Some timeout values are needed before it is known whether the chip is 63 + * TPM 1.0 or TPM 2.0. 
64 + */ 65 + #define TIS_TIMEOUT_A_MAX max(TIS_SHORT_TIMEOUT, TPM2_TIMEOUT_A) 66 + #define TIS_TIMEOUT_B_MAX max(TIS_LONG_TIMEOUT, TPM2_TIMEOUT_B) 67 + #define TIS_TIMEOUT_C_MAX max(TIS_SHORT_TIMEOUT, TPM2_TIMEOUT_C) 68 + #define TIS_TIMEOUT_D_MAX max(TIS_SHORT_TIMEOUT, TPM2_TIMEOUT_D) 69 + 70 + #define TPM_ACCESS(l) (0x0000 | ((l) << 12)) 71 + #define TPM_INT_ENABLE(l) (0x0008 | ((l) << 12)) 72 + #define TPM_INT_VECTOR(l) (0x000C | ((l) << 12)) 73 + #define TPM_INT_STATUS(l) (0x0010 | ((l) << 12)) 74 + #define TPM_INTF_CAPS(l) (0x0014 | ((l) << 12)) 75 + #define TPM_STS(l) (0x0018 | ((l) << 12)) 76 + #define TPM_STS3(l) (0x001b | ((l) << 12)) 77 + #define TPM_DATA_FIFO(l) (0x0024 | ((l) << 12)) 78 + 79 + #define TPM_DID_VID(l) (0x0F00 | ((l) << 12)) 80 + #define TPM_RID(l) (0x0F04 | ((l) << 12)) 81 + 82 + enum tpm_tis_flags { 83 + TPM_TIS_ITPM_POSSIBLE = BIT(0), 84 + }; 85 + 86 + struct tpm_tis_data { 87 + u16 manufacturer_id; 88 + int locality; 89 + int irq; 90 + bool irq_tested; 91 + unsigned int flags; 92 + wait_queue_head_t int_queue; 93 + wait_queue_head_t read_queue; 94 + const struct tpm_tis_phy_ops *phy_ops; 95 + }; 96 + 97 + struct tpm_tis_phy_ops { 98 + int (*read_bytes)(struct tpm_tis_data *data, u32 addr, u16 len, 99 + u8 *result); 100 + int (*write_bytes)(struct tpm_tis_data *data, u32 addr, u16 len, 101 + u8 *value); 102 + int (*read16)(struct tpm_tis_data *data, u32 addr, u16 *result); 103 + int (*read32)(struct tpm_tis_data *data, u32 addr, u32 *result); 104 + int (*write32)(struct tpm_tis_data *data, u32 addr, u32 src); 105 + }; 106 + 107 + static inline int tpm_tis_read_bytes(struct tpm_tis_data *data, u32 addr, 108 + u16 len, u8 *result) 109 + { 110 + return data->phy_ops->read_bytes(data, addr, len, result); 111 + } 112 + 113 + static inline int tpm_tis_read8(struct tpm_tis_data *data, u32 addr, u8 *result) 114 + { 115 + return data->phy_ops->read_bytes(data, addr, 1, result); 116 + } 117 + 118 + static inline int tpm_tis_read16(struct 
tpm_tis_data *data, u32 addr, 119 + u16 *result) 120 + { 121 + return data->phy_ops->read16(data, addr, result); 122 + } 123 + 124 + static inline int tpm_tis_read32(struct tpm_tis_data *data, u32 addr, 125 + u32 *result) 126 + { 127 + return data->phy_ops->read32(data, addr, result); 128 + } 129 + 130 + static inline int tpm_tis_write_bytes(struct tpm_tis_data *data, u32 addr, 131 + u16 len, u8 *value) 132 + { 133 + return data->phy_ops->write_bytes(data, addr, len, value); 134 + } 135 + 136 + static inline int tpm_tis_write8(struct tpm_tis_data *data, u32 addr, u8 value) 137 + { 138 + return data->phy_ops->write_bytes(data, addr, 1, &value); 139 + } 140 + 141 + static inline int tpm_tis_write32(struct tpm_tis_data *data, u32 addr, 142 + u32 value) 143 + { 144 + return data->phy_ops->write32(data, addr, value); 145 + } 146 + 147 + void tpm_tis_remove(struct tpm_chip *chip); 148 + int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq, 149 + const struct tpm_tis_phy_ops *phy_ops, 150 + acpi_handle acpi_dev_handle); 151 + 152 + #ifdef CONFIG_PM_SLEEP 153 + int tpm_tis_resume(struct device *dev); 154 + #endif 155 + 156 + #endif
+272
drivers/char/tpm/tpm_tis_spi.c
··· 1 + /* 2 + * Copyright (C) 2015 Infineon Technologies AG 3 + * Copyright (C) 2016 STMicroelectronics SAS 4 + * 5 + * Authors: 6 + * Peter Huewe <peter.huewe@infineon.com> 7 + * Christophe Ricard <christophe-h.ricard@st.com> 8 + * 9 + * Maintained by: <tpmdd-devel@lists.sourceforge.net> 10 + * 11 + * Device driver for TCG/TCPA TPM (trusted platform module). 12 + * Specifications at www.trustedcomputinggroup.org 13 + * 14 + * This device driver implements the TPM interface as defined in 15 + * the TCG TPM Interface Spec version 1.3, revision 27 via _raw/native 16 + * SPI access_. 17 + * 18 + * It is based on the original tpm_tis device driver from Leendert van 19 + * Doorn, Kylene Hall and Jarkko Sakkinen. 20 + * 21 + * This program is free software; you can redistribute it and/or 22 + * modify it under the terms of the GNU General Public License as 23 + * published by the Free Software Foundation, version 2 of the 24 + * License. 25 + */ 26 + 27 + #include <linux/init.h> 28 + #include <linux/module.h> 29 + #include <linux/moduleparam.h> 30 + #include <linux/slab.h> 31 + #include <linux/interrupt.h> 32 + #include <linux/wait.h> 33 + #include <linux/acpi.h> 34 + #include <linux/freezer.h> 35 + 36 + #include <linux/module.h> 37 + #include <linux/spi/spi.h> 38 + #include <linux/gpio.h> 39 + #include <linux/of_irq.h> 40 + #include <linux/of_gpio.h> 41 + #include <linux/tpm.h> 42 + #include "tpm.h" 43 + #include "tpm_tis_core.h" 44 + 45 + #define MAX_SPI_FRAMESIZE 64 46 + 47 + struct tpm_tis_spi_phy { 48 + struct tpm_tis_data priv; 49 + struct spi_device *spi_device; 50 + 51 + u8 tx_buf[MAX_SPI_FRAMESIZE + 4]; 52 + u8 rx_buf[MAX_SPI_FRAMESIZE + 4]; 53 + }; 54 + 55 + static inline struct tpm_tis_spi_phy *to_tpm_tis_spi_phy(struct tpm_tis_data *data) 56 + { 57 + return container_of(data, struct tpm_tis_spi_phy, priv); 58 + } 59 + 60 + static int tpm_tis_spi_read_bytes(struct tpm_tis_data *data, u32 addr, 61 + u16 len, u8 *result) 62 + { 63 + struct tpm_tis_spi_phy
*phy = to_tpm_tis_spi_phy(data); 64 + int ret, i; 65 + struct spi_message m; 66 + struct spi_transfer spi_xfer = { 67 + .tx_buf = phy->tx_buf, 68 + .rx_buf = phy->rx_buf, 69 + .len = 4, 70 + }; 71 + 72 + if (len > MAX_SPI_FRAMESIZE) 73 + return -ENOMEM; 74 + 75 + phy->tx_buf[0] = 0x80 | (len - 1); 76 + phy->tx_buf[1] = 0xd4; 77 + phy->tx_buf[2] = (addr >> 8) & 0xFF; 78 + phy->tx_buf[3] = addr & 0xFF; 79 + 80 + spi_xfer.cs_change = 1; 81 + spi_message_init(&m); 82 + spi_message_add_tail(&spi_xfer, &m); 83 + 84 + spi_bus_lock(phy->spi_device->master); 85 + ret = spi_sync_locked(phy->spi_device, &m); 86 + if (ret < 0) 87 + goto exit; 88 + 89 + memset(phy->tx_buf, 0, len); 90 + 91 + /* According to TCG PTP specification, if there is no TPM present at 92 + * all, then the design has a weak pull-up on MISO. If a TPM is not 93 + * present, a pull-up on MISO means that the SB controller sees a 1, 94 + * and will latch in 0xFF on the read. 95 + */ 96 + for (i = 0; (phy->rx_buf[0] & 0x01) == 0 && i < TPM_RETRY; i++) { 97 + spi_xfer.len = 1; 98 + spi_message_init(&m); 99 + spi_message_add_tail(&spi_xfer, &m); 100 + ret = spi_sync_locked(phy->spi_device, &m); 101 + if (ret < 0) 102 + goto exit; 103 + } 104 + 105 + spi_xfer.cs_change = 0; 106 + spi_xfer.len = len; 107 + spi_xfer.rx_buf = result; 108 + 109 + spi_message_init(&m); 110 + spi_message_add_tail(&spi_xfer, &m); 111 + ret = spi_sync_locked(phy->spi_device, &m); 112 + 113 + exit: 114 + spi_bus_unlock(phy->spi_device->master); 115 + return ret; 116 + } 117 + 118 + static int tpm_tis_spi_write_bytes(struct tpm_tis_data *data, u32 addr, 119 + u16 len, u8 *value) 120 + { 121 + struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data); 122 + int ret, i; 123 + struct spi_message m; 124 + struct spi_transfer spi_xfer = { 125 + .tx_buf = phy->tx_buf, 126 + .rx_buf = phy->rx_buf, 127 + .len = 4, 128 + }; 129 + 130 + if (len > MAX_SPI_FRAMESIZE) 131 + return -ENOMEM; 132 + 133 + phy->tx_buf[0] = len - 1; 134 + phy->tx_buf[1] = 0xd4; 
135 + phy->tx_buf[2] = (addr >> 8) & 0xFF; 136 + phy->tx_buf[3] = addr & 0xFF; 137 + 138 + spi_xfer.cs_change = 1; 139 + spi_message_init(&m); 140 + spi_message_add_tail(&spi_xfer, &m); 141 + 142 + spi_bus_lock(phy->spi_device->master); 143 + ret = spi_sync_locked(phy->spi_device, &m); 144 + if (ret < 0) 145 + goto exit; 146 + 147 + memset(phy->tx_buf, 0, len); 148 + 149 + /* According to TCG PTP specification, if there is no TPM present at 150 + * all, then the design has a weak pull-up on MISO. If a TPM is not 151 + * present, a pull-up on MISO means that the SB controller sees a 1, 152 + * and will latch in 0xFF on the read. 153 + */ 154 + for (i = 0; (phy->rx_buf[0] & 0x01) == 0 && i < TPM_RETRY; i++) { 155 + spi_xfer.len = 1; 156 + spi_message_init(&m); 157 + spi_message_add_tail(&spi_xfer, &m); 158 + ret = spi_sync_locked(phy->spi_device, &m); 159 + if (ret < 0) 160 + goto exit; 161 + } 162 + 163 + spi_xfer.len = len; 164 + spi_xfer.tx_buf = value; 165 + spi_xfer.cs_change = 0; 166 + spi_xfer.tx_buf = value; 167 + spi_message_init(&m); 168 + spi_message_add_tail(&spi_xfer, &m); 169 + ret = spi_sync_locked(phy->spi_device, &m); 170 + 171 + exit: 172 + spi_bus_unlock(phy->spi_device->master); 173 + return ret; 174 + } 175 + 176 + static int tpm_tis_spi_read16(struct tpm_tis_data *data, u32 addr, u16 *result) 177 + { 178 + int rc; 179 + 180 + rc = data->phy_ops->read_bytes(data, addr, sizeof(u16), (u8 *)result); 181 + if (!rc) 182 + *result = le16_to_cpu(*result); 183 + return rc; 184 + } 185 + 186 + static int tpm_tis_spi_read32(struct tpm_tis_data *data, u32 addr, u32 *result) 187 + { 188 + int rc; 189 + 190 + rc = data->phy_ops->read_bytes(data, addr, sizeof(u32), (u8 *)result); 191 + if (!rc) 192 + *result = le32_to_cpu(*result); 193 + return rc; 194 + } 195 + 196 + static int tpm_tis_spi_write32(struct tpm_tis_data *data, u32 addr, u32 value) 197 + { 198 + value = cpu_to_le32(value); 199 + return data->phy_ops->write_bytes(data, addr, sizeof(u32), 200 + (u8 
*)&value); 201 + } 202 + 203 + static const struct tpm_tis_phy_ops tpm_spi_phy_ops = { 204 + .read_bytes = tpm_tis_spi_read_bytes, 205 + .write_bytes = tpm_tis_spi_write_bytes, 206 + .read16 = tpm_tis_spi_read16, 207 + .read32 = tpm_tis_spi_read32, 208 + .write32 = tpm_tis_spi_write32, 209 + }; 210 + 211 + static int tpm_tis_spi_probe(struct spi_device *dev) 212 + { 213 + struct tpm_tis_spi_phy *phy; 214 + 215 + phy = devm_kzalloc(&dev->dev, sizeof(struct tpm_tis_spi_phy), 216 + GFP_KERNEL); 217 + if (!phy) 218 + return -ENOMEM; 219 + 220 + phy->spi_device = dev; 221 + 222 + return tpm_tis_core_init(&dev->dev, &phy->priv, -1, &tpm_spi_phy_ops, 223 + NULL); 224 + } 225 + 226 + static SIMPLE_DEV_PM_OPS(tpm_tis_pm, tpm_pm_suspend, tpm_tis_resume); 227 + 228 + static int tpm_tis_spi_remove(struct spi_device *dev) 229 + { 230 + struct tpm_chip *chip = spi_get_drvdata(dev); 231 + 232 + tpm_chip_unregister(chip); 233 + tpm_tis_remove(chip); 234 + return 0; 235 + } 236 + 237 + static const struct spi_device_id tpm_tis_spi_id[] = { 238 + {"tpm_tis_spi", 0}, 239 + {} 240 + }; 241 + MODULE_DEVICE_TABLE(spi, tpm_tis_spi_id); 242 + 243 + static const struct of_device_id of_tis_spi_match[] = { 244 + { .compatible = "st,st33htpm-spi", }, 245 + { .compatible = "infineon,slb9670", }, 246 + { .compatible = "tcg,tpm_tis-spi", }, 247 + {} 248 + }; 249 + MODULE_DEVICE_TABLE(of, of_tis_spi_match); 250 + 251 + static const struct acpi_device_id acpi_tis_spi_match[] = { 252 + {"SMO0768", 0}, 253 + {} 254 + }; 255 + MODULE_DEVICE_TABLE(acpi, acpi_tis_spi_match); 256 + 257 + static struct spi_driver tpm_tis_spi_driver = { 258 + .driver = { 259 + .owner = THIS_MODULE, 260 + .name = "tpm_tis_spi", 261 + .pm = &tpm_tis_pm, 262 + .of_match_table = of_match_ptr(of_tis_spi_match), 263 + .acpi_match_table = ACPI_PTR(acpi_tis_spi_match), 264 + }, 265 + .probe = tpm_tis_spi_probe, 266 + .remove = tpm_tis_spi_remove, 267 + .id_table = tpm_tis_spi_id, 268 + }; 269 + 
module_spi_driver(tpm_tis_spi_driver); 270 + 271 + MODULE_DESCRIPTION("TPM Driver for native SPI access"); 272 + MODULE_LICENSE("GPL");
+637
drivers/char/tpm/tpm_vtpm_proxy.c
··· 1 + /* 2 + * Copyright (C) 2015, 2016 IBM Corporation 3 + * 4 + * Author: Stefan Berger <stefanb@us.ibm.com> 5 + * 6 + * Maintained by: <tpmdd-devel@lists.sourceforge.net> 7 + * 8 + * Device driver for vTPM (vTPM proxy driver) 9 + * 10 + * This program is free software; you can redistribute it and/or 11 + * modify it under the terms of the GNU General Public License as 12 + * published by the Free Software Foundation, version 2 of the 13 + * License. 14 + * 15 + */ 16 + 17 + #include <linux/types.h> 18 + #include <linux/spinlock.h> 19 + #include <linux/uaccess.h> 20 + #include <linux/wait.h> 21 + #include <linux/miscdevice.h> 22 + #include <linux/vtpm_proxy.h> 23 + #include <linux/file.h> 24 + #include <linux/anon_inodes.h> 25 + #include <linux/poll.h> 26 + #include <linux/compat.h> 27 + 28 + #include "tpm.h" 29 + 30 + #define VTPM_PROXY_REQ_COMPLETE_FLAG BIT(0) 31 + 32 + struct proxy_dev { 33 + struct tpm_chip *chip; 34 + 35 + u32 flags; /* public API flags */ 36 + 37 + wait_queue_head_t wq; 38 + 39 + struct mutex buf_lock; /* protect buffer and flags */ 40 + 41 + long state; /* internal state */ 42 + #define STATE_OPENED_FLAG BIT(0) 43 + #define STATE_WAIT_RESPONSE_FLAG BIT(1) /* waiting for emulator response */ 44 + 45 + size_t req_len; /* length of queued TPM request */ 46 + size_t resp_len; /* length of queued TPM response */ 47 + u8 buffer[TPM_BUFSIZE]; /* request/response buffer */ 48 + 49 + struct work_struct work; /* task that retrieves TPM timeouts */ 50 + }; 51 + 52 + /* all supported flags */ 53 + #define VTPM_PROXY_FLAGS_ALL (VTPM_PROXY_FLAG_TPM2) 54 + 55 + static struct workqueue_struct *workqueue; 56 + 57 + static void vtpm_proxy_delete_device(struct proxy_dev *proxy_dev); 58 + 59 + /* 60 + * Functions related to 'server side' 61 + */ 62 + 63 + /** 64 + * vtpm_proxy_fops_read - Read TPM commands on 'server side' 65 + * 66 + * Return value: 67 + * Number of bytes read or negative error code 68 + */ 69 + static ssize_t vtpm_proxy_fops_read(struct 
file *filp, char __user *buf, 70 + size_t count, loff_t *off) 71 + { 72 + struct proxy_dev *proxy_dev = filp->private_data; 73 + size_t len; 74 + int sig, rc; 75 + 76 + sig = wait_event_interruptible(proxy_dev->wq, 77 + proxy_dev->req_len != 0 || 78 + !(proxy_dev->state & STATE_OPENED_FLAG)); 79 + if (sig) 80 + return -EINTR; 81 + 82 + mutex_lock(&proxy_dev->buf_lock); 83 + 84 + if (!(proxy_dev->state & STATE_OPENED_FLAG)) { 85 + mutex_unlock(&proxy_dev->buf_lock); 86 + return -EPIPE; 87 + } 88 + 89 + len = proxy_dev->req_len; 90 + 91 + if (count < len) { 92 + mutex_unlock(&proxy_dev->buf_lock); 93 + pr_debug("Invalid size in recv: count=%zd, req_len=%zd\n", 94 + count, len); 95 + return -EIO; 96 + } 97 + 98 + rc = copy_to_user(buf, proxy_dev->buffer, len); 99 + memset(proxy_dev->buffer, 0, len); 100 + proxy_dev->req_len = 0; 101 + 102 + if (!rc) 103 + proxy_dev->state |= STATE_WAIT_RESPONSE_FLAG; 104 + 105 + mutex_unlock(&proxy_dev->buf_lock); 106 + 107 + if (rc) 108 + return -EFAULT; 109 + 110 + return len; 111 + } 112 + 113 + /** 114 + * vtpm_proxy_fops_write - Write TPM responses on 'server side' 115 + * 116 + * Return value: 117 + * Number of bytes written or negative error value 118 + */ 119 + static ssize_t vtpm_proxy_fops_write(struct file *filp, const char __user *buf, 120 + size_t count, loff_t *off) 121 + { 122 + struct proxy_dev *proxy_dev = filp->private_data; 123 + 124 + mutex_lock(&proxy_dev->buf_lock); 125 + 126 + if (!(proxy_dev->state & STATE_OPENED_FLAG)) { 127 + mutex_unlock(&proxy_dev->buf_lock); 128 + return -EPIPE; 129 + } 130 + 131 + if (count > sizeof(proxy_dev->buffer) || 132 + !(proxy_dev->state & STATE_WAIT_RESPONSE_FLAG)) { 133 + mutex_unlock(&proxy_dev->buf_lock); 134 + return -EIO; 135 + } 136 + 137 + proxy_dev->state &= ~STATE_WAIT_RESPONSE_FLAG; 138 + 139 + proxy_dev->req_len = 0; 140 + 141 + if (copy_from_user(proxy_dev->buffer, buf, count)) { 142 + mutex_unlock(&proxy_dev->buf_lock); 143 + return -EFAULT; 144 + } 145 + 146 +
proxy_dev->resp_len = count; 147 + 148 + mutex_unlock(&proxy_dev->buf_lock); 149 + 150 + wake_up_interruptible(&proxy_dev->wq); 151 + 152 + return count; 153 + } 154 + 155 + /* 156 + * vtpm_proxy_fops_poll: Poll status on 'server side' 157 + * 158 + * Return value: 159 + * Poll flags 160 + */ 161 + static unsigned int vtpm_proxy_fops_poll(struct file *filp, poll_table *wait) 162 + { 163 + struct proxy_dev *proxy_dev = filp->private_data; 164 + unsigned ret; 165 + 166 + poll_wait(filp, &proxy_dev->wq, wait); 167 + 168 + ret = POLLOUT; 169 + 170 + mutex_lock(&proxy_dev->buf_lock); 171 + 172 + if (proxy_dev->req_len) 173 + ret |= POLLIN | POLLRDNORM; 174 + 175 + if (!(proxy_dev->state & STATE_OPENED_FLAG)) 176 + ret |= POLLHUP; 177 + 178 + mutex_unlock(&proxy_dev->buf_lock); 179 + 180 + return ret; 181 + } 182 + 183 + /* 184 + * vtpm_proxy_fops_open - Open vTPM device on 'server side' 185 + * 186 + * Called when setting up the anonymous file descriptor 187 + */ 188 + static void vtpm_proxy_fops_open(struct file *filp) 189 + { 190 + struct proxy_dev *proxy_dev = filp->private_data; 191 + 192 + proxy_dev->state |= STATE_OPENED_FLAG; 193 + } 194 + 195 + /** 196 + * vtpm_proxy_fops_undo_open - counter-part to vtpm_fops_open 197 + * 198 + * Call to undo vtpm_proxy_fops_open 199 + */ 200 + static void vtpm_proxy_fops_undo_open(struct proxy_dev *proxy_dev) 201 + { 202 + mutex_lock(&proxy_dev->buf_lock); 203 + 204 + proxy_dev->state &= ~STATE_OPENED_FLAG; 205 + 206 + mutex_unlock(&proxy_dev->buf_lock); 207 + 208 + /* no more TPM responses -- wake up anyone waiting for them */ 209 + wake_up_interruptible(&proxy_dev->wq); 210 + } 211 + 212 + /* 213 + * vtpm_proxy_fops_release: Close 'server side' 214 + * 215 + * Return value: 216 + * Always returns 0. 
217 + */ 218 + static int vtpm_proxy_fops_release(struct inode *inode, struct file *filp) 219 + { 220 + struct proxy_dev *proxy_dev = filp->private_data; 221 + 222 + filp->private_data = NULL; 223 + 224 + vtpm_proxy_delete_device(proxy_dev); 225 + 226 + return 0; 227 + } 228 + 229 + static const struct file_operations vtpm_proxy_fops = { 230 + .owner = THIS_MODULE, 231 + .llseek = no_llseek, 232 + .read = vtpm_proxy_fops_read, 233 + .write = vtpm_proxy_fops_write, 234 + .poll = vtpm_proxy_fops_poll, 235 + .release = vtpm_proxy_fops_release, 236 + }; 237 + 238 + /* 239 + * Functions invoked by the core TPM driver to send TPM commands to 240 + * 'server side' and receive responses from there. 241 + */ 242 + 243 + /* 244 + * Called when core TPM driver reads TPM responses from 'server side' 245 + * 246 + * Return value: 247 + * Number of TPM response bytes read, negative error value otherwise 248 + */ 249 + static int vtpm_proxy_tpm_op_recv(struct tpm_chip *chip, u8 *buf, size_t count) 250 + { 251 + struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev); 252 + size_t len; 253 + 254 + /* process gone ? */ 255 + mutex_lock(&proxy_dev->buf_lock); 256 + 257 + if (!(proxy_dev->state & STATE_OPENED_FLAG)) { 258 + mutex_unlock(&proxy_dev->buf_lock); 259 + return -EPIPE; 260 + } 261 + 262 + len = proxy_dev->resp_len; 263 + if (count < len) { 264 + dev_err(&chip->dev, 265 + "Invalid size in recv: count=%zd, resp_len=%zd\n", 266 + count, len); 267 + len = -EIO; 268 + goto out; 269 + } 270 + 271 + memcpy(buf, proxy_dev->buffer, len); 272 + proxy_dev->resp_len = 0; 273 + 274 + out: 275 + mutex_unlock(&proxy_dev->buf_lock); 276 + 277 + return len; 278 + } 279 + 280 + /* 281 + * Called when core TPM driver forwards TPM requests to 'server side'. 282 + * 283 + * Return value: 284 + * 0 in case of success, negative error value otherwise. 
285 + */ 286 + static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count) 287 + { 288 + struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev); 289 + int rc = 0; 290 + 291 + if (count > sizeof(proxy_dev->buffer)) { 292 + dev_err(&chip->dev, 293 + "Invalid size in send: count=%zd, buffer size=%zd\n", 294 + count, sizeof(proxy_dev->buffer)); 295 + return -EIO; 296 + } 297 + 298 + mutex_lock(&proxy_dev->buf_lock); 299 + 300 + if (!(proxy_dev->state & STATE_OPENED_FLAG)) { 301 + mutex_unlock(&proxy_dev->buf_lock); 302 + return -EPIPE; 303 + } 304 + 305 + proxy_dev->resp_len = 0; 306 + 307 + proxy_dev->req_len = count; 308 + memcpy(proxy_dev->buffer, buf, count); 309 + 310 + proxy_dev->state &= ~STATE_WAIT_RESPONSE_FLAG; 311 + 312 + mutex_unlock(&proxy_dev->buf_lock); 313 + 314 + wake_up_interruptible(&proxy_dev->wq); 315 + 316 + return rc; 317 + } 318 + 319 + static void vtpm_proxy_tpm_op_cancel(struct tpm_chip *chip) 320 + { 321 + /* not supported */ 322 + } 323 + 324 + static u8 vtpm_proxy_tpm_op_status(struct tpm_chip *chip) 325 + { 326 + struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev); 327 + 328 + if (proxy_dev->resp_len) 329 + return VTPM_PROXY_REQ_COMPLETE_FLAG; 330 + 331 + return 0; 332 + } 333 + 334 + static bool vtpm_proxy_tpm_req_canceled(struct tpm_chip *chip, u8 status) 335 + { 336 + struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev); 337 + bool ret; 338 + 339 + mutex_lock(&proxy_dev->buf_lock); 340 + 341 + ret = !(proxy_dev->state & STATE_OPENED_FLAG); 342 + 343 + mutex_unlock(&proxy_dev->buf_lock); 344 + 345 + return ret; 346 + } 347 + 348 + static const struct tpm_class_ops vtpm_proxy_tpm_ops = { 349 + .flags = TPM_OPS_AUTO_STARTUP, 350 + .recv = vtpm_proxy_tpm_op_recv, 351 + .send = vtpm_proxy_tpm_op_send, 352 + .cancel = vtpm_proxy_tpm_op_cancel, 353 + .status = vtpm_proxy_tpm_op_status, 354 + .req_complete_mask = VTPM_PROXY_REQ_COMPLETE_FLAG, 355 + .req_complete_val = VTPM_PROXY_REQ_COMPLETE_FLAG, 356 + 
.req_canceled = vtpm_proxy_tpm_req_canceled, 357 + }; 358 + 359 + /* 360 + * Code related to the startup of the TPM 2 and startup of TPM 1.2 + 361 + * retrieval of timeouts and durations. 362 + */ 363 + 364 + static void vtpm_proxy_work(struct work_struct *work) 365 + { 366 + struct proxy_dev *proxy_dev = container_of(work, struct proxy_dev, 367 + work); 368 + int rc; 369 + 370 + rc = tpm_chip_register(proxy_dev->chip); 371 + if (rc) 372 + goto err; 373 + 374 + return; 375 + 376 + err: 377 + vtpm_proxy_fops_undo_open(proxy_dev); 378 + } 379 + 380 + /* 381 + * vtpm_proxy_work_stop: make sure the work has finished 382 + * 383 + * This function is useful when user space closed the fd 384 + * while the driver still determines timeouts. 385 + */ 386 + static void vtpm_proxy_work_stop(struct proxy_dev *proxy_dev) 387 + { 388 + vtpm_proxy_fops_undo_open(proxy_dev); 389 + flush_work(&proxy_dev->work); 390 + } 391 + 392 + /* 393 + * vtpm_proxy_work_start: Schedule the work for TPM 1.2 & 2 initialization 394 + */ 395 + static inline void vtpm_proxy_work_start(struct proxy_dev *proxy_dev) 396 + { 397 + queue_work(workqueue, &proxy_dev->work); 398 + } 399 + 400 + /* 401 + * Code related to creation and deletion of device pairs 402 + */ 403 + static struct proxy_dev *vtpm_proxy_create_proxy_dev(void) 404 + { 405 + struct proxy_dev *proxy_dev; 406 + struct tpm_chip *chip; 407 + int err; 408 + 409 + proxy_dev = kzalloc(sizeof(*proxy_dev), GFP_KERNEL); 410 + if (proxy_dev == NULL) 411 + return ERR_PTR(-ENOMEM); 412 + 413 + init_waitqueue_head(&proxy_dev->wq); 414 + mutex_init(&proxy_dev->buf_lock); 415 + INIT_WORK(&proxy_dev->work, vtpm_proxy_work); 416 + 417 + chip = tpm_chip_alloc(NULL, &vtpm_proxy_tpm_ops); 418 + if (IS_ERR(chip)) { 419 + err = PTR_ERR(chip); 420 + goto err_proxy_dev_free; 421 + } 422 + dev_set_drvdata(&chip->dev, proxy_dev); 423 + 424 + proxy_dev->chip = chip; 425 + 426 + return proxy_dev; 427 + 428 + err_proxy_dev_free: 429 + kfree(proxy_dev); 430 + 431 + 
return ERR_PTR(err); 432 + } 433 + 434 + /* 435 + * Undo what has been done in vtpm_create_proxy_dev 436 + */ 437 + static inline void vtpm_proxy_delete_proxy_dev(struct proxy_dev *proxy_dev) 438 + { 439 + put_device(&proxy_dev->chip->dev); /* frees chip */ 440 + kfree(proxy_dev); 441 + } 442 + 443 + /* 444 + * Create a /dev/tpm%d and 'server side' file descriptor pair 445 + * 446 + * Return value: 447 + * Returns file pointer on success, an error value otherwise 448 + */ 449 + static struct file *vtpm_proxy_create_device( 450 + struct vtpm_proxy_new_dev *vtpm_new_dev) 451 + { 452 + struct proxy_dev *proxy_dev; 453 + int rc, fd; 454 + struct file *file; 455 + 456 + if (vtpm_new_dev->flags & ~VTPM_PROXY_FLAGS_ALL) 457 + return ERR_PTR(-EOPNOTSUPP); 458 + 459 + proxy_dev = vtpm_proxy_create_proxy_dev(); 460 + if (IS_ERR(proxy_dev)) 461 + return ERR_CAST(proxy_dev); 462 + 463 + proxy_dev->flags = vtpm_new_dev->flags; 464 + 465 + /* setup an anonymous file for the server-side */ 466 + fd = get_unused_fd_flags(O_RDWR); 467 + if (fd < 0) { 468 + rc = fd; 469 + goto err_delete_proxy_dev; 470 + } 471 + 472 + file = anon_inode_getfile("[vtpms]", &vtpm_proxy_fops, proxy_dev, 473 + O_RDWR); 474 + if (IS_ERR(file)) { 475 + rc = PTR_ERR(file); 476 + goto err_put_unused_fd; 477 + } 478 + 479 + /* from now on we can unwind with put_unused_fd() + fput() */ 480 + /* simulate an open() on the server side */ 481 + vtpm_proxy_fops_open(file); 482 + 483 + if (proxy_dev->flags & VTPM_PROXY_FLAG_TPM2) 484 + proxy_dev->chip->flags |= TPM_CHIP_FLAG_TPM2; 485 + 486 + vtpm_proxy_work_start(proxy_dev); 487 + 488 + vtpm_new_dev->fd = fd; 489 + vtpm_new_dev->major = MAJOR(proxy_dev->chip->dev.devt); 490 + vtpm_new_dev->minor = MINOR(proxy_dev->chip->dev.devt); 491 + vtpm_new_dev->tpm_num = proxy_dev->chip->dev_num; 492 + 493 + return file; 494 + 495 + err_put_unused_fd: 496 + put_unused_fd(fd); 497 + 498 + err_delete_proxy_dev: 499 + vtpm_proxy_delete_proxy_dev(proxy_dev); 500 + 501 + return 
ERR_PTR(rc); 502 + } 503 + 504 + /* 505 + * Counter part to vtpm_create_device. 506 + */ 507 + static void vtpm_proxy_delete_device(struct proxy_dev *proxy_dev) 508 + { 509 + vtpm_proxy_work_stop(proxy_dev); 510 + 511 + /* 512 + * A client may hold the 'ops' lock, so let it know that the server 513 + * side shuts down before we try to grab the 'ops' lock when 514 + * unregistering the chip. 515 + */ 516 + vtpm_proxy_fops_undo_open(proxy_dev); 517 + 518 + tpm_chip_unregister(proxy_dev->chip); 519 + 520 + vtpm_proxy_delete_proxy_dev(proxy_dev); 521 + } 522 + 523 + /* 524 + * Code related to the control device /dev/vtpmx 525 + */ 526 + 527 + /* 528 + * vtpmx_fops_ioctl: ioctl on /dev/vtpmx 529 + * 530 + * Return value: 531 + * Returns 0 on success, a negative error code otherwise. 532 + */ 533 + static long vtpmx_fops_ioctl(struct file *f, unsigned int ioctl, 534 + unsigned long arg) 535 + { 536 + void __user *argp = (void __user *)arg; 537 + struct vtpm_proxy_new_dev __user *vtpm_new_dev_p; 538 + struct vtpm_proxy_new_dev vtpm_new_dev; 539 + struct file *file; 540 + 541 + switch (ioctl) { 542 + case VTPM_PROXY_IOC_NEW_DEV: 543 + if (!capable(CAP_SYS_ADMIN)) 544 + return -EPERM; 545 + vtpm_new_dev_p = argp; 546 + if (copy_from_user(&vtpm_new_dev, vtpm_new_dev_p, 547 + sizeof(vtpm_new_dev))) 548 + return -EFAULT; 549 + file = vtpm_proxy_create_device(&vtpm_new_dev); 550 + if (IS_ERR(file)) 551 + return PTR_ERR(file); 552 + if (copy_to_user(vtpm_new_dev_p, &vtpm_new_dev, 553 + sizeof(vtpm_new_dev))) { 554 + put_unused_fd(vtpm_new_dev.fd); 555 + fput(file); 556 + return -EFAULT; 557 + } 558 + 559 + fd_install(vtpm_new_dev.fd, file); 560 + return 0; 561 + 562 + default: 563 + return -ENOIOCTLCMD; 564 + } 565 + } 566 + 567 + #ifdef CONFIG_COMPAT 568 + static long vtpmx_fops_compat_ioctl(struct file *f, unsigned int ioctl, 569 + unsigned long arg) 570 + { 571 + return vtpmx_fops_ioctl(f, ioctl, (unsigned long)compat_ptr(arg)); 572 + } 573 + #endif 574 + 575 + static const 
struct file_operations vtpmx_fops = { 576 + .owner = THIS_MODULE, 577 + .unlocked_ioctl = vtpmx_fops_ioctl, 578 + #ifdef CONFIG_COMPAT 579 + .compat_ioctl = vtpmx_fops_compat_ioctl, 580 + #endif 581 + .llseek = noop_llseek, 582 + }; 583 + 584 + static struct miscdevice vtpmx_miscdev = { 585 + .minor = MISC_DYNAMIC_MINOR, 586 + .name = "vtpmx", 587 + .fops = &vtpmx_fops, 588 + }; 589 + 590 + static int vtpmx_init(void) 591 + { 592 + return misc_register(&vtpmx_miscdev); 593 + } 594 + 595 + static void vtpmx_cleanup(void) 596 + { 597 + misc_deregister(&vtpmx_miscdev); 598 + } 599 + 600 + static int __init vtpm_module_init(void) 601 + { 602 + int rc; 603 + 604 + rc = vtpmx_init(); 605 + if (rc) { 606 + pr_err("couldn't create vtpmx device\n"); 607 + return rc; 608 + } 609 + 610 + workqueue = create_workqueue("tpm-vtpm"); 611 + if (!workqueue) { 612 + pr_err("couldn't create workqueue\n"); 613 + rc = -ENOMEM; 614 + goto err_vtpmx_cleanup; 615 + } 616 + 617 + return 0; 618 + 619 + err_vtpmx_cleanup: 620 + vtpmx_cleanup(); 621 + 622 + return rc; 623 + } 624 + 625 + static void __exit vtpm_module_exit(void) 626 + { 627 + destroy_workqueue(workqueue); 628 + vtpmx_cleanup(); 629 + } 630 + 631 + module_init(vtpm_module_init); 632 + module_exit(vtpm_module_exit); 633 + 634 + MODULE_AUTHOR("Stefan Berger (stefanb@us.ibm.com)"); 635 + MODULE_DESCRIPTION("vTPM Driver"); 636 + MODULE_VERSION("0.1"); 637 + MODULE_LICENSE("GPL");
+19 -17
drivers/char/tpm/xen-tpmfront.c
··· 28 28 unsigned int evtchn; 29 29 int ring_ref; 30 30 domid_t backend_id; 31 + int irq; 32 + wait_queue_head_t read_queue; 31 33 }; 32 34 33 35 enum status_bits { ··· 41 39 42 40 static u8 vtpm_status(struct tpm_chip *chip) 43 41 { 44 - struct tpm_private *priv = TPM_VPRIV(chip); 42 + struct tpm_private *priv = dev_get_drvdata(&chip->dev); 45 43 switch (priv->shr->state) { 46 44 case VTPM_STATE_IDLE: 47 45 return VTPM_STATUS_IDLE | VTPM_STATUS_CANCELED; ··· 62 60 63 61 static void vtpm_cancel(struct tpm_chip *chip) 64 62 { 65 - struct tpm_private *priv = TPM_VPRIV(chip); 63 + struct tpm_private *priv = dev_get_drvdata(&chip->dev); 66 64 priv->shr->state = VTPM_STATE_CANCEL; 67 65 wmb(); 68 66 notify_remote_via_evtchn(priv->evtchn); ··· 75 73 76 74 static int vtpm_send(struct tpm_chip *chip, u8 *buf, size_t count) 77 75 { 78 - struct tpm_private *priv = TPM_VPRIV(chip); 76 + struct tpm_private *priv = dev_get_drvdata(&chip->dev); 79 77 struct vtpm_shared_page *shr = priv->shr; 80 78 unsigned int offset = shr_data_offset(shr); 81 79 ··· 89 87 return -EINVAL; 90 88 91 89 /* Wait for completion of any existing command or cancellation */ 92 - if (wait_for_tpm_stat(chip, VTPM_STATUS_IDLE, chip->vendor.timeout_c, 93 - &chip->vendor.read_queue, true) < 0) { 90 + if (wait_for_tpm_stat(chip, VTPM_STATUS_IDLE, chip->timeout_c, 91 + &priv->read_queue, true) < 0) { 94 92 vtpm_cancel(chip); 95 93 return -ETIME; 96 94 } ··· 106 104 duration = tpm_calc_ordinal_duration(chip, ordinal); 107 105 108 106 if (wait_for_tpm_stat(chip, VTPM_STATUS_IDLE, duration, 109 - &chip->vendor.read_queue, true) < 0) { 107 + &priv->read_queue, true) < 0) { 110 108 /* got a signal or timeout, try to cancel */ 111 109 vtpm_cancel(chip); 112 110 return -ETIME; ··· 117 115 118 116 static int vtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count) 119 117 { 120 - struct tpm_private *priv = TPM_VPRIV(chip); 118 + struct tpm_private *priv = dev_get_drvdata(&chip->dev); 121 119 struct vtpm_shared_page 
*shr = priv->shr; 122 120 unsigned int offset = shr_data_offset(shr); 123 121 size_t length = shr->length; ··· 126 124 return -ECANCELED; 127 125 128 126 /* In theory the wait at the end of _send makes this one unnecessary */ 129 - if (wait_for_tpm_stat(chip, VTPM_STATUS_RESULT, chip->vendor.timeout_c, 130 - &chip->vendor.read_queue, true) < 0) { 127 + if (wait_for_tpm_stat(chip, VTPM_STATUS_RESULT, chip->timeout_c, 128 + &priv->read_queue, true) < 0) { 131 129 vtpm_cancel(chip); 132 130 return -ETIME; 133 131 } ··· 163 161 switch (priv->shr->state) { 164 162 case VTPM_STATE_IDLE: 165 163 case VTPM_STATE_FINISH: 166 - wake_up_interruptible(&priv->chip->vendor.read_queue); 164 + wake_up_interruptible(&priv->read_queue); 167 165 break; 168 166 case VTPM_STATE_SUBMIT: 169 167 case VTPM_STATE_CANCEL: ··· 181 179 if (IS_ERR(chip)) 182 180 return PTR_ERR(chip); 183 181 184 - init_waitqueue_head(&chip->vendor.read_queue); 182 + init_waitqueue_head(&priv->read_queue); 185 183 186 184 priv->chip = chip; 187 - TPM_VPRIV(chip) = priv; 185 + dev_set_drvdata(&chip->dev, priv); 188 186 189 187 return 0; 190 188 } ··· 219 217 xenbus_dev_fatal(dev, rv, "allocating TPM irq"); 220 218 return rv; 221 219 } 222 - priv->chip->vendor.irq = rv; 220 + priv->irq = rv; 223 221 224 222 again: 225 223 rv = xenbus_transaction_start(&xbt); ··· 279 277 else 280 278 free_page((unsigned long)priv->shr); 281 279 282 - if (priv->chip && priv->chip->vendor.irq) 283 - unbind_from_irqhandler(priv->chip->vendor.irq, priv); 280 + if (priv->irq) 281 + unbind_from_irqhandler(priv->irq, priv); 284 282 285 283 kfree(priv); 286 284 } ··· 320 318 static int tpmfront_remove(struct xenbus_device *dev) 321 319 { 322 320 struct tpm_chip *chip = dev_get_drvdata(&dev->dev); 323 - struct tpm_private *priv = TPM_VPRIV(chip); 321 + struct tpm_private *priv = dev_get_drvdata(&chip->dev); 324 322 tpm_chip_unregister(chip); 325 323 ring_free(priv); 326 - TPM_VPRIV(chip) = NULL; 324 + dev_set_drvdata(&chip->dev, NULL); 327 
325 return 0; 328 326 } 329 327
+1 -1
include/keys/rxrpc-type.h
··· 51 51 struct krb5_tagged_data { 52 52 /* for tag value, see /usr/include/krb5/krb5.h 53 53 * - KRB5_AUTHDATA_* for auth data 54 - * - 54 + * - 55 55 */ 56 56 s32 tag; 57 57 u32 data_len;
+5
include/linux/capability.h
··· 206 206 struct user_namespace *ns, int cap); 207 207 extern bool capable(int cap); 208 208 extern bool ns_capable(struct user_namespace *ns, int cap); 209 + extern bool ns_capable_noaudit(struct user_namespace *ns, int cap); 209 210 #else 210 211 static inline bool has_capability(struct task_struct *t, int cap) 211 212 { ··· 231 230 return true; 232 231 } 233 232 static inline bool ns_capable(struct user_namespace *ns, int cap) 233 + { 234 + return true; 235 + } 236 + static inline bool ns_capable_noaudit(struct user_namespace *ns, int cap) 234 237 { 235 238 return true; 236 239 }
+1 -1
include/linux/platform_data/st33zp24.h
··· 1 1 /* 2 2 * STMicroelectronics TPM Linux driver for TPM 1.2 ST33ZP24 3 - * Copyright (C) 2009 - 2015 STMicroelectronics 3 + * Copyright (C) 2009 - 2016 STMicroelectronics 4 4 * 5 5 * This program is free software; you can redistribute it and/or modify 6 6 * it under the terms of the GNU General Public License as published by
+4 -10
include/linux/seccomp.h
··· 28 28 }; 29 29 30 30 #ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER 31 - extern int __secure_computing(void); 32 - static inline int secure_computing(void) 31 + extern int __secure_computing(const struct seccomp_data *sd); 32 + static inline int secure_computing(const struct seccomp_data *sd) 33 33 { 34 34 if (unlikely(test_thread_flag(TIF_SECCOMP))) 35 - return __secure_computing(); 35 + return __secure_computing(sd); 36 36 return 0; 37 37 } 38 - 39 - #define SECCOMP_PHASE1_OK 0 40 - #define SECCOMP_PHASE1_SKIP 1 41 - 42 - extern u32 seccomp_phase1(struct seccomp_data *sd); 43 - int seccomp_phase2(u32 phase1_result); 44 38 #else 45 39 extern void secure_computing_strict(int this_syscall); 46 40 #endif ··· 55 61 struct seccomp_filter { }; 56 62 57 63 #ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER 58 - static inline int secure_computing(void) { return 0; } 64 + static inline int secure_computing(struct seccomp_data *sd) { return 0; } 59 65 #else 60 66 static inline void secure_computing_strict(int this_syscall) { return; } 61 67 #endif
+5
include/linux/tpm.h
··· 33 33 struct trusted_key_payload; 34 34 struct trusted_key_options; 35 35 36 + enum TPM_OPS_FLAGS { 37 + TPM_OPS_AUTO_STARTUP = BIT(0), 38 + }; 39 + 36 40 struct tpm_class_ops { 41 + unsigned int flags; 37 42 const u8 req_complete_mask; 38 43 const u8 req_complete_val; 39 44 bool (*req_canceled)(struct tpm_chip *chip, u8 status);
+91
include/net/calipso.h
··· 1 + /* 2 + * CALIPSO - Common Architecture Label IPv6 Security Option 3 + * 4 + * This is an implementation of the CALIPSO protocol as specified in 5 + * RFC 5570. 6 + * 7 + * Authors: Paul Moore <paul@paul-moore.com> 8 + * Huw Davies <huw@codeweavers.com> 9 + * 10 + */ 11 + 12 + /* 13 + * (c) Copyright Hewlett-Packard Development Company, L.P., 2006 14 + * (c) Copyright Huw Davies <huw@codeweavers.com>, 2015 15 + * 16 + * This program is free software; you can redistribute it and/or modify 17 + * it under the terms of the GNU General Public License as published by 18 + * the Free Software Foundation; either version 2 of the License, or 19 + * (at your option) any later version. 20 + * 21 + * This program is distributed in the hope that it will be useful, 22 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 23 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See 24 + * the GNU General Public License for more details. 25 + * 26 + * You should have received a copy of the GNU General Public License 27 + * along with this program; if not, see <http://www.gnu.org/licenses/>. 
28 + * 29 + */ 30 + 31 + #ifndef _CALIPSO_H 32 + #define _CALIPSO_H 33 + 34 + #include <linux/types.h> 35 + #include <linux/rcupdate.h> 36 + #include <linux/list.h> 37 + #include <linux/net.h> 38 + #include <linux/skbuff.h> 39 + #include <net/netlabel.h> 40 + #include <net/request_sock.h> 41 + #include <linux/atomic.h> 42 + #include <asm/unaligned.h> 43 + 44 + /* known doi values */ 45 + #define CALIPSO_DOI_UNKNOWN 0x00000000 46 + 47 + /* doi mapping types */ 48 + #define CALIPSO_MAP_UNKNOWN 0 49 + #define CALIPSO_MAP_PASS 2 50 + 51 + /* 52 + * CALIPSO DOI definitions 53 + */ 54 + 55 + /* DOI definition struct */ 56 + struct calipso_doi { 57 + u32 doi; 58 + u32 type; 59 + 60 + atomic_t refcount; 61 + struct list_head list; 62 + struct rcu_head rcu; 63 + }; 64 + 65 + /* 66 + * Sysctl Variables 67 + */ 68 + extern int calipso_cache_enabled; 69 + extern int calipso_cache_bucketsize; 70 + 71 + #ifdef CONFIG_NETLABEL 72 + int __init calipso_init(void); 73 + void calipso_exit(void); 74 + bool calipso_validate(const struct sk_buff *skb, const unsigned char *option); 75 + #else 76 + static inline int __init calipso_init(void) 77 + { 78 + return 0; 79 + } 80 + 81 + static inline void calipso_exit(void) 82 + { 83 + } 84 + static inline bool calipso_validate(const struct sk_buff *skb, 85 + const unsigned char *option) 86 + { 87 + return true; 88 + } 89 + #endif /* CONFIG_NETLABEL */ 90 + 91 + #endif /* _CALIPSO_H */
+6 -1
include/net/inet_sock.h
··· 97 97 u32 ir_mark; 98 98 union { 99 99 struct ip_options_rcu *opt; 100 - struct sk_buff *pktopts; 100 + #if IS_ENABLED(CONFIG_IPV6) 101 + struct { 102 + struct ipv6_txoptions *ipv6_opt; 103 + struct sk_buff *pktopts; 104 + }; 105 + #endif 101 106 }; 102 107 }; 103 108
+9 -1
include/net/ipv6.h
··· 313 313 int newtype, 314 314 struct ipv6_opt_hdr __user *newopt, 315 315 int newoptlen); 316 + struct ipv6_txoptions * 317 + ipv6_renew_options_kern(struct sock *sk, 318 + struct ipv6_txoptions *opt, 319 + int newtype, 320 + struct ipv6_opt_hdr *newopt, 321 + int newoptlen); 316 322 struct ipv6_txoptions *ipv6_fixup_options(struct ipv6_txoptions *opt_space, 317 323 struct ipv6_txoptions *opt); 318 324 319 325 bool ipv6_opt_accepted(const struct sock *sk, const struct sk_buff *skb, 320 326 const struct inet6_skb_parm *opt); 327 + struct ipv6_txoptions *ipv6_update_options(struct sock *sk, 328 + struct ipv6_txoptions *opt); 321 329 322 330 static inline bool ipv6_accept_ra(struct inet6_dev *idev) 323 331 { ··· 951 943 int ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset, int target, 952 944 unsigned short *fragoff, int *fragflg); 953 945 954 - int ipv6_find_tlv(struct sk_buff *skb, int offset, int type); 946 + int ipv6_find_tlv(const struct sk_buff *skb, int offset, int type); 955 947 956 948 struct in6_addr *fl6_update_dst(struct flowi6 *fl6, 957 949 const struct ipv6_txoptions *opt,
+98 -3
include/net/netlabel.h
··· 40 40 #include <linux/atomic.h> 41 41 42 42 struct cipso_v4_doi; 43 + struct calipso_doi; 43 44 44 45 /* 45 46 * NetLabel - A management interface for maintaining network packet label ··· 95 94 #define NETLBL_NLTYPE_UNLABELED_NAME "NLBL_UNLBL" 96 95 #define NETLBL_NLTYPE_ADDRSELECT 6 97 96 #define NETLBL_NLTYPE_ADDRSELECT_NAME "NLBL_ADRSEL" 97 + #define NETLBL_NLTYPE_CALIPSO 7 98 + #define NETLBL_NLTYPE_CALIPSO_NAME "NLBL_CALIPSO" 98 99 99 100 /* 100 101 * NetLabel - Kernel API for accessing the network packet label mappings. ··· 217 214 } mls; 218 215 u32 secid; 219 216 } attr; 217 + }; 218 + 219 + /** 220 + * struct netlbl_calipso_ops - NetLabel CALIPSO operations 221 + * @doi_add: add a CALIPSO DOI 222 + * @doi_free: free a CALIPSO DOI 223 + * @doi_getdef: returns a reference to a DOI 224 + * @doi_putdef: releases a reference of a DOI 225 + * @doi_walk: enumerate the DOI list 226 + * @sock_getattr: retrieve the socket's attr 227 + * @sock_setattr: set the socket's attr 228 + * @sock_delattr: remove the socket's attr 229 + * @req_setattr: set the req socket's attr 230 + * @req_delattr: remove the req socket's attr 231 + * @opt_getattr: retrieve attr from memory block 232 + * @skbuff_optptr: find option in packet 233 + * @skbuff_setattr: set the skbuff's attr 234 + * @skbuff_delattr: remove the skbuff's attr 235 + * @cache_invalidate: invalidate cache 236 + * @cache_add: add cache entry 237 + * 238 + * Description: 239 + * This structure is filled out by the CALIPSO engine and passed 240 + * to the NetLabel core via a call to netlbl_calipso_ops_register(). 241 + * It enables the CALIPSO engine (and hence IPv6) to be compiled 242 + * as a module. 
243 + */ 244 + struct netlbl_calipso_ops { 245 + int (*doi_add)(struct calipso_doi *doi_def, 246 + struct netlbl_audit *audit_info); 247 + void (*doi_free)(struct calipso_doi *doi_def); 248 + int (*doi_remove)(u32 doi, struct netlbl_audit *audit_info); 249 + struct calipso_doi *(*doi_getdef)(u32 doi); 250 + void (*doi_putdef)(struct calipso_doi *doi_def); 251 + int (*doi_walk)(u32 *skip_cnt, 252 + int (*callback)(struct calipso_doi *doi_def, void *arg), 253 + void *cb_arg); 254 + int (*sock_getattr)(struct sock *sk, 255 + struct netlbl_lsm_secattr *secattr); 256 + int (*sock_setattr)(struct sock *sk, 257 + const struct calipso_doi *doi_def, 258 + const struct netlbl_lsm_secattr *secattr); 259 + void (*sock_delattr)(struct sock *sk); 260 + int (*req_setattr)(struct request_sock *req, 261 + const struct calipso_doi *doi_def, 262 + const struct netlbl_lsm_secattr *secattr); 263 + void (*req_delattr)(struct request_sock *req); 264 + int (*opt_getattr)(const unsigned char *calipso, 265 + struct netlbl_lsm_secattr *secattr); 266 + unsigned char *(*skbuff_optptr)(const struct sk_buff *skb); 267 + int (*skbuff_setattr)(struct sk_buff *skb, 268 + const struct calipso_doi *doi_def, 269 + const struct netlbl_lsm_secattr *secattr); 270 + int (*skbuff_delattr)(struct sk_buff *skb); 271 + void (*cache_invalidate)(void); 272 + int (*cache_add)(const unsigned char *calipso_ptr, 273 + const struct netlbl_lsm_secattr *secattr); 220 274 }; 221 275 222 276 /* ··· 445 385 const struct in_addr *addr, 446 386 const struct in_addr *mask, 447 387 struct netlbl_audit *audit_info); 388 + int netlbl_cfg_calipso_add(struct calipso_doi *doi_def, 389 + struct netlbl_audit *audit_info); 390 + void netlbl_cfg_calipso_del(u32 doi, struct netlbl_audit *audit_info); 391 + int netlbl_cfg_calipso_map_add(u32 doi, 392 + const char *domain, 393 + const struct in6_addr *addr, 394 + const struct in6_addr *mask, 395 + struct netlbl_audit *audit_info); 448 396 /* 449 397 * LSM security attribute operations 
450 398 */ ··· 472 404 u32 offset, 473 405 unsigned long bitmap, 474 406 gfp_t flags); 407 + 408 + /* Bitmap functions 409 + */ 410 + int netlbl_bitmap_walk(const unsigned char *bitmap, u32 bitmap_len, 411 + u32 offset, u8 state); 412 + void netlbl_bitmap_setbit(unsigned char *bitmap, u32 bit, u8 state); 475 413 476 414 /* 477 415 * LSM protocol operations (NetLabel LSM/kernel API) ··· 501 427 int netlbl_skbuff_getattr(const struct sk_buff *skb, 502 428 u16 family, 503 429 struct netlbl_lsm_secattr *secattr); 504 - void netlbl_skbuff_err(struct sk_buff *skb, int error, int gateway); 430 + void netlbl_skbuff_err(struct sk_buff *skb, u16 family, int error, int gateway); 505 431 506 432 /* 507 433 * LSM label mapping cache operations 508 434 */ 509 435 void netlbl_cache_invalidate(void); 510 - int netlbl_cache_add(const struct sk_buff *skb, 436 + int netlbl_cache_add(const struct sk_buff *skb, u16 family, 511 437 const struct netlbl_lsm_secattr *secattr); 512 438 513 439 /* ··· 565 491 const char *domain, 566 492 const struct in_addr *addr, 567 493 const struct in_addr *mask, 494 + struct netlbl_audit *audit_info) 495 + { 496 + return -ENOSYS; 497 + } 498 + static inline int netlbl_cfg_calipso_add(struct calipso_doi *doi_def, 499 + struct netlbl_audit *audit_info) 500 + { 501 + return -ENOSYS; 502 + } 503 + static inline void netlbl_cfg_calipso_del(u32 doi, 504 + struct netlbl_audit *audit_info) 505 + { 506 + return; 507 + } 508 + static inline int netlbl_cfg_calipso_map_add(u32 doi, 509 + const char *domain, 510 + const struct in6_addr *addr, 511 + const struct in6_addr *mask, 568 512 struct netlbl_audit *audit_info) 569 513 { 570 514 return -ENOSYS; ··· 678 586 { 679 587 return; 680 588 } 681 - static inline int netlbl_cache_add(const struct sk_buff *skb, 589 + static inline int netlbl_cache_add(const struct sk_buff *skb, u16 family, 682 590 const struct netlbl_lsm_secattr *secattr) 683 591 { 684 592 return 0; ··· 689 597 return NULL; 690 598 } 691 599 #endif /* 
CONFIG_NETLABEL */ 600 + 601 + const struct netlbl_calipso_ops * 602 + netlbl_calipso_ops_register(const struct netlbl_calipso_ops *ops); 692 603 693 604 #endif /* _NETLABEL_H */
+1
include/uapi/linux/Kbuild
··· 455 455 header-y += virtio_types.h 456 456 header-y += vm_sockets.h 457 457 header-y += vt.h 458 + header-y += vtpm_proxy.h 458 459 header-y += wait.h 459 460 header-y += wanrouter.h 460 461 header-y += watchdog.h
+2
include/uapi/linux/audit.h
··· 130 130 #define AUDIT_MAC_IPSEC_EVENT 1415 /* Audit an IPSec event */ 131 131 #define AUDIT_MAC_UNLBL_STCADD 1416 /* NetLabel: add a static label */ 132 132 #define AUDIT_MAC_UNLBL_STCDEL 1417 /* NetLabel: del a static label */ 133 + #define AUDIT_MAC_CALIPSO_ADD 1418 /* NetLabel: add CALIPSO DOI entry */ 134 + #define AUDIT_MAC_CALIPSO_DEL 1419 /* NetLabel: del CALIPSO DOI entry */ 133 135 134 136 #define AUDIT_FIRST_KERN_ANOM_MSG 1700 135 137 #define AUDIT_LAST_KERN_ANOM_MSG 1799
+1
include/uapi/linux/in6.h
··· 143 143 #define IPV6_TLV_PAD1 0 144 144 #define IPV6_TLV_PADN 1 145 145 #define IPV6_TLV_ROUTERALERT 5 146 + #define IPV6_TLV_CALIPSO 7 /* RFC 5570 */ 146 147 #define IPV6_TLV_JUMBO 194 147 148 #define IPV6_TLV_HAO 201 /* home address option */ 148 149
+36
include/uapi/linux/vtpm_proxy.h
··· 1 + /* 2 + * Definitions for the VTPM proxy driver 3 + * Copyright (c) 2015, 2016, IBM Corporation 4 + * 5 + * This program is free software; you can redistribute it and/or modify it 6 + * under the terms and conditions of the GNU General Public License, 7 + * version 2, as published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope it will be useful, but WITHOUT 10 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 + * more details. 13 + */ 14 + 15 + #ifndef _UAPI_LINUX_VTPM_PROXY_H 16 + #define _UAPI_LINUX_VTPM_PROXY_H 17 + 18 + #include <linux/types.h> 19 + #include <linux/ioctl.h> 20 + 21 + /* ioctls */ 22 + 23 + struct vtpm_proxy_new_dev { 24 + __u32 flags; /* input */ 25 + __u32 tpm_num; /* output */ 26 + __u32 fd; /* output */ 27 + __u32 major; /* output */ 28 + __u32 minor; /* output */ 29 + }; 30 + 31 + /* above flags */ 32 + #define VTPM_PROXY_FLAG_TPM2 1 /* emulator is TPM 2 */ 33 + 34 + #define VTPM_PROXY_IOC_NEW_DEV _IOWR(0xa1, 0x00, struct vtpm_proxy_new_dev) 35 + 36 + #endif /* _UAPI_LINUX_VTPM_PROXY_H */
+36 -10
kernel/capability.c
··· 361 361 return has_ns_capability_noaudit(t, &init_user_ns, cap); 362 362 } 363 363 364 + static bool ns_capable_common(struct user_namespace *ns, int cap, bool audit) 365 + { 366 + int capable; 367 + 368 + if (unlikely(!cap_valid(cap))) { 369 + pr_crit("capable() called with invalid cap=%u\n", cap); 370 + BUG(); 371 + } 372 + 373 + capable = audit ? security_capable(current_cred(), ns, cap) : 374 + security_capable_noaudit(current_cred(), ns, cap); 375 + if (capable == 0) { 376 + current->flags |= PF_SUPERPRIV; 377 + return true; 378 + } 379 + return false; 380 + } 381 + 364 382 /** 365 383 * ns_capable - Determine if the current task has a superior capability in effect 366 384 * @ns: The usernamespace we want the capability in ··· 392 374 */ 393 375 bool ns_capable(struct user_namespace *ns, int cap) 394 376 { 395 - if (unlikely(!cap_valid(cap))) { 396 - pr_crit("capable() called with invalid cap=%u\n", cap); 397 - BUG(); 398 - } 399 - 400 - if (security_capable(current_cred(), ns, cap) == 0) { 401 - current->flags |= PF_SUPERPRIV; 402 - return true; 403 - } 404 - return false; 377 + return ns_capable_common(ns, cap, true); 405 378 } 406 379 EXPORT_SYMBOL(ns_capable); 407 380 381 + /** 382 + * ns_capable_noaudit - Determine if the current task has a superior capability 383 + * (unaudited) in effect 384 + * @ns: The usernamespace we want the capability in 385 + * @cap: The capability to be tested for 386 + * 387 + * Return true if the current task has the given superior capability currently 388 + * available for use, false if not. 389 + * 390 + * This sets PF_SUPERPRIV on the task if the capability is available on the 391 + * assumption that it's about to be used. 392 + */ 393 + bool ns_capable_noaudit(struct user_namespace *ns, int cap) 394 + { 395 + return ns_capable_common(ns, cap, false); 396 + } 397 + EXPORT_SYMBOL(ns_capable_noaudit); 408 398 409 399 /** 410 400 * capable - Determine if the current task has a superior capability in effect
+56 -88
kernel/seccomp.c
··· 173 173 * 174 174 * Returns valid seccomp BPF response codes. 175 175 */ 176 - static u32 seccomp_run_filters(struct seccomp_data *sd) 176 + static u32 seccomp_run_filters(const struct seccomp_data *sd) 177 177 { 178 178 struct seccomp_data sd_local; 179 179 u32 ret = SECCOMP_RET_ALLOW; ··· 554 554 BUG(); 555 555 } 556 556 #else 557 - int __secure_computing(void) 558 - { 559 - u32 phase1_result = seccomp_phase1(NULL); 560 - 561 - if (likely(phase1_result == SECCOMP_PHASE1_OK)) 562 - return 0; 563 - else if (likely(phase1_result == SECCOMP_PHASE1_SKIP)) 564 - return -1; 565 - else 566 - return seccomp_phase2(phase1_result); 567 - } 568 557 569 558 #ifdef CONFIG_SECCOMP_FILTER 570 - static u32 __seccomp_phase1_filter(int this_syscall, struct seccomp_data *sd) 559 + static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd, 560 + const bool recheck_after_trace) 571 561 { 572 562 u32 filter_ret, action; 573 563 int data; ··· 589 599 goto skip; 590 600 591 601 case SECCOMP_RET_TRACE: 592 - return filter_ret; /* Save the rest for phase 2. */ 602 + /* We've been put in this state by the ptracer already. */ 603 + if (recheck_after_trace) 604 + return 0; 605 + 606 + /* ENOSYS these calls if there is no tracer attached. */ 607 + if (!ptrace_event_enabled(current, PTRACE_EVENT_SECCOMP)) { 608 + syscall_set_return_value(current, 609 + task_pt_regs(current), 610 + -ENOSYS, 0); 611 + goto skip; 612 + } 613 + 614 + /* Allow the BPF to provide the event message */ 615 + ptrace_event(PTRACE_EVENT_SECCOMP, data); 616 + /* 617 + * The delivery of a fatal signal during event 618 + * notification may silently skip tracer notification. 619 + * Terminating the task now avoids executing a system 620 + * call that may not be intended. 621 + */ 622 + if (fatal_signal_pending(current)) 623 + do_exit(SIGSYS); 624 + /* Check if the tracer forced the syscall to be skipped. 
*/ 625 + this_syscall = syscall_get_nr(current, task_pt_regs(current)); 626 + if (this_syscall < 0) 627 + goto skip; 628 + 629 + /* 630 + * Recheck the syscall, since it may have changed. This 631 + * intentionally uses a NULL struct seccomp_data to force 632 + * a reload of all registers. This does not goto skip since 633 + * a skip would have already been reported. 634 + */ 635 + if (__seccomp_filter(this_syscall, NULL, true)) 636 + return -1; 637 + 638 + return 0; 593 639 594 640 case SECCOMP_RET_ALLOW: 595 - return SECCOMP_PHASE1_OK; 641 + return 0; 596 642 597 643 case SECCOMP_RET_KILL: 598 644 default: ··· 640 614 641 615 skip: 642 616 audit_seccomp(this_syscall, 0, action); 643 - return SECCOMP_PHASE1_SKIP; 617 + return -1; 618 + } 619 + #else 620 + static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd, 621 + const bool recheck_after_trace) 622 + { 623 + BUG(); 644 624 } 645 625 #endif 646 626 647 - /** 648 - * seccomp_phase1() - run fast path seccomp checks on the current syscall 649 - * @arg sd: The seccomp_data or NULL 650 - * 651 - * This only reads pt_regs via the syscall_xyz helpers. The only change 652 - * it will make to pt_regs is via syscall_set_return_value, and it will 653 - * only do that if it returns SECCOMP_PHASE1_SKIP. 654 - * 655 - * If sd is provided, it will not read pt_regs at all. 656 - * 657 - * It may also call do_exit or force a signal; these actions must be 658 - * safe. 659 - * 660 - * If it returns SECCOMP_PHASE1_OK, the syscall passes checks and should 661 - * be processed normally. 662 - * 663 - * If it returns SECCOMP_PHASE1_SKIP, then the syscall should not be 664 - * invoked. In this case, seccomp_phase1 will have set the return value 665 - * using syscall_set_return_value. 666 - * 667 - * If it returns anything else, then the return value should be passed 668 - * to seccomp_phase2 from a context in which ptrace hooks are safe. 
669 - */ 670 - u32 seccomp_phase1(struct seccomp_data *sd) 627 + int __secure_computing(const struct seccomp_data *sd) 671 628 { 672 629 int mode = current->seccomp.mode; 673 - int this_syscall = sd ? sd->nr : 674 - syscall_get_nr(current, task_pt_regs(current)); 630 + int this_syscall; 675 631 676 632 if (config_enabled(CONFIG_CHECKPOINT_RESTORE) && 677 633 unlikely(current->ptrace & PT_SUSPEND_SECCOMP)) 678 - return SECCOMP_PHASE1_OK; 634 + return 0; 635 + 636 + this_syscall = sd ? sd->nr : 637 + syscall_get_nr(current, task_pt_regs(current)); 679 638 680 639 switch (mode) { 681 640 case SECCOMP_MODE_STRICT: 682 641 __secure_computing_strict(this_syscall); /* may call do_exit */ 683 - return SECCOMP_PHASE1_OK; 684 - #ifdef CONFIG_SECCOMP_FILTER 642 + return 0; 685 643 case SECCOMP_MODE_FILTER: 686 - return __seccomp_phase1_filter(this_syscall, sd); 687 - #endif 644 + return __seccomp_filter(this_syscall, sd, false); 688 645 default: 689 646 BUG(); 690 647 } 691 - } 692 - 693 - /** 694 - * seccomp_phase2() - finish slow path seccomp work for the current syscall 695 - * @phase1_result: The return value from seccomp_phase1() 696 - * 697 - * This must be called from a context in which ptrace hooks can be used. 698 - * 699 - * Returns 0 if the syscall should be processed or -1 to skip the syscall. 700 - */ 701 - int seccomp_phase2(u32 phase1_result) 702 - { 703 - struct pt_regs *regs = task_pt_regs(current); 704 - u32 action = phase1_result & SECCOMP_RET_ACTION; 705 - int data = phase1_result & SECCOMP_RET_DATA; 706 - 707 - BUG_ON(action != SECCOMP_RET_TRACE); 708 - 709 - audit_seccomp(syscall_get_nr(current, regs), 0, action); 710 - 711 - /* Skip these calls if there is no tracer. 
*/ 712 - if (!ptrace_event_enabled(current, PTRACE_EVENT_SECCOMP)) { 713 - syscall_set_return_value(current, regs, 714 - -ENOSYS, 0); 715 - return -1; 716 - } 717 - 718 - /* Allow the BPF to provide the event message */ 719 - ptrace_event(PTRACE_EVENT_SECCOMP, data); 720 - /* 721 - * The delivery of a fatal signal during event 722 - * notification may silently skip tracer notification. 723 - * Terminating the task now avoids executing a system 724 - * call that may not be intended. 725 - */ 726 - if (fatal_signal_pending(current)) 727 - do_exit(SIGSYS); 728 - if (syscall_get_nr(current, regs) < 0) 729 - return -1; /* Explicit request to skip. */ 730 - 731 - return 0; 732 648 } 733 649 #endif /* CONFIG_HAVE_ARCH_SECCOMP_FILTER */ 734 650
+9 -3
net/dccp/ipv6.c
··· 216 216 skb = dccp_make_response(sk, dst, req); 217 217 if (skb != NULL) { 218 218 struct dccp_hdr *dh = dccp_hdr(skb); 219 + struct ipv6_txoptions *opt; 219 220 220 221 dh->dccph_checksum = dccp_v6_csum_finish(skb, 221 222 &ireq->ir_v6_loc_addr, 222 223 &ireq->ir_v6_rmt_addr); 223 224 fl6.daddr = ireq->ir_v6_rmt_addr; 224 225 rcu_read_lock(); 225 - err = ip6_xmit(sk, skb, &fl6, rcu_dereference(np->opt), 226 - np->tclass); 226 + opt = ireq->ipv6_opt; 227 + if (!opt) 228 + opt = rcu_dereference(np->opt); 229 + err = ip6_xmit(sk, skb, &fl6, opt, np->tclass); 227 230 rcu_read_unlock(); 228 231 err = net_xmit_eval(err); 229 232 } ··· 239 236 static void dccp_v6_reqsk_destructor(struct request_sock *req) 240 237 { 241 238 dccp_feat_list_purge(&dccp_rsk(req)->dreq_featneg); 239 + kfree(inet_rsk(req)->ipv6_opt); 242 240 kfree_skb(inet_rsk(req)->pktopts); 243 241 } 244 242 ··· 498 494 * Yes, keeping reference count would be much more clever, but we make 499 495 * one more one thing there: reattach optmem to newsk. 500 496 */ 501 - opt = rcu_dereference(np->opt); 497 + opt = ireq->ipv6_opt; 498 + if (!opt) 499 + opt = rcu_dereference(np->opt); 502 500 if (opt) { 503 501 opt = ipv6_dup_options(newsk, opt); 504 502 RCU_INIT_POINTER(newnp->opt, opt);
+9 -79
net/ipv4/cipso_ipv4.c
··· 135 135 */ 136 136 137 137 /** 138 - * cipso_v4_bitmap_walk - Walk a bitmap looking for a bit 139 - * @bitmap: the bitmap 140 - * @bitmap_len: length in bits 141 - * @offset: starting offset 142 - * @state: if non-zero, look for a set (1) bit else look for a cleared (0) bit 143 - * 144 - * Description: 145 - * Starting at @offset, walk the bitmap from left to right until either the 146 - * desired bit is found or we reach the end. Return the bit offset, -1 if 147 - * not found, or -2 if error. 148 - */ 149 - static int cipso_v4_bitmap_walk(const unsigned char *bitmap, 150 - u32 bitmap_len, 151 - u32 offset, 152 - u8 state) 153 - { 154 - u32 bit_spot; 155 - u32 byte_offset; 156 - unsigned char bitmask; 157 - unsigned char byte; 158 - 159 - /* gcc always rounds to zero when doing integer division */ 160 - byte_offset = offset / 8; 161 - byte = bitmap[byte_offset]; 162 - bit_spot = offset; 163 - bitmask = 0x80 >> (offset % 8); 164 - 165 - while (bit_spot < bitmap_len) { 166 - if ((state && (byte & bitmask) == bitmask) || 167 - (state == 0 && (byte & bitmask) == 0)) 168 - return bit_spot; 169 - 170 - bit_spot++; 171 - bitmask >>= 1; 172 - if (bitmask == 0) { 173 - byte = bitmap[++byte_offset]; 174 - bitmask = 0x80; 175 - } 176 - } 177 - 178 - return -1; 179 - } 180 - 181 - /** 182 - * cipso_v4_bitmap_setbit - Sets a single bit in a bitmap 183 - * @bitmap: the bitmap 184 - * @bit: the bit 185 - * @state: if non-zero, set the bit (1) else clear the bit (0) 186 - * 187 - * Description: 188 - * Set a single bit in the bitmask. Returns zero on success, negative values 189 - * on error. 
190 - */ 191 - static void cipso_v4_bitmap_setbit(unsigned char *bitmap, 192 - u32 bit, 193 - u8 state) 194 - { 195 - u32 byte_spot; 196 - u8 bitmask; 197 - 198 - /* gcc always rounds to zero when doing integer division */ 199 - byte_spot = bit / 8; 200 - bitmask = 0x80 >> (bit % 8); 201 - if (state) 202 - bitmap[byte_spot] |= bitmask; 203 - else 204 - bitmap[byte_spot] &= ~bitmask; 205 - } 206 - 207 - /** 208 138 * cipso_v4_cache_entry_free - Frees a cache entry 209 139 * @entry: the entry to free 210 140 * ··· 770 840 cipso_cat_size = doi_def->map.std->cat.cipso_size; 771 841 cipso_array = doi_def->map.std->cat.cipso; 772 842 for (;;) { 773 - cat = cipso_v4_bitmap_walk(bitmap, 774 - bitmap_len_bits, 775 - cat + 1, 776 - 1); 843 + cat = netlbl_bitmap_walk(bitmap, 844 + bitmap_len_bits, 845 + cat + 1, 846 + 1); 777 847 if (cat < 0) 778 848 break; 779 849 if (cat >= cipso_cat_size || ··· 839 909 } 840 910 if (net_spot >= net_clen_bits) 841 911 return -ENOSPC; 842 - cipso_v4_bitmap_setbit(net_cat, net_spot, 1); 912 + netlbl_bitmap_setbit(net_cat, net_spot, 1); 843 913 844 914 if (net_spot > net_spot_max) 845 915 net_spot_max = net_spot; ··· 881 951 } 882 952 883 953 for (;;) { 884 - net_spot = cipso_v4_bitmap_walk(net_cat, 885 - net_clen_bits, 886 - net_spot + 1, 887 - 1); 954 + net_spot = netlbl_bitmap_walk(net_cat, 955 + net_clen_bits, 956 + net_spot + 1, 957 + 1); 888 958 if (net_spot < 0) { 889 959 if (net_spot == -2) 890 960 return -EFAULT;
+3
net/ipv4/tcp_input.c
··· 6147 6147 6148 6148 kmemcheck_annotate_bitfield(ireq, flags); 6149 6149 ireq->opt = NULL; 6150 + #if IS_ENABLED(CONFIG_IPV6) 6151 + ireq->pktopts = NULL; 6152 + #endif 6150 6153 atomic64_set(&ireq->ir_cookie, 0); 6151 6154 ireq->ireq_state = TCP_NEW_SYN_RECV; 6152 6155 write_pnet(&ireq->ireq_net, sock_net(sk_listener));
+1
net/ipv6/Makefile
··· 22 22 ipv6-$(CONFIG_IPV6_MULTIPLE_TABLES) += fib6_rules.o 23 23 ipv6-$(CONFIG_PROC_FS) += proc.o 24 24 ipv6-$(CONFIG_SYN_COOKIES) += syncookies.o 25 + ipv6-$(CONFIG_NETLABEL) += calipso.o 25 26 26 27 ipv6-objs += $(ipv6-y) 27 28
+8 -1
net/ipv6/af_inet6.c
··· 60 60 #ifdef CONFIG_IPV6_TUNNEL 61 61 #include <net/ip6_tunnel.h> 62 62 #endif 63 + #include <net/calipso.h> 63 64 64 65 #include <asm/uaccess.h> 65 66 #include <linux/mroute6.h> ··· 984 983 if (err) 985 984 goto pingv6_fail; 986 985 986 + err = calipso_init(); 987 + if (err) 988 + goto calipso_fail; 989 + 987 990 #ifdef CONFIG_SYSCTL 988 991 err = ipv6_sysctl_register(); 989 992 if (err) ··· 998 993 999 994 #ifdef CONFIG_SYSCTL 1000 995 sysctl_fail: 1001 - pingv6_exit(); 996 + calipso_exit(); 1002 997 #endif 998 + calipso_fail: 999 + pingv6_exit(); 1003 1000 pingv6_fail: 1004 1001 ipv6_packet_cleanup(); 1005 1002 ipv6_packet_fail:
+1473
net/ipv6/calipso.c
··· 1 + /* 2 + * CALIPSO - Common Architecture Label IPv6 Security Option 3 + * 4 + * This is an implementation of the CALIPSO protocol as specified in 5 + * RFC 5570. 6 + * 7 + * Authors: Paul Moore <paul.moore@hp.com> 8 + * Huw Davies <huw@codeweavers.com> 9 + * 10 + */ 11 + 12 + /* (c) Copyright Hewlett-Packard Development Company, L.P., 2006, 2008 13 + * (c) Copyright Huw Davies <huw@codeweavers.com>, 2015 14 + * 15 + * This program is free software; you can redistribute it and/or modify 16 + * it under the terms of the GNU General Public License as published by 17 + * the Free Software Foundation; either version 2 of the License, or 18 + * (at your option) any later version. 19 + * 20 + * This program is distributed in the hope that it will be useful, 21 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 22 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See 23 + * the GNU General Public License for more details. 24 + * 25 + * You should have received a copy of the GNU General Public License 26 + * along with this program; if not, see <http://www.gnu.org/licenses/>. 27 + * 28 + */ 29 + 30 + #include <linux/init.h> 31 + #include <linux/types.h> 32 + #include <linux/rcupdate.h> 33 + #include <linux/list.h> 34 + #include <linux/spinlock.h> 35 + #include <linux/string.h> 36 + #include <linux/jhash.h> 37 + #include <linux/audit.h> 38 + #include <linux/slab.h> 39 + #include <net/ip.h> 40 + #include <net/icmp.h> 41 + #include <net/tcp.h> 42 + #include <net/netlabel.h> 43 + #include <net/calipso.h> 44 + #include <linux/atomic.h> 45 + #include <linux/bug.h> 46 + #include <asm/unaligned.h> 47 + #include <linux/crc-ccitt.h> 48 + 49 + /* Maximium size of the calipso option including 50 + * the two-byte TLV header. 51 + */ 52 + #define CALIPSO_OPT_LEN_MAX (2 + 252) 53 + 54 + /* Size of the minimum calipso option including 55 + * the two-byte TLV header. 
56 + */ 57 + #define CALIPSO_HDR_LEN (2 + 8) 58 + 59 + /* Maximum size of the calipso option including 60 + * the two-byte TLV header and up to 3 bytes of 61 + * leading pad and 7 bytes of trailing pad. 62 + */ 63 + #define CALIPSO_OPT_LEN_MAX_WITH_PAD (3 + CALIPSO_OPT_LEN_MAX + 7) 64 + 65 + /* Maximum size of u32 aligned buffer required to hold calipso 66 + * option. Max of 3 initial pad bytes starting from buffer + 3. 67 + * i.e. the worst case is when the previous tlv finishes on 4n + 3. 68 + */ 69 + #define CALIPSO_MAX_BUFFER (6 + CALIPSO_OPT_LEN_MAX) 70 + 71 + /* List of available DOI definitions */ 72 + static DEFINE_SPINLOCK(calipso_doi_list_lock); 73 + static LIST_HEAD(calipso_doi_list); 74 + 75 + /* Label mapping cache */ 76 + int calipso_cache_enabled = 1; 77 + int calipso_cache_bucketsize = 10; 78 + #define CALIPSO_CACHE_BUCKETBITS 7 79 + #define CALIPSO_CACHE_BUCKETS BIT(CALIPSO_CACHE_BUCKETBITS) 80 + #define CALIPSO_CACHE_REORDERLIMIT 10 81 + struct calipso_map_cache_bkt { 82 + spinlock_t lock; 83 + u32 size; 84 + struct list_head list; 85 + }; 86 + 87 + struct calipso_map_cache_entry { 88 + u32 hash; 89 + unsigned char *key; 90 + size_t key_len; 91 + 92 + struct netlbl_lsm_cache *lsm_data; 93 + 94 + u32 activity; 95 + struct list_head list; 96 + }; 97 + 98 + static struct calipso_map_cache_bkt *calipso_cache; 99 + 100 + /* Label Mapping Cache Functions 101 + */ 102 + 103 + /** 104 + * calipso_cache_entry_free - Frees a cache entry 105 + * @entry: the entry to free 106 + * 107 + * Description: 108 + * This function frees the memory associated with a cache entry including the 109 + * LSM cache data if there are no longer any users, i.e. reference count == 0. 
110 + * 111 + */ 112 + static void calipso_cache_entry_free(struct calipso_map_cache_entry *entry) 113 + { 114 + if (entry->lsm_data) 115 + netlbl_secattr_cache_free(entry->lsm_data); 116 + kfree(entry->key); 117 + kfree(entry); 118 + } 119 + 120 + /** 121 + * calipso_map_cache_hash - Hashing function for the CALIPSO cache 122 + * @key: the hash key 123 + * @key_len: the length of the key in bytes 124 + * 125 + * Description: 126 + * The CALIPSO tag hashing function. Returns a 32-bit hash value. 127 + * 128 + */ 129 + static u32 calipso_map_cache_hash(const unsigned char *key, u32 key_len) 130 + { 131 + return jhash(key, key_len, 0); 132 + } 133 + 134 + /** 135 + * calipso_cache_init - Initialize the CALIPSO cache 136 + * 137 + * Description: 138 + * Initializes the CALIPSO label mapping cache; this function should be called 139 + * before any of the other functions defined in this file. Returns zero on 140 + * success, negative values on error. 141 + * 142 + */ 143 + static int __init calipso_cache_init(void) 144 + { 145 + u32 iter; 146 + 147 + calipso_cache = kcalloc(CALIPSO_CACHE_BUCKETS, 148 + sizeof(struct calipso_map_cache_bkt), 149 + GFP_KERNEL); 150 + if (!calipso_cache) 151 + return -ENOMEM; 152 + 153 + for (iter = 0; iter < CALIPSO_CACHE_BUCKETS; iter++) { 154 + spin_lock_init(&calipso_cache[iter].lock); 155 + calipso_cache[iter].size = 0; 156 + INIT_LIST_HEAD(&calipso_cache[iter].list); 157 + } 158 + 159 + return 0; 160 + } 161 + 162 + /** 163 + * calipso_cache_invalidate - Invalidates the current CALIPSO cache 164 + * 165 + * Description: 166 + * Invalidates and frees any entries in the CALIPSO cache. 167 + * 
168 + * 169 + */ 170 + static void calipso_cache_invalidate(void) 171 + { 172 + struct calipso_map_cache_entry *entry, *tmp_entry; 173 + u32 iter; 174 + 175 + for (iter = 0; iter < CALIPSO_CACHE_BUCKETS; iter++) { 176 + spin_lock_bh(&calipso_cache[iter].lock); 177 + list_for_each_entry_safe(entry, 178 + tmp_entry, 179 + &calipso_cache[iter].list, list) { 180 + list_del(&entry->list); 181 + calipso_cache_entry_free(entry); 182 + } 183 + calipso_cache[iter].size = 0; 184 + spin_unlock_bh(&calipso_cache[iter].lock); 185 + } 186 + } 187 + 188 + /** 189 + * calipso_cache_check - Check the CALIPSO cache for a label mapping 190 + * @key: the buffer to check 191 + * @key_len: buffer length in bytes 192 + * @secattr: the security attribute struct to use 193 + * 194 + * Description: 195 + * This function checks the cache to see if a label mapping already exists for 196 + * the given key. If there is a match then the cache is adjusted and the 197 + * @secattr struct is populated with the correct LSM security attributes. The 198 + * cache is adjusted in the following manner if the entry is not already the 199 + * first in the cache bucket: 200 + * 201 + * 1. The cache entry's activity counter is incremented 202 + * 2. The previous (higher ranking) entry's activity counter is decremented 203 + * 3. If the difference between the two activity counters is greater than 204 + * CALIPSO_CACHE_REORDERLIMIT the two entries are swapped 205 + * 206 + * Returns zero on success, -ENOENT for a cache miss, and other negative values 207 + * on error. 
208 + * 209 + */ 210 + static int calipso_cache_check(const unsigned char *key, 211 + u32 key_len, 212 + struct netlbl_lsm_secattr *secattr) 213 + { 214 + u32 bkt; 215 + struct calipso_map_cache_entry *entry; 216 + struct calipso_map_cache_entry *prev_entry = NULL; 217 + u32 hash; 218 + 219 + if (!calipso_cache_enabled) 220 + return -ENOENT; 221 + 222 + hash = calipso_map_cache_hash(key, key_len); 223 + bkt = hash & (CALIPSO_CACHE_BUCKETS - 1); 224 + spin_lock_bh(&calipso_cache[bkt].lock); 225 + list_for_each_entry(entry, &calipso_cache[bkt].list, list) { 226 + if (entry->hash == hash && 227 + entry->key_len == key_len && 228 + memcmp(entry->key, key, key_len) == 0) { 229 + entry->activity += 1; 230 + atomic_inc(&entry->lsm_data->refcount); 231 + secattr->cache = entry->lsm_data; 232 + secattr->flags |= NETLBL_SECATTR_CACHE; 233 + secattr->type = NETLBL_NLTYPE_CALIPSO; 234 + if (!prev_entry) { 235 + spin_unlock_bh(&calipso_cache[bkt].lock); 236 + return 0; 237 + } 238 + 239 + if (prev_entry->activity > 0) 240 + prev_entry->activity -= 1; 241 + if (entry->activity > prev_entry->activity && 242 + entry->activity - prev_entry->activity > 243 + CALIPSO_CACHE_REORDERLIMIT) { 244 + __list_del(entry->list.prev, entry->list.next); 245 + __list_add(&entry->list, 246 + prev_entry->list.prev, 247 + &prev_entry->list); 248 + } 249 + 250 + spin_unlock_bh(&calipso_cache[bkt].lock); 251 + return 0; 252 + } 253 + prev_entry = entry; 254 + } 255 + spin_unlock_bh(&calipso_cache[bkt].lock); 256 + 257 + return -ENOENT; 258 + } 259 + 260 + /** 261 + * calipso_cache_add - Add an entry to the CALIPSO cache 262 + * @calipso_ptr: the CALIPSO option 263 + * @secattr: the packet's security attributes 264 + * 265 + * Description: 266 + * Add a new entry into the CALIPSO label mapping cache. Add the new entry to 267 + * the head of the cache bucket's list; if the cache bucket is out of room, remove 268 + * the last entry in the list first. 
It is important to note that there is 269 + * currently no checking for duplicate keys. Returns zero on success, 270 + * negative values on failure. The key stored starts at calipso_ptr + 2, 271 + * i.e. the type and length bytes are not stored, this corresponds to 272 + * calipso_ptr[1] bytes of data. 273 + * 274 + */ 275 + static int calipso_cache_add(const unsigned char *calipso_ptr, 276 + const struct netlbl_lsm_secattr *secattr) 277 + { 278 + int ret_val = -EPERM; 279 + u32 bkt; 280 + struct calipso_map_cache_entry *entry = NULL; 281 + struct calipso_map_cache_entry *old_entry = NULL; 282 + u32 calipso_ptr_len; 283 + 284 + if (!calipso_cache_enabled || calipso_cache_bucketsize <= 0) 285 + return 0; 286 + 287 + calipso_ptr_len = calipso_ptr[1]; 288 + 289 + entry = kzalloc(sizeof(*entry), GFP_ATOMIC); 290 + if (!entry) 291 + return -ENOMEM; 292 + entry->key = kmemdup(calipso_ptr + 2, calipso_ptr_len, GFP_ATOMIC); 293 + if (!entry->key) { 294 + ret_val = -ENOMEM; 295 + goto cache_add_failure; 296 + } 297 + entry->key_len = calipso_ptr_len; 298 + entry->hash = calipso_map_cache_hash(calipso_ptr, calipso_ptr_len); 299 + atomic_inc(&secattr->cache->refcount); 300 + entry->lsm_data = secattr->cache; 301 + 302 + bkt = entry->hash & (CALIPSO_CACHE_BUCKETS - 1); 303 + spin_lock_bh(&calipso_cache[bkt].lock); 304 + if (calipso_cache[bkt].size < calipso_cache_bucketsize) { 305 + list_add(&entry->list, &calipso_cache[bkt].list); 306 + calipso_cache[bkt].size += 1; 307 + } else { 308 + old_entry = list_entry(calipso_cache[bkt].list.prev, 309 + struct calipso_map_cache_entry, list); 310 + list_del(&old_entry->list); 311 + list_add(&entry->list, &calipso_cache[bkt].list); 312 + calipso_cache_entry_free(old_entry); 313 + } 314 + spin_unlock_bh(&calipso_cache[bkt].lock); 315 + 316 + return 0; 317 + 318 + cache_add_failure: 319 + if (entry) 320 + calipso_cache_entry_free(entry); 321 + return ret_val; 322 + } 323 + 324 + /* DOI List Functions 325 + */ 326 + 327 + /** 328 + * 
calipso_doi_search - Searches for a DOI definition 329 + * @doi: the DOI to search for 330 + * 331 + * Description: 332 + * Search the DOI definition list for a DOI definition with a DOI value that 333 + * matches @doi. The caller is responsible for calling rcu_read_[un]lock(). 334 + * Returns a pointer to the DOI definition on success and NULL on failure. 335 + */ 336 + static struct calipso_doi *calipso_doi_search(u32 doi) 337 + { 338 + struct calipso_doi *iter; 339 + 340 + list_for_each_entry_rcu(iter, &calipso_doi_list, list) 341 + if (iter->doi == doi && atomic_read(&iter->refcount)) 342 + return iter; 343 + return NULL; 344 + } 345 + 346 + /** 347 + * calipso_doi_add - Add a new DOI to the CALIPSO protocol engine 348 + * @doi_def: the DOI structure 349 + * @audit_info: NetLabel audit information 350 + * 351 + * Description: 352 + * The caller defines a new DOI for use by the CALIPSO engine and calls this 353 + * function to add it to the list of acceptable domains. The caller must 354 + * ensure that the mapping table specified in @doi_def->map meets all of the 355 + * requirements of the mapping type (see calipso.h for details). Returns 356 + * zero on success and non-zero on failure. 
357 + * 358 + */ 359 + static int calipso_doi_add(struct calipso_doi *doi_def, 360 + struct netlbl_audit *audit_info) 361 + { 362 + int ret_val = -EINVAL; 363 + u32 doi; 364 + u32 doi_type; 365 + struct audit_buffer *audit_buf; 366 + 367 + doi = doi_def->doi; 368 + doi_type = doi_def->type; 369 + 370 + if (doi_def->doi == CALIPSO_DOI_UNKNOWN) 371 + goto doi_add_return; 372 + 373 + atomic_set(&doi_def->refcount, 1); 374 + 375 + spin_lock(&calipso_doi_list_lock); 376 + if (calipso_doi_search(doi_def->doi)) { 377 + spin_unlock(&calipso_doi_list_lock); 378 + ret_val = -EEXIST; 379 + goto doi_add_return; 380 + } 381 + list_add_tail_rcu(&doi_def->list, &calipso_doi_list); 382 + spin_unlock(&calipso_doi_list_lock); 383 + ret_val = 0; 384 + 385 + doi_add_return: 386 + audit_buf = netlbl_audit_start(AUDIT_MAC_CALIPSO_ADD, audit_info); 387 + if (audit_buf) { 388 + const char *type_str; 389 + 390 + switch (doi_type) { 391 + case CALIPSO_MAP_PASS: 392 + type_str = "pass"; 393 + break; 394 + default: 395 + type_str = "(unknown)"; 396 + } 397 + audit_log_format(audit_buf, 398 + " calipso_doi=%u calipso_type=%s res=%u", 399 + doi, type_str, ret_val == 0 ? 1 : 0); 400 + audit_log_end(audit_buf); 401 + } 402 + 403 + return ret_val; 404 + } 405 + 406 + /** 407 + * calipso_doi_free - Frees a DOI definition 408 + * @doi_def: the DOI definition 409 + * 410 + * Description: 411 + * This function frees all of the memory associated with a DOI definition. 412 + * 413 + */ 414 + static void calipso_doi_free(struct calipso_doi *doi_def) 415 + { 416 + kfree(doi_def); 417 + } 418 + 419 + /** 420 + * calipso_doi_free_rcu - Frees a DOI definition via the RCU pointer 421 + * @entry: the entry's RCU field 422 + * 423 + * Description: 424 + * This function is designed to be used as a callback to the call_rcu() 425 + * function so that the memory allocated to the DOI definition can be released 426 + * safely. 
427 + * 428 + */ 429 + static void calipso_doi_free_rcu(struct rcu_head *entry) 430 + { 431 + struct calipso_doi *doi_def; 432 + 433 + doi_def = container_of(entry, struct calipso_doi, rcu); 434 + calipso_doi_free(doi_def); 435 + } 436 + 437 + /** 438 + * calipso_doi_remove - Remove an existing DOI from the CALIPSO protocol engine 439 + * @doi: the DOI value 440 + * @audit_info: NetLabel audit information 441 + * 442 + * Description: 443 + * Removes a DOI definition from the CALIPSO engine. The NetLabel routines will 444 + * be called to release their own LSM domain mappings as well as our own 445 + * domain list. Returns zero on success and negative values on failure. 446 + * 447 + */ 448 + static int calipso_doi_remove(u32 doi, struct netlbl_audit *audit_info) 449 + { 450 + int ret_val; 451 + struct calipso_doi *doi_def; 452 + struct audit_buffer *audit_buf; 453 + 454 + spin_lock(&calipso_doi_list_lock); 455 + doi_def = calipso_doi_search(doi); 456 + if (!doi_def) { 457 + spin_unlock(&calipso_doi_list_lock); 458 + ret_val = -ENOENT; 459 + goto doi_remove_return; 460 + } 461 + if (!atomic_dec_and_test(&doi_def->refcount)) { 462 + spin_unlock(&calipso_doi_list_lock); 463 + ret_val = -EBUSY; 464 + goto doi_remove_return; 465 + } 466 + list_del_rcu(&doi_def->list); 467 + spin_unlock(&calipso_doi_list_lock); 468 + 469 + call_rcu(&doi_def->rcu, calipso_doi_free_rcu); 470 + ret_val = 0; 471 + 472 + doi_remove_return: 473 + audit_buf = netlbl_audit_start(AUDIT_MAC_CALIPSO_DEL, audit_info); 474 + if (audit_buf) { 475 + audit_log_format(audit_buf, 476 + " calipso_doi=%u res=%u", 477 + doi, ret_val == 0 ? 1 : 0); 478 + audit_log_end(audit_buf); 479 + } 480 + 481 + return ret_val; 482 + } 483 + 484 + /** 485 + * calipso_doi_getdef - Returns a reference to a valid DOI definition 486 + * @doi: the DOI value 487 + * 488 + * Description: 489 + * Searches for a valid DOI definition and if one is found it is returned to 490 + * the caller. 
Otherwise NULL is returned. The caller must ensure that 491 + * calipso_doi_putdef() is called when the caller is done. 492 + * 493 + */ 494 + static struct calipso_doi *calipso_doi_getdef(u32 doi) 495 + { 496 + struct calipso_doi *doi_def; 497 + 498 + rcu_read_lock(); 499 + doi_def = calipso_doi_search(doi); 500 + if (!doi_def) 501 + goto doi_getdef_return; 502 + if (!atomic_inc_not_zero(&doi_def->refcount)) 503 + doi_def = NULL; 504 + 505 + doi_getdef_return: 506 + rcu_read_unlock(); 507 + return doi_def; 508 + } 509 + 510 + /** 511 + * calipso_doi_putdef - Releases a reference for the given DOI definition 512 + * @doi_def: the DOI definition 513 + * 514 + * Description: 515 + * Releases a DOI definition reference obtained from calipso_doi_getdef(). 516 + * 517 + */ 518 + static void calipso_doi_putdef(struct calipso_doi *doi_def) 519 + { 520 + if (!doi_def) 521 + return; 522 + 523 + if (!atomic_dec_and_test(&doi_def->refcount)) 524 + return; 525 + spin_lock(&calipso_doi_list_lock); 526 + list_del_rcu(&doi_def->list); 527 + spin_unlock(&calipso_doi_list_lock); 528 + 529 + call_rcu(&doi_def->rcu, calipso_doi_free_rcu); 530 + } 531 + 532 + /** 533 + * calipso_doi_walk - Iterate through the DOI definitions 534 + * @skip_cnt: skip past this number of DOI definitions, updated 535 + * @callback: callback for each DOI definition 536 + * @cb_arg: argument for the callback function 537 + * 538 + * Description: 539 + * Iterate over the DOI definition list, skipping the first @skip_cnt entries. 540 + * For each entry call @callback, if @callback returns a negative value stop 541 + * 'walking' through the list and return. Updates the value in @skip_cnt upon 542 + * return. Returns zero on success, negative values on failure. 
543 + * 544 + */ 545 + static int calipso_doi_walk(u32 *skip_cnt, 546 + int (*callback)(struct calipso_doi *doi_def, 547 + void *arg), 548 + void *cb_arg) 549 + { 550 + int ret_val = -ENOENT; 551 + u32 doi_cnt = 0; 552 + struct calipso_doi *iter_doi; 553 + 554 + rcu_read_lock(); 555 + list_for_each_entry_rcu(iter_doi, &calipso_doi_list, list) 556 + if (atomic_read(&iter_doi->refcount) > 0) { 557 + if (doi_cnt++ < *skip_cnt) 558 + continue; 559 + ret_val = callback(iter_doi, cb_arg); 560 + if (ret_val < 0) { 561 + doi_cnt--; 562 + goto doi_walk_return; 563 + } 564 + } 565 + 566 + doi_walk_return: 567 + rcu_read_unlock(); 568 + *skip_cnt = doi_cnt; 569 + return ret_val; 570 + } 571 + 572 + /** 573 + * calipso_validate - Validate a CALIPSO option 574 + * @skb: the packet 575 + * @option: the start of the option 576 + * 577 + * Description: 578 + * This routine is called to validate a CALIPSO option. 579 + * If the option is valid then %true is returned, otherwise 580 + * %false is returned. 581 + * 582 + * The caller should have already checked that the length of the 583 + * option (including the TLV header) is >= 10 and that the catmap 584 + * length is consistent with the option length. 585 + * 586 + * We leave checks on the level and categories to the socket layer. 587 + */ 588 + bool calipso_validate(const struct sk_buff *skb, const unsigned char *option) 589 + { 590 + struct calipso_doi *doi_def; 591 + bool ret_val; 592 + u16 crc, len = option[1] + 2; 593 + static const u8 zero[2]; 594 + 595 + /* The original CRC runs over the option including the TLV header 596 + * with the CRC-16 field (at offset 8) zeroed out. 
*/ 597 + crc = crc_ccitt(0xffff, option, 8); 598 + crc = crc_ccitt(crc, zero, sizeof(zero)); 599 + if (len > 10) 600 + crc = crc_ccitt(crc, option + 10, len - 10); 601 + crc = ~crc; 602 + if (option[8] != (crc & 0xff) || option[9] != ((crc >> 8) & 0xff)) 603 + return false; 604 + 605 + rcu_read_lock(); 606 + doi_def = calipso_doi_search(get_unaligned_be32(option + 2)); 607 + ret_val = !!doi_def; 608 + rcu_read_unlock(); 609 + 610 + return ret_val; 611 + } 612 + 613 + /** 614 + * calipso_map_cat_hton - Perform a category mapping from host to network 615 + * @doi_def: the DOI definition 616 + * @secattr: the security attributes 617 + * @net_cat: the zero'd out category bitmap in network/CALIPSO format 618 + * @net_cat_len: the length of the CALIPSO bitmap in bytes 619 + * 620 + * Description: 621 + * Perform a label mapping to translate a local MLS category bitmap to the 622 + * correct CALIPSO bitmap using the given DOI definition. Returns the minimum 623 + * size in bytes of the network bitmap on success, negative values otherwise. 
624 + * 625 + */ 626 + static int calipso_map_cat_hton(const struct calipso_doi *doi_def, 627 + const struct netlbl_lsm_secattr *secattr, 628 + unsigned char *net_cat, 629 + u32 net_cat_len) 630 + { 631 + int spot = -1; 632 + u32 net_spot_max = 0; 633 + u32 net_clen_bits = net_cat_len * 8; 634 + 635 + for (;;) { 636 + spot = netlbl_catmap_walk(secattr->attr.mls.cat, 637 + spot + 1); 638 + if (spot < 0) 639 + break; 640 + if (spot >= net_clen_bits) 641 + return -ENOSPC; 642 + netlbl_bitmap_setbit(net_cat, spot, 1); 643 + 644 + if (spot > net_spot_max) 645 + net_spot_max = spot; 646 + } 647 + 648 + return (net_spot_max / 32 + 1) * 4; 649 + } 650 + 651 + /** 652 + * calipso_map_cat_ntoh - Perform a category mapping from network to host 653 + * @doi_def: the DOI definition 654 + * @net_cat: the category bitmap in network/CALIPSO format 655 + * @net_cat_len: the length of the CALIPSO bitmap in bytes 656 + * @secattr: the security attributes 657 + * 658 + * Description: 659 + * Perform a label mapping to translate a CALIPSO bitmap to the correct local 660 + * MLS category bitmap using the given DOI definition. Returns zero on 661 + * success, negative values on failure. 
662 + * 663 + */ 664 + static int calipso_map_cat_ntoh(const struct calipso_doi *doi_def, 665 + const unsigned char *net_cat, 666 + u32 net_cat_len, 667 + struct netlbl_lsm_secattr *secattr) 668 + { 669 + int ret_val; 670 + int spot = -1; 671 + u32 net_clen_bits = net_cat_len * 8; 672 + 673 + for (;;) { 674 + spot = netlbl_bitmap_walk(net_cat, 675 + net_clen_bits, 676 + spot + 1, 677 + 1); 678 + if (spot < 0) { 679 + if (spot == -2) 680 + return -EFAULT; 681 + return 0; 682 + } 683 + 684 + ret_val = netlbl_catmap_setbit(&secattr->attr.mls.cat, 685 + spot, 686 + GFP_ATOMIC); 687 + if (ret_val != 0) 688 + return ret_val; 689 + } 690 + 691 + return -EINVAL; 692 + } 693 + 694 + /** 695 + * calipso_pad_write - Writes pad bytes in TLV format 696 + * @buf: the buffer 697 + * @offset: offset from start of buffer to write padding 698 + * @count: number of pad bytes to write 699 + * 700 + * Description: 701 + * Write @count bytes of TLV padding into @buffer starting at offset @offset. 702 + * @count should be less than 8 - see RFC 4942. 703 + * 704 + */ 705 + static int calipso_pad_write(unsigned char *buf, unsigned int offset, 706 + unsigned int count) 707 + { 708 + if (WARN_ON_ONCE(count >= 8)) 709 + return -EINVAL; 710 + 711 + switch (count) { 712 + case 0: 713 + break; 714 + case 1: 715 + buf[offset] = IPV6_TLV_PAD1; 716 + break; 717 + default: 718 + buf[offset] = IPV6_TLV_PADN; 719 + buf[offset + 1] = count - 2; 720 + if (count > 2) 721 + memset(buf + offset + 2, 0, count - 2); 722 + break; 723 + } 724 + return 0; 725 + } 726 + 727 + /** 728 + * calipso_genopt - Generate a CALIPSO option 729 + * @buf: the option buffer 730 + * @start: offset from which to write 731 + * @buf_len: the size of opt_buf 732 + * @doi_def: the CALIPSO DOI to use 733 + * @secattr: the security attributes 734 + * 735 + * Description: 736 + * Generate a CALIPSO option using the DOI definition and security attributes 737 + * passed to the function. 
This also generates upto three bytes of leading 738 + * padding that ensures that the option is 4n + 2 aligned. It returns the 739 + * number of bytes written (including any initial padding). 740 + */ 741 + static int calipso_genopt(unsigned char *buf, u32 start, u32 buf_len, 742 + const struct calipso_doi *doi_def, 743 + const struct netlbl_lsm_secattr *secattr) 744 + { 745 + int ret_val; 746 + u32 len, pad; 747 + u16 crc; 748 + static const unsigned char padding[4] = {2, 1, 0, 3}; 749 + unsigned char *calipso; 750 + 751 + /* CALIPSO has 4n + 2 alignment */ 752 + pad = padding[start & 3]; 753 + if (buf_len <= start + pad + CALIPSO_HDR_LEN) 754 + return -ENOSPC; 755 + 756 + if ((secattr->flags & NETLBL_SECATTR_MLS_LVL) == 0) 757 + return -EPERM; 758 + 759 + len = CALIPSO_HDR_LEN; 760 + 761 + if (secattr->flags & NETLBL_SECATTR_MLS_CAT) { 762 + ret_val = calipso_map_cat_hton(doi_def, 763 + secattr, 764 + buf + start + pad + len, 765 + buf_len - start - pad - len); 766 + if (ret_val < 0) 767 + return ret_val; 768 + len += ret_val; 769 + } 770 + 771 + calipso_pad_write(buf, start, pad); 772 + calipso = buf + start + pad; 773 + 774 + calipso[0] = IPV6_TLV_CALIPSO; 775 + calipso[1] = len - 2; 776 + *(__be32 *)(calipso + 2) = htonl(doi_def->doi); 777 + calipso[6] = (len - CALIPSO_HDR_LEN) / 4; 778 + calipso[7] = secattr->attr.mls.lvl, 779 + crc = ~crc_ccitt(0xffff, calipso, len); 780 + calipso[8] = crc & 0xff; 781 + calipso[9] = (crc >> 8) & 0xff; 782 + return pad + len; 783 + } 784 + 785 + /* Hop-by-hop hdr helper functions 786 + */ 787 + 788 + /** 789 + * calipso_opt_update - Replaces socket's hop options with a new set 790 + * @sk: the socket 791 + * @hop: new hop options 792 + * 793 + * Description: 794 + * Replaces @sk's hop options with @hop. @hop may be NULL to leave 795 + * the socket with no hop options. 
796 + * 797 + */ 798 + static int calipso_opt_update(struct sock *sk, struct ipv6_opt_hdr *hop) 799 + { 800 + struct ipv6_txoptions *old = txopt_get(inet6_sk(sk)), *txopts; 801 + 802 + txopts = ipv6_renew_options_kern(sk, old, IPV6_HOPOPTS, 803 + hop, hop ? ipv6_optlen(hop) : 0); 804 + txopt_put(old); 805 + if (IS_ERR(txopts)) 806 + return PTR_ERR(txopts); 807 + 808 + txopts = ipv6_update_options(sk, txopts); 809 + if (txopts) { 810 + atomic_sub(txopts->tot_len, &sk->sk_omem_alloc); 811 + txopt_put(txopts); 812 + } 813 + 814 + return 0; 815 + } 816 + 817 + /** 818 + * calipso_tlv_len - Returns the length of the TLV 819 + * @opt: the option header 820 + * @offset: offset of the TLV within the header 821 + * 822 + * Description: 823 + * Returns the length of the TLV option at offset @offset within 824 + * the option header @opt. Checks that the entire TLV fits inside 825 + * the option header, returns a negative value if this is not the case. 826 + */ 827 + static int calipso_tlv_len(struct ipv6_opt_hdr *opt, unsigned int offset) 828 + { 829 + unsigned char *tlv = (unsigned char *)opt; 830 + unsigned int opt_len = ipv6_optlen(opt), tlv_len; 831 + 832 + if (offset < sizeof(*opt) || offset >= opt_len) 833 + return -EINVAL; 834 + if (tlv[offset] == IPV6_TLV_PAD1) 835 + return 1; 836 + if (offset + 1 >= opt_len) 837 + return -EINVAL; 838 + tlv_len = tlv[offset + 1] + 2; 839 + if (offset + tlv_len > opt_len) 840 + return -EINVAL; 841 + return tlv_len; 842 + } 843 + 844 + /** 845 + * calipso_opt_find - Finds the CALIPSO option in an IPv6 hop options header 846 + * @hop: the hop options header 847 + * @start: on return holds the offset of any leading padding 848 + * @end: on return holds the offset of the first non-pad TLV after CALIPSO 849 + * 850 + * Description: 851 + * Finds the space occupied by a CALIPSO option (including any leading and 852 + * trailing padding). 
853 + * 854 + * If a CALIPSO option exists set @start and @end to the 855 + * offsets within @hop of the start of padding before the first 856 + * CALIPSO option and the end of padding after the first CALIPSO 857 + * option. In this case the function returns 0. 858 + * 859 + * In the absence of a CALIPSO option, @start and @end will be 860 + * set to the start and end of any trailing padding in the header. 861 + * This is useful when appending a new option, as the caller may want 862 + * to overwrite some of this padding. In this case the function will 863 + * return -ENOENT. 864 + */ 865 + static int calipso_opt_find(struct ipv6_opt_hdr *hop, unsigned int *start, 866 + unsigned int *end) 867 + { 868 + int ret_val = -ENOENT, tlv_len; 869 + unsigned int opt_len, offset, offset_s = 0, offset_e = 0; 870 + unsigned char *opt = (unsigned char *)hop; 871 + 872 + opt_len = ipv6_optlen(hop); 873 + offset = sizeof(*hop); 874 + 875 + while (offset < opt_len) { 876 + tlv_len = calipso_tlv_len(hop, offset); 877 + if (tlv_len < 0) 878 + return tlv_len; 879 + 880 + switch (opt[offset]) { 881 + case IPV6_TLV_PAD1: 882 + case IPV6_TLV_PADN: 883 + if (offset_e) 884 + offset_e = offset; 885 + break; 886 + case IPV6_TLV_CALIPSO: 887 + ret_val = 0; 888 + offset_e = offset; 889 + break; 890 + default: 891 + if (offset_e == 0) 892 + offset_s = offset; 893 + else 894 + goto out; 895 + } 896 + offset += tlv_len; 897 + } 898 + 899 + out: 900 + if (offset_s) 901 + *start = offset_s + calipso_tlv_len(hop, offset_s); 902 + else 903 + *start = sizeof(*hop); 904 + if (offset_e) 905 + *end = offset_e + calipso_tlv_len(hop, offset_e); 906 + else 907 + *end = opt_len; 908 + 909 + return ret_val; 910 + } 911 + 912 + /** 913 + * calipso_opt_insert - Inserts a CALIPSO option into an IPv6 hop opt hdr 914 + * @hop: the original hop options header 915 + * @doi_def: the CALIPSO DOI to use 916 + * @secattr: the specific security attributes of the socket 917 + * 918 + * Description: 919 + * Creates a new 
hop options header based on @hop with a 920 + * CALIPSO option added to it. If @hop already contains a CALIPSO 921 + * option this is overwritten, otherwise the new option is appended 922 + * after any existing options. If @hop is NULL then the new header 923 + * will contain just the CALIPSO option and any needed padding. 924 + * 925 + */ 926 + static struct ipv6_opt_hdr * 927 + calipso_opt_insert(struct ipv6_opt_hdr *hop, 928 + const struct calipso_doi *doi_def, 929 + const struct netlbl_lsm_secattr *secattr) 930 + { 931 + unsigned int start, end, buf_len, pad, hop_len; 932 + struct ipv6_opt_hdr *new; 933 + int ret_val; 934 + 935 + if (hop) { 936 + hop_len = ipv6_optlen(hop); 937 + ret_val = calipso_opt_find(hop, &start, &end); 938 + if (ret_val && ret_val != -ENOENT) 939 + return ERR_PTR(ret_val); 940 + } else { 941 + hop_len = 0; 942 + start = sizeof(*hop); 943 + end = 0; 944 + } 945 + 946 + buf_len = hop_len + start - end + CALIPSO_OPT_LEN_MAX_WITH_PAD; 947 + new = kzalloc(buf_len, GFP_ATOMIC); 948 + if (!new) 949 + return ERR_PTR(-ENOMEM); 950 + 951 + if (start > sizeof(*hop)) 952 + memcpy(new, hop, start); 953 + ret_val = calipso_genopt((unsigned char *)new, start, buf_len, doi_def, 954 + secattr); 955 + if (ret_val < 0) 956 + return ERR_PTR(ret_val); 957 + 958 + buf_len = start + ret_val; 959 + /* At this point buf_len aligns to 4n, so (buf_len & 4) pads to 8n */ 960 + pad = ((buf_len & 4) + (end & 7)) & 7; 961 + calipso_pad_write((unsigned char *)new, buf_len, pad); 962 + buf_len += pad; 963 + 964 + if (end != hop_len) { 965 + memcpy((char *)new + buf_len, (char *)hop + end, hop_len - end); 966 + buf_len += hop_len - end; 967 + } 968 + new->nexthdr = 0; 969 + new->hdrlen = buf_len / 8 - 1; 970 + 971 + return new; 972 + } 973 + 974 + /** 975 + * calipso_opt_del - Removes the CALIPSO option from an option header 976 + * @hop: the original header 977 + * @new: the new header 978 + * 979 + * Description: 980 + * Creates a new header based on @hop without any 
CALIPSO option. If @hop 981 + * doesn't contain a CALIPSO option it returns -ENOENT. If @hop contains 982 + * no other non-padding options, it returns zero with @new set to NULL. 983 + * Otherwise it returns zero, creates a new header without the CALIPSO 984 + * option (and removing as much padding as possible) and returns with 985 + * @new set to that header. 986 + * 987 + */ 988 + static int calipso_opt_del(struct ipv6_opt_hdr *hop, 989 + struct ipv6_opt_hdr **new) 990 + { 991 + int ret_val; 992 + unsigned int start, end, delta, pad, hop_len; 993 + 994 + ret_val = calipso_opt_find(hop, &start, &end); 995 + if (ret_val) 996 + return ret_val; 997 + 998 + hop_len = ipv6_optlen(hop); 999 + if (start == sizeof(*hop) && end == hop_len) { 1000 + /* There's no other option in the header so return NULL */ 1001 + *new = NULL; 1002 + return 0; 1003 + } 1004 + 1005 + delta = (end - start) & ~7; 1006 + *new = kzalloc(hop_len - delta, GFP_ATOMIC); 1007 + if (!*new) 1008 + return -ENOMEM; 1009 + 1010 + memcpy(*new, hop, start); 1011 + (*new)->hdrlen -= delta / 8; 1012 + pad = (end - start) & 7; 1013 + calipso_pad_write((unsigned char *)*new, start, pad); 1014 + if (end != hop_len) 1015 + memcpy((char *)*new + start + pad, (char *)hop + end, 1016 + hop_len - end); 1017 + 1018 + return 0; 1019 + } 1020 + 1021 + /** 1022 + * calipso_opt_getattr - Get the security attributes from a memory block 1023 + * @calipso: the CALIPSO option 1024 + * @secattr: the security attributes 1025 + * 1026 + * Description: 1027 + * Inspect @calipso and return the security attributes in @secattr. 1028 + * Returns zero on success and negative values on failure. 
1029 + * 1030 + */ 1031 + static int calipso_opt_getattr(const unsigned char *calipso, 1032 + struct netlbl_lsm_secattr *secattr) 1033 + { 1034 + int ret_val = -ENOMSG; 1035 + u32 doi, len = calipso[1], cat_len = calipso[6] * 4; 1036 + struct calipso_doi *doi_def; 1037 + 1038 + if (cat_len + 8 > len) 1039 + return -EINVAL; 1040 + 1041 + if (calipso_cache_check(calipso + 2, calipso[1], secattr) == 0) 1042 + return 0; 1043 + 1044 + doi = get_unaligned_be32(calipso + 2); 1045 + rcu_read_lock(); 1046 + doi_def = calipso_doi_search(doi); 1047 + if (!doi_def) 1048 + goto getattr_return; 1049 + 1050 + secattr->attr.mls.lvl = calipso[7]; 1051 + secattr->flags |= NETLBL_SECATTR_MLS_LVL; 1052 + 1053 + if (cat_len) { 1054 + ret_val = calipso_map_cat_ntoh(doi_def, 1055 + calipso + 10, 1056 + cat_len, 1057 + secattr); 1058 + if (ret_val != 0) { 1059 + netlbl_catmap_free(secattr->attr.mls.cat); 1060 + goto getattr_return; 1061 + } 1062 + 1063 + secattr->flags |= NETLBL_SECATTR_MLS_CAT; 1064 + } 1065 + 1066 + secattr->type = NETLBL_NLTYPE_CALIPSO; 1067 + 1068 + getattr_return: 1069 + rcu_read_unlock(); 1070 + return ret_val; 1071 + } 1072 + 1073 + /* sock functions. 1074 + */ 1075 + 1076 + /** 1077 + * calipso_sock_getattr - Get the security attributes from a sock 1078 + * @sk: the sock 1079 + * @secattr: the security attributes 1080 + * 1081 + * Description: 1082 + * Query @sk to see if there is a CALIPSO option attached to the sock and if 1083 + * there is return the CALIPSO security attributes in @secattr. This function 1084 + * requires that @sk be locked, or privately held, but it does not do any 1085 + * locking itself. Returns zero on success and negative values on failure. 
1086 + * 1087 + */ 1088 + static int calipso_sock_getattr(struct sock *sk, 1089 + struct netlbl_lsm_secattr *secattr) 1090 + { 1091 + struct ipv6_opt_hdr *hop; 1092 + int opt_len, len, ret_val = -ENOMSG, offset; 1093 + unsigned char *opt; 1094 + struct ipv6_txoptions *txopts = txopt_get(inet6_sk(sk)); 1095 + 1096 + if (!txopts || !txopts->hopopt) 1097 + goto done; 1098 + 1099 + hop = txopts->hopopt; 1100 + opt = (unsigned char *)hop; 1101 + opt_len = ipv6_optlen(hop); 1102 + offset = sizeof(*hop); 1103 + while (offset < opt_len) { 1104 + len = calipso_tlv_len(hop, offset); 1105 + if (len < 0) { 1106 + ret_val = len; 1107 + goto done; 1108 + } 1109 + switch (opt[offset]) { 1110 + case IPV6_TLV_CALIPSO: 1111 + if (len < CALIPSO_HDR_LEN) 1112 + ret_val = -EINVAL; 1113 + else 1114 + ret_val = calipso_opt_getattr(&opt[offset], 1115 + secattr); 1116 + goto done; 1117 + default: 1118 + offset += len; 1119 + break; 1120 + } 1121 + } 1122 + done: 1123 + txopt_put(txopts); 1124 + return ret_val; 1125 + } 1126 + 1127 + /** 1128 + * calipso_sock_setattr - Add a CALIPSO option to a socket 1129 + * @sk: the socket 1130 + * @doi_def: the CALIPSO DOI to use 1131 + * @secattr: the specific security attributes of the socket 1132 + * 1133 + * Description: 1134 + * Set the CALIPSO option on the given socket using the DOI definition and 1135 + * security attributes passed to the function. This function requires 1136 + * exclusive access to @sk, which means it either needs to be in the 1137 + * process of being created or locked. Returns zero on success and negative 1138 + * values on failure. 
1139 + * 1140 + */ 1141 + static int calipso_sock_setattr(struct sock *sk, 1142 + const struct calipso_doi *doi_def, 1143 + const struct netlbl_lsm_secattr *secattr) 1144 + { 1145 + int ret_val; 1146 + struct ipv6_opt_hdr *old, *new; 1147 + struct ipv6_txoptions *txopts = txopt_get(inet6_sk(sk)); 1148 + 1149 + old = NULL; 1150 + if (txopts) 1151 + old = txopts->hopopt; 1152 + 1153 + new = calipso_opt_insert(old, doi_def, secattr); 1154 + txopt_put(txopts); 1155 + if (IS_ERR(new)) 1156 + return PTR_ERR(new); 1157 + 1158 + ret_val = calipso_opt_update(sk, new); 1159 + 1160 + kfree(new); 1161 + return ret_val; 1162 + } 1163 + 1164 + /** 1165 + * calipso_sock_delattr - Delete the CALIPSO option from a socket 1166 + * @sk: the socket 1167 + * 1168 + * Description: 1169 + * Removes the CALIPSO option from a socket, if present. 1170 + * 1171 + */ 1172 + static void calipso_sock_delattr(struct sock *sk) 1173 + { 1174 + struct ipv6_opt_hdr *new_hop; 1175 + struct ipv6_txoptions *txopts = txopt_get(inet6_sk(sk)); 1176 + 1177 + if (!txopts || !txopts->hopopt) 1178 + goto done; 1179 + 1180 + if (calipso_opt_del(txopts->hopopt, &new_hop)) 1181 + goto done; 1182 + 1183 + calipso_opt_update(sk, new_hop); 1184 + kfree(new_hop); 1185 + 1186 + done: 1187 + txopt_put(txopts); 1188 + } 1189 + 1190 + /* request sock functions. 1191 + */ 1192 + 1193 + /** 1194 + * calipso_req_setattr - Add a CALIPSO option to a connection request socket 1195 + * @req: the connection request socket 1196 + * @doi_def: the CALIPSO DOI to use 1197 + * @secattr: the specific security attributes of the socket 1198 + * 1199 + * Description: 1200 + * Set the CALIPSO option on the given socket using the DOI definition and 1201 + * security attributes passed to the function. Returns zero on success and 1202 + * negative values on failure. 
1203 + * 1204 + */ 1205 + static int calipso_req_setattr(struct request_sock *req, 1206 + const struct calipso_doi *doi_def, 1207 + const struct netlbl_lsm_secattr *secattr) 1208 + { 1209 + struct ipv6_txoptions *txopts; 1210 + struct inet_request_sock *req_inet = inet_rsk(req); 1211 + struct ipv6_opt_hdr *old, *new; 1212 + struct sock *sk = sk_to_full_sk(req_to_sk(req)); 1213 + 1214 + if (req_inet->ipv6_opt && req_inet->ipv6_opt->hopopt) 1215 + old = req_inet->ipv6_opt->hopopt; 1216 + else 1217 + old = NULL; 1218 + 1219 + new = calipso_opt_insert(old, doi_def, secattr); 1220 + if (IS_ERR(new)) 1221 + return PTR_ERR(new); 1222 + 1223 + txopts = ipv6_renew_options_kern(sk, req_inet->ipv6_opt, IPV6_HOPOPTS, 1224 + new, new ? ipv6_optlen(new) : 0); 1225 + 1226 + kfree(new); 1227 + 1228 + if (IS_ERR(txopts)) 1229 + return PTR_ERR(txopts); 1230 + 1231 + txopts = xchg(&req_inet->ipv6_opt, txopts); 1232 + if (txopts) { 1233 + atomic_sub(txopts->tot_len, &sk->sk_omem_alloc); 1234 + txopt_put(txopts); 1235 + } 1236 + 1237 + return 0; 1238 + } 1239 + 1240 + /** 1241 + * calipso_req_delattr - Delete the CALIPSO option from a request socket 1242 + * @req: the request socket 1243 + * 1244 + * Description: 1245 + * Removes the CALIPSO option from a request socket, if present. 1246 + * 1247 + */ 1248 + static void calipso_req_delattr(struct request_sock *req) 1249 + { 1250 + struct inet_request_sock *req_inet = inet_rsk(req); 1251 + struct ipv6_opt_hdr *new; 1252 + struct ipv6_txoptions *txopts; 1253 + struct sock *sk = sk_to_full_sk(req_to_sk(req)); 1254 + 1255 + if (!req_inet->ipv6_opt || !req_inet->ipv6_opt->hopopt) 1256 + return; 1257 + 1258 + if (calipso_opt_del(req_inet->ipv6_opt->hopopt, &new)) 1259 + return; /* Nothing to do */ 1260 + 1261 + txopts = ipv6_renew_options_kern(sk, req_inet->ipv6_opt, IPV6_HOPOPTS, 1262 + new, new ? 
ipv6_optlen(new) : 0); 1263 + 1264 + if (!IS_ERR(txopts)) { 1265 + txopts = xchg(&req_inet->ipv6_opt, txopts); 1266 + if (txopts) { 1267 + atomic_sub(txopts->tot_len, &sk->sk_omem_alloc); 1268 + txopt_put(txopts); 1269 + } 1270 + } 1271 + kfree(new); 1272 + } 1273 + 1274 + /* skbuff functions. 1275 + */ 1276 + 1277 + /** 1278 + * calipso_skbuff_optptr - Find the CALIPSO option in the packet 1279 + * @skb: the packet 1280 + * 1281 + * Description: 1282 + * Parse the packet's IP header looking for a CALIPSO option. Returns a pointer 1283 + * to the start of the CALIPSO option on success, NULL if one is not found. 1284 + * 1285 + */ 1286 + static unsigned char *calipso_skbuff_optptr(const struct sk_buff *skb) 1287 + { 1288 + const struct ipv6hdr *ip6_hdr = ipv6_hdr(skb); 1289 + int offset; 1290 + 1291 + if (ip6_hdr->nexthdr != NEXTHDR_HOP) 1292 + return NULL; 1293 + 1294 + offset = ipv6_find_tlv(skb, sizeof(*ip6_hdr), IPV6_TLV_CALIPSO); 1295 + if (offset >= 0) 1296 + return (unsigned char *)ip6_hdr + offset; 1297 + 1298 + return NULL; 1299 + } 1300 + 1301 + /** 1302 + * calipso_skbuff_setattr - Set the CALIPSO option on a packet 1303 + * @skb: the packet 1304 + * @doi_def: the CALIPSO DOI to use 1305 + * @secattr: the security attributes 1306 + * 1307 + * Description: 1308 + * Set the CALIPSO option on the given packet based on the security attributes. 1309 + * Returns zero on success and negative values on failure. 
1310 + * 1311 + */ 1312 + static int calipso_skbuff_setattr(struct sk_buff *skb, 1313 + const struct calipso_doi *doi_def, 1314 + const struct netlbl_lsm_secattr *secattr) 1315 + { 1316 + int ret_val; 1317 + struct ipv6hdr *ip6_hdr; 1318 + struct ipv6_opt_hdr *hop; 1319 + unsigned char buf[CALIPSO_MAX_BUFFER]; 1320 + int len_delta, new_end, pad; 1321 + unsigned int start, end; 1322 + 1323 + ip6_hdr = ipv6_hdr(skb); 1324 + if (ip6_hdr->nexthdr == NEXTHDR_HOP) { 1325 + hop = (struct ipv6_opt_hdr *)(ip6_hdr + 1); 1326 + ret_val = calipso_opt_find(hop, &start, &end); 1327 + if (ret_val && ret_val != -ENOENT) 1328 + return ret_val; 1329 + } else { 1330 + start = 0; 1331 + end = 0; 1332 + } 1333 + 1334 + memset(buf, 0, sizeof(buf)); 1335 + ret_val = calipso_genopt(buf, start & 3, sizeof(buf), doi_def, secattr); 1336 + if (ret_val < 0) 1337 + return ret_val; 1338 + 1339 + new_end = start + ret_val; 1340 + /* At this point new_end aligns to 4n, so (new_end & 4) pads to 8n */ 1341 + pad = ((new_end & 4) + (end & 7)) & 7; 1342 + len_delta = new_end - (int)end + pad; 1343 + ret_val = skb_cow(skb, skb_headroom(skb) + len_delta); 1344 + if (ret_val < 0) 1345 + return ret_val; 1346 + 1347 + if (len_delta) { 1348 + if (len_delta > 0) 1349 + skb_push(skb, len_delta); 1350 + else 1351 + skb_pull(skb, -len_delta); 1352 + memmove((char *)ip6_hdr - len_delta, ip6_hdr, 1353 + sizeof(*ip6_hdr) + start); 1354 + skb_reset_network_header(skb); 1355 + ip6_hdr = ipv6_hdr(skb); 1356 + } 1357 + 1358 + hop = (struct ipv6_opt_hdr *)(ip6_hdr + 1); 1359 + if (start == 0) { 1360 + struct ipv6_opt_hdr *new_hop = (struct ipv6_opt_hdr *)buf; 1361 + 1362 + new_hop->nexthdr = ip6_hdr->nexthdr; 1363 + new_hop->hdrlen = len_delta / 8 - 1; 1364 + ip6_hdr->nexthdr = NEXTHDR_HOP; 1365 + } else { 1366 + hop->hdrlen += len_delta / 8; 1367 + } 1368 + memcpy((char *)hop + start, buf + (start & 3), new_end - start); 1369 + calipso_pad_write((unsigned char *)hop, new_end, pad); 1370 + 1371 + return 0; 1372 + } 
1373 + 1374 + /** 1375 + * calipso_skbuff_delattr - Delete any CALIPSO options from a packet 1376 + * @skb: the packet 1377 + * 1378 + * Description: 1379 + * Removes any and all CALIPSO options from the given packet. Returns zero on 1380 + * success, negative values on failure. 1381 + * 1382 + */ 1383 + static int calipso_skbuff_delattr(struct sk_buff *skb) 1384 + { 1385 + int ret_val; 1386 + struct ipv6hdr *ip6_hdr; 1387 + struct ipv6_opt_hdr *old_hop; 1388 + u32 old_hop_len, start = 0, end = 0, delta, size, pad; 1389 + 1390 + if (!calipso_skbuff_optptr(skb)) 1391 + return 0; 1392 + 1393 + /* since we are changing the packet we should make a copy */ 1394 + ret_val = skb_cow(skb, skb_headroom(skb)); 1395 + if (ret_val < 0) 1396 + return ret_val; 1397 + 1398 + ip6_hdr = ipv6_hdr(skb); 1399 + old_hop = (struct ipv6_opt_hdr *)(ip6_hdr + 1); 1400 + old_hop_len = ipv6_optlen(old_hop); 1401 + 1402 + ret_val = calipso_opt_find(old_hop, &start, &end); 1403 + if (ret_val) 1404 + return ret_val; 1405 + 1406 + if (start == sizeof(*old_hop) && end == old_hop_len) { 1407 + /* There's no other option in the header so we delete 1408 + * the whole thing. 
*/ 1409 + delta = old_hop_len; 1410 + size = sizeof(*ip6_hdr); 1411 + ip6_hdr->nexthdr = old_hop->nexthdr; 1412 + } else { 1413 + delta = (end - start) & ~7; 1414 + if (delta) 1415 + old_hop->hdrlen -= delta / 8; 1416 + pad = (end - start) & 7; 1417 + size = sizeof(*ip6_hdr) + start + pad; 1418 + calipso_pad_write((unsigned char *)old_hop, start, pad); 1419 + } 1420 + 1421 + if (delta) { 1422 + skb_pull(skb, delta); 1423 + memmove((char *)ip6_hdr + delta, ip6_hdr, size); 1424 + skb_reset_network_header(skb); 1425 + } 1426 + 1427 + return 0; 1428 + } 1429 + 1430 + static const struct netlbl_calipso_ops ops = { 1431 + .doi_add = calipso_doi_add, 1432 + .doi_free = calipso_doi_free, 1433 + .doi_remove = calipso_doi_remove, 1434 + .doi_getdef = calipso_doi_getdef, 1435 + .doi_putdef = calipso_doi_putdef, 1436 + .doi_walk = calipso_doi_walk, 1437 + .sock_getattr = calipso_sock_getattr, 1438 + .sock_setattr = calipso_sock_setattr, 1439 + .sock_delattr = calipso_sock_delattr, 1440 + .req_setattr = calipso_req_setattr, 1441 + .req_delattr = calipso_req_delattr, 1442 + .opt_getattr = calipso_opt_getattr, 1443 + .skbuff_optptr = calipso_skbuff_optptr, 1444 + .skbuff_setattr = calipso_skbuff_setattr, 1445 + .skbuff_delattr = calipso_skbuff_delattr, 1446 + .cache_invalidate = calipso_cache_invalidate, 1447 + .cache_add = calipso_cache_add 1448 + }; 1449 + 1450 + /** 1451 + * calipso_init - Initialize the CALIPSO module 1452 + * 1453 + * Description: 1454 + * Initialize the CALIPSO module and prepare it for use. Returns zero on 1455 + * success and negative values on failure. 1456 + * 1457 + */ 1458 + int __init calipso_init(void) 1459 + { 1460 + int ret_val; 1461 + 1462 + ret_val = calipso_cache_init(); 1463 + if (!ret_val) 1464 + netlbl_calipso_ops_register(&ops); 1465 + return ret_val; 1466 + } 1467 + 1468 + void calipso_exit(void) 1469 + { 1470 + netlbl_calipso_ops_register(NULL); 1471 + calipso_cache_invalidate(); 1472 + kfree(calipso_cache); 1473 + }
+76
net/ipv6/exthdrs.c
··· 43 43 #include <net/ndisc.h> 44 44 #include <net/ip6_route.h> 45 45 #include <net/addrconf.h> 46 + #include <net/calipso.h> 46 47 #if IS_ENABLED(CONFIG_IPV6_MIP6) 47 48 #include <net/xfrm.h> 48 49 #endif ··· 604 603 return false; 605 604 } 606 605 606 + /* CALIPSO RFC 5570 */ 607 + 608 + static bool ipv6_hop_calipso(struct sk_buff *skb, int optoff) 609 + { 610 + const unsigned char *nh = skb_network_header(skb); 611 + 612 + if (nh[optoff + 1] < 8) 613 + goto drop; 614 + 615 + if (nh[optoff + 6] * 4 + 8 > nh[optoff + 1]) 616 + goto drop; 617 + 618 + if (!calipso_validate(skb, nh + optoff)) 619 + goto drop; 620 + 621 + return true; 622 + 623 + drop: 624 + kfree_skb(skb); 625 + return false; 626 + } 627 + 607 628 static const struct tlvtype_proc tlvprochopopt_lst[] = { 608 629 { 609 630 .type = IPV6_TLV_ROUTERALERT, ··· 634 611 { 635 612 .type = IPV6_TLV_JUMBO, 636 613 .func = ipv6_hop_jumbo, 614 + }, 615 + { 616 + .type = IPV6_TLV_CALIPSO, 617 + .func = ipv6_hop_calipso, 637 618 }, 638 619 { -1, } 639 620 }; ··· 785 758 return 0; 786 759 } 787 760 761 + /** 762 + * ipv6_renew_options - replace a specific ext hdr with a new one. 763 + * 764 + * @sk: sock from which to allocate memory 765 + * @opt: original options 766 + * @newtype: option type to replace in @opt 767 + * @newopt: new option of type @newtype to replace (user-mem) 768 + * @newoptlen: length of @newopt 769 + * 770 + * Returns a new set of options which is a copy of @opt with the 771 + * option type @newtype replaced with @newopt. 772 + * 773 + * @opt may be NULL, in which case a new set of options is returned 774 + * containing just @newopt. 775 + * 776 + * @newopt may be NULL, in which case the specified option type is 777 + * not copied into the new set of options. 778 + * 779 + * The new set of options is allocated from the socket option memory 780 + * buffer of @sk. 
781 + */ 788 782 struct ipv6_txoptions * 789 783 ipv6_renew_options(struct sock *sk, struct ipv6_txoptions *opt, 790 784 int newtype, ··· 876 828 out: 877 829 sock_kfree_s(sk, opt2, opt2->tot_len); 878 830 return ERR_PTR(err); 831 + } 832 + 833 + /** 834 + * ipv6_renew_options_kern - replace a specific ext hdr with a new one. 835 + * 836 + * @sk: sock from which to allocate memory 837 + * @opt: original options 838 + * @newtype: option type to replace in @opt 839 + * @newopt: new option of type @newtype to replace (kernel-mem) 840 + * @newoptlen: length of @newopt 841 + * 842 + * See ipv6_renew_options(). The difference is that @newopt is 843 + * kernel memory, rather than user memory. 844 + */ 845 + struct ipv6_txoptions * 846 + ipv6_renew_options_kern(struct sock *sk, struct ipv6_txoptions *opt, 847 + int newtype, struct ipv6_opt_hdr *newopt, 848 + int newoptlen) 849 + { 850 + struct ipv6_txoptions *ret_val; 851 + const mm_segment_t old_fs = get_fs(); 852 + 853 + set_fs(KERNEL_DS); 854 + ret_val = ipv6_renew_options(sk, opt, newtype, 855 + (struct ipv6_opt_hdr __user *)newopt, 856 + newoptlen); 857 + set_fs(old_fs); 858 + return ret_val; 879 859 } 880 860 881 861 struct ipv6_txoptions *ipv6_fixup_options(struct ipv6_txoptions *opt_space,
+1 -1
net/ipv6/exthdrs_core.c
··· 112 112 } 113 113 EXPORT_SYMBOL(ipv6_skip_exthdr); 114 114 115 - int ipv6_find_tlv(struct sk_buff *skb, int offset, int type) 115 + int ipv6_find_tlv(const struct sk_buff *skb, int offset, int type) 116 116 { 117 117 const unsigned char *nh = skb_network_header(skb); 118 118 int packet_len = skb_tail_pointer(skb) - skb_network_header(skb);
-1
net/ipv6/ipv6_sockglue.c
··· 98 98 return 0; 99 99 } 100 100 101 - static 102 101 struct ipv6_txoptions *ipv6_update_options(struct sock *sk, 103 102 struct ipv6_txoptions *opt) 104 103 {
+19
net/ipv6/sysctl_net_ipv6.c
··· 15 15 #include <net/ipv6.h> 16 16 #include <net/addrconf.h> 17 17 #include <net/inet_frag.h> 18 + #ifdef CONFIG_NETLABEL 19 + #include <net/calipso.h> 20 + #endif 18 21 19 22 static int one = 1; 20 23 static int auto_flowlabels_min; ··· 109 106 .proc_handler = proc_dointvec_minmax, 110 107 .extra1 = &one 111 108 }, 109 + #ifdef CONFIG_NETLABEL 110 + { 111 + .procname = "calipso_cache_enable", 112 + .data = &calipso_cache_enabled, 113 + .maxlen = sizeof(int), 114 + .mode = 0644, 115 + .proc_handler = proc_dointvec, 116 + }, 117 + { 118 + .procname = "calipso_cache_bucket_size", 119 + .data = &calipso_cache_bucketsize, 120 + .maxlen = sizeof(int), 121 + .mode = 0644, 122 + .proc_handler = proc_dointvec, 123 + }, 124 + #endif /* CONFIG_NETLABEL */ 112 125 { } 113 126 }; 114 127
+9 -3
net/ipv6/tcp_ipv6.c
··· 443 443 { 444 444 struct inet_request_sock *ireq = inet_rsk(req); 445 445 struct ipv6_pinfo *np = inet6_sk(sk); 446 + struct ipv6_txoptions *opt; 446 447 struct flowi6 *fl6 = &fl->u.ip6; 447 448 struct sk_buff *skb; 448 449 int err = -ENOMEM; ··· 464 463 fl6->flowlabel = ip6_flowlabel(ipv6_hdr(ireq->pktopts)); 465 464 466 465 rcu_read_lock(); 467 - err = ip6_xmit(sk, skb, fl6, rcu_dereference(np->opt), 468 - np->tclass); 466 + opt = ireq->ipv6_opt; 467 + if (!opt) 468 + opt = rcu_dereference(np->opt); 469 + err = ip6_xmit(sk, skb, fl6, opt, np->tclass); 469 470 rcu_read_unlock(); 470 471 err = net_xmit_eval(err); 471 472 } ··· 479 476 480 477 static void tcp_v6_reqsk_destructor(struct request_sock *req) 481 478 { 479 + kfree(inet_rsk(req)->ipv6_opt); 482 480 kfree_skb(inet_rsk(req)->pktopts); 483 481 } 484 482 ··· 1116 1112 but we make one more one thing there: reattach optmem 1117 1113 to newsk. 1118 1114 */ 1119 - opt = rcu_dereference(np->opt); 1115 + opt = ireq->ipv6_opt; 1116 + if (!opt) 1117 + opt = rcu_dereference(np->opt); 1120 1118 if (opt) { 1121 1119 opt = ipv6_dup_options(newsk, opt); 1122 1120 RCU_INIT_POINTER(newnp->opt, opt);
+4 -1
net/iucv/af_iucv.c
··· 22 22 #include <linux/skbuff.h> 23 23 #include <linux/init.h> 24 24 #include <linux/poll.h> 25 + #include <linux/security.h> 25 26 #include <net/sock.h> 26 27 #include <asm/ebcdic.h> 27 28 #include <asm/cpcmd.h> ··· 531 530 532 531 static void iucv_sock_init(struct sock *sk, struct sock *parent) 533 532 { 534 - if (parent) 533 + if (parent) { 535 534 sk->sk_type = parent->sk_type; 535 + security_sk_clone(parent, sk); 536 + } 536 537 } 537 538 538 539 static struct sock *iucv_sock_alloc(struct socket *sock, int proto, gfp_t prio, int kern)
+1
net/netlabel/Kconfig
··· 5 5 config NETLABEL 6 6 bool "NetLabel subsystem support" 7 7 depends on SECURITY 8 + select CRC_CCITT if IPV6 8 9 default n 9 10 ---help--- 10 11 NetLabel provides support for explicit network packet labeling
+1 -1
net/netlabel/Makefile
··· 12 12 # protocol modules 13 13 obj-y += netlabel_unlabeled.o 14 14 obj-y += netlabel_cipso_v4.o 15 - 15 + obj-$(subst m,y,$(CONFIG_IPV6)) += netlabel_calipso.o
+740
net/netlabel/netlabel_calipso.c
··· 1 + /* 2 + * NetLabel CALIPSO/IPv6 Support 3 + * 4 + * This file defines the CALIPSO/IPv6 functions for the NetLabel system. The 5 + * NetLabel system manages static and dynamic label mappings for network 6 + * protocols such as CIPSO and CALIPSO. 7 + * 8 + * Authors: Paul Moore <paul@paul-moore.com> 9 + * Huw Davies <huw@codeweavers.com> 10 + * 11 + */ 12 + 13 + /* (c) Copyright Hewlett-Packard Development Company, L.P., 2006 14 + * (c) Copyright Huw Davies <huw@codeweavers.com>, 2015 15 + * 16 + * This program is free software; you can redistribute it and/or modify 17 + * it under the terms of the GNU General Public License as published by 18 + * the Free Software Foundation; either version 2 of the License, or 19 + * (at your option) any later version. 20 + * 21 + * This program is distributed in the hope that it will be useful, 22 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 23 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See 24 + * the GNU General Public License for more details. 25 + * 26 + * You should have received a copy of the GNU General Public License 27 + * along with this program; if not, see <http://www.gnu.org/licenses/>. 
28 + * 29 + */ 30 + 31 + #include <linux/types.h> 32 + #include <linux/socket.h> 33 + #include <linux/string.h> 34 + #include <linux/skbuff.h> 35 + #include <linux/audit.h> 36 + #include <linux/slab.h> 37 + #include <net/sock.h> 38 + #include <net/netlink.h> 39 + #include <net/genetlink.h> 40 + #include <net/netlabel.h> 41 + #include <net/calipso.h> 42 + #include <linux/atomic.h> 43 + 44 + #include "netlabel_user.h" 45 + #include "netlabel_calipso.h" 46 + #include "netlabel_mgmt.h" 47 + #include "netlabel_domainhash.h" 48 + 49 + /* Argument struct for calipso_doi_walk() */ 50 + struct netlbl_calipso_doiwalk_arg { 51 + struct netlink_callback *nl_cb; 52 + struct sk_buff *skb; 53 + u32 seq; 54 + }; 55 + 56 + /* Argument struct for netlbl_domhsh_walk() */ 57 + struct netlbl_domhsh_walk_arg { 58 + struct netlbl_audit *audit_info; 59 + u32 doi; 60 + }; 61 + 62 + /* NetLabel Generic NETLINK CALIPSO family */ 63 + static struct genl_family netlbl_calipso_gnl_family = { 64 + .id = GENL_ID_GENERATE, 65 + .hdrsize = 0, 66 + .name = NETLBL_NLTYPE_CALIPSO_NAME, 67 + .version = NETLBL_PROTO_VERSION, 68 + .maxattr = NLBL_CALIPSO_A_MAX, 69 + }; 70 + 71 + /* NetLabel Netlink attribute policy */ 72 + static const struct nla_policy calipso_genl_policy[NLBL_CALIPSO_A_MAX + 1] = { 73 + [NLBL_CALIPSO_A_DOI] = { .type = NLA_U32 }, 74 + [NLBL_CALIPSO_A_MTYPE] = { .type = NLA_U32 }, 75 + }; 76 + 77 + /* NetLabel Command Handlers 78 + */ 79 + /** 80 + * netlbl_calipso_add_pass - Adds a CALIPSO pass DOI definition 81 + * @info: the Generic NETLINK info block 82 + * @audit_info: NetLabel audit information 83 + * 84 + * Description: 85 + * Create a new CALIPSO_MAP_PASS DOI definition based on the given ADD message 86 + * and add it to the CALIPSO engine. Return zero on success and non-zero on 87 + * error. 
88 + * 89 + */ 90 + static int netlbl_calipso_add_pass(struct genl_info *info, 91 + struct netlbl_audit *audit_info) 92 + { 93 + int ret_val; 94 + struct calipso_doi *doi_def = NULL; 95 + 96 + doi_def = kmalloc(sizeof(*doi_def), GFP_KERNEL); 97 + if (!doi_def) 98 + return -ENOMEM; 99 + doi_def->type = CALIPSO_MAP_PASS; 100 + doi_def->doi = nla_get_u32(info->attrs[NLBL_CALIPSO_A_DOI]); 101 + ret_val = calipso_doi_add(doi_def, audit_info); 102 + if (ret_val != 0) 103 + calipso_doi_free(doi_def); 104 + 105 + return ret_val; 106 + } 107 + 108 + /** 109 + * netlbl_calipso_add - Handle an ADD message 110 + * @skb: the NETLINK buffer 111 + * @info: the Generic NETLINK info block 112 + * 113 + * Description: 114 + * Create a new DOI definition based on the given ADD message and add it to the 115 + * CALIPSO engine. Returns zero on success, negative values on failure. 116 + * 117 + */ 118 + static int netlbl_calipso_add(struct sk_buff *skb, struct genl_info *info) 119 + 120 + { 121 + int ret_val = -EINVAL; 122 + struct netlbl_audit audit_info; 123 + 124 + if (!info->attrs[NLBL_CALIPSO_A_DOI] || 125 + !info->attrs[NLBL_CALIPSO_A_MTYPE]) 126 + return -EINVAL; 127 + 128 + netlbl_netlink_auditinfo(skb, &audit_info); 129 + switch (nla_get_u32(info->attrs[NLBL_CALIPSO_A_MTYPE])) { 130 + case CALIPSO_MAP_PASS: 131 + ret_val = netlbl_calipso_add_pass(info, &audit_info); 132 + break; 133 + } 134 + if (ret_val == 0) 135 + atomic_inc(&netlabel_mgmt_protocount); 136 + 137 + return ret_val; 138 + } 139 + 140 + /** 141 + * netlbl_calipso_list - Handle a LIST message 142 + * @skb: the NETLINK buffer 143 + * @info: the Generic NETLINK info block 144 + * 145 + * Description: 146 + * Process a user generated LIST message and respond accordingly. 147 + * Returns zero on success and negative values on error. 
148 + * 149 + */ 150 + static int netlbl_calipso_list(struct sk_buff *skb, struct genl_info *info) 151 + { 152 + int ret_val; 153 + struct sk_buff *ans_skb = NULL; 154 + void *data; 155 + u32 doi; 156 + struct calipso_doi *doi_def; 157 + 158 + if (!info->attrs[NLBL_CALIPSO_A_DOI]) { 159 + ret_val = -EINVAL; 160 + goto list_failure; 161 + } 162 + 163 + doi = nla_get_u32(info->attrs[NLBL_CALIPSO_A_DOI]); 164 + 165 + doi_def = calipso_doi_getdef(doi); 166 + if (!doi_def) { 167 + ret_val = -EINVAL; 168 + goto list_failure; 169 + } 170 + 171 + ans_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); 172 + if (!ans_skb) { 173 + ret_val = -ENOMEM; 174 + goto list_failure_put; 175 + } 176 + data = genlmsg_put_reply(ans_skb, info, &netlbl_calipso_gnl_family, 177 + 0, NLBL_CALIPSO_C_LIST); 178 + if (!data) { 179 + ret_val = -ENOMEM; 180 + goto list_failure_put; 181 + } 182 + 183 + ret_val = nla_put_u32(ans_skb, NLBL_CALIPSO_A_MTYPE, doi_def->type); 184 + if (ret_val != 0) 185 + goto list_failure_put; 186 + 187 + calipso_doi_putdef(doi_def); 188 + 189 + genlmsg_end(ans_skb, data); 190 + return genlmsg_reply(ans_skb, info); 191 + 192 + list_failure_put: 193 + calipso_doi_putdef(doi_def); 194 + list_failure: 195 + kfree_skb(ans_skb); 196 + return ret_val; 197 + } 198 + 199 + /** 200 + * netlbl_calipso_listall_cb - calipso_doi_walk() callback for LISTALL 201 + * @doi_def: the CALIPSO DOI definition 202 + * @arg: the netlbl_calipso_doiwalk_arg structure 203 + * 204 + * Description: 205 + * This function is designed to be used as a callback to the 206 + * calipso_doi_walk() function for use in generating a response for a LISTALL 207 + * message. Returns the size of the message on success, negative values on 208 + * failure. 
209 + * 210 + */ 211 + static int netlbl_calipso_listall_cb(struct calipso_doi *doi_def, void *arg) 212 + { 213 + int ret_val = -ENOMEM; 214 + struct netlbl_calipso_doiwalk_arg *cb_arg = arg; 215 + void *data; 216 + 217 + data = genlmsg_put(cb_arg->skb, NETLINK_CB(cb_arg->nl_cb->skb).portid, 218 + cb_arg->seq, &netlbl_calipso_gnl_family, 219 + NLM_F_MULTI, NLBL_CALIPSO_C_LISTALL); 220 + if (!data) 221 + goto listall_cb_failure; 222 + 223 + ret_val = nla_put_u32(cb_arg->skb, NLBL_CALIPSO_A_DOI, doi_def->doi); 224 + if (ret_val != 0) 225 + goto listall_cb_failure; 226 + ret_val = nla_put_u32(cb_arg->skb, 227 + NLBL_CALIPSO_A_MTYPE, 228 + doi_def->type); 229 + if (ret_val != 0) 230 + goto listall_cb_failure; 231 + 232 + genlmsg_end(cb_arg->skb, data); 233 + return 0; 234 + 235 + listall_cb_failure: 236 + genlmsg_cancel(cb_arg->skb, data); 237 + return ret_val; 238 + } 239 + 240 + /** 241 + * netlbl_calipso_listall - Handle a LISTALL message 242 + * @skb: the NETLINK buffer 243 + * @cb: the NETLINK callback 244 + * 245 + * Description: 246 + * Process a user generated LISTALL message and respond accordingly. Returns 247 + * zero on success and negative values on error. 
248 + * 249 + */ 250 + static int netlbl_calipso_listall(struct sk_buff *skb, 251 + struct netlink_callback *cb) 252 + { 253 + struct netlbl_calipso_doiwalk_arg cb_arg; 254 + u32 doi_skip = cb->args[0]; 255 + 256 + cb_arg.nl_cb = cb; 257 + cb_arg.skb = skb; 258 + cb_arg.seq = cb->nlh->nlmsg_seq; 259 + 260 + calipso_doi_walk(&doi_skip, netlbl_calipso_listall_cb, &cb_arg); 261 + 262 + cb->args[0] = doi_skip; 263 + return skb->len; 264 + } 265 + 266 + /** 267 + * netlbl_calipso_remove_cb - netlbl_calipso_remove() callback for REMOVE 268 + * @entry: LSM domain mapping entry 269 + * @arg: the netlbl_domhsh_walk_arg structure 270 + * 271 + * Description: 272 + * This function is intended for use by netlbl_calipso_remove() as the callback 273 + * for the netlbl_domhsh_walk() function; it removes LSM domain map entries 274 + * which are associated with the CALIPSO DOI specified in @arg. Returns zero on 275 + * success, negative values on failure. 276 + * 277 + */ 278 + static int netlbl_calipso_remove_cb(struct netlbl_dom_map *entry, void *arg) 279 + { 280 + struct netlbl_domhsh_walk_arg *cb_arg = arg; 281 + 282 + if (entry->def.type == NETLBL_NLTYPE_CALIPSO && 283 + entry->def.calipso->doi == cb_arg->doi) 284 + return netlbl_domhsh_remove_entry(entry, cb_arg->audit_info); 285 + 286 + return 0; 287 + } 288 + 289 + /** 290 + * netlbl_calipso_remove - Handle a REMOVE message 291 + * @skb: the NETLINK buffer 292 + * @info: the Generic NETLINK info block 293 + * 294 + * Description: 295 + * Process a user generated REMOVE message and respond accordingly. Returns 296 + * zero on success, negative values on failure. 
297 + * 298 + */ 299 + static int netlbl_calipso_remove(struct sk_buff *skb, struct genl_info *info) 300 + { 301 + int ret_val = -EINVAL; 302 + struct netlbl_domhsh_walk_arg cb_arg; 303 + struct netlbl_audit audit_info; 304 + u32 skip_bkt = 0; 305 + u32 skip_chain = 0; 306 + 307 + if (!info->attrs[NLBL_CALIPSO_A_DOI]) 308 + return -EINVAL; 309 + 310 + netlbl_netlink_auditinfo(skb, &audit_info); 311 + cb_arg.doi = nla_get_u32(info->attrs[NLBL_CALIPSO_A_DOI]); 312 + cb_arg.audit_info = &audit_info; 313 + ret_val = netlbl_domhsh_walk(&skip_bkt, &skip_chain, 314 + netlbl_calipso_remove_cb, &cb_arg); 315 + if (ret_val == 0 || ret_val == -ENOENT) { 316 + ret_val = calipso_doi_remove(cb_arg.doi, &audit_info); 317 + if (ret_val == 0) 318 + atomic_dec(&netlabel_mgmt_protocount); 319 + } 320 + 321 + return ret_val; 322 + } 323 + 324 + /* NetLabel Generic NETLINK Command Definitions 325 + */ 326 + 327 + static const struct genl_ops netlbl_calipso_ops[] = { 328 + { 329 + .cmd = NLBL_CALIPSO_C_ADD, 330 + .flags = GENL_ADMIN_PERM, 331 + .policy = calipso_genl_policy, 332 + .doit = netlbl_calipso_add, 333 + .dumpit = NULL, 334 + }, 335 + { 336 + .cmd = NLBL_CALIPSO_C_REMOVE, 337 + .flags = GENL_ADMIN_PERM, 338 + .policy = calipso_genl_policy, 339 + .doit = netlbl_calipso_remove, 340 + .dumpit = NULL, 341 + }, 342 + { 343 + .cmd = NLBL_CALIPSO_C_LIST, 344 + .flags = 0, 345 + .policy = calipso_genl_policy, 346 + .doit = netlbl_calipso_list, 347 + .dumpit = NULL, 348 + }, 349 + { 350 + .cmd = NLBL_CALIPSO_C_LISTALL, 351 + .flags = 0, 352 + .policy = calipso_genl_policy, 353 + .doit = NULL, 354 + .dumpit = netlbl_calipso_listall, 355 + }, 356 + }; 357 + 358 + /* NetLabel Generic NETLINK Protocol Functions 359 + */ 360 + 361 + /** 362 + * netlbl_calipso_genl_init - Register the CALIPSO NetLabel component 363 + * 364 + * Description: 365 + * Register the CALIPSO packet NetLabel component with the Generic NETLINK 366 + * mechanism. Returns zero on success, negative values on failure. 
367 + * 368 + */ 369 + int __init netlbl_calipso_genl_init(void) 370 + { 371 + return genl_register_family_with_ops(&netlbl_calipso_gnl_family, 372 + netlbl_calipso_ops); 373 + } 374 + 375 + static const struct netlbl_calipso_ops *calipso_ops; 376 + 377 + /** 378 + * netlbl_calipso_ops_register - Register the CALIPSO operations 379 + * 380 + * Description: 381 + * Register the CALIPSO packet engine operations. 382 + * 383 + */ 384 + const struct netlbl_calipso_ops * 385 + netlbl_calipso_ops_register(const struct netlbl_calipso_ops *ops) 386 + { 387 + return xchg(&calipso_ops, ops); 388 + } 389 + EXPORT_SYMBOL(netlbl_calipso_ops_register); 390 + 391 + static const struct netlbl_calipso_ops *netlbl_calipso_ops_get(void) 392 + { 393 + return ACCESS_ONCE(calipso_ops); 394 + } 395 + 396 + /** 397 + * calipso_doi_add - Add a new DOI to the CALIPSO protocol engine 398 + * @doi_def: the DOI structure 399 + * @audit_info: NetLabel audit information 400 + * 401 + * Description: 402 + * The caller defines a new DOI for use by the CALIPSO engine and calls this 403 + * function to add it to the list of acceptable domains. The caller must 404 + * ensure that the mapping table specified in @doi_def->map meets all of the 405 + * requirements of the mapping type (see calipso.h for details). Returns 406 + * zero on success and non-zero on failure. 407 + * 408 + */ 409 + int calipso_doi_add(struct calipso_doi *doi_def, 410 + struct netlbl_audit *audit_info) 411 + { 412 + int ret_val = -ENOMSG; 413 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 414 + 415 + if (ops) 416 + ret_val = ops->doi_add(doi_def, audit_info); 417 + return ret_val; 418 + } 419 + 420 + /** 421 + * calipso_doi_free - Frees a DOI definition 422 + * @doi_def: the DOI definition 423 + * 424 + * Description: 425 + * This function frees all of the memory associated with a DOI definition. 
426 + * 427 + */ 428 + void calipso_doi_free(struct calipso_doi *doi_def) 429 + { 430 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 431 + 432 + if (ops) 433 + ops->doi_free(doi_def); 434 + } 435 + 436 + /** 437 + * calipso_doi_remove - Remove an existing DOI from the CALIPSO protocol engine 438 + * @doi: the DOI value 439 + * @audit_info: NetLabel audit information 440 + * 441 + * Description: 442 + * Removes a DOI definition from the CALIPSO engine. The NetLabel routines will 443 + * be called to release their own LSM domain mappings as well as our own 444 + * domain list. Returns zero on success and negative values on failure. 445 + * 446 + */ 447 + int calipso_doi_remove(u32 doi, struct netlbl_audit *audit_info) 448 + { 449 + int ret_val = -ENOMSG; 450 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 451 + 452 + if (ops) 453 + ret_val = ops->doi_remove(doi, audit_info); 454 + return ret_val; 455 + } 456 + 457 + /** 458 + * calipso_doi_getdef - Returns a reference to a valid DOI definition 459 + * @doi: the DOI value 460 + * 461 + * Description: 462 + * Searches for a valid DOI definition and if one is found it is returned to 463 + * the caller. Otherwise NULL is returned. The caller must ensure that 464 + * calipso_doi_putdef() is called when the caller is done. 465 + * 466 + */ 467 + struct calipso_doi *calipso_doi_getdef(u32 doi) 468 + { 469 + struct calipso_doi *ret_val = NULL; 470 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 471 + 472 + if (ops) 473 + ret_val = ops->doi_getdef(doi); 474 + return ret_val; 475 + } 476 + 477 + /** 478 + * calipso_doi_putdef - Releases a reference for the given DOI definition 479 + * @doi_def: the DOI definition 480 + * 481 + * Description: 482 + * Releases a DOI definition reference obtained from calipso_doi_getdef(). 
483 + * 484 + */ 485 + void calipso_doi_putdef(struct calipso_doi *doi_def) 486 + { 487 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 488 + 489 + if (ops) 490 + ops->doi_putdef(doi_def); 491 + } 492 + 493 + /** 494 + * calipso_doi_walk - Iterate through the DOI definitions 495 + * @skip_cnt: skip past this number of DOI definitions, updated 496 + * @callback: callback for each DOI definition 497 + * @cb_arg: argument for the callback function 498 + * 499 + * Description: 500 + * Iterate over the DOI definition list, skipping the first @skip_cnt entries. 501 + * For each entry call @callback, if @callback returns a negative value stop 502 + * 'walking' through the list and return. Updates the value in @skip_cnt upon 503 + * return. Returns zero on success, negative values on failure. 504 + * 505 + */ 506 + int calipso_doi_walk(u32 *skip_cnt, 507 + int (*callback)(struct calipso_doi *doi_def, void *arg), 508 + void *cb_arg) 509 + { 510 + int ret_val = -ENOMSG; 511 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 512 + 513 + if (ops) 514 + ret_val = ops->doi_walk(skip_cnt, callback, cb_arg); 515 + return ret_val; 516 + } 517 + 518 + /** 519 + * calipso_sock_getattr - Get the security attributes from a sock 520 + * @sk: the sock 521 + * @secattr: the security attributes 522 + * 523 + * Description: 524 + * Query @sk to see if there is a CALIPSO option attached to the sock and if 525 + * there is return the CALIPSO security attributes in @secattr. This function 526 + * requires that @sk be locked, or privately held, but it does not do any 527 + * locking itself. Returns zero on success and negative values on failure. 
528 + * 529 + */ 530 + int calipso_sock_getattr(struct sock *sk, struct netlbl_lsm_secattr *secattr) 531 + { 532 + int ret_val = -ENOMSG; 533 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 534 + 535 + if (ops) 536 + ret_val = ops->sock_getattr(sk, secattr); 537 + return ret_val; 538 + } 539 + 540 + /** 541 + * calipso_sock_setattr - Add a CALIPSO option to a socket 542 + * @sk: the socket 543 + * @doi_def: the CALIPSO DOI to use 544 + * @secattr: the specific security attributes of the socket 545 + * 546 + * Description: 547 + * Set the CALIPSO option on the given socket using the DOI definition and 548 + * security attributes passed to the function. This function requires 549 + * exclusive access to @sk, which means it either needs to be in the 550 + * process of being created or locked. Returns zero on success and negative 551 + * values on failure. 552 + * 553 + */ 554 + int calipso_sock_setattr(struct sock *sk, 555 + const struct calipso_doi *doi_def, 556 + const struct netlbl_lsm_secattr *secattr) 557 + { 558 + int ret_val = -ENOMSG; 559 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 560 + 561 + if (ops) 562 + ret_val = ops->sock_setattr(sk, doi_def, secattr); 563 + return ret_val; 564 + } 565 + 566 + /** 567 + * calipso_sock_delattr - Delete the CALIPSO option from a socket 568 + * @sk: the socket 569 + * 570 + * Description: 571 + * Removes the CALIPSO option from a socket, if present. 
572 + * 573 + */ 574 + void calipso_sock_delattr(struct sock *sk) 575 + { 576 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 577 + 578 + if (ops) 579 + ops->sock_delattr(sk); 580 + } 581 + 582 + /** 583 + * calipso_req_setattr - Add a CALIPSO option to a connection request socket 584 + * @req: the connection request socket 585 + * @doi_def: the CALIPSO DOI to use 586 + * @secattr: the specific security attributes of the socket 587 + * 588 + * Description: 589 + * Set the CALIPSO option on the given socket using the DOI definition and 590 + * security attributes passed to the function. Returns zero on success and 591 + * negative values on failure. 592 + * 593 + */ 594 + int calipso_req_setattr(struct request_sock *req, 595 + const struct calipso_doi *doi_def, 596 + const struct netlbl_lsm_secattr *secattr) 597 + { 598 + int ret_val = -ENOMSG; 599 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 600 + 601 + if (ops) 602 + ret_val = ops->req_setattr(req, doi_def, secattr); 603 + return ret_val; 604 + } 605 + 606 + /** 607 + * calipso_req_delattr - Delete the CALIPSO option from a request socket 608 + * @req: the request socket 609 + * 610 + * Description: 611 + * Removes the CALIPSO option from a request socket, if present. 612 + * 613 + */ 614 + void calipso_req_delattr(struct request_sock *req) 615 + { 616 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 617 + 618 + if (ops) 619 + ops->req_delattr(req); 620 + } 621 + 622 + /** 623 + * calipso_optptr - Find the CALIPSO option in the packet 624 + * @skb: the packet 625 + * 626 + * Description: 627 + * Parse the packet's IP header looking for a CALIPSO option. Returns a pointer 628 + * to the start of the CALIPSO option on success, NULL if one is not found. 
629 + * 630 + */ 631 + unsigned char *calipso_optptr(const struct sk_buff *skb) 632 + { 633 + unsigned char *ret_val = NULL; 634 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 635 + 636 + if (ops) 637 + ret_val = ops->skbuff_optptr(skb); 638 + return ret_val; 639 + } 640 + 641 + /** 642 + * calipso_getattr - Get the security attributes from a memory block. 643 + * @calipso: the CALIPSO option 644 + * @secattr: the security attributes 645 + * 646 + * Description: 647 + * Inspect @calipso and return the security attributes in @secattr. 648 + * Returns zero on success and negative values on failure. 649 + * 650 + */ 651 + int calipso_getattr(const unsigned char *calipso, 652 + struct netlbl_lsm_secattr *secattr) 653 + { 654 + int ret_val = -ENOMSG; 655 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 656 + 657 + if (ops) 658 + ret_val = ops->opt_getattr(calipso, secattr); 659 + return ret_val; 660 + } 661 + 662 + /** 663 + * calipso_skbuff_setattr - Set the CALIPSO option on a packet 664 + * @skb: the packet 665 + * @doi_def: the CALIPSO DOI to use 666 + * @secattr: the security attributes 667 + * 668 + * Description: 669 + * Set the CALIPSO option on the given packet based on the security attributes. 670 + * Returns zero on success and negative values on failure. 671 + * 672 + */ 673 + int calipso_skbuff_setattr(struct sk_buff *skb, 674 + const struct calipso_doi *doi_def, 675 + const struct netlbl_lsm_secattr *secattr) 676 + { 677 + int ret_val = -ENOMSG; 678 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 679 + 680 + if (ops) 681 + ret_val = ops->skbuff_setattr(skb, doi_def, secattr); 682 + return ret_val; 683 + } 684 + 685 + /** 686 + * calipso_skbuff_delattr - Delete any CALIPSO options from a packet 687 + * @skb: the packet 688 + * 689 + * Description: 690 + * Removes any and all CALIPSO options from the given packet. Returns zero on 691 + * success, negative values on failure. 
692 + * 693 + */ 694 + int calipso_skbuff_delattr(struct sk_buff *skb) 695 + { 696 + int ret_val = -ENOMSG; 697 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 698 + 699 + if (ops) 700 + ret_val = ops->skbuff_delattr(skb); 701 + return ret_val; 702 + } 703 + 704 + /** 705 + * calipso_cache_invalidate - Invalidates the current CALIPSO cache 706 + * 707 + * Description: 708 + * Invalidates and frees any entries in the CALIPSO cache. 709 + * 710 + * 711 + */ 712 + void calipso_cache_invalidate(void) 713 + { 714 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 715 + 716 + if (ops) 717 + ops->cache_invalidate(); 718 + } 719 + 720 + /** 721 + * calipso_cache_add - Add an entry to the CALIPSO cache 722 + * @calipso_ptr: the CALIPSO option 723 + * @secattr: the packet's security attributes 724 + * 725 + * Description: 726 + * Add a new entry into the CALIPSO label mapping cache. 727 + * Returns zero on success, negative values on failure. 728 + * 729 + */ 730 + int calipso_cache_add(const unsigned char *calipso_ptr, 731 + const struct netlbl_lsm_secattr *secattr) 732 + 733 + { 734 + int ret_val = -ENOMSG; 735 + const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get(); 736 + 737 + if (ops) 738 + ret_val = ops->cache_add(calipso_ptr, secattr); 739 + return ret_val; 740 + }
+151
net/netlabel/netlabel_calipso.h
··· 1 + /* 2 + * NetLabel CALIPSO Support 3 + * 4 + * This file defines the CALIPSO functions for the NetLabel system. The 5 + * NetLabel system manages static and dynamic label mappings for network 6 + * protocols such as CIPSO and RIPSO. 7 + * 8 + * Authors: Paul Moore <paul@paul-moore.com> 9 + * Huw Davies <huw@codeweavers.com> 10 + * 11 + */ 12 + 13 + /* (c) Copyright Hewlett-Packard Development Company, L.P., 2006 14 + * (c) Copyright Huw Davies <huw@codeweavers.com>, 2015 15 + * 16 + * This program is free software; you can redistribute it and/or modify 17 + * it under the terms of the GNU General Public License as published by 18 + * the Free Software Foundation; either version 2 of the License, or 19 + * (at your option) any later version. 20 + * 21 + * This program is distributed in the hope that it will be useful, 22 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 23 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See 24 + * the GNU General Public License for more details. 25 + * 26 + * You should have received a copy of the GNU General Public License 27 + * along with this program; if not, see <http://www.gnu.org/licenses/>. 28 + * 29 + */ 30 + 31 + #ifndef _NETLABEL_CALIPSO 32 + #define _NETLABEL_CALIPSO 33 + 34 + #include <net/netlabel.h> 35 + #include <net/calipso.h> 36 + 37 + /* The following NetLabel payloads are supported by the CALIPSO subsystem. 38 + * 39 + * o ADD: 40 + * Sent by an application to add a new DOI mapping table. 41 + * 42 + * Required attributes: 43 + * 44 + * NLBL_CALIPSO_A_DOI 45 + * NLBL_CALIPSO_A_MTYPE 46 + * 47 + * If using CALIPSO_MAP_PASS no additional attributes are required. 48 + * 49 + * o REMOVE: 50 + * Sent by an application to remove a specific DOI mapping table from the 51 + * CALIPSO system. 52 + * 53 + * Required attributes: 54 + * 55 + * NLBL_CALIPSO_A_DOI 56 + * 57 + * o LIST: 58 + * Sent by an application to list the details of a DOI definition. 
On 59 + * success the kernel should send a response using the following format. 60 + * 61 + * Required attributes: 62 + * 63 + * NLBL_CALIPSO_A_DOI 64 + * 65 + * The valid response message format depends on the type of the DOI mapping, 66 + * the defined formats are shown below. 67 + * 68 + * Required attributes: 69 + * 70 + * NLBL_CALIPSO_A_MTYPE 71 + * 72 + * If using CALIPSO_MAP_PASS no additional attributes are required. 73 + * 74 + * o LISTALL: 75 + * This message is sent by an application to list the valid DOIs on the 76 + * system. When sent by an application there is no payload and the 77 + * NLM_F_DUMP flag should be set. The kernel should respond with a series of 78 + * the following messages. 79 + * 80 + * Required attributes: 81 + * 82 + * NLBL_CALIPSO_A_DOI 83 + * NLBL_CALIPSO_A_MTYPE 84 + * 85 + */ 86 + 87 + /* NetLabel CALIPSO commands */ 88 + enum { 89 + NLBL_CALIPSO_C_UNSPEC, 90 + NLBL_CALIPSO_C_ADD, 91 + NLBL_CALIPSO_C_REMOVE, 92 + NLBL_CALIPSO_C_LIST, 93 + NLBL_CALIPSO_C_LISTALL, 94 + __NLBL_CALIPSO_C_MAX, 95 + }; 96 + 97 + /* NetLabel CALIPSO attributes */ 98 + enum { 99 + NLBL_CALIPSO_A_UNSPEC, 100 + NLBL_CALIPSO_A_DOI, 101 + /* (NLA_U32) 102 + * the DOI value */ 103 + NLBL_CALIPSO_A_MTYPE, 104 + /* (NLA_U32) 105 + * the mapping table type (defined in the calipso.h header as 106 + * CALIPSO_MAP_*) */ 107 + __NLBL_CALIPSO_A_MAX, 108 + }; 109 + 110 + #define NLBL_CALIPSO_A_MAX (__NLBL_CALIPSO_A_MAX - 1) 111 + 112 + /* NetLabel protocol functions */ 113 + #if IS_ENABLED(CONFIG_IPV6) 114 + int netlbl_calipso_genl_init(void); 115 + #else 116 + static inline int netlbl_calipso_genl_init(void) 117 + { 118 + return 0; 119 + } 120 + #endif 121 + 122 + int calipso_doi_add(struct calipso_doi *doi_def, 123 + struct netlbl_audit *audit_info); 124 + void calipso_doi_free(struct calipso_doi *doi_def); 125 + int calipso_doi_remove(u32 doi, struct netlbl_audit *audit_info); 126 + struct calipso_doi *calipso_doi_getdef(u32 doi); 127 + void 
calipso_doi_putdef(struct calipso_doi *doi_def); 128 + int calipso_doi_walk(u32 *skip_cnt, 129 + int (*callback)(struct calipso_doi *doi_def, void *arg), 130 + void *cb_arg); 131 + int calipso_sock_getattr(struct sock *sk, struct netlbl_lsm_secattr *secattr); 132 + int calipso_sock_setattr(struct sock *sk, 133 + const struct calipso_doi *doi_def, 134 + const struct netlbl_lsm_secattr *secattr); 135 + void calipso_sock_delattr(struct sock *sk); 136 + int calipso_req_setattr(struct request_sock *req, 137 + const struct calipso_doi *doi_def, 138 + const struct netlbl_lsm_secattr *secattr); 139 + void calipso_req_delattr(struct request_sock *req); 140 + unsigned char *calipso_optptr(const struct sk_buff *skb); 141 + int calipso_getattr(const unsigned char *calipso, 142 + struct netlbl_lsm_secattr *secattr); 143 + int calipso_skbuff_setattr(struct sk_buff *skb, 144 + const struct calipso_doi *doi_def, 145 + const struct netlbl_lsm_secattr *secattr); 146 + int calipso_skbuff_delattr(struct sk_buff *skb); 147 + void calipso_cache_invalidate(void); 148 + int calipso_cache_add(const unsigned char *calipso_ptr, 149 + const struct netlbl_lsm_secattr *secattr); 150 + 151 + #endif
+244 -49
net/netlabel/netlabel_domainhash.c
··· 37 37 #include <linux/slab.h> 38 38 #include <net/netlabel.h> 39 39 #include <net/cipso_ipv4.h> 40 + #include <net/calipso.h> 40 41 #include <asm/bug.h> 41 42 42 43 #include "netlabel_mgmt.h" 43 44 #include "netlabel_addrlist.h" 45 + #include "netlabel_calipso.h" 44 46 #include "netlabel_domainhash.h" 45 47 #include "netlabel_user.h" 46 48 ··· 57 55 static DEFINE_SPINLOCK(netlbl_domhsh_lock); 58 56 #define netlbl_domhsh_rcu_deref(p) \ 59 57 rcu_dereference_check(p, lockdep_is_held(&netlbl_domhsh_lock)) 60 - static struct netlbl_domhsh_tbl *netlbl_domhsh; 61 - static struct netlbl_dom_map *netlbl_domhsh_def; 58 + static struct netlbl_domhsh_tbl __rcu *netlbl_domhsh; 59 + static struct netlbl_dom_map __rcu *netlbl_domhsh_def_ipv4; 60 + static struct netlbl_dom_map __rcu *netlbl_domhsh_def_ipv6; 62 61 63 62 /* 64 63 * Domain Hash Table Helper Functions ··· 129 126 return val & (netlbl_domhsh_rcu_deref(netlbl_domhsh)->size - 1); 130 127 } 131 128 129 + static bool netlbl_family_match(u16 f1, u16 f2) 130 + { 131 + return (f1 == f2) || (f1 == AF_UNSPEC) || (f2 == AF_UNSPEC); 132 + } 133 + 132 134 /** 133 135 * netlbl_domhsh_search - Search for a domain entry 134 136 * @domain: the domain 137 + * @family: the address family 135 138 * 136 139 * Description: 137 140 * Searches the domain hash table and returns a pointer to the hash table 138 - * entry if found, otherwise NULL is returned. The caller is responsible for 141 + * entry if found, otherwise NULL is returned. @family may be %AF_UNSPEC 142 + * which matches any address family entries. The caller is responsible for 139 143 * ensuring that the hash table is protected with either a RCU read lock or the 140 144 * hash table lock. 
141 145 * 142 146 */ 143 - static struct netlbl_dom_map *netlbl_domhsh_search(const char *domain) 147 + static struct netlbl_dom_map *netlbl_domhsh_search(const char *domain, 148 + u16 family) 144 149 { 145 150 u32 bkt; 146 151 struct list_head *bkt_list; ··· 158 147 bkt = netlbl_domhsh_hash(domain); 159 148 bkt_list = &netlbl_domhsh_rcu_deref(netlbl_domhsh)->tbl[bkt]; 160 149 list_for_each_entry_rcu(iter, bkt_list, list) 161 - if (iter->valid && strcmp(iter->domain, domain) == 0) 150 + if (iter->valid && 151 + netlbl_family_match(iter->family, family) && 152 + strcmp(iter->domain, domain) == 0) 162 153 return iter; 163 154 } 164 155 ··· 170 157 /** 171 158 * netlbl_domhsh_search_def - Search for a domain entry 172 159 * @domain: the domain 173 - * @def: return default if no match is found 160 + * @family: the address family 174 161 * 175 162 * Description: 176 163 * Searches the domain hash table and returns a pointer to the hash table 177 164 * entry if an exact match is found, if an exact match is not present in the 178 165 * hash table then the default entry is returned if valid otherwise NULL is 179 - * returned. The caller is responsible ensuring that the hash table is 166 + * returned. @family may be %AF_UNSPEC which matches any address family 167 + * entries. The caller is responsible for ensuring that the hash table is 180 168 * protected with either a RCU read lock or the hash table lock. 
181 169 * 182 170 */ 183 - static struct netlbl_dom_map *netlbl_domhsh_search_def(const char *domain) 171 + static struct netlbl_dom_map *netlbl_domhsh_search_def(const char *domain, 172 + u16 family) 184 173 { 185 174 struct netlbl_dom_map *entry; 186 175 187 - entry = netlbl_domhsh_search(domain); 188 - if (entry == NULL) { 189 - entry = netlbl_domhsh_rcu_deref(netlbl_domhsh_def); 190 - if (entry != NULL && !entry->valid) 191 - entry = NULL; 176 + entry = netlbl_domhsh_search(domain, family); 177 + if (entry != NULL) 178 + return entry; 179 + if (family == AF_INET || family == AF_UNSPEC) { 180 + entry = netlbl_domhsh_rcu_deref(netlbl_domhsh_def_ipv4); 181 + if (entry != NULL && entry->valid) 182 + return entry; 183 + } 184 + if (family == AF_INET6 || family == AF_UNSPEC) { 185 + entry = netlbl_domhsh_rcu_deref(netlbl_domhsh_def_ipv6); 186 + if (entry != NULL && entry->valid) 187 + return entry; 192 188 } 193 189 194 - return entry; 190 + return NULL; 195 191 } 196 192 197 193 /** ··· 225 203 { 226 204 struct audit_buffer *audit_buf; 227 205 struct cipso_v4_doi *cipsov4 = NULL; 206 + struct calipso_doi *calipso = NULL; 228 207 u32 type; 229 208 230 209 audit_buf = netlbl_audit_start_common(AUDIT_MAC_MAP_ADD, audit_info); ··· 244 221 struct netlbl_domaddr6_map *map6; 245 222 map6 = netlbl_domhsh_addr6_entry(addr6); 246 223 type = map6->def.type; 224 + calipso = map6->def.calipso; 247 225 netlbl_af6list_audit_addr(audit_buf, 0, NULL, 248 226 &addr6->addr, &addr6->mask); 249 227 #endif /* IPv6 */ 250 228 } else { 251 229 type = entry->def.type; 252 230 cipsov4 = entry->def.cipso; 231 + calipso = entry->def.calipso; 253 232 } 254 233 switch (type) { 255 234 case NETLBL_NLTYPE_UNLABELED: ··· 262 237 audit_log_format(audit_buf, 263 238 " nlbl_protocol=cipsov4 cipso_doi=%u", 264 239 cipsov4->doi); 240 + break; 241 + case NETLBL_NLTYPE_CALIPSO: 242 + BUG_ON(calipso == NULL); 243 + audit_log_format(audit_buf, 244 + " nlbl_protocol=calipso calipso_doi=%u", 245 + 
calipso->doi); 265 246 break; 266 247 } 267 248 audit_log_format(audit_buf, " res=%u", result == 0 ? 1 : 0); ··· 295 264 if (entry == NULL) 296 265 return -EINVAL; 297 266 267 + if (entry->family != AF_INET && entry->family != AF_INET6 && 268 + (entry->family != AF_UNSPEC || 269 + entry->def.type != NETLBL_NLTYPE_UNLABELED)) 270 + return -EINVAL; 271 + 298 272 switch (entry->def.type) { 299 273 case NETLBL_NLTYPE_UNLABELED: 300 - if (entry->def.cipso != NULL || entry->def.addrsel != NULL) 274 + if (entry->def.cipso != NULL || entry->def.calipso != NULL || 275 + entry->def.addrsel != NULL) 301 276 return -EINVAL; 302 277 break; 303 278 case NETLBL_NLTYPE_CIPSOV4: 304 - if (entry->def.cipso == NULL) 279 + if (entry->family != AF_INET || 280 + entry->def.cipso == NULL) 281 + return -EINVAL; 282 + break; 283 + case NETLBL_NLTYPE_CALIPSO: 284 + if (entry->family != AF_INET6 || 285 + entry->def.calipso == NULL) 305 286 return -EINVAL; 306 287 break; 307 288 case NETLBL_NLTYPE_ADDRSELECT: ··· 337 294 map6 = netlbl_domhsh_addr6_entry(iter6); 338 295 switch (map6->def.type) { 339 296 case NETLBL_NLTYPE_UNLABELED: 297 + if (map6->def.calipso != NULL) 298 + return -EINVAL; 299 + break; 300 + case NETLBL_NLTYPE_CALIPSO: 301 + if (map6->def.calipso == NULL) 302 + return -EINVAL; 340 303 break; 341 304 default: 342 305 return -EINVAL; ··· 407 358 * 408 359 * Description: 409 360 * Adds a new entry to the domain hash table and handles any updates to the 410 - * lower level protocol handler (i.e. CIPSO). Returns zero on success, 411 - * negative on failure. 361 + * lower level protocol handler (i.e. CIPSO). @entry->family may be set to 362 + * %AF_UNSPEC which will add an entry that matches all address families. This 363 + * is only useful for the unlabelled type and will only succeed if there is no 364 + * existing entry for any address family with the same domain. Returns zero 365 + * on success, negative on failure. 
412 366 * 413 367 */ 414 368 int netlbl_domhsh_add(struct netlbl_dom_map *entry, 415 369 struct netlbl_audit *audit_info) 416 370 { 417 371 int ret_val = 0; 418 - struct netlbl_dom_map *entry_old; 372 + struct netlbl_dom_map *entry_old, *entry_b; 419 373 struct netlbl_af4list *iter4; 420 374 struct netlbl_af4list *tmp4; 421 375 #if IS_ENABLED(CONFIG_IPV6) ··· 437 385 rcu_read_lock(); 438 386 spin_lock(&netlbl_domhsh_lock); 439 387 if (entry->domain != NULL) 440 - entry_old = netlbl_domhsh_search(entry->domain); 388 + entry_old = netlbl_domhsh_search(entry->domain, entry->family); 441 389 else 442 - entry_old = netlbl_domhsh_search_def(entry->domain); 390 + entry_old = netlbl_domhsh_search_def(entry->domain, 391 + entry->family); 443 392 if (entry_old == NULL) { 444 393 entry->valid = 1; 445 394 ··· 450 397 &rcu_dereference(netlbl_domhsh)->tbl[bkt]); 451 398 } else { 452 399 INIT_LIST_HEAD(&entry->list); 453 - rcu_assign_pointer(netlbl_domhsh_def, entry); 400 + switch (entry->family) { 401 + case AF_INET: 402 + rcu_assign_pointer(netlbl_domhsh_def_ipv4, 403 + entry); 404 + break; 405 + case AF_INET6: 406 + rcu_assign_pointer(netlbl_domhsh_def_ipv6, 407 + entry); 408 + break; 409 + case AF_UNSPEC: 410 + if (entry->def.type != 411 + NETLBL_NLTYPE_UNLABELED) { 412 + ret_val = -EINVAL; 413 + goto add_return; 414 + } 415 + entry_b = kzalloc(sizeof(*entry_b), GFP_ATOMIC); 416 + if (entry_b == NULL) { 417 + ret_val = -ENOMEM; 418 + goto add_return; 419 + } 420 + entry_b->family = AF_INET6; 421 + entry_b->def.type = NETLBL_NLTYPE_UNLABELED; 422 + entry_b->valid = 1; 423 + entry->family = AF_INET; 424 + rcu_assign_pointer(netlbl_domhsh_def_ipv4, 425 + entry); 426 + rcu_assign_pointer(netlbl_domhsh_def_ipv6, 427 + entry_b); 428 + break; 429 + default: 430 + /* Already checked in 431 + * netlbl_domhsh_validate(). 
*/ 432 + ret_val = -EINVAL; 433 + goto add_return; 434 + } 454 435 } 455 436 456 437 if (entry->def.type == NETLBL_NLTYPE_ADDRSELECT) { ··· 600 513 spin_lock(&netlbl_domhsh_lock); 601 514 if (entry->valid) { 602 515 entry->valid = 0; 603 - if (entry != rcu_dereference(netlbl_domhsh_def)) 604 - list_del_rcu(&entry->list); 516 + if (entry == rcu_dereference(netlbl_domhsh_def_ipv4)) 517 + RCU_INIT_POINTER(netlbl_domhsh_def_ipv4, NULL); 518 + else if (entry == rcu_dereference(netlbl_domhsh_def_ipv6)) 519 + RCU_INIT_POINTER(netlbl_domhsh_def_ipv6, NULL); 605 520 else 606 - RCU_INIT_POINTER(netlbl_domhsh_def, NULL); 521 + list_del_rcu(&entry->list); 607 522 } else 608 523 ret_val = -ENOENT; 609 524 spin_unlock(&netlbl_domhsh_lock); ··· 622 533 if (ret_val == 0) { 623 534 struct netlbl_af4list *iter4; 624 535 struct netlbl_domaddr4_map *map4; 536 + #if IS_ENABLED(CONFIG_IPV6) 537 + struct netlbl_af6list *iter6; 538 + struct netlbl_domaddr6_map *map6; 539 + #endif /* IPv6 */ 625 540 626 541 switch (entry->def.type) { 627 542 case NETLBL_NLTYPE_ADDRSELECT: ··· 634 541 map4 = netlbl_domhsh_addr4_entry(iter4); 635 542 cipso_v4_doi_putdef(map4->def.cipso); 636 543 } 637 - /* no need to check the IPv6 list since we currently 638 - * support only unlabeled protocols for IPv6 */ 544 + #if IS_ENABLED(CONFIG_IPV6) 545 + netlbl_af6list_foreach_rcu(iter6, 546 + &entry->def.addrsel->list6) { 547 + map6 = netlbl_domhsh_addr6_entry(iter6); 548 + calipso_doi_putdef(map6->def.calipso); 549 + } 550 + #endif /* IPv6 */ 639 551 break; 640 552 case NETLBL_NLTYPE_CIPSOV4: 641 553 cipso_v4_doi_putdef(entry->def.cipso); 642 554 break; 555 + #if IS_ENABLED(CONFIG_IPV6) 556 + case NETLBL_NLTYPE_CALIPSO: 557 + calipso_doi_putdef(entry->def.calipso); 558 + break; 559 + #endif /* IPv6 */ 643 560 } 644 561 call_rcu(&entry->rcu, netlbl_domhsh_free_entry); 645 562 } ··· 686 583 rcu_read_lock(); 687 584 688 585 if (domain) 689 - entry_map = netlbl_domhsh_search(domain); 586 + entry_map = 
netlbl_domhsh_search(domain, AF_INET); 690 587 else 691 - entry_map = netlbl_domhsh_search_def(domain); 588 + entry_map = netlbl_domhsh_search_def(domain, AF_INET); 692 589 if (entry_map == NULL || 693 590 entry_map->def.type != NETLBL_NLTYPE_ADDRSELECT) 694 591 goto remove_af4_failure; ··· 725 622 return -ENOENT; 726 623 } 727 624 625 + #if IS_ENABLED(CONFIG_IPV6) 626 + /** 627 + * netlbl_domhsh_remove_af6 - Removes an address selector entry 628 + * @domain: the domain 629 + * @addr: IPv6 address 630 + * @mask: IPv6 address mask 631 + * @audit_info: NetLabel audit information 632 + * 633 + * Description: 634 + * Removes an individual address selector from a domain mapping and potentially 635 + * the entire mapping if it is empty. Returns zero on success, negative values 636 + * on failure. 637 + * 638 + */ 639 + int netlbl_domhsh_remove_af6(const char *domain, 640 + const struct in6_addr *addr, 641 + const struct in6_addr *mask, 642 + struct netlbl_audit *audit_info) 643 + { 644 + struct netlbl_dom_map *entry_map; 645 + struct netlbl_af6list *entry_addr; 646 + struct netlbl_af4list *iter4; 647 + struct netlbl_af6list *iter6; 648 + struct netlbl_domaddr6_map *entry; 649 + 650 + rcu_read_lock(); 651 + 652 + if (domain) 653 + entry_map = netlbl_domhsh_search(domain, AF_INET6); 654 + else 655 + entry_map = netlbl_domhsh_search_def(domain, AF_INET6); 656 + if (entry_map == NULL || 657 + entry_map->def.type != NETLBL_NLTYPE_ADDRSELECT) 658 + goto remove_af6_failure; 659 + 660 + spin_lock(&netlbl_domhsh_lock); 661 + entry_addr = netlbl_af6list_remove(addr, mask, 662 + &entry_map->def.addrsel->list6); 663 + spin_unlock(&netlbl_domhsh_lock); 664 + 665 + if (entry_addr == NULL) 666 + goto remove_af6_failure; 667 + netlbl_af4list_foreach_rcu(iter4, &entry_map->def.addrsel->list4) 668 + goto remove_af6_single_addr; 669 + netlbl_af6list_foreach_rcu(iter6, &entry_map->def.addrsel->list6) 670 + goto remove_af6_single_addr; 671 + /* the domain mapping is empty so remove it from 
the mapping table */ 672 + netlbl_domhsh_remove_entry(entry_map, audit_info); 673 + 674 + remove_af6_single_addr: 675 + rcu_read_unlock(); 676 + /* yick, we can't use call_rcu here because we don't have a rcu head 677 + * pointer but hopefully this should be a rare case so the pause 678 + * shouldn't be a problem */ 679 + synchronize_rcu(); 680 + entry = netlbl_domhsh_addr6_entry(entry_addr); 681 + calipso_doi_putdef(entry->def.calipso); 682 + kfree(entry); 683 + return 0; 684 + 685 + remove_af6_failure: 686 + rcu_read_unlock(); 687 + return -ENOENT; 688 + } 689 + #endif /* IPv6 */ 690 + 728 691 /** 729 692 * netlbl_domhsh_remove - Removes an entry from the domain hash table 730 693 * @domain: the domain to remove 694 + * @family: address family 731 695 * @audit_info: NetLabel audit information 732 696 * 733 697 * Description: 734 698 * Removes an entry from the domain hash table and handles any updates to the 735 - * lower level protocol handler (i.e. CIPSO). Returns zero on success, 736 - * negative on failure. 699 + * lower level protocol handler (i.e. CIPSO). @family may be %AF_UNSPEC which 700 + * removes all address family entries. Returns zero on success, negative on 701 + * failure. 
737 702 * 738 703 */ 739 - int netlbl_domhsh_remove(const char *domain, struct netlbl_audit *audit_info) 704 + int netlbl_domhsh_remove(const char *domain, u16 family, 705 + struct netlbl_audit *audit_info) 740 706 { 741 - int ret_val; 707 + int ret_val = -EINVAL; 742 708 struct netlbl_dom_map *entry; 743 709 744 710 rcu_read_lock(); 745 - if (domain) 746 - entry = netlbl_domhsh_search(domain); 747 - else 748 - entry = netlbl_domhsh_search_def(domain); 749 - ret_val = netlbl_domhsh_remove_entry(entry, audit_info); 711 + 712 + if (family == AF_INET || family == AF_UNSPEC) { 713 + if (domain) 714 + entry = netlbl_domhsh_search(domain, AF_INET); 715 + else 716 + entry = netlbl_domhsh_search_def(domain, AF_INET); 717 + ret_val = netlbl_domhsh_remove_entry(entry, audit_info); 718 + if (ret_val && ret_val != -ENOENT) 719 + goto done; 720 + } 721 + if (family == AF_INET6 || family == AF_UNSPEC) { 722 + int ret_val2; 723 + 724 + if (domain) 725 + entry = netlbl_domhsh_search(domain, AF_INET6); 726 + else 727 + entry = netlbl_domhsh_search_def(domain, AF_INET6); 728 + ret_val2 = netlbl_domhsh_remove_entry(entry, audit_info); 729 + if (ret_val2 != -ENOENT) 730 + ret_val = ret_val2; 731 + } 732 + done: 750 733 rcu_read_unlock(); 751 734 752 735 return ret_val; ··· 840 651 841 652 /** 842 653 * netlbl_domhsh_remove_default - Removes the default entry from the table 654 + * @family: address family 843 655 * @audit_info: NetLabel audit information 844 656 * 845 657 * Description: 846 - * Removes/resets the default entry for the domain hash table and handles any 847 - * updates to the lower level protocol handler (i.e. CIPSO). Returns zero on 848 - * success, non-zero on failure. 658 + * Removes/resets the default entry corresponding to @family from the domain 659 + * hash table and handles any updates to the lower level protocol handler 660 + * (i.e. CIPSO). @family may be %AF_UNSPEC which removes all address family 661 + * entries. Returns zero on success, negative on failure. 
849 662 * 850 663 */ 851 - int netlbl_domhsh_remove_default(struct netlbl_audit *audit_info) 664 + int netlbl_domhsh_remove_default(u16 family, struct netlbl_audit *audit_info) 852 665 { 853 - return netlbl_domhsh_remove(NULL, audit_info); 666 + return netlbl_domhsh_remove(NULL, family, audit_info); 854 667 } 855 668 856 669 /** 857 670 * netlbl_domhsh_getentry - Get an entry from the domain hash table 858 671 * @domain: the domain name to search for 672 + * @family: address family 859 673 * 860 674 * Description: 861 675 * Look through the domain hash table searching for an entry to match @domain, 862 - * return a pointer to a copy of the entry or NULL. The caller is responsible 863 - * for ensuring that rcu_read_[un]lock() is called. 676 + * with address family @family, return a pointer to a copy of the entry or 677 + * NULL. The caller is responsible for ensuring that rcu_read_[un]lock() is 678 + * called. 864 679 * 865 680 */ 866 - struct netlbl_dom_map *netlbl_domhsh_getentry(const char *domain) 681 + struct netlbl_dom_map *netlbl_domhsh_getentry(const char *domain, u16 family) 867 682 { 868 - return netlbl_domhsh_search_def(domain); 683 + if (family == AF_UNSPEC) 684 + return NULL; 685 + return netlbl_domhsh_search_def(domain, family); 869 686 } 870 687 871 688 /** ··· 891 696 struct netlbl_dom_map *dom_iter; 892 697 struct netlbl_af4list *addr_iter; 893 698 894 - dom_iter = netlbl_domhsh_search_def(domain); 699 + dom_iter = netlbl_domhsh_search_def(domain, AF_INET); 895 700 if (dom_iter == NULL) 896 701 return NULL; 897 702 ··· 921 726 struct netlbl_dom_map *dom_iter; 922 727 struct netlbl_af6list *addr_iter; 923 728 924 - dom_iter = netlbl_domhsh_search_def(domain); 729 + dom_iter = netlbl_domhsh_search_def(domain, AF_INET6); 925 730 if (dom_iter == NULL) 926 731 return NULL; 927 732
+14 -3
net/netlabel/netlabel_domainhash.h
··· 51 51 union { 52 52 struct netlbl_domaddr_map *addrsel; 53 53 struct cipso_v4_doi *cipso; 54 + struct calipso_doi *calipso; 54 55 }; 55 56 }; 56 57 #define netlbl_domhsh_addr4_entry(iter) \ ··· 71 70 72 71 struct netlbl_dom_map { 73 72 char *domain; 73 + u16 family; 74 74 struct netlbl_dommap_def def; 75 75 76 76 u32 valid; ··· 93 91 const struct in_addr *addr, 94 92 const struct in_addr *mask, 95 93 struct netlbl_audit *audit_info); 96 - int netlbl_domhsh_remove(const char *domain, struct netlbl_audit *audit_info); 97 - int netlbl_domhsh_remove_default(struct netlbl_audit *audit_info); 98 - struct netlbl_dom_map *netlbl_domhsh_getentry(const char *domain); 94 + int netlbl_domhsh_remove_af6(const char *domain, 95 + const struct in6_addr *addr, 96 + const struct in6_addr *mask, 97 + struct netlbl_audit *audit_info); 98 + int netlbl_domhsh_remove(const char *domain, u16 family, 99 + struct netlbl_audit *audit_info); 100 + int netlbl_domhsh_remove_default(u16 family, struct netlbl_audit *audit_info); 101 + struct netlbl_dom_map *netlbl_domhsh_getentry(const char *domain, u16 family); 99 102 struct netlbl_dommap_def *netlbl_domhsh_getentry_af4(const char *domain, 100 103 __be32 addr); 101 104 #if IS_ENABLED(CONFIG_IPV6) 102 105 struct netlbl_dommap_def *netlbl_domhsh_getentry_af6(const char *domain, 103 106 const struct in6_addr *addr); 107 + int netlbl_domhsh_remove_af6(const char *domain, 108 + const struct in6_addr *addr, 109 + const struct in6_addr *mask, 110 + struct netlbl_audit *audit_info); 104 111 #endif /* IPv6 */ 105 112 106 113 int netlbl_domhsh_walk(u32 *skip_bkt,
+358 -36
net/netlabel/netlabel_kapi.c
··· 37 37 #include <net/ipv6.h> 38 38 #include <net/netlabel.h> 39 39 #include <net/cipso_ipv4.h> 40 + #include <net/calipso.h> 40 41 #include <asm/bug.h> 41 42 #include <linux/atomic.h> 42 43 43 44 #include "netlabel_domainhash.h" 44 45 #include "netlabel_unlabeled.h" 45 46 #include "netlabel_cipso_v4.h" 47 + #include "netlabel_calipso.h" 46 48 #include "netlabel_user.h" 47 49 #include "netlabel_mgmt.h" 48 50 #include "netlabel_addrlist.h" ··· 74 72 struct netlbl_audit *audit_info) 75 73 { 76 74 if (addr == NULL && mask == NULL) { 77 - return netlbl_domhsh_remove(domain, audit_info); 75 + return netlbl_domhsh_remove(domain, family, audit_info); 78 76 } else if (addr != NULL && mask != NULL) { 79 77 switch (family) { 80 78 case AF_INET: 81 79 return netlbl_domhsh_remove_af4(domain, addr, mask, 82 80 audit_info); 81 + #if IS_ENABLED(CONFIG_IPV6) 82 + case AF_INET6: 83 + return netlbl_domhsh_remove_af6(domain, addr, mask, 84 + audit_info); 85 + #endif /* IPv6 */ 83 86 default: 84 87 return -EPFNOSUPPORT; 85 88 } ··· 126 119 if (entry->domain == NULL) 127 120 goto cfg_unlbl_map_add_failure; 128 121 } 122 + entry->family = family; 129 123 130 124 if (addr == NULL && mask == NULL) 131 125 entry->def.type = NETLBL_NLTYPE_UNLABELED; ··· 353 345 entry = kzalloc(sizeof(*entry), GFP_ATOMIC); 354 346 if (entry == NULL) 355 347 goto out_entry; 348 + entry->family = AF_INET; 356 349 if (domain != NULL) { 357 350 entry->domain = kstrdup(domain, GFP_ATOMIC); 358 351 if (entry->domain == NULL) ··· 406 397 out_entry: 407 398 cipso_v4_doi_putdef(doi_def); 408 399 return ret_val; 400 + } 401 + 402 + /** 403 + * netlbl_cfg_calipso_add - Add a new CALIPSO DOI definition 404 + * @doi_def: CALIPSO DOI definition 405 + * @audit_info: NetLabel audit information 406 + * 407 + * Description: 408 + * Add a new CALIPSO DOI definition as defined by @doi_def. Returns zero on 409 + * success and negative values on failure. 
410 + * 411 + */ 412 + int netlbl_cfg_calipso_add(struct calipso_doi *doi_def, 413 + struct netlbl_audit *audit_info) 414 + { 415 + #if IS_ENABLED(CONFIG_IPV6) 416 + return calipso_doi_add(doi_def, audit_info); 417 + #else /* IPv6 */ 418 + return -ENOSYS; 419 + #endif /* IPv6 */ 420 + } 421 + 422 + /** 423 + * netlbl_cfg_calipso_del - Remove an existing CALIPSO DOI definition 424 + * @doi: CALIPSO DOI 425 + * @audit_info: NetLabel audit information 426 + * 427 + * Description: 428 + * Remove an existing CALIPSO DOI definition matching @doi. Returns zero on 429 + * success and negative values on failure. 430 + * 431 + */ 432 + void netlbl_cfg_calipso_del(u32 doi, struct netlbl_audit *audit_info) 433 + { 434 + #if IS_ENABLED(CONFIG_IPV6) 435 + calipso_doi_remove(doi, audit_info); 436 + #endif /* IPv6 */ 437 + } 438 + 439 + /** 440 + * netlbl_cfg_calipso_map_add - Add a new CALIPSO DOI mapping 441 + * @doi: the CALIPSO DOI 442 + * @domain: the domain mapping to add 443 + * @addr: IP address 444 + * @mask: IP address mask 445 + * @audit_info: NetLabel audit information 446 + * 447 + * Description: 448 + * Add a new NetLabel/LSM domain mapping for the given CALIPSO DOI to the 449 + * NetLabel subsystem. A @domain value of NULL adds a new default domain 450 + * mapping. Returns zero on success, negative values on failure. 
451 + * 452 + */ 453 + int netlbl_cfg_calipso_map_add(u32 doi, 454 + const char *domain, 455 + const struct in6_addr *addr, 456 + const struct in6_addr *mask, 457 + struct netlbl_audit *audit_info) 458 + { 459 + #if IS_ENABLED(CONFIG_IPV6) 460 + int ret_val = -ENOMEM; 461 + struct calipso_doi *doi_def; 462 + struct netlbl_dom_map *entry; 463 + struct netlbl_domaddr_map *addrmap = NULL; 464 + struct netlbl_domaddr6_map *addrinfo = NULL; 465 + 466 + doi_def = calipso_doi_getdef(doi); 467 + if (doi_def == NULL) 468 + return -ENOENT; 469 + 470 + entry = kzalloc(sizeof(*entry), GFP_ATOMIC); 471 + if (entry == NULL) 472 + goto out_entry; 473 + entry->family = AF_INET6; 474 + if (domain != NULL) { 475 + entry->domain = kstrdup(domain, GFP_ATOMIC); 476 + if (entry->domain == NULL) 477 + goto out_domain; 478 + } 479 + 480 + if (addr == NULL && mask == NULL) { 481 + entry->def.calipso = doi_def; 482 + entry->def.type = NETLBL_NLTYPE_CALIPSO; 483 + } else if (addr != NULL && mask != NULL) { 484 + addrmap = kzalloc(sizeof(*addrmap), GFP_ATOMIC); 485 + if (addrmap == NULL) 486 + goto out_addrmap; 487 + INIT_LIST_HEAD(&addrmap->list4); 488 + INIT_LIST_HEAD(&addrmap->list6); 489 + 490 + addrinfo = kzalloc(sizeof(*addrinfo), GFP_ATOMIC); 491 + if (addrinfo == NULL) 492 + goto out_addrinfo; 493 + addrinfo->def.calipso = doi_def; 494 + addrinfo->def.type = NETLBL_NLTYPE_CALIPSO; 495 + addrinfo->list.addr = *addr; 496 + addrinfo->list.addr.s6_addr32[0] &= mask->s6_addr32[0]; 497 + addrinfo->list.addr.s6_addr32[1] &= mask->s6_addr32[1]; 498 + addrinfo->list.addr.s6_addr32[2] &= mask->s6_addr32[2]; 499 + addrinfo->list.addr.s6_addr32[3] &= mask->s6_addr32[3]; 500 + addrinfo->list.mask = *mask; 501 + addrinfo->list.valid = 1; 502 + ret_val = netlbl_af6list_add(&addrinfo->list, &addrmap->list6); 503 + if (ret_val != 0) 504 + goto cfg_calipso_map_add_failure; 505 + 506 + entry->def.addrsel = addrmap; 507 + entry->def.type = NETLBL_NLTYPE_ADDRSELECT; 508 + } else { 509 + ret_val = -EINVAL; 
510 + goto out_addrmap; 511 + } 512 + 513 + ret_val = netlbl_domhsh_add(entry, audit_info); 514 + if (ret_val != 0) 515 + goto cfg_calipso_map_add_failure; 516 + 517 + return 0; 518 + 519 + cfg_calipso_map_add_failure: 520 + kfree(addrinfo); 521 + out_addrinfo: 522 + kfree(addrmap); 523 + out_addrmap: 524 + kfree(entry->domain); 525 + out_domain: 526 + kfree(entry); 527 + out_entry: 528 + calipso_doi_putdef(doi_def); 529 + return ret_val; 530 + #else /* IPv6 */ 531 + return -ENOSYS; 532 + #endif /* IPv6 */ 409 533 } 410 534 411 535 /* ··· 661 519 662 520 return -ENOENT; 663 521 } 522 + EXPORT_SYMBOL(netlbl_catmap_walk); 664 523 665 524 /** 666 525 * netlbl_catmap_walkrng - Find the end of a string of set bits ··· 752 609 off = catmap->startbit; 753 610 *offset = off; 754 611 } 755 - iter = _netlbl_catmap_getnode(&catmap, off, _CM_F_NONE, 0); 612 + iter = _netlbl_catmap_getnode(&catmap, off, _CM_F_WALK, 0); 756 613 if (iter == NULL) { 757 614 *offset = (u32)-1; 758 615 return 0; 759 616 } 760 617 761 618 if (off < iter->startbit) { 762 - off = iter->startbit; 763 - *offset = off; 619 + *offset = iter->startbit; 620 + off = 0; 764 621 } else 765 622 off -= iter->startbit; 766 - 767 623 idx = off / NETLBL_CATMAP_MAPSIZE; 768 - *bitmap = iter->bitmap[idx] >> (off % NETLBL_CATMAP_SIZE); 624 + *bitmap = iter->bitmap[idx] >> (off % NETLBL_CATMAP_MAPSIZE); 769 625 770 626 return 0; 771 627 } ··· 797 655 798 656 return 0; 799 657 } 658 + EXPORT_SYMBOL(netlbl_catmap_setbit); 800 659 801 660 /** 802 661 * netlbl_catmap_setrng - Set a range of bits in a LSM secattr catmap ··· 870 727 return 0; 871 728 } 872 729 730 + /* Bitmap functions 731 + */ 732 + 733 + /** 734 + * netlbl_bitmap_walk - Walk a bitmap looking for a bit 735 + * @bitmap: the bitmap 736 + * @bitmap_len: length in bits 737 + * @offset: starting offset 738 + * @state: if non-zero, look for a set (1) bit else look for a cleared (0) bit 739 + * 740 + * Description: 741 + * Starting at @offset, walk the bitmap from 
left to right until either the 742 + * desired bit is found or we reach the end. Return the bit offset, -1 if 743 + * not found, or -2 if error. 744 + */ 745 + int netlbl_bitmap_walk(const unsigned char *bitmap, u32 bitmap_len, 746 + u32 offset, u8 state) 747 + { 748 + u32 bit_spot; 749 + u32 byte_offset; 750 + unsigned char bitmask; 751 + unsigned char byte; 752 + 753 + byte_offset = offset / 8; 754 + byte = bitmap[byte_offset]; 755 + bit_spot = offset; 756 + bitmask = 0x80 >> (offset % 8); 757 + 758 + while (bit_spot < bitmap_len) { 759 + if ((state && (byte & bitmask) == bitmask) || 760 + (state == 0 && (byte & bitmask) == 0)) 761 + return bit_spot; 762 + 763 + bit_spot++; 764 + bitmask >>= 1; 765 + if (bitmask == 0) { 766 + byte = bitmap[++byte_offset]; 767 + bitmask = 0x80; 768 + } 769 + } 770 + 771 + return -1; 772 + } 773 + EXPORT_SYMBOL(netlbl_bitmap_walk); 774 + 775 + /** 776 + * netlbl_bitmap_setbit - Sets a single bit in a bitmap 777 + * @bitmap: the bitmap 778 + * @bit: the bit 779 + * @state: if non-zero, set the bit (1) else clear the bit (0) 780 + * 781 + * Description: 782 + * Set a single bit in the bitmask. Returns zero on success, negative values 783 + * on error. 
784 + */ 785 + void netlbl_bitmap_setbit(unsigned char *bitmap, u32 bit, u8 state) 786 + { 787 + u32 byte_spot; 788 + u8 bitmask; 789 + 790 + /* gcc always rounds to zero when doing integer division */ 791 + byte_spot = bit / 8; 792 + bitmask = 0x80 >> (bit % 8); 793 + if (state) 794 + bitmap[byte_spot] |= bitmask; 795 + else 796 + bitmap[byte_spot] &= ~bitmask; 797 + } 798 + EXPORT_SYMBOL(netlbl_bitmap_setbit); 799 + 873 800 /* 874 801 * LSM Functions 875 802 */ ··· 987 774 struct netlbl_dom_map *dom_entry; 988 775 989 776 rcu_read_lock(); 990 - dom_entry = netlbl_domhsh_getentry(secattr->domain); 777 + dom_entry = netlbl_domhsh_getentry(secattr->domain, family); 991 778 if (dom_entry == NULL) { 992 779 ret_val = -ENOENT; 993 780 goto socket_setattr_return; ··· 1012 799 break; 1013 800 #if IS_ENABLED(CONFIG_IPV6) 1014 801 case AF_INET6: 1015 - /* since we don't support any IPv6 labeling protocols right 1016 - * now we can optimize everything away until we do */ 1017 - ret_val = 0; 802 + switch (dom_entry->def.type) { 803 + case NETLBL_NLTYPE_ADDRSELECT: 804 + ret_val = -EDESTADDRREQ; 805 + break; 806 + case NETLBL_NLTYPE_CALIPSO: 807 + ret_val = calipso_sock_setattr(sk, 808 + dom_entry->def.calipso, 809 + secattr); 810 + break; 811 + case NETLBL_NLTYPE_UNLABELED: 812 + ret_val = 0; 813 + break; 814 + default: 815 + ret_val = -ENOENT; 816 + } 1018 817 break; 1019 818 #endif /* IPv6 */ 1020 819 default: ··· 1049 824 */ 1050 825 void netlbl_sock_delattr(struct sock *sk) 1051 826 { 1052 - cipso_v4_sock_delattr(sk); 827 + switch (sk->sk_family) { 828 + case AF_INET: 829 + cipso_v4_sock_delattr(sk); 830 + break; 831 + #if IS_ENABLED(CONFIG_IPV6) 832 + case AF_INET6: 833 + calipso_sock_delattr(sk); 834 + break; 835 + #endif /* IPv6 */ 836 + } 1053 837 } 1054 838 1055 839 /** ··· 1084 850 break; 1085 851 #if IS_ENABLED(CONFIG_IPV6) 1086 852 case AF_INET6: 1087 - ret_val = -ENOMSG; 853 + ret_val = calipso_sock_getattr(sk, secattr); 1088 854 break; 1089 855 #endif /* IPv6 
*/ 1090 856 default: ··· 1112 878 { 1113 879 int ret_val; 1114 880 struct sockaddr_in *addr4; 881 + #if IS_ENABLED(CONFIG_IPV6) 882 + struct sockaddr_in6 *addr6; 883 + #endif 1115 884 struct netlbl_dommap_def *entry; 1116 885 1117 886 rcu_read_lock(); ··· 1135 898 case NETLBL_NLTYPE_UNLABELED: 1136 899 /* just delete the protocols we support for right now 1137 900 * but we could remove other protocols if needed */ 1138 - cipso_v4_sock_delattr(sk); 901 + netlbl_sock_delattr(sk); 1139 902 ret_val = 0; 1140 903 break; 1141 904 default: ··· 1144 907 break; 1145 908 #if IS_ENABLED(CONFIG_IPV6) 1146 909 case AF_INET6: 1147 - /* since we don't support any IPv6 labeling protocols right 1148 - * now we can optimize everything away until we do */ 1149 - ret_val = 0; 910 + addr6 = (struct sockaddr_in6 *)addr; 911 + entry = netlbl_domhsh_getentry_af6(secattr->domain, 912 + &addr6->sin6_addr); 913 + if (entry == NULL) { 914 + ret_val = -ENOENT; 915 + goto conn_setattr_return; 916 + } 917 + switch (entry->type) { 918 + case NETLBL_NLTYPE_CALIPSO: 919 + ret_val = calipso_sock_setattr(sk, 920 + entry->calipso, secattr); 921 + break; 922 + case NETLBL_NLTYPE_UNLABELED: 923 + /* just delete the protocols we support for right now 924 + * but we could remove other protocols if needed */ 925 + netlbl_sock_delattr(sk); 926 + ret_val = 0; 927 + break; 928 + default: 929 + ret_val = -ENOENT; 930 + } 1150 931 break; 1151 932 #endif /* IPv6 */ 1152 933 default: ··· 1191 936 { 1192 937 int ret_val; 1193 938 struct netlbl_dommap_def *entry; 939 + struct inet_request_sock *ireq = inet_rsk(req); 1194 940 1195 941 rcu_read_lock(); 1196 942 switch (req->rsk_ops->family) { 1197 943 case AF_INET: 1198 944 entry = netlbl_domhsh_getentry_af4(secattr->domain, 1199 - inet_rsk(req)->ir_rmt_addr); 945 + ireq->ir_rmt_addr); 1200 946 if (entry == NULL) { 1201 947 ret_val = -ENOENT; 1202 948 goto req_setattr_return; ··· 1208 952 entry->cipso, secattr); 1209 953 break; 1210 954 case NETLBL_NLTYPE_UNLABELED: 
1211 - /* just delete the protocols we support for right now 1212 - * but we could remove other protocols if needed */ 1213 - cipso_v4_req_delattr(req); 955 + netlbl_req_delattr(req); 1214 956 ret_val = 0; 1215 957 break; 1216 958 default: ··· 1217 963 break; 1218 964 #if IS_ENABLED(CONFIG_IPV6) 1219 965 case AF_INET6: 1220 - /* since we don't support any IPv6 labeling protocols right 1221 - * now we can optimize everything away until we do */ 1222 - ret_val = 0; 966 + entry = netlbl_domhsh_getentry_af6(secattr->domain, 967 + &ireq->ir_v6_rmt_addr); 968 + if (entry == NULL) { 969 + ret_val = -ENOENT; 970 + goto req_setattr_return; 971 + } 972 + switch (entry->type) { 973 + case NETLBL_NLTYPE_CALIPSO: 974 + ret_val = calipso_req_setattr(req, 975 + entry->calipso, secattr); 976 + break; 977 + case NETLBL_NLTYPE_UNLABELED: 978 + netlbl_req_delattr(req); 979 + ret_val = 0; 980 + break; 981 + default: 982 + ret_val = -ENOENT; 983 + } 1223 984 break; 1224 985 #endif /* IPv6 */ 1225 986 default: ··· 1256 987 */ 1257 988 void netlbl_req_delattr(struct request_sock *req) 1258 989 { 1259 - cipso_v4_req_delattr(req); 990 + switch (req->rsk_ops->family) { 991 + case AF_INET: 992 + cipso_v4_req_delattr(req); 993 + break; 994 + #if IS_ENABLED(CONFIG_IPV6) 995 + case AF_INET6: 996 + calipso_req_delattr(req); 997 + break; 998 + #endif /* IPv6 */ 999 + } 1260 1000 } 1261 1001 1262 1002 /** ··· 1285 1007 { 1286 1008 int ret_val; 1287 1009 struct iphdr *hdr4; 1010 + #if IS_ENABLED(CONFIG_IPV6) 1011 + struct ipv6hdr *hdr6; 1012 + #endif 1288 1013 struct netlbl_dommap_def *entry; 1289 1014 1290 1015 rcu_read_lock(); 1291 1016 switch (family) { 1292 1017 case AF_INET: 1293 1018 hdr4 = ip_hdr(skb); 1294 - entry = netlbl_domhsh_getentry_af4(secattr->domain,hdr4->daddr); 1019 + entry = netlbl_domhsh_getentry_af4(secattr->domain, 1020 + hdr4->daddr); 1295 1021 if (entry == NULL) { 1296 1022 ret_val = -ENOENT; 1297 1023 goto skbuff_setattr_return; ··· 1316 1034 break; 1317 1035 #if 
IS_ENABLED(CONFIG_IPV6) 1318 1036 case AF_INET6: 1319 - /* since we don't support any IPv6 labeling protocols right 1320 - * now we can optimize everything away until we do */ 1321 - ret_val = 0; 1037 + hdr6 = ipv6_hdr(skb); 1038 + entry = netlbl_domhsh_getentry_af6(secattr->domain, 1039 + &hdr6->daddr); 1040 + if (entry == NULL) { 1041 + ret_val = -ENOENT; 1042 + goto skbuff_setattr_return; 1043 + } 1044 + switch (entry->type) { 1045 + case NETLBL_NLTYPE_CALIPSO: 1046 + ret_val = calipso_skbuff_setattr(skb, entry->calipso, 1047 + secattr); 1048 + break; 1049 + case NETLBL_NLTYPE_UNLABELED: 1050 + /* just delete the protocols we support for right now 1051 + * but we could remove other protocols if needed */ 1052 + ret_val = calipso_skbuff_delattr(skb); 1053 + break; 1054 + default: 1055 + ret_val = -ENOENT; 1056 + } 1322 1057 break; 1323 1058 #endif /* IPv6 */ 1324 1059 default: ··· 1374 1075 break; 1375 1076 #if IS_ENABLED(CONFIG_IPV6) 1376 1077 case AF_INET6: 1078 + ptr = calipso_optptr(skb); 1079 + if (ptr && calipso_getattr(ptr, secattr) == 0) 1080 + return 0; 1377 1081 break; 1378 1082 #endif /* IPv6 */ 1379 1083 } ··· 1387 1085 /** 1388 1086 * netlbl_skbuff_err - Handle a LSM error on a sk_buff 1389 1087 * @skb: the packet 1088 + * @family: the family 1390 1089 * @error: the error code 1391 1090 * @gateway: true if host is acting as a gateway, false otherwise 1392 1091 * ··· 1397 1094 * according to the packet's labeling protocol. 
1398 1095 * 1399 1096 */ 1400 - void netlbl_skbuff_err(struct sk_buff *skb, int error, int gateway) 1097 + void netlbl_skbuff_err(struct sk_buff *skb, u16 family, int error, int gateway) 1401 1098 { 1402 - if (cipso_v4_optptr(skb)) 1403 - cipso_v4_error(skb, error, gateway); 1099 + switch (family) { 1100 + case AF_INET: 1101 + if (cipso_v4_optptr(skb)) 1102 + cipso_v4_error(skb, error, gateway); 1103 + break; 1104 + } 1404 1105 } 1405 1106 1406 1107 /** ··· 1419 1112 void netlbl_cache_invalidate(void) 1420 1113 { 1421 1114 cipso_v4_cache_invalidate(); 1115 + #if IS_ENABLED(CONFIG_IPV6) 1116 + calipso_cache_invalidate(); 1117 + #endif /* IPv6 */ 1422 1118 } 1423 1119 1424 1120 /** 1425 1121 * netlbl_cache_add - Add an entry to a NetLabel protocol cache 1426 1122 * @skb: the packet 1123 + * @family: the family 1427 1124 * @secattr: the packet's security attributes 1428 1125 * 1429 1126 * Description: ··· 1436 1125 * values on error. 1437 1126 * 1438 1127 */ 1439 - int netlbl_cache_add(const struct sk_buff *skb, 1128 + int netlbl_cache_add(const struct sk_buff *skb, u16 family, 1440 1129 const struct netlbl_lsm_secattr *secattr) 1441 1130 { 1442 1131 unsigned char *ptr; ··· 1444 1133 if ((secattr->flags & NETLBL_SECATTR_CACHE) == 0) 1445 1134 return -ENOMSG; 1446 1135 1447 - ptr = cipso_v4_optptr(skb); 1448 - if (ptr) 1449 - return cipso_v4_cache_add(ptr, secattr); 1450 - 1136 + switch (family) { 1137 + case AF_INET: 1138 + ptr = cipso_v4_optptr(skb); 1139 + if (ptr) 1140 + return cipso_v4_cache_add(ptr, secattr); 1141 + break; 1142 + #if IS_ENABLED(CONFIG_IPV6) 1143 + case AF_INET6: 1144 + ptr = calipso_optptr(skb); 1145 + if (ptr) 1146 + return calipso_cache_add(ptr, secattr); 1147 + break; 1148 + #endif /* IPv6 */ 1149 + } 1451 1150 return -ENOMSG; 1452 1151 } 1453 1152 ··· 1482 1161 { 1483 1162 return netlbl_audit_start_common(type, audit_info); 1484 1163 } 1164 + EXPORT_SYMBOL(netlbl_audit_start); 1485 1165 1486 1166 /* 1487 1167 * Setup Functions
+80 -5
net/netlabel/netlabel_mgmt.c
··· 41 41 #include <net/ipv6.h> 42 42 #include <net/netlabel.h> 43 43 #include <net/cipso_ipv4.h> 44 + #include <net/calipso.h> 44 45 #include <linux/atomic.h> 45 46 47 + #include "netlabel_calipso.h" 46 48 #include "netlabel_domainhash.h" 47 49 #include "netlabel_user.h" 48 50 #include "netlabel_mgmt.h" ··· 74 72 [NLBL_MGMT_A_PROTOCOL] = { .type = NLA_U32 }, 75 73 [NLBL_MGMT_A_VERSION] = { .type = NLA_U32 }, 76 74 [NLBL_MGMT_A_CV4DOI] = { .type = NLA_U32 }, 75 + [NLBL_MGMT_A_FAMILY] = { .type = NLA_U16 }, 76 + [NLBL_MGMT_A_CLPDOI] = { .type = NLA_U32 }, 77 77 }; 78 78 79 79 /* ··· 99 95 int ret_val = -EINVAL; 100 96 struct netlbl_domaddr_map *addrmap = NULL; 101 97 struct cipso_v4_doi *cipsov4 = NULL; 98 + #if IS_ENABLED(CONFIG_IPV6) 99 + struct calipso_doi *calipso = NULL; 100 + #endif 102 101 u32 tmp_val; 103 102 struct netlbl_dom_map *entry = kzalloc(sizeof(*entry), GFP_KERNEL); 104 103 ··· 126 119 127 120 switch (entry->def.type) { 128 121 case NETLBL_NLTYPE_UNLABELED: 122 + if (info->attrs[NLBL_MGMT_A_FAMILY]) 123 + entry->family = 124 + nla_get_u16(info->attrs[NLBL_MGMT_A_FAMILY]); 125 + else 126 + entry->family = AF_UNSPEC; 129 127 break; 130 128 case NETLBL_NLTYPE_CIPSOV4: 131 129 if (!info->attrs[NLBL_MGMT_A_CV4DOI]) ··· 140 128 cipsov4 = cipso_v4_doi_getdef(tmp_val); 141 129 if (cipsov4 == NULL) 142 130 goto add_free_domain; 131 + entry->family = AF_INET; 143 132 entry->def.cipso = cipsov4; 144 133 break; 134 + #if IS_ENABLED(CONFIG_IPV6) 135 + case NETLBL_NLTYPE_CALIPSO: 136 + if (!info->attrs[NLBL_MGMT_A_CLPDOI]) 137 + goto add_free_domain; 138 + 139 + tmp_val = nla_get_u32(info->attrs[NLBL_MGMT_A_CLPDOI]); 140 + calipso = calipso_doi_getdef(tmp_val); 141 + if (calipso == NULL) 142 + goto add_free_domain; 143 + entry->family = AF_INET6; 144 + entry->def.calipso = calipso; 145 + break; 146 + #endif /* IPv6 */ 145 147 default: 146 148 goto add_free_domain; 147 149 } 150 + 151 + if ((entry->family == AF_INET && info->attrs[NLBL_MGMT_A_IPV6ADDR]) || 152 + 
(entry->family == AF_INET6 && info->attrs[NLBL_MGMT_A_IPV4ADDR])) 153 + goto add_doi_put_def; 148 154 149 155 if (info->attrs[NLBL_MGMT_A_IPV4ADDR]) { 150 156 struct in_addr *addr; ··· 208 178 goto add_free_addrmap; 209 179 } 210 180 181 + entry->family = AF_INET; 211 182 entry->def.type = NETLBL_NLTYPE_ADDRSELECT; 212 183 entry->def.addrsel = addrmap; 213 184 #if IS_ENABLED(CONFIG_IPV6) ··· 251 220 map->list.mask = *mask; 252 221 map->list.valid = 1; 253 222 map->def.type = entry->def.type; 223 + if (calipso) 224 + map->def.calipso = calipso; 254 225 255 226 ret_val = netlbl_af6list_add(&map->list, &addrmap->list6); 256 227 if (ret_val != 0) { ··· 260 227 goto add_free_addrmap; 261 228 } 262 229 230 + entry->family = AF_INET6; 263 231 entry->def.type = NETLBL_NLTYPE_ADDRSELECT; 264 232 entry->def.addrsel = addrmap; 265 233 #endif /* IPv6 */ ··· 276 242 kfree(addrmap); 277 243 add_doi_put_def: 278 244 cipso_v4_doi_putdef(cipsov4); 245 + #if IS_ENABLED(CONFIG_IPV6) 246 + calipso_doi_putdef(calipso); 247 + #endif 279 248 add_free_domain: 280 249 kfree(entry->domain); 281 250 add_free_entry: ··· 314 277 if (ret_val != 0) 315 278 return ret_val; 316 279 } 280 + 281 + ret_val = nla_put_u16(skb, NLBL_MGMT_A_FAMILY, entry->family); 282 + if (ret_val != 0) 283 + return ret_val; 317 284 318 285 switch (entry->def.type) { 319 286 case NETLBL_NLTYPE_ADDRSELECT: ··· 381 340 if (ret_val != 0) 382 341 return ret_val; 383 342 343 + switch (map6->def.type) { 344 + case NETLBL_NLTYPE_CALIPSO: 345 + ret_val = nla_put_u32(skb, NLBL_MGMT_A_CLPDOI, 346 + map6->def.calipso->doi); 347 + if (ret_val != 0) 348 + return ret_val; 349 + break; 350 + } 351 + 384 352 nla_nest_end(skb, nla_b); 385 353 } 386 354 #endif /* IPv6 */ ··· 397 347 nla_nest_end(skb, nla_a); 398 348 break; 399 349 case NETLBL_NLTYPE_UNLABELED: 400 - ret_val = nla_put_u32(skb,NLBL_MGMT_A_PROTOCOL,entry->def.type); 350 + ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL, 351 + entry->def.type); 401 352 break; 402 353 case 
NETLBL_NLTYPE_CIPSOV4: 403 - ret_val = nla_put_u32(skb,NLBL_MGMT_A_PROTOCOL,entry->def.type); 354 + ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL, 355 + entry->def.type); 404 356 if (ret_val != 0) 405 357 return ret_val; 406 358 ret_val = nla_put_u32(skb, NLBL_MGMT_A_CV4DOI, 407 359 entry->def.cipso->doi); 360 + break; 361 + case NETLBL_NLTYPE_CALIPSO: 362 + ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL, 363 + entry->def.type); 364 + if (ret_val != 0) 365 + return ret_val; 366 + ret_val = nla_put_u32(skb, NLBL_MGMT_A_CLPDOI, 367 + entry->def.calipso->doi); 408 368 break; 409 369 } 410 370 ··· 478 418 netlbl_netlink_auditinfo(skb, &audit_info); 479 419 480 420 domain = nla_data(info->attrs[NLBL_MGMT_A_DOMAIN]); 481 - return netlbl_domhsh_remove(domain, &audit_info); 421 + return netlbl_domhsh_remove(domain, AF_UNSPEC, &audit_info); 482 422 } 483 423 484 424 /** ··· 596 536 597 537 netlbl_netlink_auditinfo(skb, &audit_info); 598 538 599 - return netlbl_domhsh_remove_default(&audit_info); 539 + return netlbl_domhsh_remove_default(AF_UNSPEC, &audit_info); 600 540 } 601 541 602 542 /** ··· 616 556 struct sk_buff *ans_skb = NULL; 617 557 void *data; 618 558 struct netlbl_dom_map *entry; 559 + u16 family; 560 + 561 + if (info->attrs[NLBL_MGMT_A_FAMILY]) 562 + family = nla_get_u16(info->attrs[NLBL_MGMT_A_FAMILY]); 563 + else 564 + family = AF_INET; 619 565 620 566 ans_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); 621 567 if (ans_skb == NULL) ··· 632 566 goto listdef_failure; 633 567 634 568 rcu_read_lock(); 635 - entry = netlbl_domhsh_getentry(NULL); 569 + entry = netlbl_domhsh_getentry(NULL, family); 636 570 if (entry == NULL) { 637 571 ret_val = -ENOENT; 638 572 goto listdef_failure_lock; ··· 717 651 goto protocols_return; 718 652 protos_sent++; 719 653 } 654 + #if IS_ENABLED(CONFIG_IPV6) 655 + if (protos_sent == 2) { 656 + if (netlbl_mgmt_protocols_cb(skb, 657 + cb, 658 + NETLBL_NLTYPE_CALIPSO) < 0) 659 + goto protocols_return; 660 + protos_sent++; 661 + } 662 + 
#endif 720 663 721 664 protocols_return: 722 665 cb->args[0] = protos_sent;
+22 -5
net/netlabel/netlabel_mgmt.h
··· 58 58 * 59 59 * NLBL_MGMT_A_CV4DOI 60 60 * 61 - * If using NETLBL_NLTYPE_UNLABELED no other attributes are required. 61 + * If using NETLBL_NLTYPE_UNLABELED no other attributes are required, 62 + * however the following attribute may optionally be sent: 63 + * 64 + * NLBL_MGMT_A_FAMILY 62 65 * 63 66 * o REMOVE: 64 67 * Sent by an application to remove a domain mapping from the NetLabel ··· 80 77 * Required attributes: 81 78 * 82 79 * NLBL_MGMT_A_DOMAIN 80 + * NLBL_MGMT_A_FAMILY 83 81 * 84 82 * If the IP address selectors are not used the following attribute is 85 83 * required: ··· 112 108 * 113 109 * NLBL_MGMT_A_CV4DOI 114 110 * 115 - * If using NETLBL_NLTYPE_UNLABELED no other attributes are required. 111 + * If using NETLBL_NLTYPE_UNLABELED no other attributes are required, 112 + * however the following attribute may optionally be sent: 113 + * 114 + * NLBL_MGMT_A_FAMILY 116 115 * 117 116 * o REMOVEDEF: 118 117 * Sent by an application to remove the default domain mapping from the ··· 124 117 * o LISTDEF: 125 118 * This message can be sent either from an application or by the kernel in 126 119 * response to an application generated LISTDEF message. When sent by an 127 - * application there is no payload. On success the kernel should send a 128 - * response using the following format. 120 + * application there may be an optional payload. 
129 121 * 130 - * If the IP address selectors are not used the following attribute is 122 + * NLBL_MGMT_A_FAMILY 123 + * 124 + * On success the kernel should send a response using the following format: 125 + * 126 + * If the IP address selectors are not used the following attributes are 131 127 * required: 132 128 * 133 129 * NLBL_MGMT_A_PROTOCOL 130 + * NLBL_MGMT_A_FAMILY 134 131 * 135 132 * If the IP address selectors are used then the following attritbute is 136 133 * required: ··· 220 209 /* (NLA_NESTED) 221 210 * the selector list, there must be at least one 222 211 * NLBL_MGMT_A_ADDRSELECTOR attribute */ 212 + NLBL_MGMT_A_FAMILY, 213 + /* (NLA_U16) 214 + * The address family */ 215 + NLBL_MGMT_A_CLPDOI, 216 + /* (NLA_U32) 217 + * the CALIPSO DOI value */ 223 218 __NLBL_MGMT_A_MAX, 224 219 }; 225 220 #define NLBL_MGMT_A_MAX (__NLBL_MGMT_A_MAX - 1)
+3 -2
net/netlabel/netlabel_unlabeled.c
··· 116 116 static DEFINE_SPINLOCK(netlbl_unlhsh_lock); 117 117 #define netlbl_unlhsh_rcu_deref(p) \ 118 118 rcu_dereference_check(p, lockdep_is_held(&netlbl_unlhsh_lock)) 119 - static struct netlbl_unlhsh_tbl *netlbl_unlhsh; 120 - static struct netlbl_unlhsh_iface *netlbl_unlhsh_def; 119 + static struct netlbl_unlhsh_tbl __rcu *netlbl_unlhsh; 120 + static struct netlbl_unlhsh_iface __rcu *netlbl_unlhsh_def; 121 121 122 122 /* Accept unlabeled packets flag */ 123 123 static u8 netlabel_unlabel_acceptflg; ··· 1537 1537 entry = kzalloc(sizeof(*entry), GFP_KERNEL); 1538 1538 if (entry == NULL) 1539 1539 return -ENOMEM; 1540 + entry->family = AF_UNSPEC; 1540 1541 entry->def.type = NETLBL_NLTYPE_UNLABELED; 1541 1542 ret_val = netlbl_domhsh_add_default(entry, &audit_info); 1542 1543 if (ret_val != 0)
+5
net/netlabel/netlabel_user.c
··· 44 44 #include "netlabel_mgmt.h" 45 45 #include "netlabel_unlabeled.h" 46 46 #include "netlabel_cipso_v4.h" 47 + #include "netlabel_calipso.h" 47 48 #include "netlabel_user.h" 48 49 49 50 /* ··· 69 68 return ret_val; 70 69 71 70 ret_val = netlbl_cipsov4_genl_init(); 71 + if (ret_val != 0) 72 + return ret_val; 73 + 74 + ret_val = netlbl_calipso_genl_init(); 72 75 if (ret_val != 0) 73 76 return ret_val; 74 77
+1 -1
net/sysctl_net.c
··· 46 46 kgid_t root_gid = make_kgid(net->user_ns, 0); 47 47 48 48 /* Allow network administrator to have same access as root. */ 49 - if (ns_capable(net->user_ns, CAP_NET_ADMIN) || 49 + if (ns_capable_noaudit(net->user_ns, CAP_NET_ADMIN) || 50 50 uid_eq(root_uid, current_euid())) { 51 51 int mode = (table->mode >> 6) & 7; 52 52 return (mode << 6) | (mode << 3) | mode;
+7
samples/Kconfig
··· 92 92 with it. 93 93 See also Documentation/connector/connector.txt 94 94 95 + config SAMPLE_SECCOMP 96 + tristate "Build seccomp sample code -- loadable modules only" 97 + depends on SECCOMP_FILTER && m 98 + help 99 + Build samples of seccomp filters using various methods of 100 + BPF filter construction. 101 + 95 102 endif # SAMPLES
+1 -1
samples/seccomp/Makefile
··· 1 1 # kbuild trick to avoid linker error. Can be omitted if a module is built. 2 2 obj- := dummy.o 3 3 4 - hostprogs-$(CONFIG_SECCOMP_FILTER) := bpf-fancy dropper bpf-direct 4 + hostprogs-$(CONFIG_SAMPLE_SECCOMP) := bpf-fancy dropper bpf-direct 5 5 6 6 HOSTCFLAGS_bpf-fancy.o += -I$(objtree)/usr/include 7 7 HOSTCFLAGS_bpf-fancy.o += -idirafter $(objtree)/include
+26 -8
scripts/sign-file.c
··· 1 1 /* Sign a module file using the given key. 2 2 * 3 - * Copyright © 2014-2015 Red Hat, Inc. All Rights Reserved. 3 + * Copyright © 2014-2016 Red Hat, Inc. All Rights Reserved. 4 4 * Copyright © 2015 Intel Corporation. 5 5 * Copyright © 2016 Hewlett Packard Enterprise Development LP 6 6 * ··· 167 167 168 168 static X509 *read_x509(const char *x509_name) 169 169 { 170 + unsigned char buf[2]; 170 171 X509 *x509; 171 172 BIO *b; 173 + int n; 172 174 173 175 b = BIO_new_file(x509_name, "rb"); 174 176 ERR(!b, "%s", x509_name); 175 - x509 = d2i_X509_bio(b, NULL); /* Binary encoded X.509 */ 176 - if (!x509) { 177 - ERR(BIO_reset(b) != 1, "%s", x509_name); 178 - x509 = PEM_read_bio_X509(b, NULL, NULL, 179 - NULL); /* PEM encoded X.509 */ 180 - if (x509) 181 - drain_openssl_errors(); 177 + 178 + /* Look at the first two bytes of the file to determine the encoding */ 179 + n = BIO_read(b, buf, 2); 180 + if (n != 2) { 181 + if (BIO_should_retry(b)) { 182 + fprintf(stderr, "%s: Read wanted retry\n", x509_name); 183 + exit(1); 184 + } 185 + if (n >= 0) { 186 + fprintf(stderr, "%s: Short read\n", x509_name); 187 + exit(1); 188 + } 189 + ERR(1, "%s", x509_name); 182 190 } 191 + 192 + ERR(BIO_reset(b) != 0, "%s", x509_name); 193 + 194 + if (buf[0] == 0x30 && buf[1] >= 0x81 && buf[1] <= 0x84) 195 + /* Assume raw DER encoded X.509 */ 196 + x509 = d2i_X509_bio(b, NULL); 197 + else 198 + /* Assume PEM encoded X.509 */ 199 + x509 = PEM_read_bio_X509(b, NULL, NULL, NULL); 200 + 183 201 BIO_free(b); 184 202 ERR(!x509, "%s", x509_name); 185 203
+17 -4
security/apparmor/Kconfig
··· 31 31 If you are unsure how to answer this question, answer 1. 32 32 33 33 config SECURITY_APPARMOR_HASH 34 - bool "SHA1 hash of loaded profiles" 34 + bool "Enable introspection of sha1 hashes for loaded profiles" 35 35 depends on SECURITY_APPARMOR 36 36 select CRYPTO 37 37 select CRYPTO_SHA1 38 38 default y 39 39 40 40 help 41 - This option selects whether sha1 hashing is done against loaded 42 - profiles and exported for inspection to user space via the apparmor 43 - filesystem. 41 + This option selects whether introspection of loaded policy 42 + is available to userspace via the apparmor filesystem. 43 + 44 + config SECURITY_APPARMOR_HASH_DEFAULT 45 + bool "Enable policy hash introspection by default" 46 + depends on SECURITY_APPARMOR_HASH 47 + default y 48 + 49 + help 50 + This option selects whether sha1 hashing of loaded policy 51 + is enabled by default. The generation of sha1 hashes for 52 + loaded policy provide system administrators a quick way 53 + to verify that policy in the kernel matches what is expected, 54 + however it can slow down policy load on some devices. In 55 + these cases policy hashing can be disabled by default and 56 + enabled only if needed.
+6 -5
security/apparmor/apparmorfs.c
··· 331 331 seq_printf(seq, "%.2x", profile->hash[i]); 332 332 seq_puts(seq, "\n"); 333 333 } 334 + aa_put_profile(profile); 334 335 335 336 return 0; 336 337 } ··· 380 379 381 380 for (i = 0; i < AAFS_PROF_SIZEOF; i++) { 382 381 new->dents[i] = old->dents[i]; 382 + if (new->dents[i]) 383 + new->dents[i]->d_inode->i_mtime = CURRENT_TIME; 383 384 old->dents[i] = NULL; 384 385 } 385 386 } ··· 553 550 } 554 551 555 552 556 - #define list_entry_next(pos, member) \ 557 - list_entry(pos->member.next, typeof(*pos), member) 558 553 #define list_entry_is_head(pos, head, member) (&pos->member == (head)) 559 554 560 555 /** ··· 583 582 parent = ns->parent; 584 583 while (ns != root) { 585 584 mutex_unlock(&ns->lock); 586 - next = list_entry_next(ns, base.list); 585 + next = list_next_entry(ns, base.list); 587 586 if (!list_entry_is_head(next, &parent->sub_ns, base.list)) { 588 587 mutex_lock(&next->lock); 589 588 return next; ··· 637 636 parent = rcu_dereference_protected(p->parent, 638 637 mutex_is_locked(&p->ns->lock)); 639 638 while (parent) { 640 - p = list_entry_next(p, base.list); 639 + p = list_next_entry(p, base.list); 641 640 if (!list_entry_is_head(p, &parent->base.profiles, base.list)) 642 641 return p; 643 642 p = parent; ··· 646 645 } 647 646 648 647 /* is next another profile in the namespace */ 649 - p = list_entry_next(p, base.list); 648 + p = list_next_entry(p, base.list); 650 649 if (!list_entry_is_head(p, &ns->base.profiles, base.list)) 651 650 return p; 652 651
+2 -1
security/apparmor/audit.c
··· 200 200 201 201 if (sa->aad->type == AUDIT_APPARMOR_KILL) 202 202 (void)send_sig_info(SIGKILL, NULL, 203 - sa->u.tsk ? sa->u.tsk : current); 203 + sa->type == LSM_AUDIT_DATA_TASK && sa->u.tsk ? 204 + sa->u.tsk : current); 204 205 205 206 if (sa->aad->type == AUDIT_APPARMOR_ALLOWED) 206 207 return complain_error(sa->aad->error);
+3
security/apparmor/crypto.c
··· 39 39 int error = -ENOMEM; 40 40 u32 le32_version = cpu_to_le32(version); 41 41 42 + if (!aa_g_hash_policy) 43 + return 0; 44 + 42 45 if (!apparmor_tfm) 43 46 return 0; 44 47
+10 -12
security/apparmor/domain.c
··· 346 346 file_inode(bprm->file)->i_uid, 347 347 file_inode(bprm->file)->i_mode 348 348 }; 349 - const char *name = NULL, *target = NULL, *info = NULL; 349 + const char *name = NULL, *info = NULL; 350 350 int error = 0; 351 351 352 352 if (bprm->cred_prepared) ··· 399 399 if (cxt->onexec) { 400 400 struct file_perms cp; 401 401 info = "change_profile onexec"; 402 + new_profile = aa_get_newest_profile(cxt->onexec); 402 403 if (!(perms.allow & AA_MAY_ONEXEC)) 403 404 goto audit; 404 405 ··· 414 413 415 414 if (!(cp.allow & AA_MAY_ONEXEC)) 416 415 goto audit; 417 - new_profile = aa_get_newest_profile(cxt->onexec); 418 416 goto apply; 419 417 } 420 418 ··· 433 433 new_profile = aa_get_newest_profile(ns->unconfined); 434 434 info = "ux fallback"; 435 435 } else { 436 - error = -ENOENT; 436 + error = -EACCES; 437 437 info = "profile not found"; 438 438 /* remove MAY_EXEC to audit as failure */ 439 439 perms.allow &= ~MAY_EXEC; ··· 445 445 if (!new_profile) { 446 446 error = -ENOMEM; 447 447 info = "could not create null profile"; 448 - } else { 448 + } else 449 449 error = -EACCES; 450 - target = new_profile->base.hname; 451 - } 452 450 perms.xindex |= AA_X_UNSAFE; 453 451 } else 454 452 /* fail exec */ ··· 457 459 * fail the exec. 458 460 */ 459 461 if (bprm->unsafe & LSM_UNSAFE_NO_NEW_PRIVS) { 460 - aa_put_profile(new_profile); 461 462 error = -EPERM; 462 463 goto cleanup; 463 464 } ··· 471 474 472 475 if (bprm->unsafe & (LSM_UNSAFE_PTRACE | LSM_UNSAFE_PTRACE_CAP)) { 473 476 error = may_change_ptraced_domain(new_profile); 474 - if (error) { 475 - aa_put_profile(new_profile); 477 + if (error) 476 478 goto audit; 477 - } 478 479 } 479 480 480 481 /* Determine if secure exec is needed. 
··· 493 498 bprm->unsafe |= AA_SECURE_X_NEEDED; 494 499 } 495 500 apply: 496 - target = new_profile->base.hname; 497 501 /* when transitioning profiles clear unsafe personality bits */ 498 502 bprm->per_clear |= PER_CLEAR_ON_SETID; 499 503 ··· 500 506 aa_put_profile(cxt->profile); 501 507 /* transfer new profile reference will be released when cxt is freed */ 502 508 cxt->profile = new_profile; 509 + new_profile = NULL; 503 510 504 511 /* clear out all temporary/transitional state from the context */ 505 512 aa_clear_task_cxt_trans(cxt); 506 513 507 514 audit: 508 515 error = aa_audit_file(profile, &perms, GFP_KERNEL, OP_EXEC, MAY_EXEC, 509 - name, target, cond.uid, info, error); 516 + name, 517 + new_profile ? new_profile->base.hname : NULL, 518 + cond.uid, info, error); 510 519 511 520 cleanup: 521 + aa_put_profile(new_profile); 512 522 aa_put_profile(profile); 513 523 kfree(buffer); 514 524
+2 -1
security/apparmor/file.c
··· 110 110 int type = AUDIT_APPARMOR_AUTO; 111 111 struct common_audit_data sa; 112 112 struct apparmor_audit_data aad = {0,}; 113 - sa.type = LSM_AUDIT_DATA_NONE; 113 + sa.type = LSM_AUDIT_DATA_TASK; 114 + sa.u.tsk = NULL; 114 115 sa.aad = &aad; 115 116 aad.op = op, 116 117 aad.fs.request = request;
+1
security/apparmor/include/apparmor.h
··· 37 37 extern enum audit_mode aa_g_audit; 38 38 extern bool aa_g_audit_header; 39 39 extern bool aa_g_debug; 40 + extern bool aa_g_hash_policy; 40 41 extern bool aa_g_lock_policy; 41 42 extern bool aa_g_logsyscall; 42 43 extern bool aa_g_paranoid_load;
+1
security/apparmor/include/match.h
··· 62 62 #define YYTD_ID_ACCEPT2 6 63 63 #define YYTD_ID_NXT 7 64 64 #define YYTD_ID_TSIZE 8 65 + #define YYTD_ID_MAX 8 65 66 66 67 #define YYTD_DATA8 1 67 68 #define YYTD_DATA16 2
+2
security/apparmor/include/policy.h
··· 403 403 return profile->audit; 404 404 } 405 405 406 + bool policy_view_capable(void); 407 + bool policy_admin_capable(void); 406 408 bool aa_may_manage_policy(int op); 407 409 408 410 #endif /* __AA_POLICY_H */
+17 -13
security/apparmor/lsm.c
··· 529 529 if (!*args) 530 530 goto out; 531 531 532 - arg_size = size - (args - (char *) value); 532 + arg_size = size - (args - (largs ? largs : (char *) value)); 533 533 if (strcmp(name, "current") == 0) { 534 534 if (strcmp(command, "changehat") == 0) { 535 535 error = aa_setprocattr_changehat(args, arg_size, ··· 671 671 module_param_call(mode, param_set_mode, param_get_mode, 672 672 &aa_g_profile_mode, S_IRUSR | S_IWUSR); 673 673 674 + #ifdef CONFIG_SECURITY_APPARMOR_HASH 675 + /* whether policy verification hashing is enabled */ 676 + bool aa_g_hash_policy = IS_ENABLED(CONFIG_SECURITY_APPARMOR_HASH_DEFAULT); 677 + module_param_named(hash_policy, aa_g_hash_policy, aabool, S_IRUSR | S_IWUSR); 678 + #endif 679 + 674 680 /* Debug mode */ 675 681 bool aa_g_debug; 676 682 module_param_named(debug, aa_g_debug, aabool, S_IRUSR | S_IWUSR); ··· 734 728 /* set global flag turning off the ability to load policy */ 735 729 static int param_set_aalockpolicy(const char *val, const struct kernel_param *kp) 736 730 { 737 - if (!capable(CAP_MAC_ADMIN)) 731 + if (!policy_admin_capable()) 738 732 return -EPERM; 739 - if (aa_g_lock_policy) 740 - return -EACCES; 741 733 return param_set_bool(val, kp); 742 734 } 743 735 744 736 static int param_get_aalockpolicy(char *buffer, const struct kernel_param *kp) 745 737 { 746 - if (!capable(CAP_MAC_ADMIN)) 738 + if (!policy_view_capable()) 747 739 return -EPERM; 748 740 return param_get_bool(buffer, kp); 749 741 } 750 742 751 743 static int param_set_aabool(const char *val, const struct kernel_param *kp) 752 744 { 753 - if (!capable(CAP_MAC_ADMIN)) 745 + if (!policy_admin_capable()) 754 746 return -EPERM; 755 747 return param_set_bool(val, kp); 756 748 } 757 749 758 750 static int param_get_aabool(char *buffer, const struct kernel_param *kp) 759 751 { 760 - if (!capable(CAP_MAC_ADMIN)) 752 + if (!policy_view_capable()) 761 753 return -EPERM; 762 754 return param_get_bool(buffer, kp); 763 755 } 764 756 765 757 static int 
param_set_aauint(const char *val, const struct kernel_param *kp) 766 758 { 767 - if (!capable(CAP_MAC_ADMIN)) 759 + if (!policy_admin_capable()) 768 760 return -EPERM; 769 761 return param_set_uint(val, kp); 770 762 } 771 763 772 764 static int param_get_aauint(char *buffer, const struct kernel_param *kp) 773 765 { 774 - if (!capable(CAP_MAC_ADMIN)) 766 + if (!policy_view_capable()) 775 767 return -EPERM; 776 768 return param_get_uint(buffer, kp); 777 769 } 778 770 779 771 static int param_get_audit(char *buffer, struct kernel_param *kp) 780 772 { 781 - if (!capable(CAP_MAC_ADMIN)) 773 + if (!policy_view_capable()) 782 774 return -EPERM; 783 775 784 776 if (!apparmor_enabled) ··· 788 784 static int param_set_audit(const char *val, struct kernel_param *kp) 789 785 { 790 786 int i; 791 - if (!capable(CAP_MAC_ADMIN)) 787 + if (!policy_admin_capable()) 792 788 return -EPERM; 793 789 794 790 if (!apparmor_enabled) ··· 809 805 810 806 static int param_get_mode(char *buffer, struct kernel_param *kp) 811 807 { 812 - if (!capable(CAP_MAC_ADMIN)) 808 + if (!policy_admin_capable()) 813 809 return -EPERM; 814 810 815 811 if (!apparmor_enabled) ··· 821 817 static int param_set_mode(const char *val, struct kernel_param *kp) 822 818 { 823 819 int i; 824 - if (!capable(CAP_MAC_ADMIN)) 820 + if (!policy_admin_capable()) 825 821 return -EPERM; 826 822 827 823 if (!apparmor_enabled)
+10 -6
security/apparmor/match.c
··· 47 47 * it every time we use td_id as an index 48 48 */ 49 49 th.td_id = be16_to_cpu(*(u16 *) (blob)) - 1; 50 + if (th.td_id > YYTD_ID_MAX) 51 + goto out; 50 52 th.td_flags = be16_to_cpu(*(u16 *) (blob + 2)); 51 53 th.td_lolen = be32_to_cpu(*(u32 *) (blob + 8)); 52 54 blob += sizeof(struct table_header); ··· 63 61 64 62 table = kvzalloc(tsize); 65 63 if (table) { 66 - *table = th; 64 + table->td_id = th.td_id; 65 + table->td_flags = th.td_flags; 66 + table->td_lolen = th.td_lolen; 67 67 if (th.td_flags == YYTD_DATA8) 68 68 UNPACK_ARRAY(table->td_data, blob, th.td_lolen, 69 69 u8, byte_to_byte); ··· 77 73 u32, be32_to_cpu); 78 74 else 79 75 goto fail; 76 + /* if table was vmalloced make sure the page tables are synced 77 + * before it is used, as it goes live to all cpus. 78 + */ 79 + if (is_vmalloc_addr(table)) 80 + vm_unmap_aliases(); 80 81 } 81 82 82 83 out: 83 - /* if table was vmalloced make sure the page tables are synced 84 - * before it is used, as it goes live to all cpus. 85 - */ 86 - if (is_vmalloc_addr(table)) 87 - vm_unmap_aliases(); 88 84 return table; 89 85 fail: 90 86 kvfree(table);
+36 -25
security/apparmor/path.c
··· 25 25 #include "include/path.h" 26 26 #include "include/policy.h" 27 27 28 - 29 28 /* modified from dcache.c */ 30 29 static int prepend(char **buffer, int buflen, const char *str, int namelen) 31 30 { ··· 37 38 } 38 39 39 40 #define CHROOT_NSCONNECT (PATH_CHROOT_REL | PATH_CHROOT_NSCONNECT) 41 + 42 + /* If the path is not connected to the expected root, 43 + * check if it is a sysctl and handle specially else remove any 44 + * leading / that __d_path may have returned. 45 + * Unless 46 + * specifically directed to connect the path, 47 + * OR 48 + * if in a chroot and doing chroot relative paths and the path 49 + * resolves to the namespace root (would be connected outside 50 + * of chroot) and specifically directed to connect paths to 51 + * namespace root. 52 + */ 53 + static int disconnect(const struct path *path, char *buf, char **name, 54 + int flags) 55 + { 56 + int error = 0; 57 + 58 + if (!(flags & PATH_CONNECT_PATH) && 59 + !(((flags & CHROOT_NSCONNECT) == CHROOT_NSCONNECT) && 60 + our_mnt(path->mnt))) { 61 + /* disconnected path, don't return pathname starting 62 + * with '/' 63 + */ 64 + error = -EACCES; 65 + if (**name == '/') 66 + *name = *name + 1; 67 + } else if (**name != '/') 68 + /* CONNECT_PATH with missing root */ 69 + error = prepend(name, *name - buf, "/", 1); 70 + 71 + return error; 72 + } 40 73 41 74 /** 42 75 * d_namespace_path - lookup a name associated with a given path ··· 105 74 * control instead of hard coded /proc 106 75 */ 107 76 return prepend(name, *name - buf, "/proc", 5); 108 - } 77 + } else 78 + return disconnect(path, buf, name, flags); 109 79 return 0; 110 80 } 111 81 ··· 152 120 goto out; 153 121 } 154 122 155 - /* If the path is not connected to the expected root, 156 - * check if it is a sysctl and handle specially else remove any 157 - * leading / that __d_path may have returned. 
158 - * Unless 159 - * specifically directed to connect the path, 160 - * OR 161 - * if in a chroot and doing chroot relative paths and the path 162 - * resolves to the namespace root (would be connected outside 163 - * of chroot) and specifically directed to connect paths to 164 - * namespace root. 165 - */ 166 - if (!connected) { 167 - if (!(flags & PATH_CONNECT_PATH) && 168 - !(((flags & CHROOT_NSCONNECT) == CHROOT_NSCONNECT) && 169 - our_mnt(path->mnt))) { 170 - /* disconnected path, don't return pathname starting 171 - * with '/' 172 - */ 173 - error = -EACCES; 174 - if (*res == '/') 175 - *name = res + 1; 176 - } 177 - } 123 + if (!connected) 124 + error = disconnect(path, buf, name, flags); 178 125 179 126 out: 180 127 return error;
+44 -17
security/apparmor/policy.c
··· 766 766 struct aa_profile *profile; 767 767 768 768 rcu_read_lock(); 769 - profile = aa_get_profile(__find_child(&parent->base.profiles, name)); 769 + do { 770 + profile = __find_child(&parent->base.profiles, name); 771 + } while (profile && !aa_get_profile_not0(profile)); 770 772 rcu_read_unlock(); 771 773 772 774 /* refcount released by caller */ ··· 918 916 &sa, NULL); 919 917 } 920 918 919 + bool policy_view_capable(void) 920 + { 921 + struct user_namespace *user_ns = current_user_ns(); 922 + bool response = false; 923 + 924 + if (ns_capable(user_ns, CAP_MAC_ADMIN)) 925 + response = true; 926 + 927 + return response; 928 + } 929 + 930 + bool policy_admin_capable(void) 931 + { 932 + return policy_view_capable() && !aa_g_lock_policy; 933 + } 934 + 921 935 /** 922 936 * aa_may_manage_policy - can the current task manage policy 923 937 * @op: the policy manipulation operation being done ··· 948 930 return 0; 949 931 } 950 932 951 - if (!capable(CAP_MAC_ADMIN)) { 933 + if (!policy_admin_capable()) { 952 934 audit_policy(op, GFP_KERNEL, NULL, "not policy admin", -EACCES); 953 935 return 0; 954 936 } ··· 1085 1067 */ 1086 1068 ssize_t aa_replace_profiles(void *udata, size_t size, bool noreplace) 1087 1069 { 1088 - const char *ns_name, *name = NULL, *info = NULL; 1070 + const char *ns_name, *info = NULL; 1089 1071 struct aa_namespace *ns = NULL; 1090 1072 struct aa_load_ent *ent, *tmp; 1091 1073 int op = OP_PROF_REPL; ··· 1100 1082 /* released below */ 1101 1083 ns = aa_prepare_namespace(ns_name); 1102 1084 if (!ns) { 1103 - info = "failed to prepare namespace"; 1104 - error = -ENOMEM; 1105 - name = ns_name; 1106 - goto fail; 1085 + error = audit_policy(op, GFP_KERNEL, ns_name, 1086 + "failed to prepare namespace", -ENOMEM); 1087 + goto free; 1107 1088 } 1108 1089 1109 1090 mutex_lock(&ns->lock); 1110 1091 /* setup parent and ns info */ 1111 1092 list_for_each_entry(ent, &lh, list) { 1112 1093 struct aa_policy *policy; 1113 - 1114 - name = ent->new->base.hname; 
1115 1094 error = __lookup_replace(ns, ent->new->base.hname, noreplace, 1116 1095 &ent->old, &info); 1117 1096 if (error) ··· 1136 1121 if (!p) { 1137 1122 error = -ENOENT; 1138 1123 info = "parent does not exist"; 1139 - name = ent->new->base.hname; 1140 1124 goto fail_lock; 1141 1125 } 1142 1126 rcu_assign_pointer(ent->new->parent, aa_get_profile(p)); ··· 1177 1163 list_del_init(&ent->list); 1178 1164 op = (!ent->old && !ent->rename) ? OP_PROF_LOAD : OP_PROF_REPL; 1179 1165 1180 - audit_policy(op, GFP_ATOMIC, ent->new->base.name, NULL, error); 1166 + audit_policy(op, GFP_ATOMIC, ent->new->base.hname, NULL, error); 1181 1167 1182 1168 if (ent->old) { 1183 1169 __replace_profile(ent->old, ent->new, 1); ··· 1201 1187 /* parent replaced in this atomic set? */ 1202 1188 if (newest != parent) { 1203 1189 aa_get_profile(newest); 1204 - aa_put_profile(parent); 1205 1190 rcu_assign_pointer(ent->new->parent, newest); 1206 - } else 1207 - aa_put_profile(newest); 1191 + aa_put_profile(parent); 1192 + } 1208 1193 /* aafs interface uses replacedby */ 1209 1194 rcu_assign_pointer(ent->new->replacedby->profile, 1210 1195 aa_get_profile(ent->new)); 1211 - __list_add_profile(&parent->base.profiles, ent->new); 1196 + __list_add_profile(&newest->base.profiles, ent->new); 1197 + aa_put_profile(newest); 1212 1198 } else { 1213 1199 /* aafs interface uses replacedby */ 1214 1200 rcu_assign_pointer(ent->new->replacedby->profile, ··· 1228 1214 1229 1215 fail_lock: 1230 1216 mutex_unlock(&ns->lock); 1231 - fail: 1232 - error = audit_policy(op, GFP_KERNEL, name, info, error); 1233 1217 1218 + /* audit cause of failure */ 1219 + op = (!ent->old) ? 
OP_PROF_LOAD : OP_PROF_REPL; 1220 + audit_policy(op, GFP_KERNEL, ent->new->base.hname, info, error); 1221 + /* audit status that rest of profiles in the atomic set failed too */ 1222 + info = "valid profile in failed atomic policy load"; 1223 + list_for_each_entry(tmp, &lh, list) { 1224 + if (tmp == ent) { 1225 + info = "unchecked profile in failed atomic policy load"; 1226 + /* skip entry that caused failure */ 1227 + continue; 1228 + } 1229 + op = (!ent->old) ? OP_PROF_LOAD : OP_PROF_REPL; 1230 + audit_policy(op, GFP_KERNEL, tmp->new->base.hname, info, error); 1231 + } 1232 + free: 1234 1233 list_for_each_entry_safe(ent, tmp, &lh, list) { 1235 1234 list_del_init(&ent->list); 1236 1235 aa_load_ent_free(ent);
+5 -2
security/apparmor/policy_unpack.c
··· 583 583 error = PTR_ERR(profile->policy.dfa); 584 584 profile->policy.dfa = NULL; 585 585 goto fail; 586 + } else if (!profile->policy.dfa) { 587 + error = -EPROTO; 588 + goto fail; 586 589 } 587 590 if (!unpack_u32(e, &profile->policy.start[0], "start")) 588 591 /* default start state */ ··· 679 676 int index, xtype; 680 677 xtype = xindex & AA_X_TYPE_MASK; 681 678 index = xindex & AA_X_INDEX_MASK; 682 - if (xtype == AA_X_TABLE && index > table_size) 679 + if (xtype == AA_X_TABLE && index >= table_size) 683 680 return 0; 684 681 return 1; 685 682 } ··· 779 776 goto fail_profile; 780 777 781 778 error = aa_calc_profile_hash(profile, e.version, start, 782 - e.pos - start); 779 + e.pos - start); 783 780 if (error) 784 781 goto fail_profile; 785 782
+4 -2
security/apparmor/resource.c
··· 101 101 /* TODO: extend resource control to handle other (non current) 102 102 * profiles. AppArmor rules currently have the implicit assumption 103 103 * that the task is setting the resource of a task confined with 104 - * the same profile. 104 + * the same profile or that the task setting the resource of another 105 + * task has CAP_SYS_RESOURCE. 105 106 */ 106 - if (profile != task_profile || 107 + if ((profile != task_profile && 108 + aa_capable(profile, CAP_SYS_RESOURCE, 1)) || 107 109 (profile->rlimits.mask & (1 << resource) && 108 110 new_rlim->rlim_max > profile->rlimits.limits[resource].rlim_max)) 109 111 error = -EACCES;
+2
security/integrity/iint.c
··· 79 79 iint->ima_bprm_status = INTEGRITY_UNKNOWN; 80 80 iint->ima_read_status = INTEGRITY_UNKNOWN; 81 81 iint->evm_status = INTEGRITY_UNKNOWN; 82 + iint->measured_pcrs = 0; 82 83 kmem_cache_free(iint_cache, iint); 83 84 } 84 85 ··· 160 159 iint->ima_bprm_status = INTEGRITY_UNKNOWN; 161 160 iint->ima_read_status = INTEGRITY_UNKNOWN; 162 161 iint->evm_status = INTEGRITY_UNKNOWN; 162 + iint->measured_pcrs = 0; 163 163 } 164 164 165 165 static int __init integrity_iintcache_init(void)
+7 -4
security/integrity/ima/ima.h
··· 88 88 }; 89 89 90 90 struct ima_template_entry { 91 + int pcr; 91 92 u8 digest[TPM_DIGEST_SIZE]; /* sha1 or md5 measurement hash */ 92 93 struct ima_template_desc *template_desc; /* template descriptor */ 93 94 u32 template_data_len; ··· 155 154 }; 156 155 157 156 /* LIM API function definitions */ 158 - int ima_get_action(struct inode *inode, int mask, enum ima_hooks func); 157 + int ima_get_action(struct inode *inode, int mask, 158 + enum ima_hooks func, int *pcr); 159 159 int ima_must_measure(struct inode *inode, int mask, enum ima_hooks func); 160 160 int ima_collect_measurement(struct integrity_iint_cache *iint, 161 161 struct file *file, void *buf, loff_t size, ··· 164 162 void ima_store_measurement(struct integrity_iint_cache *iint, struct file *file, 165 163 const unsigned char *filename, 166 164 struct evm_ima_xattr_data *xattr_value, 167 - int xattr_len); 165 + int xattr_len, int pcr); 168 166 void ima_audit_measurement(struct integrity_iint_cache *iint, 169 167 const unsigned char *filename); 170 168 int ima_alloc_init_template(struct ima_event_data *event_data, 171 169 struct ima_template_entry **entry); 172 170 int ima_store_template(struct ima_template_entry *entry, int violation, 173 - struct inode *inode, const unsigned char *filename); 171 + struct inode *inode, 172 + const unsigned char *filename, int pcr); 174 173 void ima_free_template_entry(struct ima_template_entry *entry); 175 174 const char *ima_d_path(const struct path *path, char **pathbuf); 176 175 177 176 /* IMA policy related functions */ 178 177 int ima_match_policy(struct inode *inode, enum ima_hooks func, int mask, 179 - int flags); 178 + int flags, int *pcr); 180 179 void ima_init_policy(void); 181 180 void ima_update_policy(void); 182 181 void ima_update_policy_flag(void);
+13 -8
security/integrity/ima/ima_api.c
··· 87 87 */ 88 88 int ima_store_template(struct ima_template_entry *entry, 89 89 int violation, struct inode *inode, 90 - const unsigned char *filename) 90 + const unsigned char *filename, int pcr) 91 91 { 92 92 static const char op[] = "add_template_measure"; 93 93 static const char audit_cause[] = "hashing_error"; ··· 114 114 } 115 115 memcpy(entry->digest, hash.hdr.digest, hash.hdr.length); 116 116 } 117 + entry->pcr = pcr; 117 118 result = ima_add_template_entry(entry, violation, op, inode, filename); 118 119 return result; 119 120 } ··· 145 144 result = -ENOMEM; 146 145 goto err_out; 147 146 } 148 - result = ima_store_template(entry, violation, inode, filename); 147 + result = ima_store_template(entry, violation, inode, 148 + filename, CONFIG_IMA_MEASURE_PCR_IDX); 149 149 if (result < 0) 150 150 ima_free_template_entry(entry); 151 151 err_out: ··· 159 157 * @inode: pointer to inode to measure 160 158 * @mask: contains the permission mask (MAY_READ, MAY_WRITE, MAY_EXECUTE) 161 159 * @func: caller identifier 160 + * @pcr: pointer filled in if matched measure policy sets pcr= 162 161 * 163 162 * The policy is defined in terms of keypairs: 164 163 * subj=, obj=, type=, func=, mask=, fsmagic= ··· 171 168 * Returns IMA_MEASURE, IMA_APPRAISE mask. 
172 169 * 173 170 */ 174 - int ima_get_action(struct inode *inode, int mask, enum ima_hooks func) 171 + int ima_get_action(struct inode *inode, int mask, enum ima_hooks func, int *pcr) 175 172 { 176 173 int flags = IMA_MEASURE | IMA_AUDIT | IMA_APPRAISE; 177 174 178 175 flags &= ima_policy_flag; 179 176 180 - return ima_match_policy(inode, func, mask, flags); 177 + return ima_match_policy(inode, func, mask, flags, pcr); 181 178 } 182 179 183 180 /* ··· 255 252 void ima_store_measurement(struct integrity_iint_cache *iint, 256 253 struct file *file, const unsigned char *filename, 257 254 struct evm_ima_xattr_data *xattr_value, 258 - int xattr_len) 255 + int xattr_len, int pcr) 259 256 { 260 257 static const char op[] = "add_template_measure"; 261 258 static const char audit_cause[] = "ENOMEM"; ··· 266 263 xattr_len, NULL}; 267 264 int violation = 0; 268 265 269 - if (iint->flags & IMA_MEASURED) 266 + if (iint->measured_pcrs & (0x1 << pcr)) 270 267 return; 271 268 272 269 result = ima_alloc_init_template(&event_data, &entry); ··· 276 273 return; 277 274 } 278 275 279 - result = ima_store_template(entry, violation, inode, filename); 280 - if (!result || result == -EEXIST) 276 + result = ima_store_template(entry, violation, inode, filename, pcr); 277 + if (!result || result == -EEXIST) { 281 278 iint->flags |= IMA_MEASURED; 279 + iint->measured_pcrs |= (0x1 << pcr); 280 + } 282 281 if (result < 0) 283 282 ima_free_template_entry(entry); 284 283 }
+2 -1
security/integrity/ima/ima_appraise.c
··· 41 41 if (!ima_appraise) 42 42 return 0; 43 43 44 - return ima_match_policy(inode, func, mask, IMA_APPRAISE); 44 + return ima_match_policy(inode, func, mask, IMA_APPRAISE, NULL); 45 45 } 46 46 47 47 static int ima_fix_xattr(struct dentry *dentry, ··· 370 370 return; 371 371 372 372 iint->flags &= ~IMA_DONE_MASK; 373 + iint->measured_pcrs = 0; 373 374 if (digsig) 374 375 iint->flags |= IMA_DIGSIG; 375 376 return;
+4 -5
security/integrity/ima/ima_fs.c
··· 123 123 struct ima_template_entry *e; 124 124 char *template_name; 125 125 int namelen; 126 - u32 pcr = CONFIG_IMA_MEASURE_PCR_IDX; 127 126 bool is_ima_template = false; 128 127 int i; 129 128 ··· 136 137 137 138 /* 138 139 * 1st: PCRIndex 139 - * PCR used is always the same (config option) in 140 - * little-endian format 140 + * PCR used defaults to the same (config option) in 141 + * little-endian format, unless set in policy 141 142 */ 142 - ima_putc(m, &pcr, sizeof(pcr)); 143 + ima_putc(m, &e->pcr, sizeof(e->pcr)); 143 144 144 145 /* 2nd: template digest */ 145 146 ima_putc(m, e->digest, TPM_DIGEST_SIZE); ··· 218 219 e->template_desc->name : e->template_desc->fmt; 219 220 220 221 /* 1st: PCR used (config option) */ 221 - seq_printf(m, "%2d ", CONFIG_IMA_MEASURE_PCR_IDX); 222 + seq_printf(m, "%2d ", e->pcr); 222 223 223 224 /* 2nd: SHA1 template hash */ 224 225 ima_print_digest(m, e->digest, TPM_DIGEST_SIZE);
+2 -1
security/integrity/ima/ima_init.c
··· 79 79 } 80 80 81 81 result = ima_store_template(entry, violation, NULL, 82 - boot_aggregate_name); 82 + boot_aggregate_name, 83 + CONFIG_IMA_MEASURE_PCR_IDX); 83 84 if (result < 0) { 84 85 ima_free_template_entry(entry); 85 86 audit_cause = "store_entry";
+9 -3
security/integrity/ima/ima_main.c
··· 125 125 if ((iint->version != inode->i_version) || 126 126 (iint->flags & IMA_NEW_FILE)) { 127 127 iint->flags &= ~(IMA_DONE_MASK | IMA_NEW_FILE); 128 + iint->measured_pcrs = 0; 128 129 if (iint->flags & IMA_APPRAISE) 129 130 ima_update_xattr(iint, file); 130 131 } ··· 163 162 char *pathbuf = NULL; 164 163 const char *pathname = NULL; 165 164 int rc = -ENOMEM, action, must_appraise; 165 + int pcr = CONFIG_IMA_MEASURE_PCR_IDX; 166 166 struct evm_ima_xattr_data *xattr_value = NULL; 167 167 int xattr_len = 0; 168 168 bool violation_check; ··· 176 174 * bitmask based on the appraise/audit/measurement policy. 177 175 * Included is the appraise submask. 178 176 */ 179 - action = ima_get_action(inode, mask, func); 177 + action = ima_get_action(inode, mask, func, &pcr); 180 178 violation_check = ((func == FILE_CHECK || func == MMAP_CHECK) && 181 179 (ima_policy_flag & IMA_MEASURE)); 182 180 if (!action && !violation_check) ··· 211 209 */ 212 210 iint->flags |= action; 213 211 action &= IMA_DO_MASK; 214 - action &= ~((iint->flags & IMA_DONE_MASK) >> 1); 212 + action &= ~((iint->flags & (IMA_DONE_MASK ^ IMA_MEASURED)) >> 1); 213 + 214 + /* If target pcr is already measured, unset IMA_MEASURE action */ 215 + if ((action & IMA_MEASURE) && (iint->measured_pcrs & (0x1 << pcr))) 216 + action ^= IMA_MEASURE; 215 217 216 218 /* Nothing to do, just return existing appraised status */ 217 219 if (!action) { ··· 244 238 245 239 if (action & IMA_MEASURE) 246 240 ima_store_measurement(iint, file, pathname, 247 - xattr_value, xattr_len); 241 + xattr_value, xattr_len, pcr); 248 242 if (action & IMA_APPRAISE_SUBMASK) 249 243 rc = ima_appraise_measurement(func, iint, file, pathname, 250 244 xattr_value, xattr_len, opened);
+33 -2
security/integrity/ima/ima_policy.c
··· 32 32 #define IMA_FSUUID 0x0020 33 33 #define IMA_INMASK 0x0040 34 34 #define IMA_EUID 0x0080 35 + #define IMA_PCR 0x0100 35 36 36 37 #define UNKNOWN 0 37 38 #define MEASURE 0x0001 /* same as IMA_MEASURE */ ··· 40 39 #define APPRAISE 0x0004 /* same as IMA_APPRAISE */ 41 40 #define DONT_APPRAISE 0x0008 42 41 #define AUDIT 0x0040 42 + 43 + #define INVALID_PCR(a) (((a) < 0) || \ 44 + (a) >= (FIELD_SIZEOF(struct integrity_iint_cache, measured_pcrs) * 8)) 43 45 44 46 int ima_policy_flag; 45 47 static int temp_ima_appraise; ··· 64 60 u8 fsuuid[16]; 65 61 kuid_t uid; 66 62 kuid_t fowner; 63 + int pcr; 67 64 struct { 68 65 void *rule; /* LSM file metadata specific */ 69 66 void *args_p; /* audit value */ ··· 324 319 * @inode: pointer to an inode for which the policy decision is being made 325 320 * @func: IMA hook identifier 326 321 * @mask: requested action (MAY_READ | MAY_WRITE | MAY_APPEND | MAY_EXEC) 322 + * @pcr: set the pcr to extend 327 323 * 328 324 * Measure decision based on func/mask/fsmagic and LSM(subj/obj/type) 329 325 * conditions. ··· 334 328 * than writes so ima_match_policy() is classical RCU candidate. 
335 329 */ 336 330 int ima_match_policy(struct inode *inode, enum ima_hooks func, int mask, 337 - int flags) 331 + int flags, int *pcr) 338 332 { 339 333 struct ima_rule_entry *entry; 340 334 int action = 0, actmask = flags | (flags << 1); ··· 358 352 actmask &= ~(entry->action | entry->action << 1); 359 353 else 360 354 actmask &= ~(entry->action | entry->action >> 1); 355 + 356 + if ((pcr) && (entry->flags & IMA_PCR)) 357 + *pcr = entry->pcr; 361 358 362 359 if (!actmask) 363 360 break; ··· 487 478 Opt_subj_user, Opt_subj_role, Opt_subj_type, 488 479 Opt_func, Opt_mask, Opt_fsmagic, 489 480 Opt_fsuuid, Opt_uid, Opt_euid, Opt_fowner, 490 - Opt_appraise_type, Opt_permit_directio 481 + Opt_appraise_type, Opt_permit_directio, 482 + Opt_pcr 491 483 }; 492 484 493 485 static match_table_t policy_tokens = { ··· 512 502 {Opt_fowner, "fowner=%s"}, 513 503 {Opt_appraise_type, "appraise_type=%s"}, 514 504 {Opt_permit_directio, "permit_directio"}, 505 + {Opt_pcr, "pcr=%s"}, 515 506 {Opt_err, NULL} 516 507 }; 517 508 ··· 785 774 case Opt_permit_directio: 786 775 entry->flags |= IMA_PERMIT_DIRECTIO; 787 776 break; 777 + case Opt_pcr: 778 + if (entry->action != MEASURE) { 779 + result = -EINVAL; 780 + break; 781 + } 782 + ima_log_string(ab, "pcr", args[0].from); 783 + 784 + result = kstrtoint(args[0].from, 10, &entry->pcr); 785 + if (result || INVALID_PCR(entry->pcr)) 786 + result = -EINVAL; 787 + else 788 + entry->flags |= IMA_PCR; 789 + 790 + break; 788 791 case Opt_err: 789 792 ima_log_string(ab, "UNKNOWN", p); 790 793 result = -EINVAL; ··· 1033 1008 if (entry->flags & IMA_FSMAGIC) { 1034 1009 snprintf(tbuf, sizeof(tbuf), "0x%lx", entry->fsmagic); 1035 1010 seq_printf(m, pt(Opt_fsmagic), tbuf); 1011 + seq_puts(m, " "); 1012 + } 1013 + 1014 + if (entry->flags & IMA_PCR) { 1015 + snprintf(tbuf, sizeof(tbuf), "%d", entry->pcr); 1016 + seq_printf(m, pt(Opt_pcr), tbuf); 1036 1017 seq_puts(m, " "); 1037 1018 } 1038 1019
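With the new `Opt_pcr` token, a measure rule can direct its entries to a specific PCR; the parser rejects `pcr=` on anything other than a MEASURE action and on indexes outside the `measured_pcrs` bit width. An illustrative rule loaded through the usual securityfs interface (path per the standard IMA securityfs layout; the PCR number here is an arbitrary example):

```shell
echo "measure func=BPRM_CHECK mask=MAY_EXEC pcr=11" \
    > /sys/kernel/security/ima/policy
```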
+7 -6
security/integrity/ima/ima_queue.c
··· 44 44 static DEFINE_MUTEX(ima_extend_list_mutex); 45 45 46 46 /* lookup up the digest value in the hash table, and return the entry */ 47 - static struct ima_queue_entry *ima_lookup_digest_entry(u8 *digest_value) 47 + static struct ima_queue_entry *ima_lookup_digest_entry(u8 *digest_value, 48 + int pcr) 48 49 { 49 50 struct ima_queue_entry *qe, *ret = NULL; 50 51 unsigned int key; ··· 55 54 rcu_read_lock(); 56 55 hlist_for_each_entry_rcu(qe, &ima_htable.queue[key], hnext) { 57 56 rc = memcmp(qe->entry->digest, digest_value, TPM_DIGEST_SIZE); 58 - if (rc == 0) { 57 + if ((rc == 0) && (qe->entry->pcr == pcr)) { 59 58 ret = qe; 60 59 break; 61 60 } ··· 90 89 return 0; 91 90 } 92 91 93 - static int ima_pcr_extend(const u8 *hash) 92 + static int ima_pcr_extend(const u8 *hash, int pcr) 94 93 { 95 94 int result = 0; 96 95 97 96 if (!ima_used_chip) 98 97 return result; 99 98 100 - result = tpm_pcr_extend(TPM_ANY_NUM, CONFIG_IMA_MEASURE_PCR_IDX, hash); 99 + result = tpm_pcr_extend(TPM_ANY_NUM, pcr, hash); 101 100 if (result != 0) 102 101 pr_err("Error Communicating to TPM chip, result: %d\n", result); 103 102 return result; ··· 119 118 mutex_lock(&ima_extend_list_mutex); 120 119 if (!violation) { 121 120 memcpy(digest, entry->digest, sizeof(digest)); 122 - if (ima_lookup_digest_entry(digest)) { 121 + if (ima_lookup_digest_entry(digest, entry->pcr)) { 123 122 audit_cause = "hash_exists"; 124 123 result = -EEXIST; 125 124 goto out; ··· 136 135 if (violation) /* invalidate pcr */ 137 136 memset(digest, 0xff, sizeof(digest)); 138 137 139 - tpmresult = ima_pcr_extend(digest); 138 + tpmresult = ima_pcr_extend(digest, entry->pcr); 140 139 if (tpmresult != 0) { 141 140 snprintf(tpm_audit_cause, AUDIT_CAUSE_LEN_MAX, "TPM_error(%d)", 142 141 tpmresult);
+1
security/integrity/integrity.h
··· 103 103 struct inode *inode; /* back pointer to inode in question */ 104 104 u64 version; /* track inode changes */ 105 105 unsigned long flags; 106 + unsigned long measured_pcrs; 106 107 enum integrity_status ima_file_status:4; 107 108 enum integrity_status ima_mmap_status:4; 108 109 enum integrity_status ima_bprm_status:4;
+1 -1
security/keys/persistent.c
··· 114 114 ret = key_link(key_ref_to_ptr(dest_ref), persistent); 115 115 if (ret == 0) { 116 116 key_set_timeout(persistent, persistent_keyring_expiry); 117 - ret = persistent->serial; 117 + ret = persistent->serial; 118 118 } 119 119 } 120 120
+1 -1
security/keys/request_key.c
··· 442 442 443 443 if (ctx->index_key.type == &key_type_keyring) 444 444 return ERR_PTR(-EPERM); 445 - 445 + 446 446 user = key_user_lookup(current_fsuid()); 447 447 if (!user) 448 448 return ERR_PTR(-ENOMEM);
+25 -4
security/security.c
··· 700 700 701 701 int security_inode_getsecurity(struct inode *inode, const char *name, void **buffer, bool alloc) 702 702 { 703 + struct security_hook_list *hp; 704 + int rc; 705 + 703 706 if (unlikely(IS_PRIVATE(inode))) 704 707 return -EOPNOTSUPP; 705 - return call_int_hook(inode_getsecurity, -EOPNOTSUPP, inode, name, 706 - buffer, alloc); 708 + /* 709 + * Only one module will provide an attribute with a given name. 710 + */ 711 + list_for_each_entry(hp, &security_hook_heads.inode_getsecurity, list) { 712 + rc = hp->hook.inode_getsecurity(inode, name, buffer, alloc); 713 + if (rc != -EOPNOTSUPP) 714 + return rc; 715 + } 716 + return -EOPNOTSUPP; 707 717 } 708 718 709 719 int security_inode_setsecurity(struct inode *inode, const char *name, const void *value, size_t size, int flags) 710 720 { 721 + struct security_hook_list *hp; 722 + int rc; 723 + 711 724 if (unlikely(IS_PRIVATE(inode))) 712 725 return -EOPNOTSUPP; 713 - return call_int_hook(inode_setsecurity, -EOPNOTSUPP, inode, name, 714 - value, size, flags); 726 + /* 727 + * Only one module will provide an attribute with a given name. 728 + */ 729 + list_for_each_entry(hp, &security_hook_heads.inode_setsecurity, list) { 730 + rc = hp->hook.inode_setsecurity(inode, name, value, size, 731 + flags); 732 + if (rc != -EOPNOTSUPP) 733 + return rc; 734 + } 735 + return -EOPNOTSUPP; 715 736 } 716 737 717 738 int security_inode_listsecurity(struct inode *inode, char *buffer, size_t buffer_size)
+18 -3
security/selinux/hooks.c
··· 4627 4627 err = selinux_inet_sys_rcv_skb(sock_net(sk), skb->skb_iif, 4628 4628 addrp, family, peer_sid, &ad); 4629 4629 if (err) { 4630 - selinux_netlbl_err(skb, err, 0); 4630 + selinux_netlbl_err(skb, family, err, 0); 4631 4631 return err; 4632 4632 } 4633 4633 err = avc_has_perm(sk_sid, peer_sid, SECCLASS_PEER, 4634 4634 PEER__RECV, &ad); 4635 4635 if (err) { 4636 - selinux_netlbl_err(skb, err, 0); 4636 + selinux_netlbl_err(skb, family, err, 0); 4637 4637 return err; 4638 4638 } 4639 4639 } ··· 5001 5001 err = selinux_inet_sys_rcv_skb(dev_net(indev), indev->ifindex, 5002 5002 addrp, family, peer_sid, &ad); 5003 5003 if (err) { 5004 - selinux_netlbl_err(skb, err, 1); 5004 + selinux_netlbl_err(skb, family, err, 1); 5005 5005 return NF_DROP; 5006 5006 } 5007 5007 } ··· 5086 5086 { 5087 5087 return selinux_ip_output(skb, PF_INET); 5088 5088 } 5089 + 5090 + #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) 5091 + static unsigned int selinux_ipv6_output(void *priv, 5092 + struct sk_buff *skb, 5093 + const struct nf_hook_state *state) 5094 + { 5095 + return selinux_ip_output(skb, PF_INET6); 5096 + } 5097 + #endif /* IPV6 */ 5089 5098 5090 5099 static unsigned int selinux_ip_postroute_compat(struct sk_buff *skb, 5091 5100 int ifindex, ··· 6328 6319 .hook = selinux_ipv6_forward, 6329 6320 .pf = NFPROTO_IPV6, 6330 6321 .hooknum = NF_INET_FORWARD, 6322 + .priority = NF_IP6_PRI_SELINUX_FIRST, 6323 + }, 6324 + { 6325 + .hook = selinux_ipv6_output, 6326 + .pf = NFPROTO_IPV6, 6327 + .hooknum = NF_INET_LOCAL_OUT, 6331 6328 .priority = NF_IP6_PRI_SELINUX_FIRST, 6332 6329 }, 6333 6330 #endif /* IPV6 */
+3 -1
security/selinux/include/netlabel.h
··· 40 40 #ifdef CONFIG_NETLABEL 41 41 void selinux_netlbl_cache_invalidate(void); 42 42 43 - void selinux_netlbl_err(struct sk_buff *skb, int error, int gateway); 43 + void selinux_netlbl_err(struct sk_buff *skb, u16 family, int error, 44 + int gateway); 44 45 45 46 void selinux_netlbl_sk_security_free(struct sk_security_struct *sksec); 46 47 void selinux_netlbl_sk_security_reset(struct sk_security_struct *sksec); ··· 73 72 } 74 73 75 74 static inline void selinux_netlbl_err(struct sk_buff *skb, 75 + u16 family, 76 76 int error, 77 77 int gateway) 78 78 {
+27 -9
security/selinux/netlabel.c
··· 54 54 * 55 55 * 56 56 */ 57 57 static int selinux_netlbl_sidlookup_cached(struct sk_buff *skb, 57 + u16 family, 57 58 struct netlbl_lsm_secattr *secattr, 58 59 u32 *sid) 59 60 { ··· 64 63 if (rc == 0 && 65 64 (secattr->flags & NETLBL_SECATTR_CACHEABLE) && 66 65 (secattr->flags & NETLBL_SECATTR_CACHE)) 67 - netlbl_cache_add(skb, secattr); 66 + netlbl_cache_add(skb, family, secattr); 68 67 69 68 return rc; 70 69 } ··· 152 151 * present on the packet, NetLabel is smart enough to only act when it should. 153 152 * 154 153 */ 155 - void selinux_netlbl_err(struct sk_buff *skb, int error, int gateway) 154 + void selinux_netlbl_err(struct sk_buff *skb, u16 family, int error, int gateway) 156 155 { 157 - netlbl_skbuff_err(skb, error, gateway); 156 + netlbl_skbuff_err(skb, family, error, gateway); 158 157 } 159 158 160 159 /** ··· 215 214 netlbl_secattr_init(&secattr); 216 215 rc = netlbl_skbuff_getattr(skb, family, &secattr); 217 216 if (rc == 0 && secattr.flags != NETLBL_SECATTR_NONE) 218 - rc = selinux_netlbl_sidlookup_cached(skb, &secattr, sid); 217 + rc = selinux_netlbl_sidlookup_cached(skb, family, 218 + &secattr, sid); 219 219 else 220 220 *sid = SECSID_NULL; 221 221 *type = secattr.type; ··· 286 284 int rc; 287 285 struct netlbl_lsm_secattr secattr; 288 286 289 - if (family != PF_INET) 287 + if (family != PF_INET && family != PF_INET6) 290 288 return 0; 291 289 292 290 netlbl_secattr_init(&secattr); ··· 335 333 struct sk_security_struct *sksec = sk->sk_security; 336 334 struct netlbl_lsm_secattr *secattr; 337 335 338 - if (family != PF_INET) 336 + if (family != PF_INET && family != PF_INET6) 339 337 return 0; 340 338 341 339 secattr = selinux_netlbl_sock_genattr(sk); ··· 384 382 netlbl_secattr_init(&secattr); 385 383 rc = netlbl_skbuff_getattr(skb, family, &secattr); 386 384 if (rc == 0 && secattr.flags != NETLBL_SECATTR_NONE) 387 - rc = selinux_netlbl_sidlookup_cached(skb, &secattr, &nlbl_sid); 385 + rc = selinux_netlbl_sidlookup_cached(skb, family, 386 + &secattr, &nlbl_sid); 388 387 else 389 388 nlbl_sid = SECINITSID_UNLABELED; 390 389 netlbl_secattr_destroy(&secattr); ··· 408 405 return 0; 409 406 410 407 if (nlbl_sid != SECINITSID_UNLABELED) 411 - netlbl_skbuff_err(skb, rc, 0); 408 + netlbl_skbuff_err(skb, family, rc, 0); 412 409 return rc; 410 + } 411 + 412 + /** 413 + * selinux_netlbl_option - Is this a NetLabel option 414 + * @level: the socket level or protocol 415 + * @optname: the socket option name 416 + * 417 + * Description: 418 + * Returns true if @level and @optname refer to a NetLabel option. 419 + * Helper for selinux_netlbl_socket_setsockopt(). 420 + */ 421 + static inline int selinux_netlbl_option(int level, int optname) 422 + { 423 + return (level == IPPROTO_IP && optname == IP_OPTIONS) || 424 + (level == IPPROTO_IPV6 && optname == IPV6_HOPOPTS); 413 425 } 414 426 415 427 /** ··· 449 431 struct sk_security_struct *sksec = sk->sk_security; 450 432 struct netlbl_lsm_secattr secattr; 451 433 452 - if (level == IPPROTO_IP && optname == IP_OPTIONS && 434 + if (selinux_netlbl_option(level, optname) && 453 435 (sksec->nlbl_state == NLBL_LABELED || 454 436 sksec->nlbl_state == NLBL_CONNLABELED)) { 455 437 netlbl_secattr_init(&secattr);
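Alongside the family plumbing, the netlabel.c hunk introduces a small predicate so the setsockopt hook covers the IPv6 labeling option (IPV6_HOPOPTS) as well as the IPv4 one (IP_OPTIONS). A standalone copy of that check, compilable against the ordinary socket headers; the constants here come from <netinet/in.h> rather than the kernel's uapi headers:

```c
#include <netinet/in.h>

/* Standalone copy of the new helper: true when (level, optname) names a
 * socket option that could carry a NetLabel (CIPSO/CALIPSO) label. */
static int selinux_netlbl_option(int level, int optname)
{
	return (level == IPPROTO_IP && optname == IP_OPTIONS) ||
	       (level == IPPROTO_IPV6 && optname == IPV6_HOPOPTS);
}
```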
+1 -1
security/selinux/selinuxfs.c
··· 1347 1347 { 1348 1348 char *page; 1349 1349 ssize_t ret; 1350 - int new_value; 1350 + unsigned int new_value; 1351 1351 1352 1352 ret = task_has_security(current, SECURITY__SETSECPARAM); 1353 1353 if (ret)
+1 -1
security/selinux/ss/ebitmap.c
··· 165 165 e_iter = kzalloc(sizeof(*e_iter), GFP_ATOMIC); 166 166 if (e_iter == NULL) 167 167 goto netlbl_import_failure; 168 - e_iter->startbit = offset & ~(EBITMAP_SIZE - 1); 168 + e_iter->startbit = offset - (offset % EBITMAP_SIZE); 169 169 if (e_prev == NULL) 170 170 ebmap->node = e_iter; 171 171 else
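The one-line ebitmap.c fix matters because `offset & ~(EBITMAP_SIZE - 1)` only rounds down to a node boundary when EBITMAP_SIZE is a power of two; the modulo form is correct for any size. A quick demonstration of the divergence, using 320 purely as an illustrative non-power-of-two size (the kernel's actual EBITMAP_SIZE depends on its unit definitions):

```c
/* Illustrative, non-power-of-two node size; chosen only to show why the
 * bitmask rounding trick is not safe in general. */
#define EBITMAP_SIZE 320u

/* Old computation: only valid when EBITMAP_SIZE is a power of two. */
static unsigned int startbit_mask(unsigned int offset)
{
	return offset & ~(EBITMAP_SIZE - 1);
}

/* New computation: rounds down to a node boundary for any size. */
static unsigned int startbit_mod(unsigned int offset)
{
	return offset - (offset % EBITMAP_SIZE);
}
```

For offset 400 the modulo form yields 320 (the start of the second node), while the mask form yields 128, which is not a node boundary at all.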
+22 -48
security/selinux/ss/services.c
··· 543 543 struct av_decision *avd) 544 544 { 545 545 struct context lo_scontext; 546 - struct context lo_tcontext; 546 + struct context lo_tcontext, *tcontextp = tcontext; 547 547 struct av_decision lo_avd; 548 548 struct type_datum *source; 549 549 struct type_datum *target; ··· 553 553 scontext->type - 1); 554 554 BUG_ON(!source); 555 555 556 + if (!source->bounds) 557 + return; 558 + 556 559 target = flex_array_get_ptr(policydb.type_val_to_struct_array, 557 560 tcontext->type - 1); 558 561 BUG_ON(!target); 559 562 560 - if (source->bounds) { 561 - memset(&lo_avd, 0, sizeof(lo_avd)); 563 + memset(&lo_avd, 0, sizeof(lo_avd)); 562 564 563 - memcpy(&lo_scontext, scontext, sizeof(lo_scontext)); 564 - lo_scontext.type = source->bounds; 565 - 566 - context_struct_compute_av(&lo_scontext, 567 - tcontext, 568 - tclass, 569 - &lo_avd, 570 - NULL); 571 - if ((lo_avd.allowed & avd->allowed) == avd->allowed) 572 - return; /* no masked permission */ 573 - masked = ~lo_avd.allowed & avd->allowed; 574 - } 565 + memcpy(&lo_scontext, scontext, sizeof(lo_scontext)); 566 + lo_scontext.type = source->bounds; 575 567 576 568 if (target->bounds) { 577 - memset(&lo_avd, 0, sizeof(lo_avd)); 578 - 579 569 memcpy(&lo_tcontext, tcontext, sizeof(lo_tcontext)); 580 570 lo_tcontext.type = target->bounds; 581 - 582 - context_struct_compute_av(scontext, 583 - &lo_tcontext, 584 - tclass, 585 - &lo_avd, 586 - NULL); 587 - if ((lo_avd.allowed & avd->allowed) == avd->allowed) 588 - return; /* no masked permission */ 589 - masked = ~lo_avd.allowed & avd->allowed; 571 + tcontextp = &lo_tcontext; 590 572 } 591 573 592 - if (source->bounds && target->bounds) { 593 - memset(&lo_avd, 0, sizeof(lo_avd)); 594 - /* 595 - * lo_scontext and lo_tcontext are already 596 - * set up. 597 - */ 574 + context_struct_compute_av(&lo_scontext, 575 + tcontextp, 576 + tclass, 577 + &lo_avd, 578 + NULL); 598 579 599 - context_struct_compute_av(&lo_scontext, 600 - &lo_tcontext, 601 - tclass, 602 - &lo_avd, 603 - NULL); 604 - if ((lo_avd.allowed & avd->allowed) == avd->allowed) 605 - return; /* no masked permission */ 606 - masked = ~lo_avd.allowed & avd->allowed; 607 - } 580 + masked = ~lo_avd.allowed & avd->allowed; 608 581 609 - if (masked) { 610 - /* mask violated permissions */ 611 - avd->allowed &= ~masked; 582 + if (likely(!masked)) 583 + return; /* no masked permission */ 612 584 613 - /* audit masked permissions */ 614 - security_dump_masked_av(scontext, tcontext, 615 - tclass, masked, "bounds"); 616 - } 585 + /* mask violated permissions */ 586 + avd->allowed &= ~masked; 587 + 588 + /* audit masked permissions */ 589 + security_dump_masked_av(scontext, tcontext, 590 + tclass, masked, "bounds"); 617 591 } 618 592 619 593 /*
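The services.c rewrite above folds three duplicated bounds checks into one pass: return early when the source type is unbounded, substitute the bounded target context when the target type is bounded too, compute the lower-bound decision once, and strip any permission the bounds would deny. The masking arithmetic at the end, as a standalone sketch over plain bitmasks (the helper name is invented for illustration):

```c
#include <stdint.h>

/* Given the permissions granted to the real contexts (allowed) and to the
 * bounded ("lower") contexts (lo_allowed), return the updated allowed set:
 * any permission the bounds would not grant is masked out. */
static uint32_t apply_bounds(uint32_t allowed, uint32_t lo_allowed)
{
	uint32_t masked = ~lo_allowed & allowed; /* granted, but not by bounds */

	return allowed & ~masked;		 /* i.e. allowed & lo_allowed */
}
```

When `masked` is zero the kernel code returns early without auditing, which is the common case the new `likely(!masked)` annotation optimizes for.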
+4 -1
security/smack/smack_lsm.c
··· 2255 2255 struct smack_known *tkp = smk_of_task_struct(p); 2256 2256 int rc; 2257 2257 2258 + if (!sig) 2259 + return 0; /* null signal; existence test */ 2260 + 2258 2261 smk_ad_init(&ad, __func__, LSM_AUDIT_DATA_TASK); 2259 2262 smk_ad_setfield_u_tsk(&ad, p); 2260 2263 /* ··· 4023 4020 rc = smk_bu_note("IPv4 delivery", skp, ssp->smk_in, 4024 4021 MAY_WRITE, rc); 4025 4022 if (rc != 0) 4026 - netlbl_skbuff_err(skb, rc, 0); 4023 + netlbl_skbuff_err(skb, sk->sk_family, rc, 0); 4027 4024 break; 4028 4025 #if IS_ENABLED(CONFIG_IPV6) 4029 4026 case PF_INET6:
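The Smack hunk above skips mediation for signal 0, matching the POSIX rule that kill() with a null signal delivers nothing and only checks that the target exists and could be signaled. The userspace side of that convention, for reference; process_exists() is an illustrative helper, not a Smack or kernel API:

```c
#include <errno.h>
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

/* Probe whether a process exists without delivering any signal:
 * kill(pid, 0) performs only the pid and permission checks. */
static int process_exists(pid_t pid)
{
	if (kill(pid, 0) == 0)
		return 1;
	return errno == EPERM;	/* exists, but we may not signal it */
}
```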
+2 -7
security/tomoyo/gc.c
··· 645 645 } 646 646 } 647 647 spin_unlock(&tomoyo_io_buffer_list_lock); 648 - if (is_write) { 649 - struct task_struct *task = kthread_create(tomoyo_gc_thread, 650 - NULL, 651 - "GC for TOMOYO"); 652 - if (!IS_ERR(task)) 653 - wake_up_process(task); 654 - } 648 + if (is_write) 649 + kthread_run(tomoyo_gc_thread, NULL, "GC for TOMOYO"); 655 650 }
+165 -11
tools/testing/selftests/seccomp/seccomp_bpf.c
··· 1021 1021 typedef void tracer_func_t(struct __test_metadata *_metadata, 1022 1022 pid_t tracee, int status, void *args); 1023 1023 1024 - void tracer(struct __test_metadata *_metadata, int fd, pid_t tracee, 1025 - tracer_func_t tracer_func, void *args) 1024 + void start_tracer(struct __test_metadata *_metadata, int fd, pid_t tracee, 1025 + tracer_func_t tracer_func, void *args, bool ptrace_syscall) 1026 1026 { 1027 1027 int ret = -1; 1028 1028 struct sigaction action = { ··· 1042 1042 /* Wait for attach stop */ 1043 1043 wait(NULL); 1044 1044 1045 - ret = ptrace(PTRACE_SETOPTIONS, tracee, NULL, PTRACE_O_TRACESECCOMP); 1045 + ret = ptrace(PTRACE_SETOPTIONS, tracee, NULL, ptrace_syscall ? 1046 + PTRACE_O_TRACESYSGOOD : 1047 + PTRACE_O_TRACESECCOMP); 1046 1048 ASSERT_EQ(0, ret) { 1047 1049 TH_LOG("Failed to set PTRACE_O_TRACESECCOMP"); 1048 1050 kill(tracee, SIGKILL); 1049 1051 } 1050 - ptrace(PTRACE_CONT, tracee, NULL, 0); 1052 + ret = ptrace(ptrace_syscall ? PTRACE_SYSCALL : PTRACE_CONT, 1053 + tracee, NULL, 0); 1054 + ASSERT_EQ(0, ret); 1051 1055 1052 1056 /* Unblock the tracee */ 1053 1057 ASSERT_EQ(1, write(fd, "A", 1)); ··· 1067 1063 /* Child is dead. Time to go. */ 1068 1064 return; 1069 1065 1070 - /* Make sure this is a seccomp event. */ 1071 - ASSERT_EQ(true, IS_SECCOMP_EVENT(status)); 1066 + /* Check if this is a seccomp event. */ 1067 + ASSERT_EQ(!ptrace_syscall, IS_SECCOMP_EVENT(status)); 1072 1068 1073 1069 tracer_func(_metadata, tracee, status, args); 1074 1070 1075 - ret = ptrace(PTRACE_CONT, tracee, NULL, NULL); 1071 + ret = ptrace(ptrace_syscall ? PTRACE_SYSCALL : PTRACE_CONT, 1072 + tracee, NULL, 0); 1076 1073 ASSERT_EQ(0, ret); 1077 1074 } 1078 1075 /* Directly report the status of our test harness results. */ ··· 1084 1079 void cont_handler(int num) 1085 1080 { } 1086 1081 pid_t setup_trace_fixture(struct __test_metadata *_metadata, 1087 - tracer_func_t func, void *args) 1082 + tracer_func_t func, void *args, bool ptrace_syscall) 1088 1083 { 1089 1084 char sync; 1090 1085 int pipefd[2]; ··· 1100 1095 signal(SIGALRM, cont_handler); 1101 1096 if (tracer_pid == 0) { 1102 1097 close(pipefd[0]); 1103 - tracer(_metadata, pipefd[1], tracee, func, args); 1098 + start_tracer(_metadata, pipefd[1], tracee, func, args, 1099 + ptrace_syscall); 1104 1100 syscall(__NR_exit, 0); 1105 1101 } 1106 1102 close(pipefd[1]); ··· 1183 1177 1184 1178 /* Launch tracer. */ 1185 1179 self->tracer = setup_trace_fixture(_metadata, tracer_poke, 1186 - &self->tracer_args); 1180 + &self->tracer_args, false); 1187 1181 } 1188 1182 1189 1183 FIXTURE_TEARDOWN(TRACE_poke) ··· 1405 1399 1406 1400 } 1407 1401 1402 + void tracer_ptrace(struct __test_metadata *_metadata, pid_t tracee, 1403 + int status, void *args) 1404 + { 1405 + int ret, nr; 1406 + unsigned long msg; 1407 + static bool entry; 1408 + 1409 + /* Make sure we got an empty message. */ 1410 + ret = ptrace(PTRACE_GETEVENTMSG, tracee, NULL, &msg); 1411 + EXPECT_EQ(0, ret); 1412 + EXPECT_EQ(0, msg); 1413 + 1414 + /* The only way to tell PTRACE_SYSCALL entry/exit is by counting. */ 1415 + entry = !entry; 1416 + if (!entry) 1417 + return; 1418 + 1419 + nr = get_syscall(_metadata, tracee); 1420 + 1421 + if (nr == __NR_getpid) 1422 + change_syscall(_metadata, tracee, __NR_getppid); 1423 + } 1424 + 1408 1425 FIXTURE_DATA(TRACE_syscall) { 1409 1426 struct sock_fprog prog; 1410 1427 pid_t tracer, mytid, mypid, parent; ··· 1469 1440 ASSERT_NE(self->parent, self->mypid); 1470 1441 1471 1442 /* Launch tracer. */ 1472 - self->tracer = setup_trace_fixture(_metadata, tracer_syscall, NULL); 1443 + self->tracer = setup_trace_fixture(_metadata, tracer_syscall, NULL, 1444 + false); 1473 1445 } 1474 1446 1475 1447 FIXTURE_TEARDOWN(TRACE_syscall) ··· 1528 1498 EXPECT_EQ(1, syscall(__NR_gettid)); 1529 1499 #endif 1530 1500 EXPECT_NE(self->mytid, syscall(__NR_gettid)); 1501 + } 1502 + 1503 + TEST_F(TRACE_syscall, skip_after_RET_TRACE) 1504 + { 1505 + struct sock_filter filter[] = { 1506 + BPF_STMT(BPF_LD|BPF_W|BPF_ABS, 1507 + offsetof(struct seccomp_data, nr)), 1508 + BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getppid, 0, 1), 1509 + BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ERRNO | EPERM), 1510 + BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW), 1511 + }; 1512 + struct sock_fprog prog = { 1513 + .len = (unsigned short)ARRAY_SIZE(filter), 1514 + .filter = filter, 1515 + }; 1516 + long ret; 1517 + 1518 + ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0); 1519 + ASSERT_EQ(0, ret); 1520 + 1521 + /* Install fixture filter. */ 1522 + ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->prog, 0, 0); 1523 + ASSERT_EQ(0, ret); 1524 + 1525 + /* Install "errno on getppid" filter. */ 1526 + ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog, 0, 0); 1527 + ASSERT_EQ(0, ret); 1528 + 1529 + /* Tracer will redirect getpid to getppid, and we should see EPERM. */ 1530 + EXPECT_EQ(-1, syscall(__NR_getpid)); 1531 + EXPECT_EQ(EPERM, errno); 1532 + } 1533 + 1534 + TEST_F_SIGNAL(TRACE_syscall, kill_after_RET_TRACE, SIGSYS) 1535 + { 1536 + struct sock_filter filter[] = { 1537 + BPF_STMT(BPF_LD|BPF_W|BPF_ABS, 1538 + offsetof(struct seccomp_data, nr)), 1539 + BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getppid, 0, 1), 1540 + BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_KILL), 1541 + BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW), 1542 + }; 1543 + struct sock_fprog prog = { 1544 + .len = (unsigned short)ARRAY_SIZE(filter), 1545 + .filter = filter, 1546 + }; 1547 + long ret; 1548 + 1549 + ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0); 1550 + ASSERT_EQ(0, ret); 1551 + 1552 + /* Install fixture filter. */ 1553 + ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->prog, 0, 0); 1554 + ASSERT_EQ(0, ret); 1555 + 1556 + /* Install "death on getppid" filter. */ 1557 + ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog, 0, 0); 1558 + ASSERT_EQ(0, ret); 1559 + 1560 + /* Tracer will redirect getpid to getppid, and we should die. */ 1561 + EXPECT_NE(self->mypid, syscall(__NR_getpid)); 1562 + } 1563 + 1564 + TEST_F(TRACE_syscall, skip_after_ptrace) 1565 + { 1566 + struct sock_filter filter[] = { 1567 + BPF_STMT(BPF_LD|BPF_W|BPF_ABS, 1568 + offsetof(struct seccomp_data, nr)), 1569 + BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getppid, 0, 1), 1570 + BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ERRNO | EPERM), 1571 + BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW), 1572 + }; 1573 + struct sock_fprog prog = { 1574 + .len = (unsigned short)ARRAY_SIZE(filter), 1575 + .filter = filter, 1576 + }; 1577 + long ret; 1578 + 1579 + /* Swap SECCOMP_RET_TRACE tracer for PTRACE_SYSCALL tracer. */ 1580 + teardown_trace_fixture(_metadata, self->tracer); 1581 + self->tracer = setup_trace_fixture(_metadata, tracer_ptrace, NULL, 1582 + true); 1583 + 1584 + ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0); 1585 + ASSERT_EQ(0, ret); 1586 + 1587 + /* Install "errno on getppid" filter. */ 1588 + ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog, 0, 0); 1589 + ASSERT_EQ(0, ret); 1590 + 1591 + /* Tracer will redirect getpid to getppid, and we should see EPERM. */ 1592 + EXPECT_EQ(-1, syscall(__NR_getpid)); 1593 + EXPECT_EQ(EPERM, errno); 1594 + } 1595 + 1596 + TEST_F_SIGNAL(TRACE_syscall, kill_after_ptrace, SIGSYS) 1597 + { 1598 + struct sock_filter filter[] = { 1599 + BPF_STMT(BPF_LD|BPF_W|BPF_ABS, 1600 + offsetof(struct seccomp_data, nr)), 1601 + BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getppid, 0, 1), 1602 + BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_KILL), 1603 + BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW), 1604 + }; 1605 + struct sock_fprog prog = { 1606 + .len = (unsigned short)ARRAY_SIZE(filter), 1607 + .filter = filter, 1608 + }; 1609 + long ret; 1610 + 1611 + /* Swap SECCOMP_RET_TRACE tracer for PTRACE_SYSCALL tracer. */ 1612 + teardown_trace_fixture(_metadata, self->tracer); 1613 + self->tracer = setup_trace_fixture(_metadata, tracer_ptrace, NULL, 1614 + true); 1615 + 1616 + ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0); 1617 + ASSERT_EQ(0, ret); 1618 + 1619 + /* Install "death on getppid" filter. */ 1620 + ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog, 0, 0); 1621 + ASSERT_EQ(0, ret); 1622 + 1623 + /* Tracer will redirect getpid to getppid, and we should die. */ 1624 + EXPECT_NE(self->mypid, syscall(__NR_getpid)); 1531 1625 } 1532 1626 1533 1627 #ifndef __NR_seccomp
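The new selftests reuse one small classic-BPF program: allow everything except getppid, which returns an error (or kills the task). A hedged sketch that builds the same four-instruction program and evaluates it with a tiny interpreter specialized to this shape; the opcode and SECCOMP_RET_* values mirror the Linux uapi headers but are redefined locally so the sketch is self-contained, and the syscall number (110, getppid on x86-64) is an illustrative assumption:

```c
#include <stddef.h>
#include <stdint.h>

/* Local mirrors of the classic-BPF opcodes and seccomp return actions
 * used by the selftests; redefined here for portability. */
#define BPF_LD_W_ABS		0x20
#define BPF_JMP_JEQ_K		0x15
#define BPF_RET_K		0x06
#define SECCOMP_RET_ERRNO	0x00050000u
#define SECCOMP_RET_ALLOW	0x7fff0000u

#define NR_GETPPID		110	/* assumed: x86-64 __NR_getppid */
#define EPERM_VAL		1

struct sock_filter_sketch {
	uint16_t code;
	uint8_t jt, jf;
	uint32_t k;
};

/* The selftests' "errno on getppid" program. */
static const struct sock_filter_sketch prog[] = {
	{ BPF_LD_W_ABS,  0, 0, 0 },		/* A = seccomp_data.nr */
	{ BPF_JMP_JEQ_K, 0, 1, NR_GETPPID },	/* if (A != getppid) skip */
	{ BPF_RET_K,     0, 0, SECCOMP_RET_ERRNO | EPERM_VAL },
	{ BPF_RET_K,     0, 0, SECCOMP_RET_ALLOW },
};

/* Tiny interpreter for just this instruction subset: enough to trace what
 * the kernel's seccomp evaluator decides for a given syscall nr. */
static uint32_t run_filter(const struct sock_filter_sketch *f, size_t len,
			   uint32_t nr)
{
	uint32_t acc = 0;
	size_t pc;

	for (pc = 0; pc < len; pc++) {
		const struct sock_filter_sketch *in = &f[pc];

		switch (in->code) {
		case BPF_LD_W_ABS:
			acc = nr;	/* offset 0 == seccomp_data.nr */
			break;
		case BPF_JMP_JEQ_K:
			pc += (acc == in->k) ? in->jt : in->jf;
			break;
		case BPF_RET_K:
			return in->k;
		}
	}
	return SECCOMP_RET_ALLOW;	/* unreachable if well-formed */
}
```

The "kill" variants in the tests differ only in the third instruction, which returns SECCOMP_RET_KILL instead of SECCOMP_RET_ERRNO | EPERM.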