Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security

Pull security subsystem updates from James Morris:
"New notable features:
- The seccomp work from Will Drewry
- PR_{GET,SET}_NO_NEW_PRIVS from Andy Lutomirski
- Longer security labels for Smack from Casey Schaufler
- Additional ptrace restriction modes for Yama by Kees Cook"

Fix up trivial context conflicts in arch/x86/Kconfig and include/linux/filter.h

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (65 commits)
apparmor: fix long path failure due to disconnected path
apparmor: fix profile lookup for unconfined
ima: fix filename hint to reflect script interpreter name
KEYS: Don't check for NULL key pointer in key_validate()
Smack: allow for significantly longer Smack labels v4
gfp flags for security_inode_alloc()?
Smack: recursive transmute
Yama: replace capable() with ns_capable()
TOMOYO: Accept manager programs which do not start with / .
KEYS: Add invalidation support
KEYS: Do LRU discard in full keyrings
KEYS: Permit in-place link replacement in keyring list
KEYS: Perform RCU synchronisation on keys prior to key destruction
KEYS: Announce key type (un)registration
KEYS: Reorganise keys Makefile
KEYS: Move the key config into security/keys/Kconfig
KEYS: Use the compat keyctl() syscall wrapper on Sparc64 for Sparc32 compat
Yama: remove an unused variable
samples/seccomp: fix dependencies on arch macros
Yama: add additional ptrace scopes
...

+3687 -1239
+163
Documentation/prctl/seccomp_filter.txt
···
+SECure COMPuting with filters
+=============================
+
+Introduction
+------------
+
+A large number of system calls are exposed to every userland process
+with many of them going unused for the entire lifetime of the process.
+As system calls change and mature, bugs are found and eradicated.  A
+certain subset of userland applications benefit by having a reduced set
+of available system calls.  The resulting set reduces the total kernel
+surface exposed to the application.  System call filtering is meant for
+use with those applications.
+
+Seccomp filtering provides a means for a process to specify a filter for
+incoming system calls.  The filter is expressed as a Berkeley Packet
+Filter (BPF) program, as with socket filters, except that the data
+operated on is related to the system call being made: system call
+number and the system call arguments.  This allows for expressive
+filtering of system calls using a filter program language with a long
+history of being exposed to userland and a straightforward data set.
+
+Additionally, BPF makes it impossible for users of seccomp to fall prey
+to time-of-check-time-of-use (TOCTOU) attacks that are common in system
+call interposition frameworks.  BPF programs may not dereference
+pointers, which constrains all filters to solely evaluating the system
+call arguments directly.
+
+What it isn't
+-------------
+
+System call filtering isn't a sandbox.  It provides a clearly defined
+mechanism for minimizing the exposed kernel surface.  It is meant to be
+a tool for sandbox developers to use.  Beyond that, policy for logical
+behavior and information flow should be managed with a combination of
+other system hardening techniques and, potentially, an LSM of your
+choosing.
+Expressive, dynamic filters provide further options down this
+path (avoiding pathological sizes or selecting which of the multiplexed
+system calls in socketcall() is allowed, for instance) which could be
+construed, incorrectly, as a more complete sandboxing solution.
+
+Usage
+-----
+
+An additional seccomp mode is added and is enabled using the same
+prctl(2) call as strict seccomp.  If the architecture has
+CONFIG_HAVE_ARCH_SECCOMP_FILTER, then filters may be added as below:
+
+PR_SET_SECCOMP:
+	Now takes an additional argument which specifies a new filter
+	using a BPF program.
+	The BPF program will be executed over struct seccomp_data
+	reflecting the system call number, arguments, and other
+	metadata.  The BPF program must then return one of the
+	acceptable values to inform the kernel which action should be
+	taken.
+
+	Usage:
+		prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, prog);
+
+	The 'prog' argument is a pointer to a struct sock_fprog which
+	will contain the filter program.  If the program is invalid, the
+	call will return -1 and set errno to EINVAL.
+
+	If fork/clone and execve are allowed by @prog, any child
+	processes will be constrained to the same filters and system
+	call ABI as the parent.
+
+	Prior to use, the task must call prctl(PR_SET_NO_NEW_PRIVS, 1) or
+	run with CAP_SYS_ADMIN privileges in its namespace.  If these are
+	not true, -EACCES will be returned.  This requirement ensures that
+	filter programs cannot be applied to child processes with greater
+	privileges than the task that installed them.
+
+	Additionally, if prctl(2) is allowed by the attached filter,
+	additional filters may be layered on, which will increase
+	evaluation time but allow for further decreasing the attack
+	surface during execution of a process.
+
+The above call returns 0 on success and non-zero on error.
+
+Return values
+-------------
+
+A seccomp filter may return any of the following values.  If multiple
+filters exist, the return value for the evaluation of a given system
+call will always use the highest precedent value.  (For example,
+SECCOMP_RET_KILL will always take precedence.)
+
+In precedence order, they are:
+
+SECCOMP_RET_KILL:
+	Results in the task exiting immediately without executing the
+	system call.  The exit status of the task (status & 0x7f) will
+	be SIGSYS, not SIGKILL.
+
+SECCOMP_RET_TRAP:
+	Results in the kernel sending a SIGSYS signal to the triggering
+	task without executing the system call.  The kernel will roll
+	back the register state to just before the system call entry,
+	such that a signal handler in the task will be able to inspect
+	the ucontext_t->uc_mcontext registers and emulate system call
+	success or failure upon return from the signal handler.
+
+	The SECCOMP_RET_DATA portion of the return value will be passed
+	as si_errno.
+
+	SIGSYS triggered by seccomp will have a si_code of SYS_SECCOMP.
+
+SECCOMP_RET_ERRNO:
+	Results in the lower 16 bits of the return value being passed
+	to userland as the errno without executing the system call.
+
+SECCOMP_RET_TRACE:
+	When returned, this value will cause the kernel to attempt to
+	notify a ptrace()-based tracer prior to executing the system
+	call.  If there is no tracer present, -ENOSYS is returned to
+	userland and the system call is not executed.
+
+	A tracer will be notified if it requests PTRACE_O_TRACESECCOMP
+	using ptrace(PTRACE_SETOPTIONS).  The tracer will be notified
+	of a PTRACE_EVENT_SECCOMP and the SECCOMP_RET_DATA portion of
+	the BPF program return value will be available to the tracer
+	via PTRACE_GETEVENTMSG.
+
+SECCOMP_RET_ALLOW:
+	Results in the system call being executed.
+
+If multiple filters exist, the return value for the evaluation of a
+given system call will always use the highest precedent value.
+
+Precedence is only determined using the SECCOMP_RET_ACTION mask.  When
+multiple filters return values of the same precedence, only the
+SECCOMP_RET_DATA from the most recently installed filter will be
+returned.
+
+Pitfalls
+--------
+
+The biggest pitfall to avoid during use is filtering on system call
+number without checking the architecture value.  Why?  On any
+architecture that supports multiple system call invocation conventions,
+the system call numbers may vary based on the specific invocation.  If
+the numbers in the different calling conventions overlap, then checks in
+the filters may be abused.  Always check the arch value!
+
+Example
+-------
+
+The samples/seccomp/ directory contains both an x86-specific example
+and a more generic example of a higher level macro interface for BPF
+program generation.
+
+Adding architecture support
+---------------------------
+
+See arch/Kconfig for the authoritative requirements.  In general, if an
+architecture supports both ptrace_event and seccomp, it will be able to
+support seccomp filter with minor fixup: SIGSYS support and seccomp return
+value checking.  Then it must just add CONFIG_HAVE_ARCH_SECCOMP_FILTER
+to its arch-specific Kconfig.
+163 -39
Documentation/security/Smack.txt
···
 
 Smack consists of three major components:
 	- The kernel
-	- A start-up script and a few modified applications
+	- Basic utilities, which are helpful but not required
 	- Configuration data
 
 The kernel component of Smack is implemented as a Linux
···
 works best with file systems that support extended attributes,
 although xattr support is not strictly required.
 It is safe to run a Smack kernel under a "vanilla" distribution.
+
 Smack kernels use the CIPSO IP option.  Some network
 configurations are intolerant of IP options and can impede
 access to systems that use them as Smack does.
 
-The startup script etc-init.d-smack should be installed
-in /etc/init.d/smack and should be invoked early in the
-start-up process.  On Fedora rc5.d/S02smack is recommended.
-This script ensures that certain devices have the correct
-Smack attributes and loads the Smack configuration if
-any is defined.  This script invokes two programs that
-ensure configuration data is properly formatted.  These
-programs are /usr/sbin/smackload and /usr/sin/smackcipso.
-The system will run just fine without these programs,
-but it will be difficult to set access rules properly.
+The current git repositories for Smack user space are:
 
-A version of "ls" that provides a "-M" option to display
-Smack labels on long listing is available.
+	git@gitorious.org:meego-platform-security/smackutil.git
+	git@gitorious.org:meego-platform-security/libsmack.git
 
-A hacked version of sshd that allows network logins by users
-with specific Smack labels is available.  This version does
-not work for scp.  You must set the /etc/ssh/sshd_config
-line:
-	UsePrivilegeSeparation no
+These should make and install on most modern distributions.
+There are three commands included in smackutil:
 
-The format of /etc/smack/usr is:
-
-	username smack
+	smackload  - properly formats data for writing to /smack/load
+	smackcipso - properly formats data for writing to /smack/cipso
+	chsmack    - display or set Smack extended attribute values
 
 In keeping with the intent of Smack, configuration data is
 minimal and not strictly required.  The most important
 configuration step is mounting the smackfs pseudo filesystem.
+If smackutil is installed the startup script will take care
+of this, but it can be done manually as well.
 
 Add this line to /etc/fstab:
···
 
 and create the /smack directory for mounting.
 
-Smack uses extended attributes (xattrs) to store file labels.
-The command to set a Smack label on a file is:
+Smack uses extended attributes (xattrs) to store labels on filesystem
+objects.  The attributes are stored in the extended attribute security
+name space.  A process must have CAP_MAC_ADMIN to change any of these
+attributes.
+
+The extended attributes that Smack uses are:
+
+SMACK64
+	Used to make access control decisions.  In almost all cases
+	the label given to a new filesystem object will be the label
+	of the process that created it.
+SMACK64EXEC
+	The Smack label of a process that execs a program file with
+	this attribute set will run with this attribute's value.
+SMACK64MMAP
+	Don't allow the file to be mmapped by a process whose Smack
+	label does not allow all of the access permitted to a process
+	with the label contained in this attribute.  This is a very
+	specific use case for shared libraries.
+SMACK64TRANSMUTE
+	Can only have the value "TRUE".
+	If this attribute is present
+	on a directory when an object is created in the directory and
+	the Smack rule (more below) that permitted the write access
+	to the directory includes the transmute ("t") mode, the object
+	gets the label of the directory instead of the label of the
+	creating process.  If the object being created is a directory
+	the SMACK64TRANSMUTE attribute is set as well.
+SMACK64IPIN
+	This attribute is only available on file descriptors for sockets.
+	Use the Smack label in this attribute for access control
+	decisions on packets being delivered to this socket.
+SMACK64IPOUT
+	This attribute is only available on file descriptors for sockets.
+	Use the Smack label in this attribute for access control
+	decisions on packets coming from this socket.
+
+There are multiple ways to set a Smack label on a file:
 
 	# attr -S -s SMACK64 -V "value" path
+	# chsmack -a value path
 
-NOTE: Smack labels are limited to 23 characters.  The attr command
-      does not enforce this restriction and can be used to set
-      invalid Smack labels on files.
+A process can see the smack label it is running with by
+reading /proc/self/attr/current.  A process with CAP_MAC_ADMIN
+can set the process smack by writing there.
 
-If you don't do anything special all users will get the floor ("_")
-label when they log in.  If you do want to log in via the hacked ssh
-at other labels use the attr command to set the smack value on the
-home directory and its contents.
+Most Smack configuration is accomplished by writing to files
+in the smackfs filesystem.  This pseudo-filesystem is usually
+mounted on /smack.
+
+access
+	This interface reports whether a subject with the specified
+	Smack label has a particular access to an object with a
+	specified Smack label.  Write a fixed format access rule to
+	this file.
+	The next read will indicate whether the access
+	would be permitted.  The text will be either "1" indicating
+	access, or "0" indicating denial.
+access2
+	This interface reports whether a subject with the specified
+	Smack label has a particular access to an object with a
+	specified Smack label.  Write a long format access rule to
+	this file.  The next read will indicate whether the access
+	would be permitted.  The text will be either "1" indicating
+	access, or "0" indicating denial.
+ambient
+	This contains the Smack label applied to unlabeled network
+	packets.
+cipso
+	This interface allows a specific CIPSO header to be assigned
+	to a Smack label.  The format accepted on write is:
+		"%24s%4d%4d"["%4d"]...
+	The first string is a fixed Smack label.  The first number is
+	the level to use.  The second number is the number of categories.
+	The following numbers are the categories.
+		"level-3-cats-5-19 3 2 5 19"
+cipso2
+	This interface allows a specific CIPSO header to be assigned
+	to a Smack label.  The format accepted on write is:
+		"%s%4d%4d"["%4d"]...
+	The first string is a long Smack label.  The first number is
+	the level to use.  The second number is the number of categories.
+	The following numbers are the categories.
+		"level-3-cats-5-19 3 2 5 19"
+direct
+	This contains the CIPSO level used for Smack direct label
+	representation in network packets.
+doi
+	This contains the CIPSO domain of interpretation used in
+	network packets.
+load
+	This interface allows access control rules in addition to
+	the system defined rules to be specified.  The format accepted
+	on write is:
+		"%24s%24s%5s"
+	where the first string is the subject label, the second the
+	object label, and the third the requested access.
+	The access
+	string may contain only the characters "rwxat-", and specifies
+	which sort of access is allowed.  The "-" is a placeholder for
+	permissions that are not allowed.  The string "r-x--" would
+	specify read and execute access.  Labels are limited to 23
+	characters in length.
+load2
+	This interface allows access control rules in addition to
+	the system defined rules to be specified.  The format accepted
+	on write is:
+		"%s %s %s"
+	where the first string is the subject label, the second the
+	object label, and the third the requested access.  The access
+	string may contain only the characters "rwxat-", and specifies
+	which sort of access is allowed.  The "-" is a placeholder for
+	permissions that are not allowed.  The string "r-x--" would
+	specify read and execute access.
+load-self
+	This interface allows process specific access rules to be
+	defined.  These rules are only consulted if access would
+	otherwise be permitted, and are intended to provide additional
+	restrictions on the process.  The format is the same as for
+	the load interface.
+load-self2
+	This interface allows process specific access rules to be
+	defined.  These rules are only consulted if access would
+	otherwise be permitted, and are intended to provide additional
+	restrictions on the process.  The format is the same as for
+	the load2 interface.
+logging
+	This contains the Smack logging state.
+mapped
+	This contains the CIPSO level used for Smack mapped label
+	representation in network packets.
+netlabel
+	This interface allows specific internet addresses to be
+	treated as single label hosts.  Packets are sent to single
+	label hosts without CIPSO headers, but only from processes
+	that have Smack write access to the host label.  All packets
+	received from single label hosts are given the specified
+	label.
+	The format accepted on write is:
+		"%d.%d.%d.%d label" or "%d.%d.%d.%d/%d label"
+onlycap
+	This contains the label processes must have for CAP_MAC_ADMIN
+	and CAP_MAC_OVERRIDE to be effective.  If this file is empty
+	these capabilities are effective for processes with any
+	label.  The value is set by writing the desired label to the
+	file or cleared by writing "-" to the file.
 
 You can add access rules in /etc/smack/accesses.  They take the form:
···
 access is a combination of the letters rwxa which specify the
 kind of access permitted a subject with subjectlabel on an
 object with objectlabel.  If there is no rule no access is allowed.
-
-A process can see the smack label it is running with by
-reading /proc/self/attr/current.  A privileged process can
-set the process smack by writing there.
 
 Look for additional programs on http://schaufler-ca.com
···
 ever performed on them is comparison for equality.  Smack labels cannot
 contain unprintable characters, the "/" (slash), the "\" (backslash), the "'"
 (quote) and '"' (double-quote) characters.
-Smack labels cannot begin with a '-', which is reserved for special options.
+Smack labels cannot begin with a '-'.  This is reserved for special options.
 
 There are some predefined labels:
···
 	^	Pronounced "hat", a single circumflex character.
 	*	Pronounced "star", a single asterisk character.
 	?	Pronounced "huh", a single question mark character.
-	@	Pronounced "Internet", a single at sign character.
+	@	Pronounced "web", a single at sign character.
 
 Every task on a Smack system is assigned a label.  System tasks, such as
 init(8) and systems daemons, are run with the floor ("_") label.
 User tasks
···
 
 Where subject-label is the Smack label of the task, object-label is the Smack
 label of the thing being accessed, and access is a string specifying the sort
-of access allowed.  The Smack labels are limited to 23 characters.  The access
-specification is searched for letters that describe access modes:
+of access allowed.  The access specification is searched for letters that
+describe access modes:
 
 	a: indicates that append access should be granted.
 	r: indicates that read access should be granted.
 	w: indicates that write access should be granted.
 	x: indicates that execute access should be granted.
+	t: indicates that the rule requests transmutation.
 
 Uppercase values for the specification letters are allowed as well.
 Access mode specifications can be in any order.  Examples of acceptable rules
···
 
 Spaces are not allowed in labels.  Since a subject always has access to files
 with the same label specifying a rule for that case is pointless.  Only
-valid letters (rwxaRWXA) and the dash ('-') character are allowed in
+valid letters (rwxatRWXAT) and the dash ('-') character are allowed in
 access specifications.  The dash is a placeholder, so "a-r" is the same
 as "ar".  A lone dash is used to specify that no access should be allowed.
···
 but not any of its attributes by the circumstance of having read access to the
 containing directory but not to the differently labeled file.  This is an
 artifact of the file name being data in the directory, not a part of the file.
+
+If a directory is marked as transmuting (SMACK64TRANSMUTE=TRUE) and the
+access rule that allows a process to create an object in that directory
+includes 't' access, the label assigned to the new object will be that
+of the directory, not the creating process.
+This makes it much easier
+for two processes with different labels to share data without granting
+access to all of their files.
 
 IPC objects, message queues, semaphore sets, and memory segments exist in flat
 namespaces and access requests are only required to match the object in
+9 -1
Documentation/security/Yama.txt
···
 work), or with CAP_SYS_PTRACE (i.e. "gdb --pid=PID", and "strace -p PID"
 still work as root).
 
-For software that has defined application-specific relationships
+In mode 1, software that has defined application-specific relationships
 between a debugging process and its inferior (crash handlers, etc),
 prctl(PR_SET_PTRACER, pid, ...) can be used.  An inferior can declare which
 other process (and its descendents) are allowed to call PTRACE_ATTACH
···
 restrictions, it can call prctl(PR_SET_PTRACER, PR_SET_PTRACER_ANY, ...)
 so that any otherwise allowed process (even those in external pid namespaces)
 may attach.
+
+These restrictions do not change how ptrace via PTRACE_TRACEME operates.
 
 The sysctl settings are:
···
 	classic criteria is also met.  To change the relationship, an
 	inferior can call prctl(PR_SET_PTRACER, debugger, ...) to declare
 	an allowed debugger PID to call PTRACE_ATTACH on the inferior.
+
+2 - admin-only attach: only processes with CAP_SYS_PTRACE may use ptrace
+	with PTRACE_ATTACH.
+
+3 - no attach: no processes may use ptrace with PTRACE_ATTACH.  Once set,
+	this sysctl cannot be changed to a lower value.
 
 The original children-only logic was based on the restrictions in grsecurity.
+17
Documentation/security/keys.txt
···
     kernel and resumes executing userspace.
 
 
+ (*) Invalidate a key.
+
+	long keyctl(KEYCTL_INVALIDATE, key_serial_t key);
+
+     This function marks a key as being invalidated and then wakes up the
+     garbage collector.  The garbage collector immediately removes invalidated
+     keys from all keyrings and deletes the key when its reference count
+     reaches zero.
+
+     Keys that are marked invalidated become invisible to normal key operations
+     immediately, though they are still visible in /proc/keys until deleted
+     (they're marked with an 'i' flag).
+
+     A process must have search permission on the key for this function to be
+     successful.
+
+
 ===============
 KERNEL SERVICES
 ===============
+2 -1
MAINTAINERS
···
 F:	include/linux/capability.h
 F:	security/capability.c
 F:	security/commoncap.c
+F:	kernel/capability.c
 
 CELL BROADBAND ENGINE ARCHITECTURE
 M:	Arnd Bergmann <arnd@arndb.de>
···
 M:	James Morris <james.l.morris@oracle.com>
 L:	linux-security-module@vger.kernel.org (suggested Cc:)
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security.git
-W:	http://security.wiki.kernel.org/
+W:	http://kernsec.org/
 S:	Supported
 F:	security/
+23
arch/Kconfig
···
 config ARCH_WANT_OLD_COMPAT_IPC
 	bool
 
+config HAVE_ARCH_SECCOMP_FILTER
+	bool
+	help
+	  An arch should select this symbol if it provides all of these things:
+	  - syscall_get_arch()
+	  - syscall_get_arguments()
+	  - syscall_rollback()
+	  - syscall_set_return_value()
+	  - SIGSYS siginfo_t support
+	  - secure_computing is called from a ptrace_event()-safe context
+	  - secure_computing return value is checked and a return value of -1
+	    results in the system call being skipped immediately.
+
+config SECCOMP_FILTER
+	def_bool y
+	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
+	help
+	  Enable tasks to build secure computing environments defined
+	  in terms of Berkeley Packet Filter programs which implement
+	  task-defined system call filtering policies.
+
+	  See Documentation/prctl/seccomp_filter.txt for details.
+
 source "kernel/gcov/Kconfig"
+1 -1
arch/microblaze/kernel/ptrace.c
···
 {
 	long ret = 0;
 
-	secure_computing(regs->r12);
+	secure_computing_strict(regs->r12);
 
 	if (test_thread_flag(TIF_SYSCALL_TRACE) &&
 	    tracehook_report_syscall_entry(regs))
+1 -1
arch/mips/kernel/ptrace.c
···
 asmlinkage void syscall_trace_enter(struct pt_regs *regs)
 {
 	/* do the secure computing check first */
-	secure_computing(regs->regs[2]);
+	secure_computing_strict(regs->regs[2]);
 
 	if (!(current->ptrace & PT_PTRACED))
 		goto out;
+1 -1
arch/powerpc/kernel/ptrace.c
···
 {
 	long ret = 0;
 
-	secure_computing(regs->gpr[0]);
+	secure_computing_strict(regs->gpr[0]);
 
 	if (test_thread_flag(TIF_SYSCALL_TRACE) &&
 	    tracehook_report_syscall_entry(regs))
+1 -1
arch/s390/kernel/ptrace.c
···
 	long ret = 0;
 
 	/* Do the secure computing check first. */
-	secure_computing(regs->gprs[2]);
+	secure_computing_strict(regs->gprs[2]);
 
 	/*
 	 * The sysc_tracesys code in entry.S stored the system
+1 -1
arch/sh/kernel/ptrace_32.c
···
 {
 	long ret = 0;
 
-	secure_computing(regs->regs[0]);
+	secure_computing_strict(regs->regs[0]);
 
 	if (test_thread_flag(TIF_SYSCALL_TRACE) &&
 	    tracehook_report_syscall_entry(regs))
+1 -1
arch/sh/kernel/ptrace_64.c
···
 {
 	long long ret = 0;
 
-	secure_computing(regs->regs[9]);
+	secure_computing_strict(regs->regs[9]);
 
 	if (test_thread_flag(TIF_SYSCALL_TRACE) &&
 	    tracehook_report_syscall_entry(regs))
+3
arch/sparc/Kconfig
···
 	depends on COMPAT && SYSVIPC
 	default y
 
+config KEYS_COMPAT
+	def_bool y if COMPAT && KEYS
+
 endmenu
 
 source "net/Kconfig"
+1 -1
arch/sparc/kernel/ptrace_64.c
···
 	int ret = 0;
 
 	/* do the secure computing check first */
-	secure_computing(regs->u_regs[UREG_G1]);
+	secure_computing_strict(regs->u_regs[UREG_G1]);
 
 	if (test_thread_flag(TIF_SYSCALL_TRACE))
 		ret = tracehook_report_syscall_entry(regs);
+1 -1
arch/sparc/kernel/systbls_64.S
···
 	.word sys_timer_delete, compat_sys_timer_create, sys_ni_syscall, compat_sys_io_setup, sys_io_destroy
 /*270*/	.word sys32_io_submit, sys_io_cancel, compat_sys_io_getevents, sys32_mq_open, sys_mq_unlink
 	.word compat_sys_mq_timedsend, compat_sys_mq_timedreceive, compat_sys_mq_notify, compat_sys_mq_getsetattr, compat_sys_waitid
-/*280*/	.word sys32_tee, sys_add_key, sys_request_key, sys_keyctl, compat_sys_openat
+/*280*/	.word sys32_tee, sys_add_key, sys_request_key, compat_sys_keyctl, compat_sys_openat
 	.word sys_mkdirat, sys_mknodat, sys_fchownat, compat_sys_futimesat, compat_sys_fstatat64
 /*290*/	.word sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat
 	.word sys_fchmodat, sys_faccessat, compat_sys_pselect6, compat_sys_ppoll, sys_unshare
+1
arch/x86/Kconfig
···
 	select GENERIC_IOMAP
 	select DCACHE_WORD_ACCESS
 	select GENERIC_SMP_IDLE_THREAD
+	select HAVE_ARCH_SECCOMP_FILTER
 
 config INSTRUCTION_DECODER
 	def_bool (KPROBES || PERF_EVENTS)
+4
arch/x86/ia32/ia32_signal.c
···
 		switch (from->si_code >> 16) {
 		case __SI_FAULT >> 16:
 			break;
+		case __SI_SYS >> 16:
+			put_user_ex(from->si_syscall, &to->si_syscall);
+			put_user_ex(from->si_arch, &to->si_arch);
+			break;
 		case __SI_CHLD >> 16:
 			if (ia32) {
 				put_user_ex(from->si_utime, &to->si_utime);
+6
arch/x86/include/asm/ia32.h
···
 			int _band;	/* POLL_IN, POLL_OUT, POLL_MSG */
 			int _fd;
 		} _sigpoll;
+
+		struct {
+			unsigned int _call_addr; /* calling insn */
+			int _syscall;		/* triggering system call number */
+			unsigned int _arch;	/* AUDIT_ARCH_* of syscall */
+		} _sigsys;
 	} _sifields;
 } compat_siginfo_t;
+27
arch/x86/include/asm/syscall.h
···
 #ifndef _ASM_X86_SYSCALL_H
 #define _ASM_X86_SYSCALL_H
 
+#include <linux/audit.h>
 #include <linux/sched.h>
 #include <linux/err.h>
 #include <asm/asm-offsets.h>	/* For NR_syscalls */
+#include <asm/thread_info.h>	/* for TS_COMPAT */
 #include <asm/unistd.h>
 
 extern const unsigned long sys_call_table[];
···
 {
 	BUG_ON(i + n > 6);
 	memcpy(&regs->bx + i, args, n * sizeof(args[0]));
+}
+
+static inline int syscall_get_arch(struct task_struct *task,
+				   struct pt_regs *regs)
+{
+	return AUDIT_ARCH_I386;
 }
 
 #else	 /* CONFIG_X86_64 */
···
 	}
 }
 
+static inline int syscall_get_arch(struct task_struct *task,
+				   struct pt_regs *regs)
+{
+#ifdef CONFIG_IA32_EMULATION
+	/*
+	 * TS_COMPAT is set for 32-bit syscall entry and then
+	 * remains set until we return to user mode.
+	 *
+	 * TIF_IA32 tasks should always have TS_COMPAT set at
+	 * system call time.
+	 *
+	 * x32 tasks should be considered AUDIT_ARCH_X86_64.
+	 */
+	if (task_thread_info(task)->status & TS_COMPAT)
+		return AUDIT_ARCH_I386;
+#endif
+	/* Both x32 and x86_64 are considered "64-bit". */
+	return AUDIT_ARCH_X86_64;
+}
 #endif	/* CONFIG_X86_32 */
 
 #endif	/* _ASM_X86_SYSCALL_H */
+6 -1
arch/x86/kernel/ptrace.c
··· 1480 1480 regs->flags |= X86_EFLAGS_TF; 1481 1481 1482 1482 /* do the secure computing check first */ 1483 - secure_computing(regs->orig_ax); 1483 + if (secure_computing(regs->orig_ax)) { 1484 + /* seccomp failures shouldn't expose any additional code. */ 1485 + ret = -1L; 1486 + goto out; 1487 + } 1484 1488 1485 1489 if (unlikely(test_thread_flag(TIF_SYSCALL_EMU))) 1486 1490 ret = -1L; ··· 1509 1505 regs->dx, regs->r10); 1510 1506 #endif 1511 1507 1508 + out: 1512 1509 return ret ?: regs->orig_ax; 1513 1510 } 1514 1511
+9 -1
fs/exec.c
··· 1245 1245 bprm->unsafe |= LSM_UNSAFE_PTRACE; 1246 1246 } 1247 1247 1248 + /* 1249 + * This isn't strictly necessary, but it makes it harder for LSMs to 1250 + * mess up. 1251 + */ 1252 + if (current->no_new_privs) 1253 + bprm->unsafe |= LSM_UNSAFE_NO_NEW_PRIVS; 1254 + 1248 1255 n_fs = 1; 1249 1256 spin_lock(&p->fs->lock); 1250 1257 rcu_read_lock(); ··· 1295 1288 bprm->cred->euid = current_euid(); 1296 1289 bprm->cred->egid = current_egid(); 1297 1290 1298 - if (!(bprm->file->f_path.mnt->mnt_flags & MNT_NOSUID)) { 1291 + if (!(bprm->file->f_path.mnt->mnt_flags & MNT_NOSUID) && 1292 + !current->no_new_privs) { 1299 1293 /* Set-uid? */ 1300 1294 if (mode & S_ISUID) { 1301 1295 bprm->per_clear |= PER_CLEAR_ON_SETID;
+1 -1
fs/open.c
··· 681 681 682 682 f->f_op = fops_get(inode->i_fop); 683 683 684 - error = security_dentry_open(f, cred); 684 + error = security_file_open(f, cred); 685 685 if (error) 686 686 goto cleanup_all; 687 687
+22
include/asm-generic/siginfo.h
··· 98 98 __ARCH_SI_BAND_T _band; /* POLL_IN, POLL_OUT, POLL_MSG */ 99 99 int _fd; 100 100 } _sigpoll; 101 + 102 + /* SIGSYS */ 103 + struct { 104 + void __user *_call_addr; /* calling user insn */ 105 + int _syscall; /* triggering system call number */ 106 + unsigned int _arch; /* AUDIT_ARCH_* of syscall */ 107 + } _sigsys; 101 108 } _sifields; 102 109 } __ARCH_SI_ATTRIBUTES siginfo_t; 103 110 111 + /* If the arch shares siginfo, then it has SIGSYS. */ 112 + #define __ARCH_SIGSYS 104 113 #endif 105 114 106 115 /* ··· 133 124 #define si_addr_lsb _sifields._sigfault._addr_lsb 134 125 #define si_band _sifields._sigpoll._band 135 126 #define si_fd _sifields._sigpoll._fd 127 + #ifdef __ARCH_SIGSYS 128 + #define si_call_addr _sifields._sigsys._call_addr 129 + #define si_syscall _sifields._sigsys._syscall 130 + #define si_arch _sifields._sigsys._arch 131 + #endif 136 132 137 133 #ifdef __KERNEL__ 138 134 #define __SI_MASK 0xffff0000u ··· 148 134 #define __SI_CHLD (4 << 16) 149 135 #define __SI_RT (5 << 16) 150 136 #define __SI_MESGQ (6 << 16) 137 + #define __SI_SYS (7 << 16) 151 138 #define __SI_CODE(T,N) ((T) | ((N) & 0xffff)) 152 139 #else 153 140 #define __SI_KILL 0 ··· 158 143 #define __SI_CHLD 0 159 144 #define __SI_RT 0 160 145 #define __SI_MESGQ 0 146 + #define __SI_SYS 0 161 147 #define __SI_CODE(T,N) (N) 162 148 #endif 163 149 ··· 254 238 #define POLL_PRI (__SI_POLL|5) /* high priority input available */ 255 239 #define POLL_HUP (__SI_POLL|6) /* device disconnected */ 256 240 #define NSIGPOLL 6 241 + 242 + /* 243 + * SIGSYS si_codes 244 + */ 245 + #define SYS_SECCOMP (__SI_SYS|1) /* seccomp triggered */ 246 + #define NSIGSYS 1 257 247 258 248 /* 259 249 * sigevent definitions
+14
include/asm-generic/syscall.h
··· 142 142 unsigned int i, unsigned int n, 143 143 const unsigned long *args); 144 144 145 + /** 146 + * syscall_get_arch - return the AUDIT_ARCH for the current system call 147 + * @task: task of interest, must be in system call entry tracing 148 + * @regs: task_pt_regs() of @task 149 + * 150 + * Returns the AUDIT_ARCH_* based on the system call convention in use. 151 + * 152 + * It's only valid to call this when @task is stopped on entry to a system 153 + * call, due to %TIF_SYSCALL_TRACE, %TIF_SYSCALL_AUDIT, or %TIF_SECCOMP. 154 + * 155 + * Architectures which permit CONFIG_HAVE_ARCH_SECCOMP_FILTER must 156 + * provide an implementation of this. 157 + */ 158 + int syscall_get_arch(struct task_struct *task, struct pt_regs *regs); 145 159 #endif /* _ASM_SYSCALL_H */
+1 -1
include/keys/keyring-type.h
··· 24 24 unsigned short maxkeys; /* max keys this list can hold */ 25 25 unsigned short nkeys; /* number of keys currently held */ 26 26 unsigned short delkey; /* key to be unlinked by RCU */ 27 - struct key *keys[0]; 27 + struct key __rcu *keys[0]; 28 28 }; 29 29 30 30
+1
include/linux/Kbuild
··· 330 330 header-y += sched.h 331 331 header-y += screen_info.h 332 332 header-y += sdla.h 333 + header-y += seccomp.h 333 334 header-y += securebits.h 334 335 header-y += selinux_netlink.h 335 336 header-y += sem.h
+4 -4
include/linux/audit.h
··· 463 463 extern void __audit_inode(const char *name, const struct dentry *dentry); 464 464 extern void __audit_inode_child(const struct dentry *dentry, 465 465 const struct inode *parent); 466 - extern void __audit_seccomp(unsigned long syscall); 466 + extern void __audit_seccomp(unsigned long syscall, long signr, int code); 467 467 extern void __audit_ptrace(struct task_struct *t); 468 468 469 469 static inline int audit_dummy_context(void) ··· 508 508 } 509 509 void audit_core_dumps(long signr); 510 510 511 - static inline void audit_seccomp(unsigned long syscall) 511 + static inline void audit_seccomp(unsigned long syscall, long signr, int code) 512 512 { 513 513 if (unlikely(!audit_dummy_context())) 514 - __audit_seccomp(syscall); 514 + __audit_seccomp(syscall, signr, code); 515 515 } 516 516 517 517 static inline void audit_ptrace(struct task_struct *t) ··· 634 634 #define audit_inode(n,d) do { (void)(d); } while (0) 635 635 #define audit_inode_child(i,p) do { ; } while (0) 636 636 #define audit_core_dumps(i) do { ; } while (0) 637 - #define audit_seccomp(i) do { ; } while (0) 637 + #define audit_seccomp(i,s,c) do { ; } while (0) 638 638 #define auditsc_get_stamp(c,t,s) (0) 639 639 #define audit_get_loginuid(t) (-1) 640 640 #define audit_get_sessionid(t) (-1)
+12
include/linux/filter.h
··· 10 10 11 11 #ifdef __KERNEL__ 12 12 #include <linux/atomic.h> 13 + #include <linux/compat.h> 13 14 #endif 14 15 15 16 /* ··· 134 133 135 134 #ifdef __KERNEL__ 136 135 136 + #ifdef CONFIG_COMPAT 137 + /* 138 + * A struct sock_filter is architecture independent. 139 + */ 140 + struct compat_sock_fprog { 141 + u16 len; 142 + compat_uptr_t filter; /* struct sock_filter * */ 143 + }; 144 + #endif 145 + 137 146 struct sk_buff; 138 147 struct sock; 139 148 ··· 244 233 BPF_S_ANC_RXHASH, 245 234 BPF_S_ANC_CPU, 246 235 BPF_S_ANC_ALU_XOR_X, 236 + BPF_S_ANC_SECCOMP_LD_W, 247 237 }; 248 238 249 239 #endif /* __KERNEL__ */
+9 -2
include/linux/key.h
··· 124 124 struct key { 125 125 atomic_t usage; /* number of references */ 126 126 key_serial_t serial; /* key serial number */ 127 - struct rb_node serial_node; 127 + union { 128 + struct list_head graveyard_link; 129 + struct rb_node serial_node; 130 + }; 128 131 struct key_type *type; /* type of key */ 129 132 struct rw_semaphore sem; /* change vs change sem */ 130 133 struct key_user *user; /* owner of this key */ ··· 136 133 time_t expiry; /* time at which key expires (or 0) */ 137 134 time_t revoked_at; /* time at which key was revoked */ 138 135 }; 136 + time_t last_used_at; /* last time used for LRU keyring discard */ 139 137 uid_t uid; 140 138 gid_t gid; 141 139 key_perm_t perm; /* access permissions */ ··· 160 156 #define KEY_FLAG_USER_CONSTRUCT 4 /* set if key is being constructed in userspace */ 161 157 #define KEY_FLAG_NEGATIVE 5 /* set if key is negative */ 162 158 #define KEY_FLAG_ROOT_CAN_CLEAR 6 /* set if key can be cleared by root without permission */ 159 + #define KEY_FLAG_INVALIDATED 7 /* set if key has been invalidated */ 163 160 164 161 /* the description string 165 162 * - this is used to match a key against search criteria ··· 204 199 #define KEY_ALLOC_NOT_IN_QUOTA 0x0002 /* not in quota */ 205 200 206 201 extern void key_revoke(struct key *key); 202 + extern void key_invalidate(struct key *key); 207 203 extern void key_put(struct key *key); 208 204 209 205 static inline struct key *key_get(struct key *key) ··· 242 236 243 237 extern int wait_for_key_construction(struct key *key, bool intr); 244 238 245 - extern int key_validate(struct key *key); 239 + extern int key_validate(const struct key *key); 246 240 247 241 extern key_ref_t key_create_or_update(key_ref_t keyring, 248 242 const char *type, ··· 325 319 #define key_serial(k) 0 326 320 #define key_get(k) ({ NULL; }) 327 321 #define key_revoke(k) do { } while(0) 322 + #define key_invalidate(k) do { } while(0) 328 323 #define key_put(k) do { } while(0) 329 324 #define key_ref_put(k) do { 
} while(0) 330 325 #define make_key_ref(k, p) NULL
+1
include/linux/keyctl.h
··· 55 55 #define KEYCTL_SESSION_TO_PARENT 18 /* apply session keyring to parent process */ 56 56 #define KEYCTL_REJECT 19 /* reject a partially constructed key */ 57 57 #define KEYCTL_INSTANTIATE_IOV 20 /* instantiate a partially constructed key */ 58 + #define KEYCTL_INVALIDATE 21 /* invalidate a key */ 58 59 59 60 #endif /* _LINUX_KEYCTL_H */
-6
include/linux/lsm_audit.h
··· 53 53 #define LSM_AUDIT_DATA_KMOD 8 54 54 #define LSM_AUDIT_DATA_INODE 9 55 55 #define LSM_AUDIT_DATA_DENTRY 10 56 - struct task_struct *tsk; 57 56 union { 58 57 struct path path; 59 58 struct dentry *dentry; ··· 91 92 92 93 int ipv6_skb_to_auditdata(struct sk_buff *skb, 93 94 struct common_audit_data *ad, u8 *proto); 94 - 95 - /* Initialize an LSM audit data structure. */ 96 - #define COMMON_AUDIT_DATA_INIT(_d, _t) \ 97 - { memset((_d), 0, sizeof(struct common_audit_data)); \ 98 - (_d)->type = LSM_AUDIT_DATA_##_t; } 99 95 100 96 void common_lsm_audit(struct common_audit_data *a, 101 97 void (*pre_audit)(struct audit_buffer *, void *),
+15
include/linux/prctl.h
··· 124 124 #define PR_SET_CHILD_SUBREAPER 36 125 125 #define PR_GET_CHILD_SUBREAPER 37 126 126 127 + /* 128 + * If no_new_privs is set, then operations that grant new privileges (i.e. 129 + * execve) will either fail or not grant them. This affects suid/sgid, 130 + * file capabilities, and LSMs. 131 + * 132 + * Operations that merely manipulate or drop existing privileges (setresuid, 133 + * capset, etc.) will still work. Drop those privileges if you want them gone. 134 + * 135 + * Changing LSM security domain is considered a new privilege. So, for example, 136 + * asking selinux for a specific new context (e.g. with runcon) will result 137 + * in execve returning -EPERM. 138 + */ 139 + #define PR_SET_NO_NEW_PRIVS 38 140 + #define PR_GET_NO_NEW_PRIVS 39 141 + 127 142 #endif /* _LINUX_PRCTL_H */
+4 -1
include/linux/ptrace.h
··· 58 58 #define PTRACE_EVENT_EXEC 4 59 59 #define PTRACE_EVENT_VFORK_DONE 5 60 60 #define PTRACE_EVENT_EXIT 6 61 + #define PTRACE_EVENT_SECCOMP 7 61 62 /* Extended result codes which enabled by means other than options. */ 62 63 #define PTRACE_EVENT_STOP 128 63 64 ··· 70 69 #define PTRACE_O_TRACEEXEC (1 << PTRACE_EVENT_EXEC) 71 70 #define PTRACE_O_TRACEVFORKDONE (1 << PTRACE_EVENT_VFORK_DONE) 72 71 #define PTRACE_O_TRACEEXIT (1 << PTRACE_EVENT_EXIT) 72 + #define PTRACE_O_TRACESECCOMP (1 << PTRACE_EVENT_SECCOMP) 73 73 74 - #define PTRACE_O_MASK 0x0000007f 74 + #define PTRACE_O_MASK 0x000000ff 75 75 76 76 #include <asm/ptrace.h> 77 77 ··· 100 98 #define PT_TRACE_EXEC PT_EVENT_FLAG(PTRACE_EVENT_EXEC) 101 99 #define PT_TRACE_VFORK_DONE PT_EVENT_FLAG(PTRACE_EVENT_VFORK_DONE) 102 100 #define PT_TRACE_EXIT PT_EVENT_FLAG(PTRACE_EVENT_EXIT) 101 + #define PT_TRACE_SECCOMP PT_EVENT_FLAG(PTRACE_EVENT_SECCOMP) 103 102 104 103 /* single stepping state bits (used on ARM and PA-RISC) */ 105 104 #define PT_SINGLESTEP_BIT 31
+3 -1
include/linux/sched.h
··· 1341 1341 * execve */ 1342 1342 unsigned in_iowait:1; 1343 1343 1344 + /* task may not gain privileges */ 1345 + unsigned no_new_privs:1; 1344 1346 1345 1347 /* Revert to default priority/policy when forking */ 1346 1348 unsigned sched_reset_on_fork:1; ··· 1452 1450 uid_t loginuid; 1453 1451 unsigned int sessionid; 1454 1452 #endif 1455 - seccomp_t seccomp; 1453 + struct seccomp seccomp; 1456 1454 1457 1455 /* Thread group tracking */ 1458 1456 u32 parent_exec_id;
+92 -11
include/linux/seccomp.h
··· 1 1 #ifndef _LINUX_SECCOMP_H 2 2 #define _LINUX_SECCOMP_H 3 3 4 + #include <linux/compiler.h> 5 + #include <linux/types.h> 4 6 7 + 8 + /* Valid values for seccomp.mode and prctl(PR_SET_SECCOMP, <mode>) */ 9 + #define SECCOMP_MODE_DISABLED 0 /* seccomp is not in use. */ 10 + #define SECCOMP_MODE_STRICT 1 /* uses hard-coded filter. */ 11 + #define SECCOMP_MODE_FILTER 2 /* uses user-supplied filter. */ 12 + 13 + /* 14 + * All BPF programs must return a 32-bit value. 15 + * The bottom 16-bits are for optional return data. 16 + * The upper 16-bits are ordered from least permissive values to most. 17 + * 18 + * The ordering ensures that a min_t() over composed return values always 19 + * selects the least permissive choice. 20 + */ 21 + #define SECCOMP_RET_KILL 0x00000000U /* kill the task immediately */ 22 + #define SECCOMP_RET_TRAP 0x00030000U /* disallow and force a SIGSYS */ 23 + #define SECCOMP_RET_ERRNO 0x00050000U /* returns an errno */ 24 + #define SECCOMP_RET_TRACE 0x7ff00000U /* pass to a tracer or disallow */ 25 + #define SECCOMP_RET_ALLOW 0x7fff0000U /* allow */ 26 + 27 + /* Masks for the return value sections. */ 28 + #define SECCOMP_RET_ACTION 0x7fff0000U 29 + #define SECCOMP_RET_DATA 0x0000ffffU 30 + 31 + /** 32 + * struct seccomp_data - the format the BPF program executes over. 33 + * @nr: the system call number 34 + * @arch: indicates system call convention as an AUDIT_ARCH_* value 35 + * as defined in <linux/audit.h>. 36 + * @instruction_pointer: at the time of the system call. 37 + * @args: up to 6 system call arguments always stored as 64-bit values 38 + * regardless of the architecture. 
39 + */ 40 + struct seccomp_data { 41 + int nr; 42 + __u32 arch; 43 + __u64 instruction_pointer; 44 + __u64 args[6]; 45 + }; 46 + 47 + #ifdef __KERNEL__ 5 48 #ifdef CONFIG_SECCOMP 6 49 7 50 #include <linux/thread_info.h> 8 51 #include <asm/seccomp.h> 9 52 10 - typedef struct { int mode; } seccomp_t; 53 + struct seccomp_filter; 54 + /** 55 + * struct seccomp - the state of a seccomp'ed process 56 + * 57 + * @mode: indicates one of the valid values above for controlled 58 + * system calls available to a process. 59 + * @filter: The metadata and ruleset for determining what system calls 60 + * are allowed for a task. 61 + * 62 + * @filter must only be accessed from the context of current as there 63 + * is no locking. 64 + */ 65 + struct seccomp { 66 + int mode; 67 + struct seccomp_filter *filter; 68 + }; 11 69 12 - extern void __secure_computing(int); 13 - static inline void secure_computing(int this_syscall) 70 + extern int __secure_computing(int); 71 + static inline int secure_computing(int this_syscall) 14 72 { 15 73 if (unlikely(test_thread_flag(TIF_SECCOMP))) 16 - __secure_computing(this_syscall); 74 + return __secure_computing(this_syscall); 75 + return 0; 76 + } 77 + 78 + /* A wrapper for architectures supporting only SECCOMP_MODE_STRICT. 
*/ 79 + static inline void secure_computing_strict(int this_syscall) 80 + { 81 + BUG_ON(secure_computing(this_syscall) != 0); 17 82 } 18 83 19 84 extern long prctl_get_seccomp(void); 20 - extern long prctl_set_seccomp(unsigned long); 85 + extern long prctl_set_seccomp(unsigned long, char __user *); 21 86 22 - static inline int seccomp_mode(seccomp_t *s) 87 + static inline int seccomp_mode(struct seccomp *s) 23 88 { 24 89 return s->mode; 25 90 } ··· 93 28 94 29 #include <linux/errno.h> 95 30 96 - typedef struct { } seccomp_t; 31 + struct seccomp { }; 32 + struct seccomp_filter { }; 97 33 98 - #define secure_computing(x) do { } while (0) 34 + static inline int secure_computing(int this_syscall) { return 0; } 35 + static inline void secure_computing_strict(int this_syscall) { return; } 99 36 100 37 static inline long prctl_get_seccomp(void) 101 38 { 102 39 return -EINVAL; 103 40 } 104 41 105 - static inline long prctl_set_seccomp(unsigned long arg2) 42 + static inline long prctl_set_seccomp(unsigned long arg2, char __user *arg3) 106 43 { 107 44 return -EINVAL; 108 45 } 109 46 110 - static inline int seccomp_mode(seccomp_t *s) 47 + static inline int seccomp_mode(struct seccomp *s) 111 48 { 112 49 return 0; 113 50 } 114 - 115 51 #endif /* CONFIG_SECCOMP */ 116 52 53 + #ifdef CONFIG_SECCOMP_FILTER 54 + extern void put_seccomp_filter(struct task_struct *tsk); 55 + extern void get_seccomp_filter(struct task_struct *tsk); 56 + extern u32 seccomp_bpf_load(int off); 57 + #else /* CONFIG_SECCOMP_FILTER */ 58 + static inline void put_seccomp_filter(struct task_struct *tsk) 59 + { 60 + return; 61 + } 62 + static inline void get_seccomp_filter(struct task_struct *tsk) 63 + { 64 + return; 65 + } 66 + #endif /* CONFIG_SECCOMP_FILTER */ 67 + #endif /* __KERNEL__ */ 117 68 #endif /* _LINUX_SECCOMP_H */
+6 -8
include/linux/security.h
··· 144 144 #define LSM_UNSAFE_SHARE 1 145 145 #define LSM_UNSAFE_PTRACE 2 146 146 #define LSM_UNSAFE_PTRACE_CAP 4 147 + #define LSM_UNSAFE_NO_NEW_PRIVS 8 147 148 148 149 #ifdef CONFIG_MMU 149 150 extern int mmap_min_addr_handler(struct ctl_table *table, int write, ··· 640 639 * to receive an open file descriptor via socket IPC. 641 640 * @file contains the file structure being received. 642 641 * Return 0 if permission is granted. 643 - * 644 - * Security hook for dentry 645 - * 646 - * @dentry_open 642 + * @file_open 647 643 * Save open-time permission checking state for later use upon 648 644 * file_permission, and recheck access if anything has changed 649 645 * since inode_permission. ··· 1495 1497 int (*file_send_sigiotask) (struct task_struct *tsk, 1496 1498 struct fown_struct *fown, int sig); 1497 1499 int (*file_receive) (struct file *file); 1498 - int (*dentry_open) (struct file *file, const struct cred *cred); 1500 + int (*file_open) (struct file *file, const struct cred *cred); 1499 1501 1500 1502 int (*task_create) (unsigned long clone_flags); 1501 1503 void (*task_free) (struct task_struct *task); ··· 1754 1756 int security_file_send_sigiotask(struct task_struct *tsk, 1755 1757 struct fown_struct *fown, int sig); 1756 1758 int security_file_receive(struct file *file); 1757 - int security_dentry_open(struct file *file, const struct cred *cred); 1759 + int security_file_open(struct file *file, const struct cred *cred); 1758 1760 int security_task_create(unsigned long clone_flags); 1759 1761 void security_task_free(struct task_struct *task); 1760 1762 int security_cred_alloc_blank(struct cred *cred, gfp_t gfp); ··· 2225 2227 return 0; 2226 2228 } 2227 2229 2228 - static inline int security_dentry_open(struct file *file, 2229 - const struct cred *cred) 2230 + static inline int security_file_open(struct file *file, 2231 + const struct cred *cred) 2230 2232 { 2231 2233 return 0; 2232 2234 }
+6 -2
kernel/auditsc.c
··· 67 67 #include <linux/syscalls.h> 68 68 #include <linux/capability.h> 69 69 #include <linux/fs_struct.h> 70 + #include <linux/compat.h> 70 71 71 72 #include "audit.h" 72 73 ··· 2711 2710 audit_log_end(ab); 2712 2711 } 2713 2712 2714 - void __audit_seccomp(unsigned long syscall) 2713 + void __audit_seccomp(unsigned long syscall, long signr, int code) 2715 2714 { 2716 2715 struct audit_buffer *ab; 2717 2716 2718 2717 ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_ANOM_ABEND); 2719 - audit_log_abend(ab, "seccomp", SIGKILL); 2718 + audit_log_abend(ab, "seccomp", signr); 2720 2719 audit_log_format(ab, " syscall=%ld", syscall); 2720 + audit_log_format(ab, " compat=%d", is_compat_task()); 2721 + audit_log_format(ab, " ip=0x%lx", KSTK_EIP(current)); 2722 + audit_log_format(ab, " code=0x%x", code); 2721 2723 audit_log_end(ab); 2722 2724 } 2723 2725
+3
kernel/fork.c
··· 34 34 #include <linux/cgroup.h> 35 35 #include <linux/security.h> 36 36 #include <linux/hugetlb.h> 37 + #include <linux/seccomp.h> 37 38 #include <linux/swap.h> 38 39 #include <linux/syscalls.h> 39 40 #include <linux/jiffies.h> ··· 207 206 free_thread_info(tsk->stack); 208 207 rt_mutex_debug_task_free(tsk); 209 208 ftrace_graph_exit_task(tsk); 209 + put_seccomp_filter(tsk); 210 210 free_task_struct(tsk); 211 211 } 212 212 EXPORT_SYMBOL(free_task); ··· 1194 1192 goto fork_out; 1195 1193 1196 1194 ftrace_graph_init_task(p); 1195 + get_seccomp_filter(p); 1197 1196 1198 1197 rt_mutex_init_task(p); 1199 1198
+437 -21
kernel/seccomp.c
··· 3 3 * 4 4 * Copyright 2004-2005 Andrea Arcangeli <andrea@cpushare.com> 5 5 * 6 - * This defines a simple but solid secure-computing mode. 6 + * Copyright (C) 2012 Google, Inc. 7 + * Will Drewry <wad@chromium.org> 8 + * 9 + * This defines a simple but solid secure-computing facility. 10 + * 11 + * Mode 1 uses a fixed list of allowed system calls. 12 + * Mode 2 allows user-defined system call filters in the form 13 + * of Berkeley Packet Filters/Linux Socket Filters. 7 14 */ 8 15 16 + #include <linux/atomic.h> 9 17 #include <linux/audit.h> 10 - #include <linux/seccomp.h> 11 - #include <linux/sched.h> 12 18 #include <linux/compat.h> 19 + #include <linux/sched.h> 20 + #include <linux/seccomp.h> 13 21 14 22 /* #define SECCOMP_DEBUG 1 */ 15 - #define NR_SECCOMP_MODES 1 23 + 24 + #ifdef CONFIG_SECCOMP_FILTER 25 + #include <asm/syscall.h> 26 + #include <linux/filter.h> 27 + #include <linux/ptrace.h> 28 + #include <linux/security.h> 29 + #include <linux/slab.h> 30 + #include <linux/tracehook.h> 31 + #include <linux/uaccess.h> 32 + 33 + /** 34 + * struct seccomp_filter - container for seccomp BPF programs 35 + * 36 + * @usage: reference count to manage the object lifetime. 37 + * get/put helpers should be used when accessing an instance 38 + * outside of a lifetime-guarded section. In general, this 39 + * is only needed for handling filters shared across tasks. 40 + * @prev: points to a previously installed, or inherited, filter 41 + * @len: the number of instructions in the program 42 + * @insns: the BPF program instructions to evaluate 43 + * 44 + * seccomp_filter objects are organized in a tree linked via the @prev 45 + * pointer. For any task, it appears to be a singly-linked list starting 46 + * with current->seccomp.filter, the most recently attached or inherited filter. 47 + * However, multiple filters may share a @prev node, by way of fork(), which 48 + * results in a unidirectional tree existing in memory. This is similar to 49 + * how namespaces work. 
50 + * 51 + * seccomp_filter objects should never be modified after being attached 52 + * to a task_struct (other than @usage). 53 + */ 54 + struct seccomp_filter { 55 + atomic_t usage; 56 + struct seccomp_filter *prev; 57 + unsigned short len; /* Instruction count */ 58 + struct sock_filter insns[]; 59 + }; 60 + 61 + /* Limit any path through the tree to 256KB worth of instructions. */ 62 + #define MAX_INSNS_PER_PATH ((1 << 18) / sizeof(struct sock_filter)) 63 + 64 + /** 65 + * get_u32 - returns a u32 offset into data 66 + * @data: an unsigned 64-bit value 67 + * @index: 0 or 1 to return the first or second 32-bits 68 + * 69 + * This inline exists to hide the length of unsigned long. If a 32-bit 70 + * unsigned long is passed in, it will be extended and the top 32-bits will be 71 + * 0. If it is a 64-bit unsigned long, then whatever data is resident will be 72 + * properly returned. 73 + * 74 + * Endianness is explicitly ignored and left for BPF program authors to manage 75 + * as per the specific architecture. 76 + */ 77 + static inline u32 get_u32(u64 data, int index) 78 + { 79 + return ((u32 *)&data)[index]; 80 + } 81 + 82 + /* Helper for bpf_load below. */ 83 + #define BPF_DATA(_name) offsetof(struct seccomp_data, _name) 84 + /** 85 + * bpf_load: checks and returns a pointer to the requested offset 86 + * @off: offset into struct seccomp_data to load from 87 + * 88 + * Returns the requested 32-bits of data. 89 + * seccomp_check_filter() should assure that @off is 32-bit aligned 90 + * and not out of bounds. Failure to do so is a BUG. 
91 + */ 92 + u32 seccomp_bpf_load(int off) 93 + { 94 + struct pt_regs *regs = task_pt_regs(current); 95 + if (off == BPF_DATA(nr)) 96 + return syscall_get_nr(current, regs); 97 + if (off == BPF_DATA(arch)) 98 + return syscall_get_arch(current, regs); 99 + if (off >= BPF_DATA(args[0]) && off < BPF_DATA(args[6])) { 100 + unsigned long value; 101 + int arg = (off - BPF_DATA(args[0])) / sizeof(u64); 102 + int index = !!(off % sizeof(u64)); 103 + syscall_get_arguments(current, regs, arg, 1, &value); 104 + return get_u32(value, index); 105 + } 106 + if (off == BPF_DATA(instruction_pointer)) 107 + return get_u32(KSTK_EIP(current), 0); 108 + if (off == BPF_DATA(instruction_pointer) + sizeof(u32)) 109 + return get_u32(KSTK_EIP(current), 1); 110 + /* seccomp_check_filter should make this impossible. */ 111 + BUG(); 112 + } 113 + 114 + /** 115 + * seccomp_check_filter - verify seccomp filter code 116 + * @filter: filter to verify 117 + * @flen: length of filter 118 + * 119 + * Takes a previously checked filter (by sk_chk_filter) and 120 + * redirects all filter code that loads struct sk_buff data 121 + * and related data through seccomp_bpf_load. It also 122 + * enforces length and alignment checking of those loads. 123 + * 124 + * Returns 0 if the rule set is legal or -EINVAL if not. 125 + */ 126 + static int seccomp_check_filter(struct sock_filter *filter, unsigned int flen) 127 + { 128 + int pc; 129 + for (pc = 0; pc < flen; pc++) { 130 + struct sock_filter *ftest = &filter[pc]; 131 + u16 code = ftest->code; 132 + u32 k = ftest->k; 133 + 134 + switch (code) { 135 + case BPF_S_LD_W_ABS: 136 + ftest->code = BPF_S_ANC_SECCOMP_LD_W; 137 + /* 32-bit aligned and not out of bounds. 
*/ 138 + if (k >= sizeof(struct seccomp_data) || k & 3) 139 + return -EINVAL; 140 + continue; 141 + case BPF_S_LD_W_LEN: 142 + ftest->code = BPF_S_LD_IMM; 143 + ftest->k = sizeof(struct seccomp_data); 144 + continue; 145 + case BPF_S_LDX_W_LEN: 146 + ftest->code = BPF_S_LDX_IMM; 147 + ftest->k = sizeof(struct seccomp_data); 148 + continue; 149 + /* Explicitly include allowed calls. */ 150 + case BPF_S_RET_K: 151 + case BPF_S_RET_A: 152 + case BPF_S_ALU_ADD_K: 153 + case BPF_S_ALU_ADD_X: 154 + case BPF_S_ALU_SUB_K: 155 + case BPF_S_ALU_SUB_X: 156 + case BPF_S_ALU_MUL_K: 157 + case BPF_S_ALU_MUL_X: 158 + case BPF_S_ALU_DIV_X: 159 + case BPF_S_ALU_AND_K: 160 + case BPF_S_ALU_AND_X: 161 + case BPF_S_ALU_OR_K: 162 + case BPF_S_ALU_OR_X: 163 + case BPF_S_ALU_LSH_K: 164 + case BPF_S_ALU_LSH_X: 165 + case BPF_S_ALU_RSH_K: 166 + case BPF_S_ALU_RSH_X: 167 + case BPF_S_ALU_NEG: 168 + case BPF_S_LD_IMM: 169 + case BPF_S_LDX_IMM: 170 + case BPF_S_MISC_TAX: 171 + case BPF_S_MISC_TXA: 172 + case BPF_S_ALU_DIV_K: 173 + case BPF_S_LD_MEM: 174 + case BPF_S_LDX_MEM: 175 + case BPF_S_ST: 176 + case BPF_S_STX: 177 + case BPF_S_JMP_JA: 178 + case BPF_S_JMP_JEQ_K: 179 + case BPF_S_JMP_JEQ_X: 180 + case BPF_S_JMP_JGE_K: 181 + case BPF_S_JMP_JGE_X: 182 + case BPF_S_JMP_JGT_K: 183 + case BPF_S_JMP_JGT_X: 184 + case BPF_S_JMP_JSET_K: 185 + case BPF_S_JMP_JSET_X: 186 + continue; 187 + default: 188 + return -EINVAL; 189 + } 190 + } 191 + return 0; 192 + } 193 + 194 + /** 195 + * seccomp_run_filters - evaluates all seccomp filters against @syscall 196 + * @syscall: number of the current system call 197 + * 198 + * Returns valid seccomp BPF response codes. 199 + */ 200 + static u32 seccomp_run_filters(int syscall) 201 + { 202 + struct seccomp_filter *f; 203 + u32 ret = SECCOMP_RET_ALLOW; 204 + 205 + /* Ensure unexpected behavior doesn't result in failing open. 
*/ 206 + if (WARN_ON(current->seccomp.filter == NULL)) 207 + return SECCOMP_RET_KILL; 208 + 209 + /* 210 + * All filters in the list are evaluated and the lowest BPF return 211 + * value always takes priority (ignoring the DATA). 212 + */ 213 + for (f = current->seccomp.filter; f; f = f->prev) { 214 + u32 cur_ret = sk_run_filter(NULL, f->insns); 215 + if ((cur_ret & SECCOMP_RET_ACTION) < (ret & SECCOMP_RET_ACTION)) 216 + ret = cur_ret; 217 + } 218 + return ret; 219 + } 220 + 221 + /** 222 + * seccomp_attach_filter: Attaches a seccomp filter to current. 223 + * @fprog: BPF program to install 224 + * 225 + * Returns 0 on success or an errno on failure. 226 + */ 227 + static long seccomp_attach_filter(struct sock_fprog *fprog) 228 + { 229 + struct seccomp_filter *filter; 230 + unsigned long fp_size = fprog->len * sizeof(struct sock_filter); 231 + unsigned long total_insns = fprog->len; 232 + long ret; 233 + 234 + if (fprog->len == 0 || fprog->len > BPF_MAXINSNS) 235 + return -EINVAL; 236 + 237 + for (filter = current->seccomp.filter; filter; filter = filter->prev) 238 + total_insns += filter->len + 4; /* include a 4 instr penalty */ 239 + if (total_insns > MAX_INSNS_PER_PATH) 240 + return -ENOMEM; 241 + 242 + /* 243 + * Installing a seccomp filter requires that the task have 244 + * CAP_SYS_ADMIN in its namespace or be running with no_new_privs. 245 + * This avoids scenarios where unprivileged tasks can affect the 246 + * behavior of privileged children. 247 + */ 248 + if (!current->no_new_privs && 249 + security_capable_noaudit(current_cred(), current_user_ns(), 250 + CAP_SYS_ADMIN) != 0) 251 + return -EACCES; 252 + 253 + /* Allocate a new seccomp_filter */ 254 + filter = kzalloc(sizeof(struct seccomp_filter) + fp_size, 255 + GFP_KERNEL|__GFP_NOWARN); 256 + if (!filter) 257 + return -ENOMEM; 258 + atomic_set(&filter->usage, 1); 259 + filter->len = fprog->len; 260 + 261 + /* Copy the instructions from fprog. 
*/ 262 + ret = -EFAULT; 263 + if (copy_from_user(filter->insns, fprog->filter, fp_size)) 264 + goto fail; 265 + 266 + /* Check and rewrite the fprog via the skb checker */ 267 + ret = sk_chk_filter(filter->insns, filter->len); 268 + if (ret) 269 + goto fail; 270 + 271 + /* Check and rewrite the fprog for seccomp use */ 272 + ret = seccomp_check_filter(filter->insns, filter->len); 273 + if (ret) 274 + goto fail; 275 + 276 + /* 277 + * If there is an existing filter, make it the prev and don't drop its 278 + * task reference. 279 + */ 280 + filter->prev = current->seccomp.filter; 281 + current->seccomp.filter = filter; 282 + return 0; 283 + fail: 284 + kfree(filter); 285 + return ret; 286 + } 287 + 288 + /** 289 + * seccomp_attach_user_filter - attaches a user-supplied sock_fprog 290 + * @user_filter: pointer to the user data containing a sock_fprog. 291 + * 292 + * Returns 0 on success and non-zero otherwise. 293 + */ 294 + long seccomp_attach_user_filter(char __user *user_filter) 295 + { 296 + struct sock_fprog fprog; 297 + long ret = -EFAULT; 298 + 299 + #ifdef CONFIG_COMPAT 300 + if (is_compat_task()) { 301 + struct compat_sock_fprog fprog32; 302 + if (copy_from_user(&fprog32, user_filter, sizeof(fprog32))) 303 + goto out; 304 + fprog.len = fprog32.len; 305 + fprog.filter = compat_ptr(fprog32.filter); 306 + } else /* falls through to the if below. */ 307 + #endif 308 + if (copy_from_user(&fprog, user_filter, sizeof(fprog))) 309 + goto out; 310 + ret = seccomp_attach_filter(&fprog); 311 + out: 312 + return ret; 313 + } 314 + 315 + /* get_seccomp_filter - increments the reference count of the filter on @tsk */ 316 + void get_seccomp_filter(struct task_struct *tsk) 317 + { 318 + struct seccomp_filter *orig = tsk->seccomp.filter; 319 + if (!orig) 320 + return; 321 + /* Reference count is bounded by the number of total processes. 
*/ 322 + atomic_inc(&orig->usage); 323 + } 324 + 325 + /* put_seccomp_filter - decrements the ref count of tsk->seccomp.filter */ 326 + void put_seccomp_filter(struct task_struct *tsk) 327 + { 328 + struct seccomp_filter *orig = tsk->seccomp.filter; 329 + /* Clean up single-reference branches iteratively. */ 330 + while (orig && atomic_dec_and_test(&orig->usage)) { 331 + struct seccomp_filter *freeme = orig; 332 + orig = orig->prev; 333 + kfree(freeme); 334 + } 335 + } 336 + 337 + /** 338 + * seccomp_send_sigsys - signals the task to allow in-process syscall emulation 339 + * @syscall: syscall number to send to userland 340 + * @reason: filter-supplied reason code to send to userland (via si_errno) 341 + * 342 + * Forces a SIGSYS with a code of SYS_SECCOMP and related sigsys info. 343 + */ 344 + static void seccomp_send_sigsys(int syscall, int reason) 345 + { 346 + struct siginfo info; 347 + memset(&info, 0, sizeof(info)); 348 + info.si_signo = SIGSYS; 349 + info.si_code = SYS_SECCOMP; 350 + info.si_call_addr = (void __user *)KSTK_EIP(current); 351 + info.si_errno = reason; 352 + info.si_arch = syscall_get_arch(current, task_pt_regs(current)); 353 + info.si_syscall = syscall; 354 + force_sig_info(SIGSYS, &info, current); 355 + } 356 + #endif /* CONFIG_SECCOMP_FILTER */ 16 357 17 358 /* 18 359 * Secure computing mode 1 allows only read/write/exit/sigreturn. 
··· 372 31 }; 373 32 #endif 374 33 375 - void __secure_computing(int this_syscall) 34 + int __secure_computing(int this_syscall) 376 35 { 377 36 int mode = current->seccomp.mode; 378 - int * syscall; 37 + int exit_sig = 0; 38 + int *syscall; 39 + u32 ret; 379 40 380 41 switch (mode) { 381 - case 1: 42 + case SECCOMP_MODE_STRICT: 382 43 syscall = mode1_syscalls; 383 44 #ifdef CONFIG_COMPAT 384 45 if (is_compat_task()) ··· 388 45 #endif 389 46 do { 390 47 if (*syscall == this_syscall) 391 - return; 48 + return 0; 392 49 } while (*++syscall); 50 + exit_sig = SIGKILL; 51 + ret = SECCOMP_RET_KILL; 393 52 break; 53 + #ifdef CONFIG_SECCOMP_FILTER 54 + case SECCOMP_MODE_FILTER: { 55 + int data; 56 + ret = seccomp_run_filters(this_syscall); 57 + data = ret & SECCOMP_RET_DATA; 58 + ret &= SECCOMP_RET_ACTION; 59 + switch (ret) { 60 + case SECCOMP_RET_ERRNO: 61 + /* Set the low-order 16-bits as a errno. */ 62 + syscall_set_return_value(current, task_pt_regs(current), 63 + -data, 0); 64 + goto skip; 65 + case SECCOMP_RET_TRAP: 66 + /* Show the handler the original registers. */ 67 + syscall_rollback(current, task_pt_regs(current)); 68 + /* Let the filter pass back 16 bits of data. */ 69 + seccomp_send_sigsys(this_syscall, data); 70 + goto skip; 71 + case SECCOMP_RET_TRACE: 72 + /* Skip these calls if there is no tracer. */ 73 + if (!ptrace_event_enabled(current, PTRACE_EVENT_SECCOMP)) 74 + goto skip; 75 + /* Allow the BPF to provide the event message */ 76 + ptrace_event(PTRACE_EVENT_SECCOMP, data); 77 + /* 78 + * The delivery of a fatal signal during event 79 + * notification may silently skip tracer notification. 80 + * Terminating the task now avoids executing a system 81 + * call that may not be intended. 
82 + */ 83 + if (fatal_signal_pending(current)) 84 + break; 85 + return 0; 86 + case SECCOMP_RET_ALLOW: 87 + return 0; 88 + case SECCOMP_RET_KILL: 89 + default: 90 + break; 91 + } 92 + exit_sig = SIGSYS; 93 + break; 94 + } 95 + #endif 394 96 default: 395 97 BUG(); 396 98 } ··· 443 55 #ifdef SECCOMP_DEBUG 444 56 dump_stack(); 445 57 #endif 446 - audit_seccomp(this_syscall); 447 - do_exit(SIGKILL); 58 + audit_seccomp(this_syscall, exit_sig, ret); 59 + do_exit(exit_sig); 60 + #ifdef CONFIG_SECCOMP_FILTER 61 + skip: 62 + audit_seccomp(this_syscall, exit_sig, ret); 63 + #endif 64 + return -1; 448 65 } 449 66 450 67 long prctl_get_seccomp(void) ··· 457 64 return current->seccomp.mode; 458 65 } 459 66 460 - long prctl_set_seccomp(unsigned long seccomp_mode) 67 + /** 68 + * prctl_set_seccomp: configures current->seccomp.mode 69 + * @seccomp_mode: requested mode to use 70 + * @filter: optional struct sock_fprog for use with SECCOMP_MODE_FILTER 71 + * 72 + * This function may be called repeatedly with a @seccomp_mode of 73 + * SECCOMP_MODE_FILTER to install additional filters. Every filter 74 + * successfully installed will be evaluated (in reverse order) for each system 75 + * call the task makes. 76 + * 77 + * Once current->seccomp.mode is non-zero, it may not be changed. 78 + * 79 + * Returns 0 on success or -EINVAL on failure. 
80 + */ 81 + long prctl_set_seccomp(unsigned long seccomp_mode, char __user *filter) 461 82 { 462 - long ret; 83 + long ret = -EINVAL; 463 84 464 - /* can set it only once to be even more secure */ 465 - ret = -EPERM; 466 - if (unlikely(current->seccomp.mode)) 85 + if (current->seccomp.mode && 86 + current->seccomp.mode != seccomp_mode) 467 87 goto out; 468 88 469 - ret = -EINVAL; 470 - if (seccomp_mode && seccomp_mode <= NR_SECCOMP_MODES) { 471 - current->seccomp.mode = seccomp_mode; 472 - set_thread_flag(TIF_SECCOMP); 89 + switch (seccomp_mode) { 90 + case SECCOMP_MODE_STRICT: 91 + ret = 0; 473 92 #ifdef TIF_NOTSC 474 93 disable_TSC(); 475 94 #endif 476 - ret = 0; 95 + break; 96 + #ifdef CONFIG_SECCOMP_FILTER 97 + case SECCOMP_MODE_FILTER: 98 + ret = seccomp_attach_user_filter(filter); 99 + if (ret) 100 + goto out; 101 + break; 102 + #endif 103 + default: 104 + goto out; 477 105 } 478 106 479 - out: 107 + current->seccomp.mode = seccomp_mode; 108 + set_thread_flag(TIF_SECCOMP); 109 + out: 480 110 return ret; 481 111 }
+8 -1
kernel/signal.c
··· 160 160 161 161 #define SYNCHRONOUS_MASK \ 162 162 (sigmask(SIGSEGV) | sigmask(SIGBUS) | sigmask(SIGILL) | \ 163 - sigmask(SIGTRAP) | sigmask(SIGFPE)) 163 + sigmask(SIGTRAP) | sigmask(SIGFPE) | sigmask(SIGSYS)) 164 164 165 165 int next_signal(struct sigpending *pending, sigset_t *mask) 166 166 { ··· 2706 2706 err |= __put_user(from->si_uid, &to->si_uid); 2707 2707 err |= __put_user(from->si_ptr, &to->si_ptr); 2708 2708 break; 2709 + #ifdef __ARCH_SIGSYS 2710 + case __SI_SYS: 2711 + err |= __put_user(from->si_call_addr, &to->si_call_addr); 2712 + err |= __put_user(from->si_syscall, &to->si_syscall); 2713 + err |= __put_user(from->si_arch, &to->si_arch); 2714 + break; 2715 + #endif 2709 2716 default: /* this is just in case for now ... */ 2710 2717 err |= __put_user(from->si_pid, &to->si_pid); 2711 2718 err |= __put_user(from->si_uid, &to->si_uid);
+11 -1
kernel/sys.c
··· 1908 1908 error = prctl_get_seccomp(); 1909 1909 break; 1910 1910 case PR_SET_SECCOMP: 1911 - error = prctl_set_seccomp(arg2); 1911 + error = prctl_set_seccomp(arg2, (char __user *)arg3); 1912 1912 break; 1913 1913 case PR_GET_TSC: 1914 1914 error = GET_TSC_CTL(arg2); ··· 1979 1979 error = put_user(me->signal->is_child_subreaper, 1980 1980 (int __user *) arg2); 1981 1981 break; 1982 + case PR_SET_NO_NEW_PRIVS: 1983 + if (arg2 != 1 || arg3 || arg4 || arg5) 1984 + return -EINVAL; 1985 + 1986 + current->no_new_privs = 1; 1987 + break; 1988 + case PR_GET_NO_NEW_PRIVS: 1989 + if (arg2 || arg3 || arg4 || arg5) 1990 + return -EINVAL; 1991 + return current->no_new_privs ? 1 : 0; 1982 1992 default: 1983 1993 error = -EINVAL; 1984 1994 break;
-8
net/compat.c
··· 328 328 __scm_destroy(scm); 329 329 } 330 330 331 - /* 332 - * A struct sock_filter is architecture independent. 333 - */ 334 - struct compat_sock_fprog { 335 - u16 len; 336 - compat_uptr_t filter; /* struct sock_filter * */ 337 - }; 338 - 339 331 static int do_set_attach_filter(struct socket *sock, int level, int optname, 340 332 char __user *optval, unsigned int optlen) 341 333 {
+6
net/core/filter.c
··· 38 38 #include <linux/filter.h> 39 39 #include <linux/reciprocal_div.h> 40 40 #include <linux/ratelimit.h> 41 + #include <linux/seccomp.h> 41 42 42 43 /* No hurry in this branch 43 44 * ··· 356 355 A = 0; 357 356 continue; 358 357 } 358 + #ifdef CONFIG_SECCOMP_FILTER 359 + case BPF_S_ANC_SECCOMP_LD_W: 360 + A = seccomp_bpf_load(fentry->k); 361 + continue; 362 + #endif 359 363 default: 360 364 WARN_RATELIMIT(1, "Unknown code:%u jt:%u tf:%u k:%u\n", 361 365 fentry->code, fentry->jt,
-5
net/dns_resolver/dns_key.c
··· 249 249 struct key *keyring; 250 250 int ret; 251 251 252 - printk(KERN_NOTICE "Registering the %s key type\n", 253 - key_type_dns_resolver.name); 254 - 255 252 /* create an override credential set with a special thread keyring in 256 253 * which DNS requests are cached 257 254 * ··· 298 301 key_revoke(dns_resolver_cache->thread_keyring); 299 302 unregister_key_type(&key_type_dns_resolver); 300 303 put_cred(dns_resolver_cache); 301 - printk(KERN_NOTICE "Unregistered %s key type\n", 302 - key_type_dns_resolver.name); 303 304 } 304 305 305 306 module_init(init_dns_resolver)
+1
net/xfrm/xfrm_policy.c
··· 26 26 #include <linux/cache.h> 27 27 #include <linux/audit.h> 28 28 #include <net/dst.h> 29 + #include <net/flow.h> 29 30 #include <net/xfrm.h> 30 31 #include <net/ip.h> 31 32 #ifdef CONFIG_XFRM_STATISTICS
+1 -1
samples/Makefile
··· 1 1 # Makefile for Linux samples code 2 2 3 3 obj-$(CONFIG_SAMPLES) += kobject/ kprobes/ tracepoints/ trace_events/ \ 4 - hw_breakpoint/ kfifo/ kdb/ hidraw/ rpmsg/ 4 + hw_breakpoint/ kfifo/ kdb/ hidraw/ rpmsg/ seccomp/
+32
samples/seccomp/Makefile
··· 1 + # kbuild trick to avoid linker error. Can be omitted if a module is built. 2 + obj- := dummy.o 3 + 4 + hostprogs-$(CONFIG_SECCOMP_FILTER) := bpf-fancy dropper bpf-direct 5 + 6 + HOSTCFLAGS_bpf-fancy.o += -I$(objtree)/usr/include 7 + HOSTCFLAGS_bpf-fancy.o += -idirafter $(objtree)/include 8 + HOSTCFLAGS_bpf-helper.o += -I$(objtree)/usr/include 9 + HOSTCFLAGS_bpf-helper.o += -idirafter $(objtree)/include 10 + bpf-fancy-objs := bpf-fancy.o bpf-helper.o 11 + 12 + HOSTCFLAGS_dropper.o += -I$(objtree)/usr/include 13 + HOSTCFLAGS_dropper.o += -idirafter $(objtree)/include 14 + dropper-objs := dropper.o 15 + 16 + HOSTCFLAGS_bpf-direct.o += -I$(objtree)/usr/include 17 + HOSTCFLAGS_bpf-direct.o += -idirafter $(objtree)/include 18 + bpf-direct-objs := bpf-direct.o 19 + 20 + # Try to match the kernel target. 21 + ifeq ($(CONFIG_64BIT),) 22 + HOSTCFLAGS_bpf-direct.o += -m32 23 + HOSTCFLAGS_dropper.o += -m32 24 + HOSTCFLAGS_bpf-helper.o += -m32 25 + HOSTCFLAGS_bpf-fancy.o += -m32 26 + HOSTLOADLIBES_bpf-direct += -m32 27 + HOSTLOADLIBES_bpf-fancy += -m32 28 + HOSTLOADLIBES_dropper += -m32 29 + endif 30 + 31 + # Tell kbuild to always build the programs 32 + always := $(hostprogs-y)
+190
samples/seccomp/bpf-direct.c
··· 1 + /* 2 + * Seccomp filter example for x86 (32-bit and 64-bit) with BPF macros 3 + * 4 + * Copyright (c) 2012 The Chromium OS Authors <chromium-os-dev@chromium.org> 5 + * Author: Will Drewry <wad@chromium.org> 6 + * 7 + * The code may be used by anyone for any purpose, 8 + * and can serve as a starting point for developing 9 + * applications using prctl(PR_SET_SECCOMP, 2, ...). 10 + */ 11 + #if defined(__i386__) || defined(__x86_64__) 12 + #define SUPPORTED_ARCH 1 13 + #endif 14 + 15 + #if defined(SUPPORTED_ARCH) 16 + #define __USE_GNU 1 17 + #define _GNU_SOURCE 1 18 + 19 + #include <linux/types.h> 20 + #include <linux/filter.h> 21 + #include <linux/seccomp.h> 22 + #include <linux/unistd.h> 23 + #include <signal.h> 24 + #include <stdio.h> 25 + #include <stddef.h> 26 + #include <string.h> 27 + #include <sys/prctl.h> 28 + #include <unistd.h> 29 + 30 + #define syscall_arg(_n) (offsetof(struct seccomp_data, args[_n])) 31 + #define syscall_nr (offsetof(struct seccomp_data, nr)) 32 + 33 + #if defined(__i386__) 34 + #define REG_RESULT REG_EAX 35 + #define REG_SYSCALL REG_EAX 36 + #define REG_ARG0 REG_EBX 37 + #define REG_ARG1 REG_ECX 38 + #define REG_ARG2 REG_EDX 39 + #define REG_ARG3 REG_ESI 40 + #define REG_ARG4 REG_EDI 41 + #define REG_ARG5 REG_EBP 42 + #elif defined(__x86_64__) 43 + #define REG_RESULT REG_RAX 44 + #define REG_SYSCALL REG_RAX 45 + #define REG_ARG0 REG_RDI 46 + #define REG_ARG1 REG_RSI 47 + #define REG_ARG2 REG_RDX 48 + #define REG_ARG3 REG_R10 49 + #define REG_ARG4 REG_R8 50 + #define REG_ARG5 REG_R9 51 + #endif 52 + 53 + #ifndef PR_SET_NO_NEW_PRIVS 54 + #define PR_SET_NO_NEW_PRIVS 38 55 + #endif 56 + 57 + #ifndef SYS_SECCOMP 58 + #define SYS_SECCOMP 1 59 + #endif 60 + 61 + static void emulator(int nr, siginfo_t *info, void *void_context) 62 + { 63 + ucontext_t *ctx = (ucontext_t *)(void_context); 64 + int syscall; 65 + char *buf; 66 + ssize_t bytes; 67 + size_t len; 68 + if (info->si_code != SYS_SECCOMP) 69 + return; 70 + if (!ctx) 71 + return; 
72 + syscall = ctx->uc_mcontext.gregs[REG_SYSCALL]; 73 + buf = (char *) ctx->uc_mcontext.gregs[REG_ARG1]; 74 + len = (size_t) ctx->uc_mcontext.gregs[REG_ARG2]; 75 + 76 + if (syscall != __NR_write) 77 + return; 78 + if (ctx->uc_mcontext.gregs[REG_ARG0] != STDERR_FILENO) 79 + return; 80 + /* Redirect stderr messages to stdout. Doesn't handle EINTR, etc */ 81 + ctx->uc_mcontext.gregs[REG_RESULT] = -1; 82 + if (write(STDOUT_FILENO, "[ERR] ", 6) > 0) { 83 + bytes = write(STDOUT_FILENO, buf, len); 84 + ctx->uc_mcontext.gregs[REG_RESULT] = bytes; 85 + } 86 + return; 87 + } 88 + 89 + static int install_emulator(void) 90 + { 91 + struct sigaction act; 92 + sigset_t mask; 93 + memset(&act, 0, sizeof(act)); 94 + sigemptyset(&mask); 95 + sigaddset(&mask, SIGSYS); 96 + 97 + act.sa_sigaction = &emulator; 98 + act.sa_flags = SA_SIGINFO; 99 + if (sigaction(SIGSYS, &act, NULL) < 0) { 100 + perror("sigaction"); 101 + return -1; 102 + } 103 + if (sigprocmask(SIG_UNBLOCK, &mask, NULL)) { 104 + perror("sigprocmask"); 105 + return -1; 106 + } 107 + return 0; 108 + } 109 + 110 + static int install_filter(void) 111 + { 112 + struct sock_filter filter[] = { 113 + /* Grab the system call number */ 114 + BPF_STMT(BPF_LD+BPF_W+BPF_ABS, syscall_nr), 115 + /* Jump table for the allowed syscalls */ 116 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_rt_sigreturn, 0, 1), 117 + BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW), 118 + #ifdef __NR_sigreturn 119 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_sigreturn, 0, 1), 120 + BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW), 121 + #endif 122 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_exit_group, 0, 1), 123 + BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW), 124 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_exit, 0, 1), 125 + BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW), 126 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_read, 1, 0), 127 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_write, 3, 2), 128 + 129 + /* Check that read is only using stdin. 
*/ 130 + BPF_STMT(BPF_LD+BPF_W+BPF_ABS, syscall_arg(0)), 131 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, STDIN_FILENO, 4, 0), 132 + BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_KILL), 133 + 134 + /* Check that write is only using stdout */ 135 + BPF_STMT(BPF_LD+BPF_W+BPF_ABS, syscall_arg(0)), 136 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, STDOUT_FILENO, 1, 0), 137 + /* Trap attempts to write to stderr */ 138 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, STDERR_FILENO, 1, 2), 139 + 140 + BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW), 141 + BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_TRAP), 142 + BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_KILL), 143 + }; 144 + struct sock_fprog prog = { 145 + .len = (unsigned short)(sizeof(filter)/sizeof(filter[0])), 146 + .filter = filter, 147 + }; 148 + 149 + if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) { 150 + perror("prctl(NO_NEW_PRIVS)"); 151 + return 1; 152 + } 153 + 154 + 155 + if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) { 156 + perror("prctl"); 157 + return 1; 158 + } 159 + return 0; 160 + } 161 + 162 + #define payload(_c) (_c), sizeof((_c)) 163 + int main(int argc, char **argv) 164 + { 165 + char buf[4096]; 166 + ssize_t bytes = 0; 167 + if (install_emulator()) 168 + return 1; 169 + if (install_filter()) 170 + return 1; 171 + syscall(__NR_write, STDOUT_FILENO, 172 + payload("OHAI! WHAT IS YOUR NAME? ")); 173 + bytes = syscall(__NR_read, STDIN_FILENO, buf, sizeof(buf)); 174 + syscall(__NR_write, STDOUT_FILENO, payload("HELLO, ")); 175 + syscall(__NR_write, STDOUT_FILENO, buf, bytes); 176 + syscall(__NR_write, STDERR_FILENO, 177 + payload("Error message going to STDERR\n")); 178 + return 0; 179 + } 180 + #else /* SUPPORTED_ARCH */ 181 + /* 182 + * This sample is x86-only. Since kernel samples are compiled with the 183 + * host toolchain, a non-x86 host will result in using only the main() 184 + * below. 185 + */ 186 + int main(void) 187 + { 188 + return 1; 189 + } 190 + #endif /* SUPPORTED_ARCH */
+102
samples/seccomp/bpf-fancy.c
··· 1 + /* 2 + * Seccomp BPF example using a macro-based generator. 3 + * 4 + * Copyright (c) 2012 The Chromium OS Authors <chromium-os-dev@chromium.org> 5 + * Author: Will Drewry <wad@chromium.org> 6 + * 7 + * The code may be used by anyone for any purpose, 8 + * and can serve as a starting point for developing 9 + * applications using prctl(PR_ATTACH_SECCOMP_FILTER). 10 + */ 11 + 12 + #include <linux/filter.h> 13 + #include <linux/seccomp.h> 14 + #include <linux/unistd.h> 15 + #include <stdio.h> 16 + #include <string.h> 17 + #include <sys/prctl.h> 18 + #include <unistd.h> 19 + 20 + #include "bpf-helper.h" 21 + 22 + #ifndef PR_SET_NO_NEW_PRIVS 23 + #define PR_SET_NO_NEW_PRIVS 38 24 + #endif 25 + 26 + int main(int argc, char **argv) 27 + { 28 + struct bpf_labels l; 29 + static const char msg1[] = "Please type something: "; 30 + static const char msg2[] = "You typed: "; 31 + char buf[256]; 32 + struct sock_filter filter[] = { 33 + /* TODO: LOAD_SYSCALL_NR(arch) and enforce an arch */ 34 + LOAD_SYSCALL_NR, 35 + SYSCALL(__NR_exit, ALLOW), 36 + SYSCALL(__NR_exit_group, ALLOW), 37 + SYSCALL(__NR_write, JUMP(&l, write_fd)), 38 + SYSCALL(__NR_read, JUMP(&l, read)), 39 + DENY, /* Don't passthrough into a label */ 40 + 41 + LABEL(&l, read), 42 + ARG(0), 43 + JNE(STDIN_FILENO, DENY), 44 + ARG(1), 45 + JNE((unsigned long)buf, DENY), 46 + ARG(2), 47 + JGE(sizeof(buf), DENY), 48 + ALLOW, 49 + 50 + LABEL(&l, write_fd), 51 + ARG(0), 52 + JEQ(STDOUT_FILENO, JUMP(&l, write_buf)), 53 + JEQ(STDERR_FILENO, JUMP(&l, write_buf)), 54 + DENY, 55 + 56 + LABEL(&l, write_buf), 57 + ARG(1), 58 + JEQ((unsigned long)msg1, JUMP(&l, msg1_len)), 59 + JEQ((unsigned long)msg2, JUMP(&l, msg2_len)), 60 + JEQ((unsigned long)buf, JUMP(&l, buf_len)), 61 + DENY, 62 + 63 + LABEL(&l, msg1_len), 64 + ARG(2), 65 + JLT(sizeof(msg1), ALLOW), 66 + DENY, 67 + 68 + LABEL(&l, msg2_len), 69 + ARG(2), 70 + JLT(sizeof(msg2), ALLOW), 71 + DENY, 72 + 73 + LABEL(&l, buf_len), 74 + ARG(2), 75 + JLT(sizeof(buf), ALLOW), 76 
+ DENY, 77 + }; 78 + struct sock_fprog prog = { 79 + .filter = filter, 80 + .len = (unsigned short)(sizeof(filter)/sizeof(filter[0])), 81 + }; 82 + ssize_t bytes; 83 + bpf_resolve_jumps(&l, filter, sizeof(filter)/sizeof(*filter)); 84 + 85 + if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) { 86 + perror("prctl(NO_NEW_PRIVS)"); 87 + return 1; 88 + } 89 + 90 + if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) { 91 + perror("prctl(SECCOMP)"); 92 + return 1; 93 + } 94 + syscall(__NR_write, STDOUT_FILENO, msg1, strlen(msg1)); 95 + bytes = syscall(__NR_read, STDIN_FILENO, buf, sizeof(buf)-1); 96 + bytes = (bytes > 0 ? bytes : 0); 97 + syscall(__NR_write, STDERR_FILENO, msg2, strlen(msg2)); 98 + syscall(__NR_write, STDERR_FILENO, buf, bytes); 99 + /* Now get killed */ 100 + syscall(__NR_write, STDERR_FILENO, msg2, strlen(msg2)+2); 101 + return 0; 102 + }
+89
samples/seccomp/bpf-helper.c
··· 1 + /* 2 + * Seccomp BPF helper functions 3 + * 4 + * Copyright (c) 2012 The Chromium OS Authors <chromium-os-dev@chromium.org> 5 + * Author: Will Drewry <wad@chromium.org> 6 + * 7 + * The code may be used by anyone for any purpose, 8 + * and can serve as a starting point for developing 9 + * applications using prctl(PR_ATTACH_SECCOMP_FILTER). 10 + */ 11 + 12 + #include <stdio.h> 13 + #include <string.h> 14 + 15 + #include "bpf-helper.h" 16 + 17 + int bpf_resolve_jumps(struct bpf_labels *labels, 18 + struct sock_filter *filter, size_t count) 19 + { 20 + struct sock_filter *begin = filter; 21 + __u8 insn = count - 1; 22 + 23 + if (count < 1) 24 + return -1; 25 + /* 26 + * Walk it once, backwards, to build the label table and do fixups. 27 + * Since backward jumps are disallowed by BPF, this is easy. 28 + */ 29 + filter += insn; 30 + for (; filter >= begin; --insn, --filter) { 31 + if (filter->code != (BPF_JMP+BPF_JA)) 32 + continue; 33 + switch ((filter->jt<<8)|filter->jf) { 34 + case (JUMP_JT<<8)|JUMP_JF: 35 + if (labels->labels[filter->k].location == 0xffffffff) { 36 + fprintf(stderr, "Unresolved label: '%s'\n", 37 + labels->labels[filter->k].label); 38 + return 1; 39 + } 40 + filter->k = labels->labels[filter->k].location - 41 + (insn + 1); 42 + filter->jt = 0; 43 + filter->jf = 0; 44 + continue; 45 + case (LABEL_JT<<8)|LABEL_JF: 46 + if (labels->labels[filter->k].location != 0xffffffff) { 47 + fprintf(stderr, "Duplicate label use: '%s'\n", 48 + labels->labels[filter->k].label); 49 + return 1; 50 + } 51 + labels->labels[filter->k].location = insn; 52 + filter->k = 0; /* fall through */ 53 + filter->jt = 0; 54 + filter->jf = 0; 55 + continue; 56 + } 57 + } 58 + return 0; 59 + } 60 + 61 + /* Simple lookup table for labels. 
*/ 62 + __u32 seccomp_bpf_label(struct bpf_labels *labels, const char *label) 63 + { 64 + struct __bpf_label *begin = labels->labels, *end; 65 + int id; 66 + if (labels->count == 0) { 67 + begin->label = label; 68 + begin->location = 0xffffffff; 69 + labels->count++; 70 + return 0; 71 + } 72 + end = begin + labels->count; 73 + for (id = 0; begin < end; ++begin, ++id) { 74 + if (!strcmp(label, begin->label)) 75 + return id; 76 + } 77 + begin->label = label; 78 + begin->location = 0xffffffff; 79 + labels->count++; 80 + return id; 81 + } 82 + 83 + void seccomp_bpf_print(struct sock_filter *filter, size_t count) 84 + { 85 + struct sock_filter *end = filter + count; 86 + for ( ; filter < end; ++filter) 87 + printf("{ code=%u,jt=%u,jf=%u,k=%u },\n", 88 + filter->code, filter->jt, filter->jf, filter->k); 89 + }
+238
samples/seccomp/bpf-helper.h
··· 1 + /* 2 + * Example wrapper around BPF macros. 3 + * 4 + * Copyright (c) 2012 The Chromium OS Authors <chromium-os-dev@chromium.org> 5 + * Author: Will Drewry <wad@chromium.org> 6 + * 7 + * The code may be used by anyone for any purpose, 8 + * and can serve as a starting point for developing 9 + * applications using prctl(PR_SET_SECCOMP, 2, ...). 10 + * 11 + * No guarantees are provided with respect to the correctness 12 + * or functionality of this code. 13 + */ 14 + #ifndef __BPF_HELPER_H__ 15 + #define __BPF_HELPER_H__ 16 + 17 + #include <asm/bitsperlong.h> /* for __BITS_PER_LONG */ 18 + #include <endian.h> 19 + #include <linux/filter.h> 20 + #include <linux/seccomp.h> /* for seccomp_data */ 21 + #include <linux/types.h> 22 + #include <linux/unistd.h> 23 + #include <stddef.h> 24 + 25 + #define BPF_LABELS_MAX 256 26 + struct bpf_labels { 27 + int count; 28 + struct __bpf_label { 29 + const char *label; 30 + __u32 location; 31 + } labels[BPF_LABELS_MAX]; 32 + }; 33 + 34 + int bpf_resolve_jumps(struct bpf_labels *labels, 35 + struct sock_filter *filter, size_t count); 36 + __u32 seccomp_bpf_label(struct bpf_labels *labels, const char *label); 37 + void seccomp_bpf_print(struct sock_filter *filter, size_t count); 38 + 39 + #define JUMP_JT 0xff 40 + #define JUMP_JF 0xff 41 + #define LABEL_JT 0xfe 42 + #define LABEL_JF 0xfe 43 + 44 + #define ALLOW \ 45 + BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW) 46 + #define DENY \ 47 + BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_KILL) 48 + #define JUMP(labels, label) \ 49 + BPF_JUMP(BPF_JMP+BPF_JA, FIND_LABEL((labels), (label)), \ 50 + JUMP_JT, JUMP_JF) 51 + #define LABEL(labels, label) \ 52 + BPF_JUMP(BPF_JMP+BPF_JA, FIND_LABEL((labels), (label)), \ 53 + LABEL_JT, LABEL_JF) 54 + #define SYSCALL(nr, jt) \ 55 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (nr), 0, 1), \ 56 + jt 57 + 58 + /* Lame, but just an example */ 59 + #define FIND_LABEL(labels, label) seccomp_bpf_label((labels), #label) 60 + 61 + #define EXPAND(...) 
__VA_ARGS__ 62 + /* Map all width-sensitive operations */ 63 + #if __BITS_PER_LONG == 32 64 + 65 + #define JEQ(x, jt) JEQ32(x, EXPAND(jt)) 66 + #define JNE(x, jt) JNE32(x, EXPAND(jt)) 67 + #define JGT(x, jt) JGT32(x, EXPAND(jt)) 68 + #define JLT(x, jt) JLT32(x, EXPAND(jt)) 69 + #define JGE(x, jt) JGE32(x, EXPAND(jt)) 70 + #define JLE(x, jt) JLE32(x, EXPAND(jt)) 71 + #define JA(x, jt) JA32(x, EXPAND(jt)) 72 + #define ARG(i) ARG_32(i) 73 + #define LO_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) 74 + 75 + #elif __BITS_PER_LONG == 64 76 + 77 + /* Ensure that we load the logically correct offset. */ 78 + #if __BYTE_ORDER == __LITTLE_ENDIAN 79 + #define ENDIAN(_lo, _hi) _lo, _hi 80 + #define LO_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) 81 + #define HI_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) + sizeof(__u32) 82 + #elif __BYTE_ORDER == __BIG_ENDIAN 83 + #define ENDIAN(_lo, _hi) _hi, _lo 84 + #define LO_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) + sizeof(__u32) 85 + #define HI_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) 86 + #else 87 + #error "Unknown endianness" 88 + #endif 89 + 90 + union arg64 { 91 + struct { 92 + __u32 ENDIAN(lo32, hi32); 93 + }; 94 + __u64 u64; 95 + }; 96 + 97 + #define JEQ(x, jt) \ 98 + JEQ64(((union arg64){.u64 = (x)}).lo32, \ 99 + ((union arg64){.u64 = (x)}).hi32, \ 100 + EXPAND(jt)) 101 + #define JGT(x, jt) \ 102 + JGT64(((union arg64){.u64 = (x)}).lo32, \ 103 + ((union arg64){.u64 = (x)}).hi32, \ 104 + EXPAND(jt)) 105 + #define JGE(x, jt) \ 106 + JGE64(((union arg64){.u64 = (x)}).lo32, \ 107 + ((union arg64){.u64 = (x)}).hi32, \ 108 + EXPAND(jt)) 109 + #define JNE(x, jt) \ 110 + JNE64(((union arg64){.u64 = (x)}).lo32, \ 111 + ((union arg64){.u64 = (x)}).hi32, \ 112 + EXPAND(jt)) 113 + #define JLT(x, jt) \ 114 + JLT64(((union arg64){.u64 = (x)}).lo32, \ 115 + ((union arg64){.u64 = (x)}).hi32, \ 116 + EXPAND(jt)) 117 + #define JLE(x, jt) \ 118 + JLE64(((union arg64){.u64 = (x)}).lo32, \ 119 + ((union 
arg64){.u64 = (x)}).hi32, \ 120 + EXPAND(jt)) 121 + 122 + #define JA(x, jt) \ 123 + JA64(((union arg64){.u64 = (x)}).lo32, \ 124 + ((union arg64){.u64 = (x)}).hi32, \ 125 + EXPAND(jt)) 126 + #define ARG(i) ARG_64(i) 127 + 128 + #else 129 + #error __BITS_PER_LONG value unusable. 130 + #endif 131 + 132 + /* Loads the arg into A */ 133 + #define ARG_32(idx) \ 134 + BPF_STMT(BPF_LD+BPF_W+BPF_ABS, LO_ARG(idx)) 135 + 136 + /* Loads hi into A and lo in X */ 137 + #define ARG_64(idx) \ 138 + BPF_STMT(BPF_LD+BPF_W+BPF_ABS, LO_ARG(idx)), \ 139 + BPF_STMT(BPF_ST, 0), /* lo -> M[0] */ \ 140 + BPF_STMT(BPF_LD+BPF_W+BPF_ABS, HI_ARG(idx)), \ 141 + BPF_STMT(BPF_ST, 1) /* hi -> M[1] */ 142 + 143 + #define JEQ32(value, jt) \ 144 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (value), 0, 1), \ 145 + jt 146 + 147 + #define JNE32(value, jt) \ 148 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (value), 1, 0), \ 149 + jt 150 + 151 + /* Checks the lo, then swaps to check the hi. A=lo,X=hi */ 152 + #define JEQ64(lo, hi, jt) \ 153 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \ 154 + BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \ 155 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (lo), 0, 2), \ 156 + BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \ 157 + jt, \ 158 + BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */ 159 + 160 + #define JNE64(lo, hi, jt) \ 161 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 5, 0), \ 162 + BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \ 163 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (lo), 2, 0), \ 164 + BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \ 165 + jt, \ 166 + BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */ 167 + 168 + #define JA32(value, jt) \ 169 + BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (value), 0, 1), \ 170 + jt 171 + 172 + #define JA64(lo, hi, jt) \ 173 + BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (hi), 3, 0), \ 174 + BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \ 175 + BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (lo), 0, 2), \ 176 + BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi 
back in */ \ 177 + jt, \ 178 + BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */ 179 + 180 + #define JGE32(value, jt) \ 181 + BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (value), 0, 1), \ 182 + jt 183 + 184 + #define JLT32(value, jt) \ 185 + BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (value), 1, 0), \ 186 + jt 187 + 188 + /* Shortcut checking if hi > arg.hi. */ 189 + #define JGE64(lo, hi, jt) \ 190 + BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (hi), 4, 0), \ 191 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \ 192 + BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \ 193 + BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (lo), 0, 2), \ 194 + BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \ 195 + jt, \ 196 + BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */ 197 + 198 + #define JLT64(lo, hi, jt) \ 199 + BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (hi), 0, 4), \ 200 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \ 201 + BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \ 202 + BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (lo), 2, 0), \ 203 + BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \ 204 + jt, \ 205 + BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */ 206 + 207 + #define JGT32(value, jt) \ 208 + BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (value), 0, 1), \ 209 + jt 210 + 211 + #define JLE32(value, jt) \ 212 + BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (value), 1, 0), \ 213 + jt 214 + 215 + /* Check hi > args.hi first, then do the GE checking */ 216 + #define JGT64(lo, hi, jt) \ 217 + BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (hi), 4, 0), \ 218 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \ 219 + BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \ 220 + BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (lo), 0, 2), \ 221 + BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \ 222 + jt, \ 223 + BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */ 224 + 225 + #define JLE64(lo, hi, jt) \ 226 + BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (hi), 6, 0), \ 227 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 3), \ 228 + BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ 
\ 229 + BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (lo), 2, 0), \ 230 + BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \ 231 + jt, \ 232 + BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */ 233 + 234 + #define LOAD_SYSCALL_NR \ 235 + BPF_STMT(BPF_LD+BPF_W+BPF_ABS, \ 236 + offsetof(struct seccomp_data, nr)) 237 + 238 + #endif /* __BPF_HELPER_H__ */
+68
samples/seccomp/dropper.c
··· 1 + /* 2 + * Naive system call dropper built on seccomp_filter. 3 + * 4 + * Copyright (c) 2012 The Chromium OS Authors <chromium-os-dev@chromium.org> 5 + * Author: Will Drewry <wad@chromium.org> 6 + * 7 + * The code may be used by anyone for any purpose, 8 + * and can serve as a starting point for developing 9 + * applications using prctl(PR_SET_SECCOMP, 2, ...). 10 + * 11 + * When run, returns the specified errno for the specified 12 + * system call number against the given architecture. 13 + * 14 + * Run this one as root as PR_SET_NO_NEW_PRIVS is not called. 15 + */ 16 + 17 + #include <errno.h> 18 + #include <linux/audit.h> 19 + #include <linux/filter.h> 20 + #include <linux/seccomp.h> 21 + #include <linux/unistd.h> 22 + #include <stdio.h> 23 + #include <stddef.h> 24 + #include <stdlib.h> 25 + #include <sys/prctl.h> 26 + #include <unistd.h> 27 + 28 + static int install_filter(int nr, int arch, int error) 29 + { 30 + struct sock_filter filter[] = { 31 + BPF_STMT(BPF_LD+BPF_W+BPF_ABS, 32 + (offsetof(struct seccomp_data, arch))), 33 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, arch, 0, 3), 34 + BPF_STMT(BPF_LD+BPF_W+BPF_ABS, 35 + (offsetof(struct seccomp_data, nr))), 36 + BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, nr, 0, 1), 37 + BPF_STMT(BPF_RET+BPF_K, 38 + SECCOMP_RET_ERRNO|(error & SECCOMP_RET_DATA)), 39 + BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW), 40 + }; 41 + struct sock_fprog prog = { 42 + .len = (unsigned short)(sizeof(filter)/sizeof(filter[0])), 43 + .filter = filter, 44 + }; 45 + if (prctl(PR_SET_SECCOMP, 2, &prog)) { 46 + perror("prctl"); 47 + return 1; 48 + } 49 + return 0; 50 + } 51 + 52 + int main(int argc, char **argv) 53 + { 54 + if (argc < 5) { 55 + fprintf(stderr, "Usage:\n" 56 + "dropper <syscall_nr> <arch> <errno> <prog> [<args>]\n" 57 + "Hint: AUDIT_ARCH_I386: 0x%X\n" 58 + " AUDIT_ARCH_X86_64: 0x%X\n" 59 + "\n", AUDIT_ARCH_I386, AUDIT_ARCH_X86_64); 60 + return 1; 61 + } 62 + if (install_filter(strtol(argv[1], NULL, 0), strtol(argv[2], NULL, 0), 63 + 
strtol(argv[3], NULL, 0))) 64 + return 1; 65 + execv(argv[4], &argv[4]); 66 + printf("Failed to execv\n"); 67 + return 255; 68 + }
+1 -67
security/Kconfig
··· 4 4 5 5 menu "Security options" 6 6 7 - config KEYS 8 - bool "Enable access key retention support" 9 - help 10 - This option provides support for retaining authentication tokens and 11 - access keys in the kernel. 12 - 13 - It also includes provision of methods by which such keys might be 14 - associated with a process so that network filesystems, encryption 15 - support and the like can find them. 16 - 17 - Furthermore, a special type of key is available that acts as keyring: 18 - a searchable sequence of keys. Each process is equipped with access 19 - to five standard keyrings: UID-specific, GID-specific, session, 20 - process and thread. 21 - 22 - If you are unsure as to whether this is required, answer N. 23 - 24 - config TRUSTED_KEYS 25 - tristate "TRUSTED KEYS" 26 - depends on KEYS && TCG_TPM 27 - select CRYPTO 28 - select CRYPTO_HMAC 29 - select CRYPTO_SHA1 30 - help 31 - This option provides support for creating, sealing, and unsealing 32 - keys in the kernel. Trusted keys are random number symmetric keys, 33 - generated and RSA-sealed by the TPM. The TPM only unseals the keys, 34 - if the boot PCRs and other criteria match. Userspace will only ever 35 - see encrypted blobs. 36 - 37 - If you are unsure as to whether this is required, answer N. 38 - 39 - config ENCRYPTED_KEYS 40 - tristate "ENCRYPTED KEYS" 41 - depends on KEYS 42 - select CRYPTO 43 - select CRYPTO_HMAC 44 - select CRYPTO_AES 45 - select CRYPTO_CBC 46 - select CRYPTO_SHA256 47 - select CRYPTO_RNG 48 - help 49 - This option provides support for create/encrypting/decrypting keys 50 - in the kernel. Encrypted keys are kernel generated random numbers, 51 - which are encrypted/decrypted with a 'master' symmetric key. The 52 - 'master' key can be either a trusted-key or user-key type. 53 - Userspace only ever sees/stores encrypted blobs. 54 - 55 - If you are unsure as to whether this is required, answer N. 
56 - 57 - config KEYS_DEBUG_PROC_KEYS 58 - bool "Enable the /proc/keys file by which keys may be viewed" 59 - depends on KEYS 60 - help 61 - This option turns on support for the /proc/keys file - through which 62 - can be listed all the keys on the system that are viewable by the 63 - reading process. 64 - 65 - The only keys included in the list are those that grant View 66 - permission to the reading process whether or not it possesses them. 67 - Note that LSM security checks are still performed, and may further 68 - filter out keys that the current process is not authorised to view. 69 - 70 - Only key attributes are listed here; key payloads are not included in 71 - the resulting table. 72 - 73 - If you are unsure as to whether this is required, answer N. 7 + source security/keys/Kconfig 74 8 75 9 config SECURITY_DMESG_RESTRICT 76 10 bool "Restrict unprivileged access to the kernel syslog"
+9 -2
security/apparmor/audit.c
··· 111 111 static void audit_pre(struct audit_buffer *ab, void *ca) 112 112 { 113 113 struct common_audit_data *sa = ca; 114 - struct task_struct *tsk = sa->tsk ? sa->tsk : current; 114 + struct task_struct *tsk = sa->aad->tsk ? sa->aad->tsk : current; 115 115 116 116 if (aa_g_audit_header) { 117 117 audit_log_format(ab, "apparmor="); ··· 149 149 audit_log_format(ab, " name="); 150 150 audit_log_untrustedstring(ab, sa->aad->name); 151 151 } 152 + 153 + if (sa->aad->tsk) { 154 + audit_log_format(ab, " pid=%d comm=", tsk->pid); 155 + audit_log_untrustedstring(ab, tsk->comm); 156 + } 157 + 152 158 } 153 159 154 160 /** ··· 211 205 aa_audit_msg(type, sa, cb); 212 206 213 207 if (sa->aad->type == AUDIT_APPARMOR_KILL) 214 - (void)send_sig_info(SIGKILL, NULL, sa->tsk ? sa->tsk : current); 208 + (void)send_sig_info(SIGKILL, NULL, 209 + sa->aad->tsk ? sa->aad->tsk : current); 215 210 216 211 if (sa->aad->type == AUDIT_APPARMOR_ALLOWED) 217 212 return complain_error(sa->aad->error);
+2 -2
security/apparmor/capability.c
··· 65 65 int type = AUDIT_APPARMOR_AUTO; 66 66 struct common_audit_data sa; 67 67 struct apparmor_audit_data aad = {0,}; 68 - COMMON_AUDIT_DATA_INIT(&sa, CAP); 68 + sa.type = LSM_AUDIT_DATA_CAP; 69 69 sa.aad = &aad; 70 - sa.tsk = task; 71 70 sa.u.cap = cap; 71 + sa.aad->tsk = task; 72 72 sa.aad->op = OP_CAPABLE; 73 73 sa.aad->error = error; 74 74
+35
security/apparmor/domain.c
··· 394 394 new_profile = find_attach(ns, &ns->base.profiles, name); 395 395 if (!new_profile) 396 396 goto cleanup; 397 + /* 398 + * NOTE: Domain transitions from unconfined are allowed 399 + * even when no_new_privs is set because this always results 400 + * in a further reduction of permissions. 401 + */ 397 402 goto apply; 398 403 } 399 404 ··· 459 454 } else 460 455 /* fail exec */ 461 456 error = -EACCES; 457 + 458 + /* 459 + * Policy has specified a domain transition; if no_new_privs then 460 + * fail the exec. 461 + */ 462 + if (bprm->unsafe & LSM_UNSAFE_NO_NEW_PRIVS) { 463 + aa_put_profile(new_profile); 464 + error = -EPERM; 465 + goto cleanup; 466 + } 462 467 463 468 if (!new_profile) 464 469 goto audit; ··· 624 609 const char *target = NULL, *info = NULL; 625 610 int error = 0; 626 611 612 + /* 613 + * Fail explicitly requested domain transitions if no_new_privs. 614 + * There is no exception for unconfined as change_hat is not 615 + * available. 616 + */ 617 + if (current->no_new_privs) 618 + return -EPERM; 619 + 627 620 /* released below */ 628 621 cred = get_current_cred(); 629 622 cxt = cred->security; ··· 772 749 cred = get_current_cred(); 773 750 cxt = cred->security; 774 751 profile = aa_cred_profile(cred); 752 + 753 + /* 754 + * Fail explicitly requested domain transitions if no_new_privs 755 + * and not unconfined. 756 + * Domain transitions from unconfined are allowed even when 757 + * no_new_privs is set because this always results in a reduction 758 + * of permissions. 759 + */ 760 + if (current->no_new_privs && !unconfined(profile)) { 761 + put_cred(cred); 762 + return -EPERM; 763 + } 775 764 776 765 if (ns_name) { 777 766 /* released below */
+1 -1
security/apparmor/file.c
··· 108 108 int type = AUDIT_APPARMOR_AUTO; 109 109 struct common_audit_data sa; 110 110 struct apparmor_audit_data aad = {0,}; 111 - COMMON_AUDIT_DATA_INIT(&sa, NONE); 111 + sa.type = LSM_AUDIT_DATA_NONE; 112 112 sa.aad = &aad; 113 113 aad.op = op, 114 114 aad.fs.request = request;
+1
security/apparmor/include/audit.h
··· 110 110 void *profile; 111 111 const char *name; 112 112 const char *info; 113 + struct task_struct *tsk; 113 114 union { 114 115 void *target; 115 116 struct {
+1 -1
security/apparmor/ipc.c
··· 42 42 { 43 43 struct common_audit_data sa; 44 44 struct apparmor_audit_data aad = {0,}; 45 - COMMON_AUDIT_DATA_INIT(&sa, NONE); 45 + sa.type = LSM_AUDIT_DATA_NONE; 46 46 sa.aad = &aad; 47 47 aad.op = OP_PTRACE; 48 48 aad.target = target;
+1 -1
security/apparmor/lib.c
··· 66 66 if (audit_enabled) { 67 67 struct common_audit_data sa; 68 68 struct apparmor_audit_data aad = {0,}; 69 - COMMON_AUDIT_DATA_INIT(&sa, NONE); 69 + sa.type = LSM_AUDIT_DATA_NONE; 70 70 sa.aad = &aad; 71 71 aad.info = str; 72 72 aa_audit_msg(AUDIT_APPARMOR_STATUS, &sa, NULL);
+3 -3
security/apparmor/lsm.c
··· 373 373 AA_MAY_META_READ); 374 374 } 375 375 376 - static int apparmor_dentry_open(struct file *file, const struct cred *cred) 376 + static int apparmor_file_open(struct file *file, const struct cred *cred) 377 377 { 378 378 struct aa_file_cxt *fcxt = file->f_security; 379 379 struct aa_profile *profile; ··· 589 589 } else { 590 590 struct common_audit_data sa; 591 591 struct apparmor_audit_data aad = {0,}; 592 - COMMON_AUDIT_DATA_INIT(&sa, NONE); 592 + sa.type = LSM_AUDIT_DATA_NONE; 593 593 sa.aad = &aad; 594 594 aad.op = OP_SETPROCATTR; 595 595 aad.info = name; ··· 640 640 .path_chmod = apparmor_path_chmod, 641 641 .path_chown = apparmor_path_chown, 642 642 .path_truncate = apparmor_path_truncate, 643 - .dentry_open = apparmor_dentry_open, 644 643 .inode_getattr = apparmor_inode_getattr, 645 644 645 + .file_open = apparmor_file_open, 646 646 .file_permission = apparmor_file_permission, 647 647 .file_alloc_security = apparmor_file_alloc_security, 648 648 .file_free_security = apparmor_file_free_security,
+2
security/apparmor/path.c
··· 94 94 * be returned. 95 95 */ 96 96 if (!res || IS_ERR(res)) { 97 + if (PTR_ERR(res) == -ENAMETOOLONG) 98 + return -ENAMETOOLONG; 97 99 connected = 0; 98 100 res = dentry_path_raw(path->dentry, buf, buflen); 99 101 if (IS_ERR(res)) {
+5 -1
security/apparmor/policy.c
··· 903 903 profile = aa_get_profile(__lookup_profile(&ns->base, hname)); 904 904 read_unlock(&ns->lock); 905 905 906 + /* the unconfined profile is not in the regular profile list */ 907 + if (!profile && strcmp(hname, "unconfined") == 0) 908 + profile = aa_get_profile(ns->unconfined); 909 + 906 910 /* refcount released by caller */ 907 911 return profile; 908 912 } ··· 969 965 { 970 966 struct common_audit_data sa; 971 967 struct apparmor_audit_data aad = {0,}; 972 - COMMON_AUDIT_DATA_INIT(&sa, NONE); 968 + sa.type = LSM_AUDIT_DATA_NONE; 973 969 sa.aad = &aad; 974 970 aad.op = op; 975 971 aad.name = name;
+1 -1
security/apparmor/policy_unpack.c
··· 95 95 struct aa_profile *profile = __aa_current_profile(); 96 96 struct common_audit_data sa; 97 97 struct apparmor_audit_data aad = {0,}; 98 - COMMON_AUDIT_DATA_INIT(&sa, NONE); 98 + sa.type = LSM_AUDIT_DATA_NONE; 99 99 sa.aad = &aad; 100 100 if (e) 101 101 aad.iface.pos = e->pos - e->start;
+1 -1
security/apparmor/resource.c
··· 52 52 struct common_audit_data sa; 53 53 struct apparmor_audit_data aad = {0,}; 54 54 55 - COMMON_AUDIT_DATA_INIT(&sa, NONE); 55 + sa.type = LSM_AUDIT_DATA_NONE; 56 56 sa.aad = &aad; 57 57 aad.op = OP_SETRLIMIT, 58 58 aad.rlim.rlim = resource;
+2 -2
security/capability.c
··· 348 348 return 0; 349 349 } 350 350 351 - static int cap_dentry_open(struct file *file, const struct cred *cred) 351 + static int cap_file_open(struct file *file, const struct cred *cred) 352 352 { 353 353 return 0; 354 354 } ··· 956 956 set_to_cap_if_null(ops, file_set_fowner); 957 957 set_to_cap_if_null(ops, file_send_sigiotask); 958 958 set_to_cap_if_null(ops, file_receive); 959 - set_to_cap_if_null(ops, dentry_open); 959 + set_to_cap_if_null(ops, file_open); 960 960 set_to_cap_if_null(ops, task_create); 961 961 set_to_cap_if_null(ops, task_free); 962 962 set_to_cap_if_null(ops, cred_alloc_blank);
+5 -2
security/commoncap.c
··· 512 512 513 513 514 514 /* Don't let someone trace a set[ug]id/setpcap binary with the revised 515 - * credentials unless they have the appropriate permit 515 + * credentials unless they have the appropriate permit. 516 + * 517 + * In addition, if NO_NEW_PRIVS, then ensure we get no new privs. 516 518 */ 517 519 if ((new->euid != old->uid || 518 520 new->egid != old->gid || 519 521 !cap_issubset(new->cap_permitted, old->cap_permitted)) && 520 522 bprm->unsafe & ~LSM_UNSAFE_PTRACE_CAP) { 521 523 /* downgrade; they get no more than they had, and maybe less */ 522 - if (!capable(CAP_SETUID)) { 524 + if (!capable(CAP_SETUID) || 525 + (bprm->unsafe & LSM_UNSAFE_NO_NEW_PRIVS)) { 523 526 new->euid = new->uid; 524 527 new->egid = new->gid; 525 528 }
+3 -1
security/integrity/ima/ima_main.c
··· 194 194 { 195 195 int rc; 196 196 197 - rc = process_measurement(bprm->file, bprm->filename, 197 + rc = process_measurement(bprm->file, 198 + (strcmp(bprm->filename, bprm->interp) == 0) ? 199 + bprm->filename : bprm->interp, 198 200 MAY_EXEC, BPRM_CHECK); 199 201 return 0; 200 202 }
+71
security/keys/Kconfig
··· 1 + # 2 + # Key management configuration 3 + # 4 + 5 + config KEYS 6 + bool "Enable access key retention support" 7 + help 8 + This option provides support for retaining authentication tokens and 9 + access keys in the kernel. 10 + 11 + It also includes provision of methods by which such keys might be 12 + associated with a process so that network filesystems, encryption 13 + support and the like can find them. 14 + 15 + Furthermore, a special type of key is available that acts as keyring: 16 + a searchable sequence of keys. Each process is equipped with access 17 + to five standard keyrings: UID-specific, GID-specific, session, 18 + process and thread. 19 + 20 + If you are unsure as to whether this is required, answer N. 21 + 22 + config TRUSTED_KEYS 23 + tristate "TRUSTED KEYS" 24 + depends on KEYS && TCG_TPM 25 + select CRYPTO 26 + select CRYPTO_HMAC 27 + select CRYPTO_SHA1 28 + help 29 + This option provides support for creating, sealing, and unsealing 30 + keys in the kernel. Trusted keys are random number symmetric keys, 31 + generated and RSA-sealed by the TPM. The TPM only unseals the keys, 32 + if the boot PCRs and other criteria match. Userspace will only ever 33 + see encrypted blobs. 34 + 35 + If you are unsure as to whether this is required, answer N. 36 + 37 + config ENCRYPTED_KEYS 38 + tristate "ENCRYPTED KEYS" 39 + depends on KEYS 40 + select CRYPTO 41 + select CRYPTO_HMAC 42 + select CRYPTO_AES 43 + select CRYPTO_CBC 44 + select CRYPTO_SHA256 45 + select CRYPTO_RNG 46 + help 47 + This option provides support for create/encrypting/decrypting keys 48 + in the kernel. Encrypted keys are kernel generated random numbers, 49 + which are encrypted/decrypted with a 'master' symmetric key. The 50 + 'master' key can be either a trusted-key or user-key type. 51 + Userspace only ever sees/stores encrypted blobs. 52 + 53 + If you are unsure as to whether this is required, answer N. 
54 + 55 + config KEYS_DEBUG_PROC_KEYS 56 + bool "Enable the /proc/keys file by which keys may be viewed" 57 + depends on KEYS 58 + help 59 + This option turns on support for the /proc/keys file - through which 60 + can be listed all the keys on the system that are viewable by the 61 + reading process. 62 + 63 + The only keys included in the list are those that grant View 64 + permission to the reading process whether or not it possesses them. 65 + Note that LSM security checks are still performed, and may further 66 + filter out keys that the current process is not authorised to view. 67 + 68 + Only key attributes are listed here; key payloads are not included in 69 + the resulting table. 70 + 71 + If you are unsure as to whether this is required, answer N.
+9 -3
security/keys/Makefile
··· 2 2 # Makefile for key management 3 3 # 4 4 5 + # 6 + # Core 7 + # 5 8 obj-y := \ 6 9 gc.o \ 7 10 key.o \ ··· 15 12 request_key.o \ 16 13 request_key_auth.o \ 17 14 user_defined.o 18 - 19 - obj-$(CONFIG_TRUSTED_KEYS) += trusted.o 20 - obj-$(CONFIG_ENCRYPTED_KEYS) += encrypted-keys/ 21 15 obj-$(CONFIG_KEYS_COMPAT) += compat.o 22 16 obj-$(CONFIG_PROC_FS) += proc.o 23 17 obj-$(CONFIG_SYSCTL) += sysctl.o 18 + 19 + # 20 + # Key types 21 + # 22 + obj-$(CONFIG_TRUSTED_KEYS) += trusted.o 23 + obj-$(CONFIG_ENCRYPTED_KEYS) += encrypted-keys/
+3
security/keys/compat.c
··· 135 135 return compat_keyctl_instantiate_key_iov( 136 136 arg2, compat_ptr(arg3), arg4, arg5); 137 137 138 + case KEYCTL_INVALIDATE: 139 + return keyctl_invalidate_key(arg2); 140 + 138 141 default: 139 142 return -EOPNOTSUPP; 140 143 }
+56 -34
security/keys/gc.c
··· 72 72 } 73 73 74 74 /* 75 + * Schedule a dead links collection run. 76 + */ 77 + void key_schedule_gc_links(void) 78 + { 79 + set_bit(KEY_GC_KEY_EXPIRED, &key_gc_flags); 80 + queue_work(system_nrt_wq, &key_gc_work); 81 + } 82 + 83 + /* 75 84 * Some key's cleanup time was met after it expired, so we need to get the 76 85 * reaper to go through a cycle finding expired keys. 77 86 */ ··· 88 79 { 89 80 kenter(""); 90 81 key_gc_next_run = LONG_MAX; 91 - set_bit(KEY_GC_KEY_EXPIRED, &key_gc_flags); 92 - queue_work(system_nrt_wq, &key_gc_work); 82 + key_schedule_gc_links(); 93 83 } 94 84 95 85 /* ··· 139 131 static void key_gc_keyring(struct key *keyring, time_t limit) 140 132 { 141 133 struct keyring_list *klist; 142 - struct key *key; 143 134 int loop; 144 135 145 136 kenter("%x", key_serial(keyring)); 146 137 147 - if (test_bit(KEY_FLAG_REVOKED, &keyring->flags)) 138 + if (keyring->flags & ((1 << KEY_FLAG_INVALIDATED) | 139 + (1 << KEY_FLAG_REVOKED))) 148 140 goto dont_gc; 149 141 150 142 /* scan the keyring looking for dead keys */ ··· 156 148 loop = klist->nkeys; 157 149 smp_rmb(); 158 150 for (loop--; loop >= 0; loop--) { 159 - key = klist->keys[loop]; 160 - if (test_bit(KEY_FLAG_DEAD, &key->flags) || 161 - (key->expiry > 0 && key->expiry <= limit)) 151 + struct key *key = rcu_dereference(klist->keys[loop]); 152 + if (key_is_dead(key, limit)) 162 153 goto do_gc; 163 154 } 164 155 ··· 175 168 } 176 169 177 170 /* 178 - * Garbage collect an unreferenced, detached key 171 + * Garbage collect a list of unreferenced, detached keys 179 172 */ 180 - static noinline void key_gc_unused_key(struct key *key) 173 + static noinline void key_gc_unused_keys(struct list_head *keys) 181 174 { 182 - key_check(key); 175 + while (!list_empty(keys)) { 176 + struct key *key = 177 + list_entry(keys->next, struct key, graveyard_link); 178 + list_del(&key->graveyard_link); 183 179 184 - security_key_free(key); 180 + kdebug("- %u", key->serial); 181 + key_check(key); 185 182 186 - /* deal 
with the user's key tracking and quota */ 187 - if (test_bit(KEY_FLAG_IN_QUOTA, &key->flags)) { 188 - spin_lock(&key->user->lock); 189 - key->user->qnkeys--; 190 - key->user->qnbytes -= key->quotalen; 191 - spin_unlock(&key->user->lock); 192 - } 183 + security_key_free(key); 193 184 194 - atomic_dec(&key->user->nkeys); 195 - if (test_bit(KEY_FLAG_INSTANTIATED, &key->flags)) 196 - atomic_dec(&key->user->nikeys); 185 + /* deal with the user's key tracking and quota */ 186 + if (test_bit(KEY_FLAG_IN_QUOTA, &key->flags)) { 187 + spin_lock(&key->user->lock); 188 + key->user->qnkeys--; 189 + key->user->qnbytes -= key->quotalen; 190 + spin_unlock(&key->user->lock); 191 + } 197 192 198 - key_user_put(key->user); 193 + atomic_dec(&key->user->nkeys); 194 + if (test_bit(KEY_FLAG_INSTANTIATED, &key->flags)) 195 + atomic_dec(&key->user->nikeys); 199 196 200 - /* now throw away the key memory */ 201 - if (key->type->destroy) 202 - key->type->destroy(key); 197 + key_user_put(key->user); 203 198 204 - kfree(key->description); 199 + /* now throw away the key memory */ 200 + if (key->type->destroy) 201 + key->type->destroy(key); 202 + 203 + kfree(key->description); 205 204 206 205 #ifdef KEY_DEBUGGING 207 - key->magic = KEY_DEBUG_MAGIC_X; 206 + key->magic = KEY_DEBUG_MAGIC_X; 208 207 #endif 209 - kmem_cache_free(key_jar, key); 208 + kmem_cache_free(key_jar, key); 209 + } 210 210 } 211 211 212 212 /* ··· 225 211 */ 226 212 static void key_garbage_collector(struct work_struct *work) 227 213 { 214 + static LIST_HEAD(graveyard); 228 215 static u8 gc_state; /* Internal persistent state */ 229 216 #define KEY_GC_REAP_AGAIN 0x01 /* - Need another cycle */ 230 217 #define KEY_GC_REAPING_LINKS 0x02 /* - We need to reap links */ ··· 331 316 key_schedule_gc(new_timer); 332 317 } 333 318 334 - if (unlikely(gc_state & KEY_GC_REAPING_DEAD_2)) { 335 - /* Make sure everyone revalidates their keys if we marked a 336 - * bunch as being dead and make sure all keyring ex-payloads 337 - * are destroyed. 
319 + if (unlikely(gc_state & KEY_GC_REAPING_DEAD_2) || 320 + !list_empty(&graveyard)) { 321 + /* Make sure that all pending keyring payload destructions are 322 + * fulfilled and that people aren't now looking at dead or 323 + * dying keys that they don't have a reference upon or a link 324 + * to. 338 325 */ 339 - kdebug("dead sync"); 326 + kdebug("gc sync"); 340 327 synchronize_rcu(); 328 + } 329 + 330 + if (!list_empty(&graveyard)) { 331 + kdebug("gc keys"); 332 + key_gc_unused_keys(&graveyard); 341 333 } 342 334 343 335 if (unlikely(gc_state & (KEY_GC_REAPING_DEAD_1 | ··· 381 359 rb_erase(&key->serial_node, &key_serial_tree); 382 360 spin_unlock(&key_serial_lock); 383 361 384 - key_gc_unused_key(key); 362 + list_add_tail(&key->graveyard_link, &graveyard); 385 363 gc_state |= KEY_GC_REAP_AGAIN; 386 364 goto maybe_resched; 387 365
+14 -1
security/keys/internal.h
··· 152 152 extern struct work_struct key_gc_work; 153 153 extern unsigned key_gc_delay; 154 154 extern void keyring_gc(struct key *keyring, time_t limit); 155 - extern void key_schedule_gc(time_t expiry_at); 155 + extern void key_schedule_gc(time_t gc_at); 156 + extern void key_schedule_gc_links(void); 156 157 extern void key_gc_keytype(struct key_type *ktype); 157 158 158 159 extern int key_task_permission(const key_ref_t key_ref, ··· 198 197 extern struct key *key_get_instantiation_authkey(key_serial_t target_id); 199 198 200 199 /* 200 + * Determine whether a key is dead. 201 + */ 202 + static inline bool key_is_dead(struct key *key, time_t limit) 203 + { 204 + return 205 + key->flags & ((1 << KEY_FLAG_DEAD) | 206 + (1 << KEY_FLAG_INVALIDATED)) || 207 + (key->expiry > 0 && key->expiry <= limit); 208 + } 209 + 210 + /* 201 211 * keyctl() functions 202 212 */ 203 213 extern long keyctl_get_keyring_ID(key_serial_t, int); ··· 237 225 extern long keyctl_instantiate_key_iov(key_serial_t, 238 226 const struct iovec __user *, 239 227 unsigned, key_serial_t); 228 + extern long keyctl_invalidate_key(key_serial_t); 240 229 241 230 extern long keyctl_instantiate_key_common(key_serial_t, 242 231 const struct iovec __user *,
+25
security/keys/key.c
··· 955 955 EXPORT_SYMBOL(key_revoke); 956 956 957 957 /** 958 + * key_invalidate - Invalidate a key. 959 + * @key: The key to be invalidated. 960 + * 961 + * Mark a key as being invalidated and have it cleaned up immediately. The key 962 + * is ignored by all searches and other operations from this point. 963 + */ 964 + void key_invalidate(struct key *key) 965 + { 966 + kenter("%d", key_serial(key)); 967 + 968 + key_check(key); 969 + 970 + if (!test_bit(KEY_FLAG_INVALIDATED, &key->flags)) { 971 + down_write_nested(&key->sem, 1); 972 + if (!test_and_set_bit(KEY_FLAG_INVALIDATED, &key->flags)) 973 + key_schedule_gc_links(); 974 + up_write(&key->sem); 975 + } 976 + } 977 + EXPORT_SYMBOL(key_invalidate); 978 + 979 + /** 958 980 * register_key_type - Register a type of key. 959 981 * @ktype: The new key type. 960 982 * ··· 1002 980 1003 981 /* store the type */ 1004 982 list_add(&ktype->link, &key_types_list); 983 + 984 + pr_notice("Key type %s registered\n", ktype->name); 1005 985 ret = 0; 1006 986 1007 987 out: ··· 1026 1002 list_del_init(&ktype->link); 1027 1003 downgrade_write(&key_types_sem); 1028 1004 key_gc_keytype(ktype); 1005 + pr_notice("Key type %s unregistered\n", ktype->name); 1029 1006 up_read(&key_types_sem); 1030 1007 } 1031 1008 EXPORT_SYMBOL(unregister_key_type);
+34
security/keys/keyctl.c
··· 375 375 } 376 376 377 377 /* 378 + * Invalidate a key. 379 + * 380 + * The key must grant the caller Invalidate permission for this to work. 381 + * The key and any links to the key will be automatically garbage collected 382 + * immediately. 383 + * 384 + * If successful, 0 is returned. 385 + */ 386 + long keyctl_invalidate_key(key_serial_t id) 387 + { 388 + key_ref_t key_ref; 389 + long ret; 390 + 391 + kenter("%d", id); 392 + 393 + key_ref = lookup_user_key(id, 0, KEY_SEARCH); 394 + if (IS_ERR(key_ref)) { 395 + ret = PTR_ERR(key_ref); 396 + goto error; 397 + } 398 + 399 + key_invalidate(key_ref_to_ptr(key_ref)); 400 + ret = 0; 401 + 402 + key_ref_put(key_ref); 403 + error: 404 + kleave(" = %ld", ret); 405 + return ret; 406 + } 407 + 408 + /* 378 409 * Clear the specified keyring, creating an empty process keyring if one of the 379 410 * special keyring IDs is used. 380 411 * ··· 1652 1621 (const struct iovec __user *) arg3, 1653 1622 (unsigned) arg4, 1654 1623 (key_serial_t) arg5); 1624 + 1625 + case KEYCTL_INVALIDATE: 1626 + return keyctl_invalidate_key((key_serial_t) arg2); 1655 1627 1656 1628 default: 1657 1629 return -EOPNOTSUPP;
+107 -60
security/keys/keyring.c
··· 25 25 (keyring)->payload.subscriptions, \ 26 26 rwsem_is_locked((struct rw_semaphore *)&(keyring)->sem))) 27 27 28 + #define rcu_deref_link_locked(klist, index, keyring) \ 29 + (rcu_dereference_protected( \ 30 + (klist)->keys[index], \ 31 + rwsem_is_locked((struct rw_semaphore *)&(keyring)->sem))) 32 + 33 + #define MAX_KEYRING_LINKS \ 34 + min_t(size_t, USHRT_MAX - 1, \ 35 + ((PAGE_SIZE - sizeof(struct keyring_list)) / sizeof(struct key *))) 36 + 28 37 #define KEY_LINK_FIXQUOTA 1UL 29 38 30 39 /* ··· 147 138 /* 148 139 * Clean up a keyring when it is destroyed. Unpublish its name if it had one 149 140 * and dispose of its data. 141 + * 142 + * The garbage collector detects the final key_put(), removes the keyring from 143 + * the serial number tree and then does RCU synchronisation before coming here, 144 + * so we shouldn't need to worry about code poking around here with the RCU 145 + * readlock held by this time. 150 146 */ 151 147 static void keyring_destroy(struct key *keyring) 152 148 { ··· 168 154 write_unlock(&keyring_name_lock); 169 155 } 170 156 171 - klist = rcu_dereference_check(keyring->payload.subscriptions, 172 - atomic_read(&keyring->usage) == 0); 157 + klist = rcu_access_pointer(keyring->payload.subscriptions); 173 158 if (klist) { 174 159 for (loop = klist->nkeys - 1; loop >= 0; loop--) 175 - key_put(klist->keys[loop]); 160 + key_put(rcu_access_pointer(klist->keys[loop])); 176 161 kfree(klist); 177 162 } 178 163 } ··· 227 214 ret = -EFAULT; 228 215 229 216 for (loop = 0; loop < klist->nkeys; loop++) { 230 - key = klist->keys[loop]; 217 + key = rcu_deref_link_locked(klist, loop, 218 + keyring); 231 219 232 220 tmp = sizeof(key_serial_t); 233 221 if (tmp > buflen) ··· 323 309 bool no_state_check) 324 310 { 325 311 struct { 312 + /* Need a separate keylist pointer for RCU purposes */ 313 + struct key *keyring; 326 314 struct keyring_list *keylist; 327 315 int kix; 328 316 } stack[KEYRING_SEARCH_MAX_DEPTH]; ··· 382 366 /* otherwise, the top 
keyring must not be revoked, expired, or 383 367 * negatively instantiated if we are to search it */ 384 368 key_ref = ERR_PTR(-EAGAIN); 385 - if (kflags & ((1 << KEY_FLAG_REVOKED) | (1 << KEY_FLAG_NEGATIVE)) || 369 + if (kflags & ((1 << KEY_FLAG_INVALIDATED) | 370 + (1 << KEY_FLAG_REVOKED) | 371 + (1 << KEY_FLAG_NEGATIVE)) || 386 372 (keyring->expiry && now.tv_sec >= keyring->expiry)) 387 373 goto error_2; 388 374 389 375 /* start processing a new keyring */ 390 376 descend: 391 - if (test_bit(KEY_FLAG_REVOKED, &keyring->flags)) 377 + kflags = keyring->flags; 378 + if (kflags & ((1 << KEY_FLAG_INVALIDATED) | 379 + (1 << KEY_FLAG_REVOKED))) 392 380 goto not_this_keyring; 393 381 394 382 keylist = rcu_dereference(keyring->payload.subscriptions); ··· 403 383 nkeys = keylist->nkeys; 404 384 smp_rmb(); 405 385 for (kix = 0; kix < nkeys; kix++) { 406 - key = keylist->keys[kix]; 386 + key = rcu_dereference(keylist->keys[kix]); 407 387 kflags = key->flags; 408 388 409 389 /* ignore keys not of this type */ 410 390 if (key->type != type) 411 391 continue; 412 392 413 - /* skip revoked keys and expired keys */ 393 + /* skip invalidated, revoked and expired keys */ 414 394 if (!no_state_check) { 415 - if (kflags & (1 << KEY_FLAG_REVOKED)) 395 + if (kflags & ((1 << KEY_FLAG_INVALIDATED) | 396 + (1 << KEY_FLAG_REVOKED))) 416 397 continue; 417 398 418 399 if (key->expiry && now.tv_sec >= key->expiry) ··· 447 426 nkeys = keylist->nkeys; 448 427 smp_rmb(); 449 428 for (; kix < nkeys; kix++) { 450 - key = keylist->keys[kix]; 429 + key = rcu_dereference(keylist->keys[kix]); 451 430 if (key->type != &key_type_keyring) 452 431 continue; 453 432 ··· 462 441 continue; 463 442 464 443 /* stack the current position */ 444 + stack[sp].keyring = keyring; 465 445 stack[sp].keylist = keylist; 466 446 stack[sp].kix = kix; 467 447 sp++; ··· 478 456 if (sp > 0) { 479 457 /* resume the processing of a keyring higher up in the tree */ 480 458 sp--; 459 + keyring = stack[sp].keyring; 481 460 
keylist = stack[sp].keylist; 482 461 kix = stack[sp].kix + 1; 483 462 goto ascend; ··· 490 467 /* we found a viable match */ 491 468 found: 492 469 atomic_inc(&key->usage); 470 + key->last_used_at = now.tv_sec; 471 + keyring->last_used_at = now.tv_sec; 472 + while (sp > 0) 473 + stack[--sp].keyring->last_used_at = now.tv_sec; 493 474 key_check(key); 494 475 key_ref = make_key_ref(key, possessed); 495 476 error_2: ··· 558 531 nkeys = klist->nkeys; 559 532 smp_rmb(); 560 533 for (loop = 0; loop < nkeys ; loop++) { 561 - key = klist->keys[loop]; 562 - 534 + key = rcu_dereference(klist->keys[loop]); 563 535 if (key->type == ktype && 564 536 (!key->type->match || 565 537 key->type->match(key, description)) && 566 538 key_permission(make_key_ref(key, possessed), 567 539 perm) == 0 && 568 - !test_bit(KEY_FLAG_REVOKED, &key->flags) 540 + !(key->flags & ((1 << KEY_FLAG_INVALIDATED) | 541 + (1 << KEY_FLAG_REVOKED))) 569 542 ) 570 543 goto found; 571 544 } ··· 576 549 577 550 found: 578 551 atomic_inc(&key->usage); 552 + keyring->last_used_at = key->last_used_at = 553 + current_kernel_time().tv_sec; 579 554 rcu_read_unlock(); 580 555 return make_key_ref(key, possessed); 581 556 } ··· 631 602 * (ie. 
it has a zero usage count) */ 632 603 if (!atomic_inc_not_zero(&keyring->usage)) 633 604 continue; 605 + keyring->last_used_at = current_kernel_time().tv_sec; 634 606 goto out; 635 607 } 636 608 } ··· 684 654 nkeys = keylist->nkeys; 685 655 smp_rmb(); 686 656 for (; kix < nkeys; kix++) { 687 - key = keylist->keys[kix]; 657 + key = rcu_dereference(keylist->keys[kix]); 688 658 689 659 if (key == A) 690 660 goto cycle_detected; ··· 741 711 container_of(rcu, struct keyring_list, rcu); 742 712 743 713 if (klist->delkey != USHRT_MAX) 744 - key_put(klist->keys[klist->delkey]); 714 + key_put(rcu_access_pointer(klist->keys[klist->delkey])); 745 715 kfree(klist); 746 716 } 747 717 ··· 755 725 struct keyring_list *klist, *nklist; 756 726 unsigned long prealloc; 757 727 unsigned max; 728 + time_t lowest_lru; 758 729 size_t size; 759 - int loop, ret; 730 + int loop, lru, ret; 760 731 761 732 kenter("%d,%s,%s,", key_serial(keyring), type->name, description); 762 733 ··· 778 747 klist = rcu_dereference_locked_keyring(keyring); 779 748 780 749 /* see if there's a matching key we can displace */ 750 + lru = -1; 781 751 if (klist && klist->nkeys > 0) { 752 + lowest_lru = TIME_T_MAX; 782 753 for (loop = klist->nkeys - 1; loop >= 0; loop--) { 783 - if (klist->keys[loop]->type == type && 784 - strcmp(klist->keys[loop]->description, 785 - description) == 0 786 - ) { 787 - /* found a match - we'll replace this one with 788 - * the new key */ 789 - size = sizeof(struct key *) * klist->maxkeys; 790 - size += sizeof(*klist); 791 - BUG_ON(size > PAGE_SIZE); 792 - 793 - ret = -ENOMEM; 794 - nklist = kmemdup(klist, size, GFP_KERNEL); 795 - if (!nklist) 796 - goto error_sem; 797 - 798 - /* note replacement slot */ 799 - klist->delkey = nklist->delkey = loop; 800 - prealloc = (unsigned long)nklist; 754 + struct key *key = rcu_deref_link_locked(klist, loop, 755 + keyring); 756 + if (key->type == type && 757 + strcmp(key->description, description) == 0) { 758 + /* Found a match - we'll replace the 
link with 759 + * one to the new key. We record the slot 760 + * position. 761 + */ 762 + klist->delkey = loop; 763 + prealloc = 0; 801 764 goto done; 802 765 } 766 + if (key->last_used_at < lowest_lru) { 767 + lowest_lru = key->last_used_at; 768 + lru = loop; 769 + } 803 770 } 771 + } 772 + 773 + /* If the keyring is full then do an LRU discard */ 774 + if (klist && 775 + klist->nkeys == klist->maxkeys && 776 + klist->maxkeys >= MAX_KEYRING_LINKS) { 777 + kdebug("LRU discard %d\n", lru); 778 + klist->delkey = lru; 779 + prealloc = 0; 780 + goto done; 804 781 } 805 782 806 783 /* check that we aren't going to overrun the user's quota */ ··· 819 780 820 781 if (klist && klist->nkeys < klist->maxkeys) { 821 782 /* there's sufficient slack space to append directly */ 822 - nklist = NULL; 783 + klist->delkey = klist->nkeys; 823 784 prealloc = KEY_LINK_FIXQUOTA; 824 785 } else { 825 786 /* grow the key list */ 826 787 max = 4; 827 - if (klist) 788 + if (klist) { 828 789 max += klist->maxkeys; 790 + if (max > MAX_KEYRING_LINKS) 791 + max = MAX_KEYRING_LINKS; 792 + BUG_ON(max <= klist->maxkeys); 793 + } 829 794 830 - ret = -ENFILE; 831 - if (max > USHRT_MAX - 1) 832 - goto error_quota; 833 795 size = sizeof(*klist) + sizeof(struct key *) * max; 834 - if (size > PAGE_SIZE) 835 - goto error_quota; 836 796 837 797 ret = -ENOMEM; 838 798 nklist = kmalloc(size, GFP_KERNEL); ··· 851 813 } 852 814 853 815 /* add the key into the new space */ 854 - nklist->keys[nklist->delkey] = NULL; 816 + RCU_INIT_POINTER(nklist->keys[nklist->delkey], NULL); 817 + prealloc = (unsigned long)nklist | KEY_LINK_FIXQUOTA; 855 818 } 856 819 857 - prealloc = (unsigned long)nklist | KEY_LINK_FIXQUOTA; 858 820 done: 859 821 *_prealloc = prealloc; 860 822 kleave(" = 0"); ··· 900 862 unsigned long *_prealloc) 901 863 { 902 864 struct keyring_list *klist, *nklist; 865 + struct key *discard; 903 866 904 867 nklist = (struct keyring_list *)(*_prealloc & ~KEY_LINK_FIXQUOTA); 905 868 *_prealloc = 0; ··· 910 
871 klist = rcu_dereference_locked_keyring(keyring); 911 872 912 873 atomic_inc(&key->usage); 874 + keyring->last_used_at = key->last_used_at = 875 + current_kernel_time().tv_sec; 913 876 914 877 /* there's a matching key we can displace or an empty slot in a newly 915 878 * allocated list we can fill */ 916 879 if (nklist) { 917 - kdebug("replace %hu/%hu/%hu", 880 + kdebug("reissue %hu/%hu/%hu", 918 881 nklist->delkey, nklist->nkeys, nklist->maxkeys); 919 882 920 - nklist->keys[nklist->delkey] = key; 883 + RCU_INIT_POINTER(nklist->keys[nklist->delkey], key); 921 884 922 885 rcu_assign_pointer(keyring->payload.subscriptions, nklist); 923 886 ··· 930 889 klist->delkey, klist->nkeys, klist->maxkeys); 931 890 call_rcu(&klist->rcu, keyring_unlink_rcu_disposal); 932 891 } 892 + } else if (klist->delkey < klist->nkeys) { 893 + kdebug("replace %hu/%hu/%hu", 894 + klist->delkey, klist->nkeys, klist->maxkeys); 895 + 896 + discard = rcu_dereference_protected( 897 + klist->keys[klist->delkey], 898 + rwsem_is_locked(&keyring->sem)); 899 + rcu_assign_pointer(klist->keys[klist->delkey], key); 900 + /* The garbage collector will take care of RCU 901 + * synchronisation */ 902 + key_put(discard); 933 903 } else { 934 904 /* there's sufficient slack space to append directly */ 935 - klist->keys[klist->nkeys] = key; 905 + kdebug("append %hu/%hu/%hu", 906 + klist->delkey, klist->nkeys, klist->maxkeys); 907 + 908 + RCU_INIT_POINTER(klist->keys[klist->delkey], key); 936 909 smp_wmb(); 937 910 klist->nkeys++; 938 911 } ··· 1053 998 if (klist) { 1054 999 /* search the keyring for the key */ 1055 1000 for (loop = 0; loop < klist->nkeys; loop++) 1056 - if (klist->keys[loop] == key) 1001 + if (rcu_access_pointer(klist->keys[loop]) == key) 1057 1002 goto key_is_present; 1058 1003 } 1059 1004 ··· 1116 1061 klist = container_of(rcu, struct keyring_list, rcu); 1117 1062 1118 1063 for (loop = klist->nkeys - 1; loop >= 0; loop--) 1119 - key_put(klist->keys[loop]); 1064 + 
key_put(rcu_access_pointer(klist->keys[loop])); 1120 1065 1121 1066 kfree(klist); 1122 1067 } ··· 1183 1128 } 1184 1129 1185 1130 /* 1186 - * Determine whether a key is dead. 1187 - */ 1188 - static bool key_is_dead(struct key *key, time_t limit) 1189 - { 1190 - return test_bit(KEY_FLAG_DEAD, &key->flags) || 1191 - (key->expiry > 0 && key->expiry <= limit); 1192 - } 1193 - 1194 - /* 1195 1131 * Collect garbage from the contents of a keyring, replacing the old list with 1196 1132 * a new one with the pointers all shuffled down. 1197 1133 * ··· 1207 1161 /* work out how many subscriptions we're keeping */ 1208 1162 keep = 0; 1209 1163 for (loop = klist->nkeys - 1; loop >= 0; loop--) 1210 - if (!key_is_dead(klist->keys[loop], limit)) 1164 + if (!key_is_dead(rcu_deref_link_locked(klist, loop, keyring), 1165 + limit)) 1211 1166 keep++; 1212 1167 1213 1168 if (keep == klist->nkeys) ··· 1229 1182 */ 1230 1183 keep = 0; 1231 1184 for (loop = klist->nkeys - 1; loop >= 0; loop--) { 1232 - key = klist->keys[loop]; 1185 + key = rcu_deref_link_locked(klist, loop, keyring); 1233 1186 if (!key_is_dead(key, limit)) { 1234 1187 if (keep >= max) 1235 1188 goto discard_new; 1236 - new->keys[keep++] = key_get(key); 1189 + RCU_INIT_POINTER(new->keys[keep++], key_get(key)); 1237 1190 } 1238 1191 } 1239 1192 new->nkeys = keep;
+18 -21
security/keys/permission.c
···
  * key_validate - Validate a key.
  * @key: The key to be validated.
  *
- * Check that a key is valid, returning 0 if the key is okay, -EKEYREVOKED if
- * the key's type has been removed or if the key has been revoked or
- * -EKEYEXPIRED if the key has expired.
+ * Check that a key is valid, returning 0 if the key is okay, -ENOKEY if the
+ * key is invalidated, -EKEYREVOKED if the key's type has been removed or if
+ * the key has been revoked or -EKEYEXPIRED if the key has expired.
  */
-int key_validate(struct key *key)
+int key_validate(const struct key *key)
 {
-	struct timespec now;
-	int ret = 0;
+	unsigned long flags = key->flags;
 
-	if (key) {
-		/* check it's still accessible */
-		ret = -EKEYREVOKED;
-		if (test_bit(KEY_FLAG_REVOKED, &key->flags) ||
-		    test_bit(KEY_FLAG_DEAD, &key->flags))
-			goto error;
+	if (flags & (1 << KEY_FLAG_INVALIDATED))
+		return -ENOKEY;
 
-		/* check it hasn't expired */
-		ret = 0;
-		if (key->expiry) {
-			now = current_kernel_time();
-			if (now.tv_sec >= key->expiry)
-				ret = -EKEYEXPIRED;
-		}
+	/* check it's still accessible */
+	if (flags & ((1 << KEY_FLAG_REVOKED) |
+		     (1 << KEY_FLAG_DEAD)))
+		return -EKEYREVOKED;
+
+	/* check it hasn't expired */
+	if (key->expiry) {
+		struct timespec now = current_kernel_time();
+		if (now.tv_sec >= key->expiry)
+			return -EKEYEXPIRED;
 	}
 
-error:
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL(key_validate);
+2 -1
security/keys/proc.c
···
 #define showflag(KEY, LETTER, FLAG) \
	(test_bit(FLAG, &(KEY)->flags) ? LETTER : '-')
 
-	seq_printf(m, "%08x %c%c%c%c%c%c %5d %4s %08x %5d %5d %-9.9s ",
+	seq_printf(m, "%08x %c%c%c%c%c%c%c %5d %4s %08x %5d %5d %-9.9s ",
		   key->serial,
		   showflag(key, 'I', KEY_FLAG_INSTANTIATED),
		   showflag(key, 'R', KEY_FLAG_REVOKED),
···
		   showflag(key, 'Q', KEY_FLAG_IN_QUOTA),
		   showflag(key, 'U', KEY_FLAG_USER_CONSTRUCT),
		   showflag(key, 'N', KEY_FLAG_NEGATIVE),
+		   showflag(key, 'i', KEY_FLAG_INVALIDATED),
		   atomic_read(&key->usage),
		   xbuf,
		   key->perm,
+2
security/keys/process_keys.c
···
	if (ret < 0)
		goto invalid_key;
 
+	key->last_used_at = current_kernel_time().tv_sec;
+
 error:
	put_cred(cred);
	return key_ref;
+9 -6
security/lsm_audit.c
···
 {
	struct task_struct *tsk = current;
 
-	if (a->tsk)
-		tsk = a->tsk;
-	if (tsk && tsk->pid) {
-		audit_log_format(ab, " pid=%d comm=", tsk->pid);
-		audit_log_untrustedstring(ab, tsk->comm);
-	}
+	/*
+	 * To keep stack sizes in check force programers to notice if they
+	 * start making this union too large! See struct lsm_network_audit
+	 * as an example of how to deal with large data.
+	 */
+	BUILD_BUG_ON(sizeof(a->u) > sizeof(void *)*2);
+
+	audit_log_format(ab, " pid=%d comm=", tsk->pid);
+	audit_log_untrustedstring(ab, tsk->comm);
 
	switch (a->type) {
	case LSM_AUDIT_DATA_NONE:
+2 -2
security/security.c
···
	return security_ops->file_receive(file);
 }
 
-int security_dentry_open(struct file *file, const struct cred *cred)
+int security_file_open(struct file *file, const struct cred *cred)
 {
	int ret;
 
-	ret = security_ops->dentry_open(file, cred);
+	ret = security_ops->file_open(file, cred);
	if (ret)
		return ret;
+24 -106
security/selinux/avc.c
··· 65 65 }; 66 66 67 67 struct avc_callback_node { 68 - int (*callback) (u32 event, u32 ssid, u32 tsid, 69 - u16 tclass, u32 perms, 70 - u32 *out_retained); 68 + int (*callback) (u32 event); 71 69 u32 events; 72 - u32 ssid; 73 - u32 tsid; 74 - u16 tclass; 75 - u32 perms; 76 70 struct avc_callback_node *next; 77 71 }; 78 72 ··· 430 436 { 431 437 struct common_audit_data *ad = a; 432 438 audit_log_format(ab, "avc: %s ", 433 - ad->selinux_audit_data->slad->denied ? "denied" : "granted"); 434 - avc_dump_av(ab, ad->selinux_audit_data->slad->tclass, 435 - ad->selinux_audit_data->slad->audited); 439 + ad->selinux_audit_data->denied ? "denied" : "granted"); 440 + avc_dump_av(ab, ad->selinux_audit_data->tclass, 441 + ad->selinux_audit_data->audited); 436 442 audit_log_format(ab, " for "); 437 443 } 438 444 ··· 446 452 { 447 453 struct common_audit_data *ad = a; 448 454 audit_log_format(ab, " "); 449 - avc_dump_query(ab, ad->selinux_audit_data->slad->ssid, 450 - ad->selinux_audit_data->slad->tsid, 451 - ad->selinux_audit_data->slad->tclass); 455 + avc_dump_query(ab, ad->selinux_audit_data->ssid, 456 + ad->selinux_audit_data->tsid, 457 + ad->selinux_audit_data->tclass); 452 458 } 453 459 454 460 /* This is the slow part of avc audit with big stack footprint */ 455 - static noinline int slow_avc_audit(u32 ssid, u32 tsid, u16 tclass, 461 + noinline int slow_avc_audit(u32 ssid, u32 tsid, u16 tclass, 456 462 u32 requested, u32 audited, u32 denied, 457 463 struct common_audit_data *a, 458 464 unsigned flags) 459 465 { 460 466 struct common_audit_data stack_data; 461 - struct selinux_audit_data sad = {0,}; 462 - struct selinux_late_audit_data slad; 467 + struct selinux_audit_data sad; 463 468 464 469 if (!a) { 465 470 a = &stack_data; 466 - COMMON_AUDIT_DATA_INIT(a, NONE); 467 - a->selinux_audit_data = &sad; 471 + a->type = LSM_AUDIT_DATA_NONE; 468 472 } 469 473 470 474 /* ··· 476 484 (flags & MAY_NOT_BLOCK)) 477 485 return -ECHILD; 478 486 479 - slad.tclass = tclass; 480 - 
slad.requested = requested; 481 - slad.ssid = ssid; 482 - slad.tsid = tsid; 483 - slad.audited = audited; 484 - slad.denied = denied; 487 + sad.tclass = tclass; 488 + sad.requested = requested; 489 + sad.ssid = ssid; 490 + sad.tsid = tsid; 491 + sad.audited = audited; 492 + sad.denied = denied; 485 493 486 - a->selinux_audit_data->slad = &slad; 494 + a->selinux_audit_data = &sad; 495 + 487 496 common_lsm_audit(a, avc_audit_pre_callback, avc_audit_post_callback); 488 497 return 0; 489 - } 490 - 491 - /** 492 - * avc_audit - Audit the granting or denial of permissions. 493 - * @ssid: source security identifier 494 - * @tsid: target security identifier 495 - * @tclass: target security class 496 - * @requested: requested permissions 497 - * @avd: access vector decisions 498 - * @result: result from avc_has_perm_noaudit 499 - * @a: auxiliary audit data 500 - * @flags: VFS walk flags 501 - * 502 - * Audit the granting or denial of permissions in accordance 503 - * with the policy. This function is typically called by 504 - * avc_has_perm() after a permission check, but can also be 505 - * called directly by callers who use avc_has_perm_noaudit() 506 - * in order to separate the permission check from the auditing. 507 - * For example, this separation is useful when the permission check must 508 - * be performed under a lock, to allow the lock to be released 509 - * before calling the auditing code. 510 - */ 511 - inline int avc_audit(u32 ssid, u32 tsid, 512 - u16 tclass, u32 requested, 513 - struct av_decision *avd, int result, struct common_audit_data *a, 514 - unsigned flags) 515 - { 516 - u32 denied, audited; 517 - denied = requested & ~avd->allowed; 518 - if (unlikely(denied)) { 519 - audited = denied & avd->auditdeny; 520 - /* 521 - * a->selinux_audit_data->auditdeny is TRICKY! Setting a bit in 522 - * this field means that ANY denials should NOT be audited if 523 - * the policy contains an explicit dontaudit rule for that 524 - * permission. 
Take notice that this is unrelated to the 525 - * actual permissions that were denied. As an example lets 526 - * assume: 527 - * 528 - * denied == READ 529 - * avd.auditdeny & ACCESS == 0 (not set means explicit rule) 530 - * selinux_audit_data->auditdeny & ACCESS == 1 531 - * 532 - * We will NOT audit the denial even though the denied 533 - * permission was READ and the auditdeny checks were for 534 - * ACCESS 535 - */ 536 - if (a && 537 - a->selinux_audit_data->auditdeny && 538 - !(a->selinux_audit_data->auditdeny & avd->auditdeny)) 539 - audited = 0; 540 - } else if (result) 541 - audited = denied = requested; 542 - else 543 - audited = requested & avd->auditallow; 544 - if (likely(!audited)) 545 - return 0; 546 - 547 - return slow_avc_audit(ssid, tsid, tclass, 548 - requested, audited, denied, 549 - a, flags); 550 498 } 551 499 552 500 /** 553 501 * avc_add_callback - Register a callback for security events. 554 502 * @callback: callback function 555 503 * @events: security events 556 - * @ssid: source security identifier or %SECSID_WILD 557 - * @tsid: target security identifier or %SECSID_WILD 558 - * @tclass: target security class 559 - * @perms: permissions 560 504 * 561 - * Register a callback function for events in the set @events 562 - * related to the SID pair (@ssid, @tsid) 563 - * and the permissions @perms, interpreting 564 - * @perms based on @tclass. Returns %0 on success or 565 - * -%ENOMEM if insufficient memory exists to add the callback. 505 + * Register a callback function for events in the set @events. 506 + * Returns %0 on success or -%ENOMEM if insufficient memory 507 + * exists to add the callback. 
566 508 */ 567 - int avc_add_callback(int (*callback)(u32 event, u32 ssid, u32 tsid, 568 - u16 tclass, u32 perms, 569 - u32 *out_retained), 570 - u32 events, u32 ssid, u32 tsid, 571 - u16 tclass, u32 perms) 509 + int __init avc_add_callback(int (*callback)(u32 event), u32 events) 572 510 { 573 511 struct avc_callback_node *c; 574 512 int rc = 0; 575 513 576 - c = kmalloc(sizeof(*c), GFP_ATOMIC); 514 + c = kmalloc(sizeof(*c), GFP_KERNEL); 577 515 if (!c) { 578 516 rc = -ENOMEM; 579 517 goto out; ··· 511 589 512 590 c->callback = callback; 513 591 c->events = events; 514 - c->ssid = ssid; 515 - c->tsid = tsid; 516 - c->perms = perms; 517 592 c->next = avc_callbacks; 518 593 avc_callbacks = c; 519 594 out: ··· 650 731 651 732 for (c = avc_callbacks; c; c = c->next) { 652 733 if (c->events & AVC_CALLBACK_RESET) { 653 - tmprc = c->callback(AVC_CALLBACK_RESET, 654 - 0, 0, 0, 0, NULL); 734 + tmprc = c->callback(AVC_CALLBACK_RESET); 655 735 /* save the first error encountered for the return 656 736 value and continue processing the callbacks */ 657 737 if (!rc)
+127 -141
security/selinux/hooks.c
··· 1420 1420 int cap, int audit) 1421 1421 { 1422 1422 struct common_audit_data ad; 1423 - struct selinux_audit_data sad = {0,}; 1424 1423 struct av_decision avd; 1425 1424 u16 sclass; 1426 1425 u32 sid = cred_sid(cred); 1427 1426 u32 av = CAP_TO_MASK(cap); 1428 1427 int rc; 1429 1428 1430 - COMMON_AUDIT_DATA_INIT(&ad, CAP); 1431 - ad.selinux_audit_data = &sad; 1432 - ad.tsk = current; 1429 + ad.type = LSM_AUDIT_DATA_CAP; 1433 1430 ad.u.cap = cap; 1434 1431 1435 1432 switch (CAP_TO_INDEX(cap)) { ··· 1485 1488 return avc_has_perm_flags(sid, isec->sid, isec->sclass, perms, adp, flags); 1486 1489 } 1487 1490 1488 - static int inode_has_perm_noadp(const struct cred *cred, 1489 - struct inode *inode, 1490 - u32 perms, 1491 - unsigned flags) 1492 - { 1493 - struct common_audit_data ad; 1494 - struct selinux_audit_data sad = {0,}; 1495 - 1496 - COMMON_AUDIT_DATA_INIT(&ad, INODE); 1497 - ad.u.inode = inode; 1498 - ad.selinux_audit_data = &sad; 1499 - return inode_has_perm(cred, inode, perms, &ad, flags); 1500 - } 1501 - 1502 1491 /* Same as inode_has_perm, but pass explicit audit data containing 1503 1492 the dentry to help the auditing code to more easily generate the 1504 1493 pathname if needed. 
*/ ··· 1494 1511 { 1495 1512 struct inode *inode = dentry->d_inode; 1496 1513 struct common_audit_data ad; 1497 - struct selinux_audit_data sad = {0,}; 1498 1514 1499 - COMMON_AUDIT_DATA_INIT(&ad, DENTRY); 1515 + ad.type = LSM_AUDIT_DATA_DENTRY; 1500 1516 ad.u.dentry = dentry; 1501 - ad.selinux_audit_data = &sad; 1502 1517 return inode_has_perm(cred, inode, av, &ad, 0); 1503 1518 } 1504 1519 ··· 1509 1528 { 1510 1529 struct inode *inode = path->dentry->d_inode; 1511 1530 struct common_audit_data ad; 1512 - struct selinux_audit_data sad = {0,}; 1513 1531 1514 - COMMON_AUDIT_DATA_INIT(&ad, PATH); 1532 + ad.type = LSM_AUDIT_DATA_PATH; 1515 1533 ad.u.path = *path; 1516 - ad.selinux_audit_data = &sad; 1517 1534 return inode_has_perm(cred, inode, av, &ad, 0); 1518 1535 } 1519 1536 ··· 1530 1551 struct file_security_struct *fsec = file->f_security; 1531 1552 struct inode *inode = file->f_path.dentry->d_inode; 1532 1553 struct common_audit_data ad; 1533 - struct selinux_audit_data sad = {0,}; 1534 1554 u32 sid = cred_sid(cred); 1535 1555 int rc; 1536 1556 1537 - COMMON_AUDIT_DATA_INIT(&ad, PATH); 1557 + ad.type = LSM_AUDIT_DATA_PATH; 1538 1558 ad.u.path = file->f_path; 1539 - ad.selinux_audit_data = &sad; 1540 1559 1541 1560 if (sid != fsec->sid) { 1542 1561 rc = avc_has_perm(sid, fsec->sid, ··· 1564 1587 struct superblock_security_struct *sbsec; 1565 1588 u32 sid, newsid; 1566 1589 struct common_audit_data ad; 1567 - struct selinux_audit_data sad = {0,}; 1568 1590 int rc; 1569 1591 1570 1592 dsec = dir->i_security; ··· 1572 1596 sid = tsec->sid; 1573 1597 newsid = tsec->create_sid; 1574 1598 1575 - COMMON_AUDIT_DATA_INIT(&ad, DENTRY); 1599 + ad.type = LSM_AUDIT_DATA_DENTRY; 1576 1600 ad.u.dentry = dentry; 1577 - ad.selinux_audit_data = &sad; 1578 1601 1579 1602 rc = avc_has_perm(sid, dsec->sid, SECCLASS_DIR, 1580 1603 DIR__ADD_NAME | DIR__SEARCH, ··· 1618 1643 { 1619 1644 struct inode_security_struct *dsec, *isec; 1620 1645 struct common_audit_data ad; 1621 - struct 
selinux_audit_data sad = {0,}; 1622 1646 u32 sid = current_sid(); 1623 1647 u32 av; 1624 1648 int rc; ··· 1625 1651 dsec = dir->i_security; 1626 1652 isec = dentry->d_inode->i_security; 1627 1653 1628 - COMMON_AUDIT_DATA_INIT(&ad, DENTRY); 1654 + ad.type = LSM_AUDIT_DATA_DENTRY; 1629 1655 ad.u.dentry = dentry; 1630 - ad.selinux_audit_data = &sad; 1631 1656 1632 1657 av = DIR__SEARCH; 1633 1658 av |= (kind ? DIR__REMOVE_NAME : DIR__ADD_NAME); ··· 1661 1688 { 1662 1689 struct inode_security_struct *old_dsec, *new_dsec, *old_isec, *new_isec; 1663 1690 struct common_audit_data ad; 1664 - struct selinux_audit_data sad = {0,}; 1665 1691 u32 sid = current_sid(); 1666 1692 u32 av; 1667 1693 int old_is_dir, new_is_dir; ··· 1671 1699 old_is_dir = S_ISDIR(old_dentry->d_inode->i_mode); 1672 1700 new_dsec = new_dir->i_security; 1673 1701 1674 - COMMON_AUDIT_DATA_INIT(&ad, DENTRY); 1675 - ad.selinux_audit_data = &sad; 1702 + ad.type = LSM_AUDIT_DATA_DENTRY; 1676 1703 1677 1704 ad.u.dentry = old_dentry; 1678 1705 rc = avc_has_perm(sid, old_dsec->sid, SECCLASS_DIR, ··· 1957 1986 struct task_security_struct *new_tsec; 1958 1987 struct inode_security_struct *isec; 1959 1988 struct common_audit_data ad; 1960 - struct selinux_audit_data sad = {0,}; 1961 1989 struct inode *inode = bprm->file->f_path.dentry->d_inode; 1962 1990 int rc; 1963 1991 ··· 1986 2016 new_tsec->sid = old_tsec->exec_sid; 1987 2017 /* Reset exec SID on execve. */ 1988 2018 new_tsec->exec_sid = 0; 2019 + 2020 + /* 2021 + * Minimize confusion: if no_new_privs and a transition is 2022 + * explicitly requested, then fail the exec. 2023 + */ 2024 + if (bprm->unsafe & LSM_UNSAFE_NO_NEW_PRIVS) 2025 + return -EPERM; 1989 2026 } else { 1990 2027 /* Check for a default transition on this program. 
*/ 1991 2028 rc = security_transition_sid(old_tsec->sid, isec->sid, ··· 2002 2025 return rc; 2003 2026 } 2004 2027 2005 - COMMON_AUDIT_DATA_INIT(&ad, PATH); 2006 - ad.selinux_audit_data = &sad; 2028 + ad.type = LSM_AUDIT_DATA_PATH; 2007 2029 ad.u.path = bprm->file->f_path; 2008 2030 2009 - if (bprm->file->f_path.mnt->mnt_flags & MNT_NOSUID) 2031 + if ((bprm->file->f_path.mnt->mnt_flags & MNT_NOSUID) || 2032 + (bprm->unsafe & LSM_UNSAFE_NO_NEW_PRIVS)) 2010 2033 new_tsec->sid = old_tsec->sid; 2011 2034 2012 2035 if (new_tsec->sid == old_tsec->sid) { ··· 2092 2115 static inline void flush_unauthorized_files(const struct cred *cred, 2093 2116 struct files_struct *files) 2094 2117 { 2095 - struct common_audit_data ad; 2096 - struct selinux_audit_data sad = {0,}; 2097 2118 struct file *file, *devnull = NULL; 2098 2119 struct tty_struct *tty; 2099 2120 struct fdtable *fdt; ··· 2103 2128 spin_lock(&tty_files_lock); 2104 2129 if (!list_empty(&tty->tty_files)) { 2105 2130 struct tty_file_private *file_priv; 2106 - struct inode *inode; 2107 2131 2108 2132 /* Revalidate access to controlling tty. 2109 - Use inode_has_perm on the tty inode directly rather 2133 + Use path_has_perm on the tty path directly rather 2110 2134 than using file_has_perm, as this particular open 2111 2135 file may belong to another process and we are only 2112 2136 interested in the inode-based check here. */ 2113 2137 file_priv = list_first_entry(&tty->tty_files, 2114 2138 struct tty_file_private, list); 2115 2139 file = file_priv->file; 2116 - inode = file->f_path.dentry->d_inode; 2117 - if (inode_has_perm_noadp(cred, inode, 2118 - FILE__READ | FILE__WRITE, 0)) { 2140 + if (path_has_perm(cred, &file->f_path, FILE__READ | FILE__WRITE)) 2119 2141 drop_tty = 1; 2120 - } 2121 2142 } 2122 2143 spin_unlock(&tty_files_lock); 2123 2144 tty_kref_put(tty); ··· 2123 2152 no_tty(); 2124 2153 2125 2154 /* Revalidate access to inherited open files. 
*/ 2126 - 2127 - COMMON_AUDIT_DATA_INIT(&ad, INODE); 2128 - ad.selinux_audit_data = &sad; 2129 - 2130 2155 spin_lock(&files->file_lock); 2131 2156 for (;;) { 2132 2157 unsigned long set, i; ··· 2459 2492 { 2460 2493 const struct cred *cred = current_cred(); 2461 2494 struct common_audit_data ad; 2462 - struct selinux_audit_data sad = {0,}; 2463 2495 int rc; 2464 2496 2465 2497 rc = superblock_doinit(sb, data); ··· 2469 2503 if (flags & MS_KERNMOUNT) 2470 2504 return 0; 2471 2505 2472 - COMMON_AUDIT_DATA_INIT(&ad, DENTRY); 2473 - ad.selinux_audit_data = &sad; 2506 + ad.type = LSM_AUDIT_DATA_DENTRY; 2474 2507 ad.u.dentry = sb->s_root; 2475 2508 return superblock_has_perm(cred, sb, FILESYSTEM__MOUNT, &ad); 2476 2509 } ··· 2478 2513 { 2479 2514 const struct cred *cred = current_cred(); 2480 2515 struct common_audit_data ad; 2481 - struct selinux_audit_data sad = {0,}; 2482 2516 2483 - COMMON_AUDIT_DATA_INIT(&ad, DENTRY); 2484 - ad.selinux_audit_data = &sad; 2517 + ad.type = LSM_AUDIT_DATA_DENTRY; 2485 2518 ad.u.dentry = dentry->d_sb->s_root; 2486 2519 return superblock_has_perm(cred, dentry->d_sb, FILESYSTEM__GETATTR, &ad); 2487 2520 } ··· 2639 2676 return dentry_has_perm(cred, dentry, FILE__READ); 2640 2677 } 2641 2678 2679 + static noinline int audit_inode_permission(struct inode *inode, 2680 + u32 perms, u32 audited, u32 denied, 2681 + unsigned flags) 2682 + { 2683 + struct common_audit_data ad; 2684 + struct inode_security_struct *isec = inode->i_security; 2685 + int rc; 2686 + 2687 + ad.type = LSM_AUDIT_DATA_INODE; 2688 + ad.u.inode = inode; 2689 + 2690 + rc = slow_avc_audit(current_sid(), isec->sid, isec->sclass, perms, 2691 + audited, denied, &ad, flags); 2692 + if (rc) 2693 + return rc; 2694 + return 0; 2695 + } 2696 + 2642 2697 static int selinux_inode_permission(struct inode *inode, int mask) 2643 2698 { 2644 2699 const struct cred *cred = current_cred(); 2645 - struct common_audit_data ad; 2646 - struct selinux_audit_data sad = {0,}; 2647 2700 u32 perms; 
2648 2701 bool from_access; 2649 2702 unsigned flags = mask & MAY_NOT_BLOCK; 2703 + struct inode_security_struct *isec; 2704 + u32 sid; 2705 + struct av_decision avd; 2706 + int rc, rc2; 2707 + u32 audited, denied; 2650 2708 2651 2709 from_access = mask & MAY_ACCESS; 2652 2710 mask &= (MAY_READ|MAY_WRITE|MAY_EXEC|MAY_APPEND); ··· 2676 2692 if (!mask) 2677 2693 return 0; 2678 2694 2679 - COMMON_AUDIT_DATA_INIT(&ad, INODE); 2680 - ad.selinux_audit_data = &sad; 2681 - ad.u.inode = inode; 2695 + validate_creds(cred); 2682 2696 2683 - if (from_access) 2684 - ad.selinux_audit_data->auditdeny |= FILE__AUDIT_ACCESS; 2697 + if (unlikely(IS_PRIVATE(inode))) 2698 + return 0; 2685 2699 2686 2700 perms = file_mask_to_av(inode->i_mode, mask); 2687 2701 2688 - return inode_has_perm(cred, inode, perms, &ad, flags); 2702 + sid = cred_sid(cred); 2703 + isec = inode->i_security; 2704 + 2705 + rc = avc_has_perm_noaudit(sid, isec->sid, isec->sclass, perms, 0, &avd); 2706 + audited = avc_audit_required(perms, &avd, rc, 2707 + from_access ? FILE__AUDIT_ACCESS : 0, 2708 + &denied); 2709 + if (likely(!audited)) 2710 + return rc; 2711 + 2712 + rc2 = audit_inode_permission(inode, perms, audited, denied, flags); 2713 + if (rc2) 2714 + return rc2; 2715 + return rc; 2689 2716 } 2690 2717 2691 2718 static int selinux_inode_setattr(struct dentry *dentry, struct iattr *iattr) 2692 2719 { 2693 2720 const struct cred *cred = current_cred(); 2694 2721 unsigned int ia_valid = iattr->ia_valid; 2722 + __u32 av = FILE__WRITE; 2695 2723 2696 2724 /* ATTR_FORCE is just used for ATTR_KILL_S[UG]ID. 
*/ 2697 2725 if (ia_valid & ATTR_FORCE) { ··· 2717 2721 ATTR_ATIME_SET | ATTR_MTIME_SET | ATTR_TIMES_SET)) 2718 2722 return dentry_has_perm(cred, dentry, FILE__SETATTR); 2719 2723 2720 - return dentry_has_perm(cred, dentry, FILE__WRITE); 2724 + if (ia_valid & ATTR_SIZE) 2725 + av |= FILE__OPEN; 2726 + 2727 + return dentry_has_perm(cred, dentry, av); 2721 2728 } 2722 2729 2723 2730 static int selinux_inode_getattr(struct vfsmount *mnt, struct dentry *dentry) ··· 2762 2763 struct inode_security_struct *isec = inode->i_security; 2763 2764 struct superblock_security_struct *sbsec; 2764 2765 struct common_audit_data ad; 2765 - struct selinux_audit_data sad = {0,}; 2766 2766 u32 newsid, sid = current_sid(); 2767 2767 int rc = 0; 2768 2768 ··· 2775 2777 if (!inode_owner_or_capable(inode)) 2776 2778 return -EPERM; 2777 2779 2778 - COMMON_AUDIT_DATA_INIT(&ad, DENTRY); 2779 - ad.selinux_audit_data = &sad; 2780 + ad.type = LSM_AUDIT_DATA_DENTRY; 2780 2781 ad.u.dentry = dentry; 2781 2782 2782 2783 rc = avc_has_perm(sid, isec->sid, isec->sclass, ··· 2785 2788 2786 2789 rc = security_context_to_sid(value, size, &newsid); 2787 2790 if (rc == -EINVAL) { 2788 - if (!capable(CAP_MAC_ADMIN)) 2791 + if (!capable(CAP_MAC_ADMIN)) { 2792 + struct audit_buffer *ab; 2793 + size_t audit_size; 2794 + const char *str; 2795 + 2796 + /* We strip a nul only if it is at the end, otherwise the 2797 + * context contains a nul and we should audit that */ 2798 + str = value; 2799 + if (str[size - 1] == '\0') 2800 + audit_size = size - 1; 2801 + else 2802 + audit_size = size; 2803 + ab = audit_log_start(current->audit_context, GFP_ATOMIC, AUDIT_SELINUX_ERR); 2804 + audit_log_format(ab, "op=setxattr invalid_context="); 2805 + audit_log_n_untrustedstring(ab, value, audit_size); 2806 + audit_log_end(ab); 2807 + 2789 2808 return rc; 2809 + } 2790 2810 rc = security_context_to_sid_force(value, size, &newsid); 2791 2811 } 2792 2812 if (rc) ··· 2983 2969 2984 2970 if (sid == fsec->sid && fsec->isid == 
isec->sid && 2985 2971 fsec->pseqno == avc_policy_seqno()) 2986 - /* No change since dentry_open check. */ 2972 + /* No change since file_open check. */ 2987 2973 return 0; 2988 2974 2989 2975 return selinux_revalidate_file_permission(file, mask); ··· 3242 3228 return file_has_perm(cred, file, file_to_av(file)); 3243 3229 } 3244 3230 3245 - static int selinux_dentry_open(struct file *file, const struct cred *cred) 3231 + static int selinux_file_open(struct file *file, const struct cred *cred) 3246 3232 { 3247 3233 struct file_security_struct *fsec; 3248 - struct inode *inode; 3249 3234 struct inode_security_struct *isec; 3250 3235 3251 - inode = file->f_path.dentry->d_inode; 3252 3236 fsec = file->f_security; 3253 - isec = inode->i_security; 3237 + isec = file->f_path.dentry->d_inode->i_security; 3254 3238 /* 3255 3239 * Save inode label and policy sequence number 3256 3240 * at open-time so that selinux_file_permission ··· 3266 3254 * new inode label or new policy. 3267 3255 * This check is not redundant - do not remove. 
3268 3256 */ 3269 - return inode_has_perm_noadp(cred, inode, open_file_to_av(file), 0); 3257 + return path_has_perm(cred, &file->f_path, open_file_to_av(file)); 3270 3258 } 3271 3259 3272 3260 /* task security operations */ ··· 3385 3373 { 3386 3374 u32 sid; 3387 3375 struct common_audit_data ad; 3388 - struct selinux_audit_data sad = {0,}; 3389 3376 3390 3377 sid = task_sid(current); 3391 3378 3392 - COMMON_AUDIT_DATA_INIT(&ad, KMOD); 3393 - ad.selinux_audit_data = &sad; 3379 + ad.type = LSM_AUDIT_DATA_KMOD; 3394 3380 ad.u.kmod_name = kmod_name; 3395 3381 3396 3382 return avc_has_perm(sid, SECINITSID_KERNEL, SECCLASS_SYSTEM, ··· 3761 3751 { 3762 3752 struct sk_security_struct *sksec = sk->sk_security; 3763 3753 struct common_audit_data ad; 3764 - struct selinux_audit_data sad = {0,}; 3765 3754 struct lsm_network_audit net = {0,}; 3766 3755 u32 tsid = task_sid(task); 3767 3756 3768 3757 if (sksec->sid == SECINITSID_KERNEL) 3769 3758 return 0; 3770 3759 3771 - COMMON_AUDIT_DATA_INIT(&ad, NET); 3772 - ad.selinux_audit_data = &sad; 3760 + ad.type = LSM_AUDIT_DATA_NET; 3773 3761 ad.u.net = &net; 3774 3762 ad.u.net->sk = sk; 3775 3763 ··· 3847 3839 char *addrp; 3848 3840 struct sk_security_struct *sksec = sk->sk_security; 3849 3841 struct common_audit_data ad; 3850 - struct selinux_audit_data sad = {0,}; 3851 3842 struct lsm_network_audit net = {0,}; 3852 3843 struct sockaddr_in *addr4 = NULL; 3853 3844 struct sockaddr_in6 *addr6 = NULL; ··· 3873 3866 snum, &sid); 3874 3867 if (err) 3875 3868 goto out; 3876 - COMMON_AUDIT_DATA_INIT(&ad, NET); 3877 - ad.selinux_audit_data = &sad; 3869 + ad.type = LSM_AUDIT_DATA_NET; 3878 3870 ad.u.net = &net; 3879 3871 ad.u.net->sport = htons(snum); 3880 3872 ad.u.net->family = family; ··· 3907 3901 if (err) 3908 3902 goto out; 3909 3903 3910 - COMMON_AUDIT_DATA_INIT(&ad, NET); 3911 - ad.selinux_audit_data = &sad; 3904 + ad.type = LSM_AUDIT_DATA_NET; 3912 3905 ad.u.net = &net; 3913 3906 ad.u.net->sport = htons(snum); 3914 3907 
ad.u.net->family = family; ··· 3942 3937 if (sksec->sclass == SECCLASS_TCP_SOCKET || 3943 3938 sksec->sclass == SECCLASS_DCCP_SOCKET) { 3944 3939 struct common_audit_data ad; 3945 - struct selinux_audit_data sad = {0,}; 3946 3940 struct lsm_network_audit net = {0,}; 3947 3941 struct sockaddr_in *addr4 = NULL; 3948 3942 struct sockaddr_in6 *addr6 = NULL; ··· 3967 3963 perm = (sksec->sclass == SECCLASS_TCP_SOCKET) ? 3968 3964 TCP_SOCKET__NAME_CONNECT : DCCP_SOCKET__NAME_CONNECT; 3969 3965 3970 - COMMON_AUDIT_DATA_INIT(&ad, NET); 3971 - ad.selinux_audit_data = &sad; 3966 + ad.type = LSM_AUDIT_DATA_NET; 3972 3967 ad.u.net = &net; 3973 3968 ad.u.net->dport = htons(snum); 3974 3969 ad.u.net->family = sk->sk_family; ··· 4059 4056 struct sk_security_struct *sksec_other = other->sk_security; 4060 4057 struct sk_security_struct *sksec_new = newsk->sk_security; 4061 4058 struct common_audit_data ad; 4062 - struct selinux_audit_data sad = {0,}; 4063 4059 struct lsm_network_audit net = {0,}; 4064 4060 int err; 4065 4061 4066 - COMMON_AUDIT_DATA_INIT(&ad, NET); 4067 - ad.selinux_audit_data = &sad; 4062 + ad.type = LSM_AUDIT_DATA_NET; 4068 4063 ad.u.net = &net; 4069 4064 ad.u.net->sk = other; 4070 4065 ··· 4091 4090 struct sk_security_struct *ssec = sock->sk->sk_security; 4092 4091 struct sk_security_struct *osec = other->sk->sk_security; 4093 4092 struct common_audit_data ad; 4094 - struct selinux_audit_data sad = {0,}; 4095 4093 struct lsm_network_audit net = {0,}; 4096 4094 4097 - COMMON_AUDIT_DATA_INIT(&ad, NET); 4098 - ad.selinux_audit_data = &sad; 4095 + ad.type = LSM_AUDIT_DATA_NET; 4099 4096 ad.u.net = &net; 4100 4097 ad.u.net->sk = other->sk; 4101 4098 ··· 4131 4132 struct sk_security_struct *sksec = sk->sk_security; 4132 4133 u32 sk_sid = sksec->sid; 4133 4134 struct common_audit_data ad; 4134 - struct selinux_audit_data sad = {0,}; 4135 4135 struct lsm_network_audit net = {0,}; 4136 4136 char *addrp; 4137 4137 4138 - COMMON_AUDIT_DATA_INIT(&ad, NET); 4139 - 
ad.selinux_audit_data = &sad; 4138 + ad.type = LSM_AUDIT_DATA_NET; 4140 4139 ad.u.net = &net; 4141 4140 ad.u.net->netif = skb->skb_iif; 4142 4141 ad.u.net->family = family; ··· 4164 4167 u16 family = sk->sk_family; 4165 4168 u32 sk_sid = sksec->sid; 4166 4169 struct common_audit_data ad; 4167 - struct selinux_audit_data sad = {0,}; 4168 4170 struct lsm_network_audit net = {0,}; 4169 4171 char *addrp; 4170 4172 u8 secmark_active; ··· 4188 4192 if (!secmark_active && !peerlbl_active) 4189 4193 return 0; 4190 4194 4191 - COMMON_AUDIT_DATA_INIT(&ad, NET); 4192 - ad.selinux_audit_data = &sad; 4195 + ad.type = LSM_AUDIT_DATA_NET; 4193 4196 ad.u.net = &net; 4194 4197 ad.u.net->netif = skb->skb_iif; 4195 4198 ad.u.net->family = family; ··· 4526 4531 char *addrp; 4527 4532 u32 peer_sid; 4528 4533 struct common_audit_data ad; 4529 - struct selinux_audit_data sad = {0,}; 4530 4534 struct lsm_network_audit net = {0,}; 4531 4535 u8 secmark_active; 4532 4536 u8 netlbl_active; ··· 4543 4549 if (selinux_skb_peerlbl_sid(skb, family, &peer_sid) != 0) 4544 4550 return NF_DROP; 4545 4551 4546 - COMMON_AUDIT_DATA_INIT(&ad, NET); 4547 - ad.selinux_audit_data = &sad; 4552 + ad.type = LSM_AUDIT_DATA_NET; 4548 4553 ad.u.net = &net; 4549 4554 ad.u.net->netif = ifindex; 4550 4555 ad.u.net->family = family; ··· 4633 4640 struct sock *sk = skb->sk; 4634 4641 struct sk_security_struct *sksec; 4635 4642 struct common_audit_data ad; 4636 - struct selinux_audit_data sad = {0,}; 4637 4643 struct lsm_network_audit net = {0,}; 4638 4644 char *addrp; 4639 4645 u8 proto; ··· 4641 4649 return NF_ACCEPT; 4642 4650 sksec = sk->sk_security; 4643 4651 4644 - COMMON_AUDIT_DATA_INIT(&ad, NET); 4645 - ad.selinux_audit_data = &sad; 4652 + ad.type = LSM_AUDIT_DATA_NET; 4646 4653 ad.u.net = &net; 4647 4654 ad.u.net->netif = ifindex; 4648 4655 ad.u.net->family = family; ··· 4666 4675 u32 peer_sid; 4667 4676 struct sock *sk; 4668 4677 struct common_audit_data ad; 4669 - struct selinux_audit_data sad = {0,}; 4670 
4678 + struct lsm_network_audit net = {0,};
4671 4679 char *addrp;
4672 4680 u8 secmark_active;
···
4712 4722 secmark_perm = PACKET__SEND;
4713 4723 }
4714 4724 
4715 - COMMON_AUDIT_DATA_INIT(&ad, NET);
4716 - ad.selinux_audit_data = &sad;
4725 + ad.type = LSM_AUDIT_DATA_NET;
4717 4726 ad.u.net = &net;
4718 4727 ad.u.net->netif = ifindex;
4719 4728 ad.u.net->family = family;
···
4830 4841 {
4831 4842 struct ipc_security_struct *isec;
4832 4843 struct common_audit_data ad;
4833 - struct selinux_audit_data sad = {0,};
4834 4844 u32 sid = current_sid();
4835 4845 
4836 4846 isec = ipc_perms->security;
4837 4847 
4838 - COMMON_AUDIT_DATA_INIT(&ad, IPC);
4839 - ad.selinux_audit_data = &sad;
4848 + ad.type = LSM_AUDIT_DATA_IPC;
4840 4849 ad.u.ipc_id = ipc_perms->key;
4841 4850 
4842 4851 return avc_has_perm(sid, isec->sid, isec->sclass, perms, &ad);
···
4855 4868 {
4856 4869 struct ipc_security_struct *isec;
4857 4870 struct common_audit_data ad;
4858 - struct selinux_audit_data sad = {0,};
4859 4871 u32 sid = current_sid();
4860 4872 int rc;
4861 4873 
···
4864 4878 
4865 4879 isec = msq->q_perm.security;
4866 4880 
4867 - COMMON_AUDIT_DATA_INIT(&ad, IPC);
4868 - ad.selinux_audit_data = &sad;
4881 + ad.type = LSM_AUDIT_DATA_IPC;
4869 4882 ad.u.ipc_id = msq->q_perm.key;
4870 4883 
4871 4884 rc = avc_has_perm(sid, isec->sid, SECCLASS_MSGQ,
···
4885 4900 {
4886 4901 struct ipc_security_struct *isec;
4887 4902 struct common_audit_data ad;
4888 - struct selinux_audit_data sad = {0,};
4889 4903 u32 sid = current_sid();
4890 4904 
4891 4905 isec = msq->q_perm.security;
4892 4906 
4893 - COMMON_AUDIT_DATA_INIT(&ad, IPC);
4894 - ad.selinux_audit_data = &sad;
4907 + ad.type = LSM_AUDIT_DATA_IPC;
4895 4908 ad.u.ipc_id = msq->q_perm.key;
4896 4909 
4897 4910 return avc_has_perm(sid, isec->sid, SECCLASS_MSGQ,
···
4929 4946 struct ipc_security_struct *isec;
4930 4947 struct msg_security_struct *msec;
4931 4948 struct common_audit_data ad;
4932 - struct selinux_audit_data sad = {0,};
4933 4949 u32 sid = current_sid();
4934 4950 int rc;
4935 4951 
···
4949 4967 return rc;
4950 4968 }
4951 4969 
4952 - COMMON_AUDIT_DATA_INIT(&ad, IPC);
4953 - ad.selinux_audit_data = &sad;
4970 + ad.type = LSM_AUDIT_DATA_IPC;
4954 4971 ad.u.ipc_id = msq->q_perm.key;
4955 4972 
4956 4973 /* Can this process write to the queue? */
···
4974 4993 struct ipc_security_struct *isec;
4975 4994 struct msg_security_struct *msec;
4976 4995 struct common_audit_data ad;
4977 - struct selinux_audit_data sad = {0,};
4978 4996 u32 sid = task_sid(target);
4979 4997 int rc;
4980 4998 
4981 4999 isec = msq->q_perm.security;
4982 5000 msec = msg->security;
4983 5001 
4984 - COMMON_AUDIT_DATA_INIT(&ad, IPC);
4985 - ad.selinux_audit_data = &sad;
5002 + ad.type = LSM_AUDIT_DATA_IPC;
4986 5003 ad.u.ipc_id = msq->q_perm.key;
4987 5004 
4988 5005 rc = avc_has_perm(sid, isec->sid,
···
4996 5017 {
4997 5018 struct ipc_security_struct *isec;
4998 5019 struct common_audit_data ad;
4999 - struct selinux_audit_data sad = {0,};
5000 5020 u32 sid = current_sid();
5001 5021 int rc;
5002 5022 
···
5005 5027 
5006 5028 isec = shp->shm_perm.security;
5007 5029 
5008 - COMMON_AUDIT_DATA_INIT(&ad, IPC);
5009 - ad.selinux_audit_data = &sad;
5030 + ad.type = LSM_AUDIT_DATA_IPC;
5010 5031 ad.u.ipc_id = shp->shm_perm.key;
5011 5032 
5012 5033 rc = avc_has_perm(sid, isec->sid, SECCLASS_SHM,
···
5026 5049 {
5027 5050 struct ipc_security_struct *isec;
5028 5051 struct common_audit_data ad;
5029 - struct selinux_audit_data sad = {0,};
5030 5052 u32 sid = current_sid();
5031 5053 
5032 5054 isec = shp->shm_perm.security;
5033 5055 
5034 - COMMON_AUDIT_DATA_INIT(&ad, IPC);
5035 - ad.selinux_audit_data = &sad;
5056 + ad.type = LSM_AUDIT_DATA_IPC;
5036 5057 ad.u.ipc_id = shp->shm_perm.key;
5037 5058 
5038 5059 return avc_has_perm(sid, isec->sid, SECCLASS_SHM,
···
5088 5113 {
5089 5114 struct ipc_security_struct *isec;
5090 5115 struct common_audit_data ad;
5091 - struct selinux_audit_data sad = {0,};
5092 5116 u32 sid = current_sid();
5093 5117 int rc;
5094 5118 
···
5097 5123 
5098 5124 isec = sma->sem_perm.security;
5099 5125 
5100 - COMMON_AUDIT_DATA_INIT(&ad, IPC);
5101 - ad.selinux_audit_data = &sad;
5126 + ad.type = LSM_AUDIT_DATA_IPC;
5102 5127 ad.u.ipc_id = sma->sem_perm.key;
5103 5128 
5104 5129 rc = avc_has_perm(sid, isec->sid, SECCLASS_SEM,
···
5118 5145 {
5119 5146 struct ipc_security_struct *isec;
5120 5147 struct common_audit_data ad;
5121 - struct selinux_audit_data sad = {0,};
5122 5148 u32 sid = current_sid();
5123 5149 
5124 5150 isec = sma->sem_perm.security;
5125 5151 
5126 - COMMON_AUDIT_DATA_INIT(&ad, IPC);
5127 - ad.selinux_audit_data = &sad;
5152 + ad.type = LSM_AUDIT_DATA_IPC;
5128 5153 ad.u.ipc_id = sma->sem_perm.key;
5129 5154 
5130 5155 return avc_has_perm(sid, isec->sid, SECCLASS_SEM,
···
5302 5331 }
5303 5332 error = security_context_to_sid(value, size, &sid);
5304 5333 if (error == -EINVAL && !strcmp(name, "fscreate")) {
5305 - if (!capable(CAP_MAC_ADMIN))
5334 + if (!capable(CAP_MAC_ADMIN)) {
5335 + struct audit_buffer *ab;
5336 + size_t audit_size;
5337 + 
5338 + /* We strip a nul only if it is at the end, otherwise the
5339 + * context contains a nul and we should audit that */
5340 + if (str[size - 1] == '\0')
5341 + audit_size = size - 1;
5342 + else
5343 + audit_size = size;
5344 + ab = audit_log_start(current->audit_context, GFP_ATOMIC, AUDIT_SELINUX_ERR);
5345 + audit_log_format(ab, "op=fscreate invalid_context=");
5346 + audit_log_n_untrustedstring(ab, value, audit_size);
5347 + audit_log_end(ab);
5348 + 
5306 5349 return error;
5350 + }
5307 5351 error = security_context_to_sid_force(value, size,
5308 5352 &sid);
5309 5353 }
···
5578 5592 .file_send_sigiotask = selinux_file_send_sigiotask,
5579 5593 .file_receive = selinux_file_receive,
5580 5594 
5581 - .dentry_open = selinux_dentry_open,
5595 + .file_open = selinux_file_open,
5582 5596 
5583 5597 .task_create = selinux_task_create,
5584 5598 .cred_alloc_blank = selinux_cred_alloc_blank,
+77 -23
security/selinux/include/avc.h
···
49 49 /*
50 50 * We only need this data after we have decided to send an audit message.
51 51 */
52 - struct selinux_late_audit_data {
52 + struct selinux_audit_data {
53 53 u32 ssid;
54 54 u32 tsid;
55 55 u16 tclass;
···
60 60 };
61 61 
62 62 /*
63 - * We collect this at the beginning or during an selinux security operation
64 - */
65 - struct selinux_audit_data {
66 - /*
67 - * auditdeny is a bit tricky and unintuitive. See the
68 - * comments in avc.c for it's meaning and usage.
69 - */
70 - u32 auditdeny;
71 - struct selinux_late_audit_data *slad;
72 - };
73 - 
74 - /*
75 63 * AVC operations
76 64 */
77 65 
78 66 void __init avc_init(void);
79 67 
80 - int avc_audit(u32 ssid, u32 tsid,
81 - u16 tclass, u32 requested,
82 - struct av_decision *avd,
83 - int result,
84 - struct common_audit_data *a, unsigned flags);
68 + static inline u32 avc_audit_required(u32 requested,
69 + struct av_decision *avd,
70 + int result,
71 + u32 auditdeny,
72 + u32 *deniedp)
73 + {
74 + u32 denied, audited;
75 + denied = requested & ~avd->allowed;
76 + if (unlikely(denied)) {
77 + audited = denied & avd->auditdeny;
78 + /*
79 + * auditdeny is TRICKY! Setting a bit in
80 + * this field means that ANY denials should NOT be audited if
81 + * the policy contains an explicit dontaudit rule for that
82 + * permission. Take notice that this is unrelated to the
83 + * actual permissions that were denied. As an example lets
84 + * assume:
85 + *
86 + * denied == READ
87 + * avd.auditdeny & ACCESS == 0 (not set means explicit rule)
88 + * auditdeny & ACCESS == 1
89 + *
90 + * We will NOT audit the denial even though the denied
91 + * permission was READ and the auditdeny checks were for
92 + * ACCESS
93 + */
94 + if (auditdeny && !(auditdeny & avd->auditdeny))
95 + audited = 0;
96 + } else if (result)
97 + audited = denied = requested;
98 + else
99 + audited = requested & avd->auditallow;
100 + *deniedp = denied;
101 + return audited;
102 + }
103 + 
104 + int slow_avc_audit(u32 ssid, u32 tsid, u16 tclass,
105 + u32 requested, u32 audited, u32 denied,
106 + struct common_audit_data *a,
107 + unsigned flags);
108 + 
109 + /**
110 + * avc_audit - Audit the granting or denial of permissions.
111 + * @ssid: source security identifier
112 + * @tsid: target security identifier
113 + * @tclass: target security class
114 + * @requested: requested permissions
115 + * @avd: access vector decisions
116 + * @result: result from avc_has_perm_noaudit
117 + * @a: auxiliary audit data
118 + * @flags: VFS walk flags
119 + *
120 + * Audit the granting or denial of permissions in accordance
121 + * with the policy. This function is typically called by
122 + * avc_has_perm() after a permission check, but can also be
123 + * called directly by callers who use avc_has_perm_noaudit()
124 + * in order to separate the permission check from the auditing.
125 + * For example, this separation is useful when the permission check must
126 + * be performed under a lock, to allow the lock to be released
127 + * before calling the auditing code.
128 + */
129 + static inline int avc_audit(u32 ssid, u32 tsid,
130 + u16 tclass, u32 requested,
131 + struct av_decision *avd,
132 + int result,
133 + struct common_audit_data *a, unsigned flags)
134 + {
135 + u32 audited, denied;
136 + audited = avc_audit_required(requested, avd, result, 0, &denied);
137 + if (likely(!audited))
138 + return 0;
139 + return slow_avc_audit(ssid, tsid, tclass,
140 + requested, audited, denied,
141 + a, flags);
142 + }
85 143 
86 144 #define AVC_STRICT 1 /* Ignore permissive mode. */
87 145 int avc_has_perm_noaudit(u32 ssid, u32 tsid,
···
170 112 #define AVC_CALLBACK_AUDITDENY_ENABLE 64
171 113 #define AVC_CALLBACK_AUDITDENY_DISABLE 128
172 114 
173 - int avc_add_callback(int (*callback)(u32 event, u32 ssid, u32 tsid,
174 - u16 tclass, u32 perms,
175 - u32 *out_retained),
176 - u32 events, u32 ssid, u32 tsid,
177 - u16 tclass, u32 perms);
115 + int avc_add_callback(int (*callback)(u32 event), u32 events);
178 116 
179 117 /* Exported to selinuxfs */
180 118 int avc_get_hash_stats(char *page);
+3 -1
security/selinux/include/security.h
···
31 31 #define POLICYDB_VERSION_BOUNDARY 24
32 32 #define POLICYDB_VERSION_FILENAME_TRANS 25
33 33 #define POLICYDB_VERSION_ROLETRANS 26
34 + #define POLICYDB_VERSION_NEW_OBJECT_DEFAULTS 27
35 + #define POLICYDB_VERSION_DEFAULT_TYPE 28
34 36 
35 37 /* Range of policy versions we understand*/
36 38 #define POLICYDB_VERSION_MIN POLICYDB_VERSION_BASE
37 39 #ifdef CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX
38 40 #define POLICYDB_VERSION_MAX CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX_VALUE
39 41 #else
40 - #define POLICYDB_VERSION_MAX POLICYDB_VERSION_ROLETRANS
42 + #define POLICYDB_VERSION_MAX POLICYDB_VERSION_DEFAULT_TYPE
41 43 #endif
42 44 
43 45 /* Mask for just the mount related flags */
+2 -4
security/selinux/netif.c
···
252 252 spin_unlock_bh(&sel_netif_lock);
253 253 }
254 254 
255 - static int sel_netif_avc_callback(u32 event, u32 ssid, u32 tsid,
256 - u16 class, u32 perms, u32 *retained)
255 + static int sel_netif_avc_callback(u32 event)
257 256 {
258 257 if (event == AVC_CALLBACK_RESET) {
259 258 sel_netif_flush();
···
291 292 
292 293 register_netdevice_notifier(&sel_netif_netdev_notifier);
293 294 
294 - err = avc_add_callback(sel_netif_avc_callback, AVC_CALLBACK_RESET,
295 - SECSID_NULL, SECSID_NULL, SECCLASS_NULL, 0);
295 + err = avc_add_callback(sel_netif_avc_callback, AVC_CALLBACK_RESET);
296 296 if (err)
297 297 panic("avc_add_callback() failed, error %d\n", err);
298 298 
+2 -4
security/selinux/netnode.c
···
297 297 spin_unlock_bh(&sel_netnode_lock);
298 298 }
299 299 
300 - static int sel_netnode_avc_callback(u32 event, u32 ssid, u32 tsid,
301 - u16 class, u32 perms, u32 *retained)
300 + static int sel_netnode_avc_callback(u32 event)
302 301 {
303 302 if (event == AVC_CALLBACK_RESET) {
304 303 sel_netnode_flush();
···
319 320 sel_netnode_hash[iter].size = 0;
320 321 }
321 322 
322 - ret = avc_add_callback(sel_netnode_avc_callback, AVC_CALLBACK_RESET,
323 - SECSID_NULL, SECSID_NULL, SECCLASS_NULL, 0);
323 + ret = avc_add_callback(sel_netnode_avc_callback, AVC_CALLBACK_RESET);
324 324 if (ret != 0)
325 325 panic("avc_add_callback() failed, error %d\n", ret);
326 326 
+2 -4
security/selinux/netport.c
···
234 234 spin_unlock_bh(&sel_netport_lock);
235 235 }
236 236 
237 - static int sel_netport_avc_callback(u32 event, u32 ssid, u32 tsid,
238 - u16 class, u32 perms, u32 *retained)
237 + static int sel_netport_avc_callback(u32 event)
239 238 {
240 239 if (event == AVC_CALLBACK_RESET) {
241 240 sel_netport_flush();
···
256 257 sel_netport_hash[iter].size = 0;
257 258 }
258 259 
259 - ret = avc_add_callback(sel_netport_avc_callback, AVC_CALLBACK_RESET,
260 - SECSID_NULL, SECSID_NULL, SECCLASS_NULL, 0);
260 + ret = avc_add_callback(sel_netport_avc_callback, AVC_CALLBACK_RESET);
261 261 if (ret != 0)
262 262 panic("avc_add_callback() failed, error %d\n", ret);
263 263 
+4 -7
security/selinux/selinuxfs.c
···
496 496 .read = sel_read_policy,
497 497 .mmap = sel_mmap_policy,
498 498 .release = sel_release_policy,
499 + .llseek = generic_file_llseek,
499 500 };
500 501 
501 502 static ssize_t sel_write_load(struct file *file, const char __user *buf,
···
1233 1232 kfree(bool_pending_names[i]);
1234 1233 kfree(bool_pending_names);
1235 1234 kfree(bool_pending_values);
1235 + bool_num = 0;
1236 1236 bool_pending_names = NULL;
1237 1237 bool_pending_values = NULL;
1238 1238 
···
1534 1532 return 0;
1535 1533 }
1536 1534 
1537 - static inline unsigned int sel_div(unsigned long a, unsigned long b)
1538 - {
1539 - return a / b - (a % b < 0);
1540 - }
1541 - 
1542 1535 static inline unsigned long sel_class_to_ino(u16 class)
1543 1536 {
1544 1537 return (class * (SEL_VEC_MAX + 1)) | SEL_CLASS_INO_OFFSET;
···
1541 1544 
1542 1545 static inline u16 sel_ino_to_class(unsigned long ino)
1543 1546 {
1544 - return sel_div(ino & SEL_INO_MASK, SEL_VEC_MAX + 1);
1547 + return (ino & SEL_INO_MASK) / (SEL_VEC_MAX + 1);
1545 1548 }
1546 1549 
1547 1550 static inline unsigned long sel_perm_to_ino(u16 class, u32 perm)
···
1828 1831 [SEL_REJECT_UNKNOWN] = {"reject_unknown", &sel_handle_unknown_ops, S_IRUGO},
1829 1832 [SEL_DENY_UNKNOWN] = {"deny_unknown", &sel_handle_unknown_ops, S_IRUGO},
1830 1833 [SEL_STATUS] = {"status", &sel_handle_status_ops, S_IRUGO},
1831 - [SEL_POLICY] = {"policy", &sel_policy_ops, S_IRUSR},
1834 + [SEL_POLICY] = {"policy", &sel_policy_ops, S_IRUGO},
1832 1835 /* last one */ {""}
1833 1836 };
1834 1837 ret = simple_fill_super(sb, SELINUX_MAGIC, selinux_files);
+20
security/selinux/ss/context.h
···
74 74 return rc;
75 75 }
76 76 
77 + /*
78 + * Sets both levels in the MLS range of 'dst' to the high level of 'src'.
79 + */
80 + static inline int mls_context_cpy_high(struct context *dst, struct context *src)
81 + {
82 + int rc;
83 + 
84 + dst->range.level[0].sens = src->range.level[1].sens;
85 + rc = ebitmap_cpy(&dst->range.level[0].cat, &src->range.level[1].cat);
86 + if (rc)
87 + goto out;
88 + 
89 + dst->range.level[1].sens = src->range.level[1].sens;
90 + rc = ebitmap_cpy(&dst->range.level[1].cat, &src->range.level[1].cat);
91 + if (rc)
92 + ebitmap_destroy(&dst->range.level[0].cat);
93 + out:
94 + return rc;
95 + }
96 + 
77 97 static inline int mls_context_cmp(struct context *c1, struct context *c2)
78 98 {
79 99 return ((c1->range.level[0].sens == c2->range.level[0].sens) &&
+24
security/selinux/ss/mls.c
···
517 517 {
518 518 struct range_trans rtr;
519 519 struct mls_range *r;
520 + struct class_datum *cladatum;
521 + int default_range = 0;
520 522 
521 523 if (!policydb.mls_enabled)
522 524 return 0;
···
532 530 r = hashtab_search(policydb.range_tr, &rtr);
533 531 if (r)
534 532 return mls_range_set(newcontext, r);
533 + 
534 + if (tclass && tclass <= policydb.p_classes.nprim) {
535 + cladatum = policydb.class_val_to_struct[tclass - 1];
536 + if (cladatum)
537 + default_range = cladatum->default_range;
538 + }
539 + 
540 + switch (default_range) {
541 + case DEFAULT_SOURCE_LOW:
542 + return mls_context_cpy_low(newcontext, scontext);
543 + case DEFAULT_SOURCE_HIGH:
544 + return mls_context_cpy_high(newcontext, scontext);
545 + case DEFAULT_SOURCE_LOW_HIGH:
546 + return mls_context_cpy(newcontext, scontext);
547 + case DEFAULT_TARGET_LOW:
548 + return mls_context_cpy_low(newcontext, tcontext);
549 + case DEFAULT_TARGET_HIGH:
550 + return mls_context_cpy_high(newcontext, tcontext);
551 + case DEFAULT_TARGET_LOW_HIGH:
552 + return mls_context_cpy(newcontext, tcontext);
553 + }
554 + 
535 555 /* Fallthrough */
536 556 case AVTAB_CHANGE:
537 557 if ((tclass == policydb.process_class) || (sock == true))
+44
security/selinux/ss/policydb.c
···
133 133 .sym_num = SYM_NUM,
134 134 .ocon_num = OCON_NUM,
135 135 },
136 + {
137 + .version = POLICYDB_VERSION_NEW_OBJECT_DEFAULTS,
138 + .sym_num = SYM_NUM,
139 + .ocon_num = OCON_NUM,
140 + },
141 + {
142 + .version = POLICYDB_VERSION_DEFAULT_TYPE,
143 + .sym_num = SYM_NUM,
144 + .ocon_num = OCON_NUM,
145 + },
136 146 };
137 147 
138 148 static struct policydb_compat_info *policydb_lookup_compat(int version)
···
1314 1304 rc = read_cons_helper(&cladatum->validatetrans, ncons, 1, fp);
1315 1305 if (rc)
1316 1306 goto bad;
1307 + }
1308 + 
1309 + if (p->policyvers >= POLICYDB_VERSION_NEW_OBJECT_DEFAULTS) {
1310 + rc = next_entry(buf, fp, sizeof(u32) * 3);
1311 + if (rc)
1312 + goto bad;
1313 + 
1314 + cladatum->default_user = le32_to_cpu(buf[0]);
1315 + cladatum->default_role = le32_to_cpu(buf[1]);
1316 + cladatum->default_range = le32_to_cpu(buf[2]);
1317 + }
1318 + 
1319 + if (p->policyvers >= POLICYDB_VERSION_DEFAULT_TYPE) {
1320 + rc = next_entry(buf, fp, sizeof(u32) * 1);
1321 + if (rc)
1322 + goto bad;
1323 + cladatum->default_type = le32_to_cpu(buf[0]);
1317 1324 }
1318 1325 
1319 1326 rc = hashtab_insert(h, key, cladatum);
···
2858 2831 rc = write_cons_helper(p, cladatum->validatetrans, fp);
2859 2832 if (rc)
2860 2833 return rc;
2834 + 
2835 + if (p->policyvers >= POLICYDB_VERSION_NEW_OBJECT_DEFAULTS) {
2836 + buf[0] = cpu_to_le32(cladatum->default_user);
2837 + buf[1] = cpu_to_le32(cladatum->default_role);
2838 + buf[2] = cpu_to_le32(cladatum->default_range);
2839 + 
2840 + rc = put_entry(buf, sizeof(uint32_t), 3, fp);
2841 + if (rc)
2842 + return rc;
2843 + }
2844 + 
2845 + if (p->policyvers >= POLICYDB_VERSION_DEFAULT_TYPE) {
2846 + buf[0] = cpu_to_le32(cladatum->default_type);
2847 + rc = put_entry(buf, sizeof(uint32_t), 1, fp);
2848 + if (rc)
2849 + return rc;
2850 + }
2861 2851 
2862 2852 return 0;
2863 2853 }
+14
security/selinux/ss/policydb.h
···
60 60 struct symtab permissions; /* class-specific permission symbol table */
61 61 struct constraint_node *constraints; /* constraints on class permissions */
62 62 struct constraint_node *validatetrans; /* special transition rules */
63 + /* Options how a new object user, role, and type should be decided */
64 + #define DEFAULT_SOURCE 1
65 + #define DEFAULT_TARGET 2
66 + char default_user;
67 + char default_role;
68 + char default_type;
69 + /* Options how a new object range should be decided */
70 + #define DEFAULT_SOURCE_LOW 1
71 + #define DEFAULT_SOURCE_HIGH 2
72 + #define DEFAULT_SOURCE_LOW_HIGH 3
73 + #define DEFAULT_TARGET_LOW 4
74 + #define DEFAULT_TARGET_HIGH 5
75 + #define DEFAULT_TARGET_LOW_HIGH 6
76 + char default_range;
63 77 };
64 78 
65 79 /* Role attributes */
+40 -16
security/selinux/ss/services.c
···
1018 1018 
1019 1019 if (context->len) {
1020 1020 *scontext_len = context->len;
1021 - *scontext = kstrdup(context->str, GFP_ATOMIC);
1022 - if (!(*scontext))
1023 - return -ENOMEM;
1021 + if (scontext) {
1022 + *scontext = kstrdup(context->str, GFP_ATOMIC);
1023 + if (!(*scontext))
1024 + return -ENOMEM;
1025 + }
1024 1026 return 0;
1025 1027 }
1026 1028 
···
1391 1389 u32 *out_sid,
1392 1390 bool kern)
1393 1391 {
1392 + struct class_datum *cladatum = NULL;
1394 1393 struct context *scontext = NULL, *tcontext = NULL, newcontext;
1395 1394 struct role_trans *roletr = NULL;
1396 1395 struct avtab_key avkey;
···
1440 1437 goto out_unlock;
1441 1438 }
1442 1439 
1440 + if (tclass && tclass <= policydb.p_classes.nprim)
1441 + cladatum = policydb.class_val_to_struct[tclass - 1];
1442 + 
1443 1443 /* Set the user identity. */
1444 1444 switch (specified) {
1445 1445 case AVTAB_TRANSITION:
1446 1446 case AVTAB_CHANGE:
1447 - /* Use the process user identity. */
1448 - newcontext.user = scontext->user;
1447 + if (cladatum && cladatum->default_user == DEFAULT_TARGET) {
1448 + newcontext.user = tcontext->user;
1449 + } else {
1450 + /* notice this gets both DEFAULT_SOURCE and unset */
1451 + /* Use the process user identity. */
1452 + newcontext.user = scontext->user;
1453 + }
1449 1454 break;
1450 1455 case AVTAB_MEMBER:
1451 1456 /* Use the related object owner. */
···
1461 1450 break;
1462 1451 }
1463 1452 
1464 - /* Set the role and type to default values. */
1465 - if ((tclass == policydb.process_class) || (sock == true)) {
1466 - /* Use the current role and type of process. */
1453 + /* Set the role to default values. */
1454 + if (cladatum && cladatum->default_role == DEFAULT_SOURCE) {
1467 1455 newcontext.role = scontext->role;
1468 - newcontext.type = scontext->type;
1456 + } else if (cladatum && cladatum->default_role == DEFAULT_TARGET) {
1457 + newcontext.role = tcontext->role;
1469 1458 } else {
1470 - /* Use the well-defined object role. */
1471 - newcontext.role = OBJECT_R_VAL;
1472 - /* Use the type of the related object. */
1459 + if ((tclass == policydb.process_class) || (sock == true))
1460 + newcontext.role = scontext->role;
1461 + else
1462 + newcontext.role = OBJECT_R_VAL;
1463 + }
1464 + 
1465 + /* Set the type to default values. */
1466 + if (cladatum && cladatum->default_type == DEFAULT_SOURCE) {
1467 + newcontext.type = scontext->type;
1468 + } else if (cladatum && cladatum->default_type == DEFAULT_TARGET) {
1473 1469 newcontext.type = tcontext->type;
1470 + } else {
1471 + if ((tclass == policydb.process_class) || (sock == true)) {
1472 + /* Use the type of process. */
1473 + newcontext.type = scontext->type;
1474 + } else {
1475 + /* Use the type of the related object. */
1476 + newcontext.type = tcontext->type;
1477 + }
1474 1478 }
1475 1479 
1476 1480 /* Look for a type transition/member/change rule. */
···
3044 3018 
3045 3019 static int (*aurule_callback)(void) = audit_update_lsm_rules;
3046 3020 
3047 - static int aurule_avc_callback(u32 event, u32 ssid, u32 tsid,
3048 - u16 class, u32 perms, u32 *retained)
3021 + static int aurule_avc_callback(u32 event)
3049 3022 {
3050 3023 int err = 0;
3051 3024 
···
3057 3032 {
3058 3033 int err;
3059 3034 
3060 - err = avc_add_callback(aurule_avc_callback, AVC_CALLBACK_RESET,
3061 - SECSID_NULL, SECSID_NULL, SECCLASS_NULL, 0);
3035 + err = avc_add_callback(aurule_avc_callback, AVC_CALLBACK_RESET);
3062 3036 if (err)
3063 3037 panic("avc_add_callback() failed, error %d\n", err);
3064 3038 
+22 -37
security/smack/smack.h
···
23 23 #include <linux/lsm_audit.h>
24 24 
25 25 /*
26 + * Smack labels were limited to 23 characters for a long time.
27 + */
28 + #define SMK_LABELLEN 24
29 + #define SMK_LONGLABEL 256
30 + 
31 + /*
32 + * Maximum number of bytes for the levels in a CIPSO IP option.
26 33 * Why 23? CIPSO is constrained to 30, so a 32 byte buffer is
27 34 * bigger than can be used, and 24 is the next lower multiple
28 35 * of 8, and there are too many issues if there isn't space set
29 36 * aside for the terminating null byte.
30 37 */
31 - #define SMK_MAXLEN 23
32 - #define SMK_LABELLEN (SMK_MAXLEN+1)
38 + #define SMK_CIPSOLEN 24
33 39 
34 40 struct superblock_smack {
35 41 char *smk_root;
···
72 66 
73 67 #define SMK_INODE_INSTANT 0x01 /* inode is instantiated */
74 68 #define SMK_INODE_TRANSMUTE 0x02 /* directory is transmuting */
69 + #define SMK_INODE_CHANGED 0x04 /* smack was transmuted */
75 70 
76 71 /*
77 72 * A label access rule.
···
82 75 char *smk_subject;
83 76 char *smk_object;
84 77 int smk_access;
85 - };
86 - 
87 - /*
88 - * An entry in the table mapping smack values to
89 - * CIPSO level/category-set values.
90 - */
91 - struct smack_cipso {
92 - int smk_level;
93 - char smk_catset[SMK_LABELLEN];
94 78 };
95 79 
96 80 /*
···
111 113 * interfaces don't. The secid should go away when all of
112 114 * these components have been repaired.
113 115 *
114 - * If there is a cipso value associated with the label it
115 - * gets stored here, too. This will most likely be rare as
116 - * the cipso direct mapping in used internally.
116 + * The cipso value associated with the label gets stored here, too.
117 117 *
118 118 * Keep the access rules for this subject label here so that
119 119 * the entire set of rules does not need to be examined every
120 120 * time.
121 121 */
122 122 struct smack_known {
123 - struct list_head list;
124 - char smk_known[SMK_LABELLEN];
125 - u32 smk_secid;
126 - struct smack_cipso *smk_cipso;
127 - spinlock_t smk_cipsolock; /* for changing cipso map */
128 - struct list_head smk_rules; /* access rules */
129 - struct mutex smk_rules_lock; /* lock for the rules */
123 + struct list_head list;
124 + char *smk_known;
125 + u32 smk_secid;
126 + struct netlbl_lsm_secattr smk_netlabel; /* on wire labels */
127 + struct list_head smk_rules; /* access rules */
128 + struct mutex smk_rules_lock; /* lock for rules */
130 129 };
131 130 
132 131 /*
···
160 165 #define SMACK_CIPSO_DOI_DEFAULT 3 /* Historical */
161 166 #define SMACK_CIPSO_DOI_INVALID -1 /* Not a DOI */
162 167 #define SMACK_CIPSO_DIRECT_DEFAULT 250 /* Arbitrary */
168 + #define SMACK_CIPSO_MAPPED_DEFAULT 251 /* Also arbitrary */
163 169 #define SMACK_CIPSO_MAXCATVAL 63 /* Bigger gets harder */
164 170 #define SMACK_CIPSO_MAXLEVEL 255 /* CIPSO 2.2 standard */
165 171 #define SMACK_CIPSO_MAXCATNUM 239 /* CIPSO 2.2 standard */
···
211 215 int smk_access_entry(char *, char *, struct list_head *);
212 216 int smk_access(char *, char *, int, struct smk_audit_info *);
213 217 int smk_curacc(char *, u32, struct smk_audit_info *);
214 - int smack_to_cipso(const char *, struct smack_cipso *);
215 - char *smack_from_cipso(u32, char *);
216 218 char *smack_from_secid(const u32);
217 - void smk_parse_smack(const char *string, int len, char *smack);
219 + char *smk_parse_smack(const char *string, int len);
220 + int smk_netlbl_mls(int, char *, struct netlbl_lsm_secattr *, int);
218 221 char *smk_import(const char *, int);
219 222 struct smack_known *smk_import_entry(const char *, int);
220 223 struct smack_known *smk_find_entry(const char *);
···
223 228 * Shared data.
224 229 */
225 230 extern int smack_cipso_direct;
231 + extern int smack_cipso_mapped;
226 232 extern char *smack_net_ambient;
227 233 extern char *smack_onlycap;
228 234 extern const char *smack_cipso_option;
···
235 239 extern struct smack_known smack_known_star;
236 240 extern struct smack_known smack_known_web;
237 241 
242 + extern struct mutex smack_known_lock;
238 243 extern struct list_head smack_known_list;
239 244 extern struct list_head smk_netlbladdr_list;
240 245 
241 246 extern struct security_operations smack_ops;
242 - 
243 - /*
244 - * Stricly for CIPSO level manipulation.
245 - * Set the category bit number in a smack label sized buffer.
246 - */
247 - static inline void smack_catset_bit(int cat, char *catsetp)
248 - {
249 - if (cat > SMK_LABELLEN * 8)
250 - return;
251 - 
252 - catsetp[(cat - 1) / 8] |= 0x80 >> ((cat - 1) % 8);
253 - }
254 247 
255 248 /*
256 249 * Is the directory transmuting?
···
304 319 static inline void smk_ad_init(struct smk_audit_info *a, const char *func,
305 320 char type)
306 321 {
307 - memset(a, 0, sizeof(*a));
322 + memset(&a->sad, 0, sizeof(a->sad));
308 323 a->a.type = type;
309 324 a->a.smack_audit_data = &a->sad;
310 325 a->a.smack_audit_data->function = func;
+115 -116
security/smack/smack_access.c
···
19 19 struct smack_known smack_known_huh = {
20 20 .smk_known = "?",
21 21 .smk_secid = 2,
22 - .smk_cipso = NULL,
23 22 };
24 23 
25 24 struct smack_known smack_known_hat = {
26 25 .smk_known = "^",
27 26 .smk_secid = 3,
28 - .smk_cipso = NULL,
29 27 };
30 28 
31 29 struct smack_known smack_known_star = {
32 30 .smk_known = "*",
33 31 .smk_secid = 4,
34 - .smk_cipso = NULL,
35 32 };
36 33 
37 34 struct smack_known smack_known_floor = {
38 35 .smk_known = "_",
39 36 .smk_secid = 5,
40 - .smk_cipso = NULL,
41 37 };
42 38 
43 39 struct smack_known smack_known_invalid = {
44 40 .smk_known = "",
45 41 .smk_secid = 6,
46 - .smk_cipso = NULL,
47 42 };
48 43 
49 44 struct smack_known smack_known_web = {
50 45 .smk_known = "@",
51 46 .smk_secid = 7,
52 - .smk_cipso = NULL,
53 47 };
54 48 
55 49 LIST_HEAD(smack_known_list);
···
325 331 }
326 332 #endif
327 333 
328 - static DEFINE_MUTEX(smack_known_lock);
334 + DEFINE_MUTEX(smack_known_lock);
329 335 
330 336 /**
331 337 * smk_find_entry - find a label on the list, return the list entry
···
339 345 struct smack_known *skp;
340 346 
341 347 list_for_each_entry_rcu(skp, &smack_known_list, list) {
342 - if (strncmp(skp->smk_known, string, SMK_MAXLEN) == 0)
348 + if (strcmp(skp->smk_known, string) == 0)
343 349 return skp;
344 350 }
345 351 
···
350 356 * smk_parse_smack - parse smack label from a text string
351 357 * @string: a text string that might contain a Smack label
352 358 * @len: the maximum size, or zero if it is NULL terminated.
353 - * @smack: parsed smack label, or NULL if parse error
359 + *
360 + * Returns a pointer to the clean label, or NULL
354 361 */
355 - void smk_parse_smack(const char *string, int len, char *smack)
362 + char *smk_parse_smack(const char *string, int len)
356 363 {
357 - int found;
364 + char *smack;
358 365 int i;
359 366 
360 - if (len <= 0 || len > SMK_MAXLEN)
361 - len = SMK_MAXLEN;
367 + if (len <= 0)
368 + len = strlen(string) + 1;
362 369 
363 - for (i = 0, found = 0; i < SMK_LABELLEN; i++) {
364 - if (found)
365 - smack[i] = '\0';
366 - else if (i >= len || string[i] > '~' || string[i] <= ' ' ||
367 - string[i] == '/' || string[i] == '"' ||
368 - string[i] == '\\' || string[i] == '\'') {
369 - smack[i] = '\0';
370 - found = 1;
371 - } else
372 - smack[i] = string[i];
370 + /*
371 + * Reserve a leading '-' as an indicator that
372 + * this isn't a label, but an option to interfaces
373 + * including /smack/cipso and /smack/cipso2
374 + */
375 + if (string[0] == '-')
376 + return NULL;
377 + 
378 + for (i = 0; i < len; i++)
379 + if (string[i] > '~' || string[i] <= ' ' || string[i] == '/' ||
380 + string[i] == '"' || string[i] == '\\' || string[i] == '\'')
381 + break;
382 + 
383 + if (i == 0 || i >= SMK_LONGLABEL)
384 + return NULL;
385 + 
386 + smack = kzalloc(i + 1, GFP_KERNEL);
387 + if (smack != NULL) {
388 + strncpy(smack, string, i + 1);
389 + smack[i] = '\0';
373 390 }
391 + return smack;
392 + }
393 + 
394 + /**
395 + * smk_netlbl_mls - convert a catset to netlabel mls categories
396 + * @catset: the Smack categories
397 + * @sap: where to put the netlabel categories
398 + *
399 + * Allocates and fills attr.mls
400 + * Returns 0 on success, error code on failure.
401 + */
402 + int smk_netlbl_mls(int level, char *catset, struct netlbl_lsm_secattr *sap,
403 + int len)
404 + {
405 + unsigned char *cp;
406 + unsigned char m;
407 + int cat;
408 + int rc;
409 + int byte;
410 + 
411 + sap->flags |= NETLBL_SECATTR_MLS_CAT;
412 + sap->attr.mls.lvl = level;
413 + sap->attr.mls.cat = netlbl_secattr_catmap_alloc(GFP_ATOMIC);
414 + sap->attr.mls.cat->startbit = 0;
415 + 
416 + for (cat = 1, cp = catset, byte = 0; byte < len; cp++, byte++)
417 + for (m = 0x80; m != 0; m >>= 1, cat++) {
418 + if ((m & *cp) == 0)
419 + continue;
420 + rc = netlbl_secattr_catmap_setbit(sap->attr.mls.cat,
421 + cat, GFP_ATOMIC);
422 + if (rc < 0) {
423 + netlbl_secattr_catmap_free(sap->attr.mls.cat);
424 + return rc;
425 + }
426 + }
427 + 
428 + return 0;
374 429 }
375 430 
376 431 /**
···
433 390 struct smack_known *smk_import_entry(const char *string, int len)
434 391 {
435 392 struct smack_known *skp;
436 - char smack[SMK_LABELLEN];
393 + char *smack;
394 + int slen;
395 + int rc;
437 396 
438 - smk_parse_smack(string, len, smack);
439 - if (smack[0] == '\0')
397 + smack = smk_parse_smack(string, len);
398 + if (smack == NULL)
440 399 return NULL;
441 400 
442 401 mutex_lock(&smack_known_lock);
443 402 
444 403 skp = smk_find_entry(smack);
404 + if (skp != NULL)
405 + goto freeout;
445 406 
446 - if (skp == NULL) {
447 - skp = kzalloc(sizeof(struct smack_known), GFP_KERNEL);
448 - if (skp != NULL) {
449 - strncpy(skp->smk_known, smack, SMK_MAXLEN);
450 - skp->smk_secid = smack_next_secid++;
451 - skp->smk_cipso = NULL;
452 - INIT_LIST_HEAD(&skp->smk_rules);
453 - spin_lock_init(&skp->smk_cipsolock);
454 - mutex_init(&skp->smk_rules_lock);
455 - /*
456 - * Make sure that the entry is actually
457 - * filled before putting it on the list.
458 - */
459 - list_add_rcu(&skp->list, &smack_known_list);
460 - }
407 + skp = kzalloc(sizeof(*skp), GFP_KERNEL);
408 + if (skp == NULL)
409 + goto freeout;
410 + 
411 + skp->smk_known = smack;
412 + skp->smk_secid = smack_next_secid++;
413 + skp->smk_netlabel.domain = skp->smk_known;
414 + skp->smk_netlabel.flags =
415 + NETLBL_SECATTR_DOMAIN | NETLBL_SECATTR_MLS_LVL;
416 + /*
417 + * If direct labeling works use it.
418 + * Otherwise use mapped labeling.
419 + */
420 + slen = strlen(smack);
421 + if (slen < SMK_CIPSOLEN)
422 + rc = smk_netlbl_mls(smack_cipso_direct, skp->smk_known,
423 + &skp->smk_netlabel, slen);
424 + else
425 + rc = smk_netlbl_mls(smack_cipso_mapped, (char *)&skp->smk_secid,
426 + &skp->smk_netlabel, sizeof(skp->smk_secid));
427 + 
428 + if (rc >= 0) {
429 + INIT_LIST_HEAD(&skp->smk_rules);
430 + mutex_init(&skp->smk_rules_lock);
431 + /*
432 + * Make sure that the entry is actually
433 + * filled before putting it on the list.
434 + */
435 + list_add_rcu(&skp->list, &smack_known_list);
436 + goto unlockout;
461 437 }
462 - 
438 + /*
439 + * smk_netlbl_mls failed.
440 + */
441 + kfree(skp);
442 + skp = NULL;
443 + freeout:
444 + kfree(smack);
445 + unlockout:
463 446 mutex_unlock(&smack_known_lock);
464 447 
465 448 return skp;
···
548 479 */
549 480 u32 smack_to_secid(const char *smack)
550 481 {
551 - struct smack_known *skp;
482 + struct smack_known *skp = smk_find_entry(smack);
552 483 
553 - rcu_read_lock();
554 - list_for_each_entry_rcu(skp, &smack_known_list, list) {
555 - if (strncmp(skp->smk_known, smack, SMK_MAXLEN) == 0) {
556 - rcu_read_unlock();
557 - return skp->smk_secid;
558 - }
559 - }
560 - rcu_read_unlock();
561 - return 0;
562 - }
563 - 
564 - /**
565 - * smack_from_cipso - find the Smack label associated with a CIPSO option
566 - * @level: Bell & LaPadula level from the network
567 - * @cp: Bell & LaPadula categories from the network
568 - *
569 - * This is a simple lookup in the label table.
570 - *
571 - * Return the matching label from the label list or NULL.
572 - */
573 - char *smack_from_cipso(u32 level, char *cp)
574 - {
575 - struct smack_known *kp;
576 - char *final = NULL;
577 - 
578 - rcu_read_lock();
579 - list_for_each_entry(kp, &smack_known_list, list) {
580 - if (kp->smk_cipso == NULL)
581 - continue;
582 - 
583 - spin_lock_bh(&kp->smk_cipsolock);
584 - 
585 - if (kp->smk_cipso->smk_level == level &&
586 - memcmp(kp->smk_cipso->smk_catset, cp, SMK_LABELLEN) == 0)
587 - final = kp->smk_known;
588 - 
589 - spin_unlock_bh(&kp->smk_cipsolock);
590 - 
591 - if (final != NULL)
592 - break;
593 - }
594 - rcu_read_unlock();
595 - 
596 - return final;
597 - }
598 - 
599 - /**
600 - * smack_to_cipso - find the CIPSO option to go with a Smack label
601 - * @smack: a pointer to the smack label in question
602 - * @cp: where to put the result
603 - *
604 - * Returns zero if a value is available, non-zero otherwise.
605 - */
606 - int smack_to_cipso(const char *smack, struct smack_cipso *cp)
607 - {
608 - struct smack_known *kp;
609 - int found = 0;
610 - 
611 - rcu_read_lock();
612 - list_for_each_entry_rcu(kp, &smack_known_list, list) {
613 - if (kp->smk_known == smack ||
614 - strcmp(kp->smk_known, smack) == 0) {
615 - found = 1;
616 - break;
617 - }
618 - }
619 - rcu_read_unlock();
620 - 
621 - if (found == 0 || kp->smk_cipso == NULL)
622 - return -ENOENT;
623 - 
624 - memcpy(cp, kp->smk_cipso, sizeof(struct smack_cipso));
625 - return 0;
484 + if (skp == NULL)
485 + return 0;
486 + return skp->smk_secid;
626 487 }
+86 -157
security/smack/smack_lsm.c
··· 30 30 #include <linux/slab.h> 31 31 #include <linux/mutex.h> 32 32 #include <linux/pipe_fs_i.h> 33 - #include <net/netlabel.h> 34 33 #include <net/cipso_ipv4.h> 35 34 #include <linux/audit.h> 36 35 #include <linux/magic.h> ··· 56 57 static char *smk_fetch(const char *name, struct inode *ip, struct dentry *dp) 57 58 { 58 59 int rc; 59 - char in[SMK_LABELLEN]; 60 + char *buffer; 61 + char *result = NULL; 60 62 61 63 if (ip->i_op->getxattr == NULL) 62 64 return NULL; 63 65 64 - rc = ip->i_op->getxattr(dp, name, in, SMK_LABELLEN); 65 - if (rc < 0) 66 + buffer = kzalloc(SMK_LONGLABEL, GFP_KERNEL); 67 + if (buffer == NULL) 66 68 return NULL; 67 69 68 - return smk_import(in, rc); 70 + rc = ip->i_op->getxattr(dp, name, buffer, SMK_LONGLABEL); 71 + if (rc > 0) 72 + result = smk_import(buffer, rc); 73 + 74 + kfree(buffer); 75 + 76 + return result; 69 77 } 70 78 71 79 /** ··· 85 79 { 86 80 struct inode_smack *isp; 87 81 88 - isp = kzalloc(sizeof(struct inode_smack), GFP_KERNEL); 82 + isp = kzalloc(sizeof(struct inode_smack), GFP_NOFS); 89 83 if (isp == NULL) 90 84 return NULL; 91 85 ··· 562 556 void **value, size_t *len) 563 557 { 564 558 struct smack_known *skp; 559 + struct inode_smack *issp = inode->i_security; 565 560 char *csp = smk_of_current(); 566 561 char *isp = smk_of_inode(inode); 567 562 char *dsp = smk_of_inode(dir); 568 563 int may; 569 564 570 565 if (name) { 571 - *name = kstrdup(XATTR_SMACK_SUFFIX, GFP_KERNEL); 566 + *name = kstrdup(XATTR_SMACK_SUFFIX, GFP_NOFS); 572 567 if (*name == NULL) 573 568 return -ENOMEM; 574 569 } ··· 584 577 * If the access rule allows transmutation and 585 578 * the directory requests transmutation then 586 579 * by all means transmute. 580 + * Mark the inode as changed. 
587 581 */ 588 582 if (may > 0 && ((may & MAY_TRANSMUTE) != 0) && 589 - smk_inode_transmutable(dir)) 583 + smk_inode_transmutable(dir)) { 590 584 isp = dsp; 585 + issp->smk_flags |= SMK_INODE_CHANGED; 586 + } 591 587 592 - *value = kstrdup(isp, GFP_KERNEL); 588 + *value = kstrdup(isp, GFP_NOFS); 593 589 if (*value == NULL) 594 590 return -ENOMEM; 595 591 } ··· 831 821 * check label validity here so import wont fail on 832 822 * post_setxattr 833 823 */ 834 - if (size == 0 || size >= SMK_LABELLEN || 824 + if (size == 0 || size >= SMK_LONGLABEL || 835 825 smk_import(value, size) == NULL) 836 826 rc = -EINVAL; 837 827 } else if (strcmp(name, XATTR_NAME_SMACKTRANSMUTE) == 0) { ··· 1359 1349 } 1360 1350 1361 1351 /** 1362 - * smack_dentry_open - Smack dentry open processing 1352 + * smack_file_open - Smack dentry open processing 1363 1353 * @file: the object 1364 1354 * @cred: unused 1365 1355 * ··· 1367 1357 * 1368 1358 * Returns 0 1369 1359 */ 1370 - static int smack_dentry_open(struct file *file, const struct cred *cred) 1360 + static int smack_file_open(struct file *file, const struct cred *cred) 1371 1361 { 1372 1362 struct inode_smack *isp = file->f_path.dentry->d_inode->i_security; 1373 1363 ··· 1830 1820 } 1831 1821 1832 1822 /** 1833 - * smack_set_catset - convert a capset to netlabel mls categories 1834 - * @catset: the Smack categories 1835 - * @sap: where to put the netlabel categories 1836 - * 1837 - * Allocates and fills attr.mls.cat 1838 - */ 1839 - static void smack_set_catset(char *catset, struct netlbl_lsm_secattr *sap) 1840 - { 1841 - unsigned char *cp; 1842 - unsigned char m; 1843 - int cat; 1844 - int rc; 1845 - int byte; 1846 - 1847 - if (!catset) 1848 - return; 1849 - 1850 - sap->flags |= NETLBL_SECATTR_MLS_CAT; 1851 - sap->attr.mls.cat = netlbl_secattr_catmap_alloc(GFP_ATOMIC); 1852 - sap->attr.mls.cat->startbit = 0; 1853 - 1854 - for (cat = 1, cp = catset, byte = 0; byte < SMK_LABELLEN; cp++, byte++) 1855 - for (m = 0x80; m != 0; m >>= 1, cat++) 
{ 1856 - if ((m & *cp) == 0) 1857 - continue; 1858 - rc = netlbl_secattr_catmap_setbit(sap->attr.mls.cat, 1859 - cat, GFP_ATOMIC); 1860 - } 1861 - } 1862 - 1863 - /** 1864 - * smack_to_secattr - fill a secattr from a smack value 1865 - * @smack: the smack value 1866 - * @nlsp: where the result goes 1867 - * 1868 - * Casey says that CIPSO is good enough for now. 1869 - * It can be used to effect. 1870 - * It can also be abused to effect when necessary. 1871 - * Apologies to the TSIG group in general and GW in particular. 1872 - */ 1873 - static void smack_to_secattr(char *smack, struct netlbl_lsm_secattr *nlsp) 1874 - { 1875 - struct smack_cipso cipso; 1876 - int rc; 1877 - 1878 - nlsp->domain = smack; 1879 - nlsp->flags = NETLBL_SECATTR_DOMAIN | NETLBL_SECATTR_MLS_LVL; 1880 - 1881 - rc = smack_to_cipso(smack, &cipso); 1882 - if (rc == 0) { 1883 - nlsp->attr.mls.lvl = cipso.smk_level; 1884 - smack_set_catset(cipso.smk_catset, nlsp); 1885 - } else { 1886 - nlsp->attr.mls.lvl = smack_cipso_direct; 1887 - smack_set_catset(smack, nlsp); 1888 - } 1889 - } 1890 - 1891 - /** 1892 1823 * smack_netlabel - Set the secattr on a socket 1893 1824 * @sk: the socket 1894 1825 * @labeled: socket label scheme ··· 1841 1890 */ 1842 1891 static int smack_netlabel(struct sock *sk, int labeled) 1843 1892 { 1893 + struct smack_known *skp; 1844 1894 struct socket_smack *ssp = sk->sk_security; 1845 - struct netlbl_lsm_secattr secattr; 1846 1895 int rc = 0; 1847 1896 1848 1897 /* ··· 1860 1909 labeled == SMACK_UNLABELED_SOCKET) 1861 1910 netlbl_sock_delattr(sk); 1862 1911 else { 1863 - netlbl_secattr_init(&secattr); 1864 - smack_to_secattr(ssp->smk_out, &secattr); 1865 - rc = netlbl_sock_setattr(sk, sk->sk_family, &secattr); 1866 - netlbl_secattr_destroy(&secattr); 1912 + skp = smk_find_entry(ssp->smk_out); 1913 + rc = netlbl_sock_setattr(sk, sk->sk_family, &skp->smk_netlabel); 1867 1914 } 1868 1915 1869 1916 bh_unlock_sock(sk); ··· 1934 1985 struct socket *sock; 1935 1986 int rc = 0; 1936 
1987 1937 - if (value == NULL || size > SMK_LABELLEN || size == 0) 1988 + if (value == NULL || size > SMK_LONGLABEL || size == 0) 1938 1989 return -EACCES; 1939 1990 1940 1991 sp = smk_import(value, size); ··· 2501 2552 char *final; 2502 2553 char trattr[TRANS_TRUE_SIZE]; 2503 2554 int transflag = 0; 2555 + int rc; 2504 2556 struct dentry *dp; 2505 2557 2506 2558 if (inode == NULL) ··· 2620 2670 */ 2621 2671 dp = dget(opt_dentry); 2622 2672 fetched = smk_fetch(XATTR_NAME_SMACK, inode, dp); 2623 - if (fetched != NULL) { 2673 + if (fetched != NULL) 2624 2674 final = fetched; 2625 - if (S_ISDIR(inode->i_mode)) { 2626 - trattr[0] = '\0'; 2627 - inode->i_op->getxattr(dp, 2675 + 2676 + /* 2677 + * Transmuting directory 2678 + */ 2679 + if (S_ISDIR(inode->i_mode)) { 2680 + /* 2681 + * If this is a new directory and the label was 2682 + * transmuted when the inode was initialized 2683 + * set the transmute attribute on the directory 2684 + * and mark the inode. 2685 + * 2686 + * If there is a transmute attribute on the 2687 + * directory mark the inode. 
2688 + */ 2689 + if (isp->smk_flags & SMK_INODE_CHANGED) { 2690 + isp->smk_flags &= ~SMK_INODE_CHANGED; 2691 + rc = inode->i_op->setxattr(dp, 2628 2692 XATTR_NAME_SMACKTRANSMUTE, 2629 - trattr, TRANS_TRUE_SIZE); 2630 - if (strncmp(trattr, TRANS_TRUE, 2631 - TRANS_TRUE_SIZE) == 0) 2632 - transflag = SMK_INODE_TRANSMUTE; 2693 + TRANS_TRUE, TRANS_TRUE_SIZE, 2694 + 0); 2695 + } else { 2696 + rc = inode->i_op->getxattr(dp, 2697 + XATTR_NAME_SMACKTRANSMUTE, trattr, 2698 + TRANS_TRUE_SIZE); 2699 + if (rc >= 0 && strncmp(trattr, TRANS_TRUE, 2700 + TRANS_TRUE_SIZE) != 0) 2701 + rc = -EINVAL; 2633 2702 } 2703 + if (rc >= 0) 2704 + transflag = SMK_INODE_TRANSMUTE; 2634 2705 } 2635 2706 isp->smk_task = smk_fetch(XATTR_NAME_SMACKEXEC, inode, dp); 2636 2707 isp->smk_mmap = smk_fetch(XATTR_NAME_SMACKMMAP, inode, dp); ··· 2730 2759 if (!capable(CAP_MAC_ADMIN)) 2731 2760 return -EPERM; 2732 2761 2733 - if (value == NULL || size == 0 || size >= SMK_LABELLEN) 2762 + if (value == NULL || size == 0 || size >= SMK_LONGLABEL) 2734 2763 return -EINVAL; 2735 2764 2736 2765 if (strcmp(name, "current") != 0) ··· 2866 2895 static char *smack_from_secattr(struct netlbl_lsm_secattr *sap, 2867 2896 struct socket_smack *ssp) 2868 2897 { 2869 - struct smack_known *skp; 2870 - char smack[SMK_LABELLEN]; 2898 + struct smack_known *kp; 2871 2899 char *sp; 2872 - int pcat; 2900 + int found = 0; 2873 2901 2874 2902 if ((sap->flags & NETLBL_SECATTR_MLS_LVL) != 0) { 2875 2903 /* ··· 2876 2906 * If there are flags but no level netlabel isn't 2877 2907 * behaving the way we expect it to. 2878 2908 * 2879 - * Get the categories, if any 2909 + * Look it up in the label table 2880 2910 * Without guidance regarding the smack value 2881 2911 * for the packet fall back on the network 2882 2912 * ambient value. 
2883 2913 */ 2884 - memset(smack, '\0', SMK_LABELLEN); 2885 - if ((sap->flags & NETLBL_SECATTR_MLS_CAT) != 0) 2886 - for (pcat = -1;;) { 2887 - pcat = netlbl_secattr_catmap_walk( 2888 - sap->attr.mls.cat, pcat + 1); 2889 - if (pcat < 0) 2890 - break; 2891 - smack_catset_bit(pcat, smack); 2892 - } 2893 - /* 2894 - * If it is CIPSO using smack direct mapping 2895 - * we are already done. WeeHee. 2896 - */ 2897 - if (sap->attr.mls.lvl == smack_cipso_direct) { 2898 - /* 2899 - * The label sent is usually on the label list. 2900 - * 2901 - * If it is not we may still want to allow the 2902 - * delivery. 2903 - * 2904 - * If the recipient is accepting all packets 2905 - * because it is using the star ("*") label 2906 - * for SMACK64IPIN provide the web ("@") label 2907 - * so that a directed response will succeed. 2908 - * This is not very correct from a MAC point 2909 - * of view, but gets around the problem that 2910 - * locking prevents adding the newly discovered 2911 - * label to the list. 2912 - * The case where the recipient is not using 2913 - * the star label should obviously fail. 2914 - * The easy way to do this is to provide the 2915 - * star label as the subject label. 2916 - */ 2917 - skp = smk_find_entry(smack); 2918 - if (skp != NULL) 2919 - return skp->smk_known; 2920 - if (ssp != NULL && 2921 - ssp->smk_in == smack_known_star.smk_known) 2922 - return smack_known_web.smk_known; 2923 - return smack_known_star.smk_known; 2914 + rcu_read_lock(); 2915 + list_for_each_entry(kp, &smack_known_list, list) { 2916 + if (sap->attr.mls.lvl != kp->smk_netlabel.attr.mls.lvl) 2917 + continue; 2918 + if (memcmp(sap->attr.mls.cat, 2919 + kp->smk_netlabel.attr.mls.cat, 2920 + SMK_CIPSOLEN) != 0) 2921 + continue; 2922 + found = 1; 2923 + break; 2924 2924 } 2925 - /* 2926 - * Look it up in the supplied table if it is not 2927 - * a direct mapping. 
2928 - */ 2929 - sp = smack_from_cipso(sap->attr.mls.lvl, smack); 2930 - if (sp != NULL) 2931 - return sp; 2925 + rcu_read_unlock(); 2926 + 2927 + if (found) 2928 + return kp->smk_known; 2929 + 2932 2930 if (ssp != NULL && ssp->smk_in == smack_known_star.smk_known) 2933 2931 return smack_known_web.smk_known; 2934 2932 return smack_known_star.smk_known; ··· 3096 3158 struct request_sock *req) 3097 3159 { 3098 3160 u16 family = sk->sk_family; 3161 + struct smack_known *skp; 3099 3162 struct socket_smack *ssp = sk->sk_security; 3100 3163 struct netlbl_lsm_secattr secattr; 3101 3164 struct sockaddr_in addr; 3102 3165 struct iphdr *hdr; 3103 3166 char *sp; 3167 + char *hsp; 3104 3168 int rc; 3105 3169 struct smk_audit_info ad; 3106 3170 #ifdef CONFIG_AUDIT ··· 3149 3209 hdr = ip_hdr(skb); 3150 3210 addr.sin_addr.s_addr = hdr->saddr; 3151 3211 rcu_read_lock(); 3152 - if (smack_host_label(&addr) == NULL) { 3153 - rcu_read_unlock(); 3154 - netlbl_secattr_init(&secattr); 3155 - smack_to_secattr(sp, &secattr); 3156 - rc = netlbl_req_setattr(req, &secattr); 3157 - netlbl_secattr_destroy(&secattr); 3158 - } else { 3159 - rcu_read_unlock(); 3212 + hsp = smack_host_label(&addr); 3213 + rcu_read_unlock(); 3214 + 3215 + if (hsp == NULL) { 3216 + skp = smk_find_entry(sp); 3217 + rc = netlbl_req_setattr(req, &skp->smk_netlabel); 3218 + } else 3160 3219 netlbl_req_delattr(req); 3161 - } 3162 3220 3163 3221 return rc; 3164 3222 } ··· 3338 3400 char *rule = vrule; 3339 3401 3340 3402 if (!rule) { 3341 - audit_log(actx, GFP_KERNEL, AUDIT_SELINUX_ERR, 3403 + audit_log(actx, GFP_ATOMIC, AUDIT_SELINUX_ERR, 3342 3404 "Smack: missing rule\n"); 3343 3405 return -ENOENT; 3344 3406 } ··· 3487 3549 .file_send_sigiotask = smack_file_send_sigiotask, 3488 3550 .file_receive = smack_file_receive, 3489 3551 3490 - .dentry_open = smack_dentry_open, 3552 + .file_open = smack_file_open, 3491 3553 3492 3554 .cred_alloc_blank = smack_cred_alloc_blank, 3493 3555 .cred_free = smack_cred_free, ··· 3580 3642 
3581 3643 static __init void init_smack_known_list(void)
3582 3644 {
3583 - /*
3584 - * Initialize CIPSO locks
3585 - */
3586 - spin_lock_init(&smack_known_huh.smk_cipsolock);
3587 - spin_lock_init(&smack_known_hat.smk_cipsolock);
3588 - spin_lock_init(&smack_known_star.smk_cipsolock);
3589 - spin_lock_init(&smack_known_floor.smk_cipsolock);
3590 - spin_lock_init(&smack_known_invalid.smk_cipsolock);
3591 - spin_lock_init(&smack_known_web.smk_cipsolock);
3592 3645 /*
3593 3646 * Initialize rule list locks
3594 3647 */
+778 -249
security/smack/smackfs.c
··· 22 22 #include <linux/mutex.h> 23 23 #include <linux/slab.h> 24 24 #include <net/net_namespace.h> 25 - #include <net/netlabel.h> 26 25 #include <net/cipso_ipv4.h> 27 26 #include <linux/seq_file.h> 28 27 #include <linux/ctype.h> ··· 44 45 SMK_LOGGING = 10, /* logging */ 45 46 SMK_LOAD_SELF = 11, /* task specific rules */ 46 47 SMK_ACCESSES = 12, /* access policy */ 48 + SMK_MAPPED = 13, /* CIPSO level indicating mapped label */ 49 + SMK_LOAD2 = 14, /* load policy with long labels */ 50 + SMK_LOAD_SELF2 = 15, /* load task specific rules with long labels */ 51 + SMK_ACCESS2 = 16, /* make an access check with long labels */ 52 + SMK_CIPSO2 = 17, /* load long label -> CIPSO mapping */ 47 53 }; 48 54 49 55 /* ··· 64 60 * If it isn't somehow marked, use this. 65 61 * It can be reset via smackfs/ambient 66 62 */ 67 - char *smack_net_ambient = smack_known_floor.smk_known; 63 + char *smack_net_ambient; 68 64 69 65 /* 70 66 * This is the level in a CIPSO header that indicates a ··· 72 68 * It can be reset via smackfs/direct 73 69 */ 74 70 int smack_cipso_direct = SMACK_CIPSO_DIRECT_DEFAULT; 71 + 72 + /* 73 + * This is the level in a CIPSO header that indicates a 74 + * secid is contained directly in the category set. 75 + * It can be reset via smackfs/mapped 76 + */ 77 + int smack_cipso_mapped = SMACK_CIPSO_MAPPED_DEFAULT; 75 78 76 79 /* 77 80 * Unless a process is running with this label even ··· 100 89 101 90 /* 102 91 * Rule lists are maintained for each label. 103 - * This master list is just for reading /smack/load. 92 + * This master list is just for reading /smack/load and /smack/load2. 104 93 */ 105 94 struct smack_master_list { 106 95 struct list_head list; ··· 136 125 #define SMK_OLOADLEN (SMK_LABELLEN + SMK_LABELLEN + SMK_OACCESSLEN) 137 126 #define SMK_LOADLEN (SMK_LABELLEN + SMK_LABELLEN + SMK_ACCESSLEN) 138 127 128 + /* 129 + * Stricly for CIPSO level manipulation. 130 + * Set the category bit number in a smack label sized buffer. 
131 + */ 132 + static inline void smack_catset_bit(unsigned int cat, char *catsetp) 133 + { 134 + if (cat == 0 || cat > (SMK_CIPSOLEN * 8)) 135 + return; 136 + 137 + catsetp[(cat - 1) / 8] |= 0x80 >> ((cat - 1) % 8); 138 + } 139 + 139 140 /** 140 141 * smk_netlabel_audit_set - fill a netlbl_audit struct 141 142 * @nap: structure to fill ··· 160 137 } 161 138 162 139 /* 163 - * Values for parsing single label host rules 140 + * Value for parsing single label host rules 164 141 * "1.2.3.4 X" 165 - * "192.168.138.129/32 abcdefghijklmnopqrstuvw" 166 142 */ 167 143 #define SMK_NETLBLADDRMIN 9 168 - #define SMK_NETLBLADDRMAX 42 169 144 170 145 /** 171 146 * smk_set_access - add a rule to the rule list ··· 209 188 } 210 189 211 190 /** 212 - * smk_parse_rule - parse Smack rule from load string 213 - * @data: string to be parsed whose size is SMK_LOADLEN 191 + * smk_fill_rule - Fill Smack rule from strings 192 + * @subject: subject label string 193 + * @object: object label string 194 + * @access: access string 214 195 * @rule: Smack rule 215 196 * @import: if non-zero, import labels 197 + * 198 + * Returns 0 on success, -1 on failure 216 199 */ 217 - static int smk_parse_rule(const char *data, struct smack_rule *rule, int import) 200 + static int smk_fill_rule(const char *subject, const char *object, 201 + const char *access, struct smack_rule *rule, 202 + int import) 218 203 { 219 - char smack[SMK_LABELLEN]; 204 + int rc = -1; 205 + int done; 206 + const char *cp; 220 207 struct smack_known *skp; 221 208 222 209 if (import) { 223 - rule->smk_subject = smk_import(data, 0); 210 + rule->smk_subject = smk_import(subject, 0); 224 211 if (rule->smk_subject == NULL) 225 212 return -1; 226 213 227 - rule->smk_object = smk_import(data + SMK_LABELLEN, 0); 214 + rule->smk_object = smk_import(object, 0); 228 215 if (rule->smk_object == NULL) 229 216 return -1; 230 217 } else { 231 - smk_parse_smack(data, 0, smack); 232 - skp = smk_find_entry(smack); 218 + cp = 
smk_parse_smack(subject, 0); 219 + if (cp == NULL) 220 + return -1; 221 + skp = smk_find_entry(cp); 222 + kfree(cp); 233 223 if (skp == NULL) 234 224 return -1; 235 225 rule->smk_subject = skp->smk_known; 236 226 237 - smk_parse_smack(data + SMK_LABELLEN, 0, smack); 238 - skp = smk_find_entry(smack); 227 + cp = smk_parse_smack(object, 0); 228 + if (cp == NULL) 229 + return -1; 230 + skp = smk_find_entry(cp); 231 + kfree(cp); 239 232 if (skp == NULL) 240 233 return -1; 241 234 rule->smk_object = skp->smk_known; ··· 257 222 258 223 rule->smk_access = 0; 259 224 260 - switch (data[SMK_LABELLEN + SMK_LABELLEN]) { 261 - case '-': 262 - break; 263 - case 'r': 264 - case 'R': 265 - rule->smk_access |= MAY_READ; 266 - break; 267 - default: 268 - return -1; 225 + for (cp = access, done = 0; *cp && !done; cp++) { 226 + switch (*cp) { 227 + case '-': 228 + break; 229 + case 'r': 230 + case 'R': 231 + rule->smk_access |= MAY_READ; 232 + break; 233 + case 'w': 234 + case 'W': 235 + rule->smk_access |= MAY_WRITE; 236 + break; 237 + case 'x': 238 + case 'X': 239 + rule->smk_access |= MAY_EXEC; 240 + break; 241 + case 'a': 242 + case 'A': 243 + rule->smk_access |= MAY_APPEND; 244 + break; 245 + case 't': 246 + case 'T': 247 + rule->smk_access |= MAY_TRANSMUTE; 248 + break; 249 + default: 250 + done = 1; 251 + break; 252 + } 269 253 } 254 + rc = 0; 270 255 271 - switch (data[SMK_LABELLEN + SMK_LABELLEN + 1]) { 272 - case '-': 273 - break; 274 - case 'w': 275 - case 'W': 276 - rule->smk_access |= MAY_WRITE; 277 - break; 278 - default: 279 - return -1; 280 - } 281 - 282 - switch (data[SMK_LABELLEN + SMK_LABELLEN + 2]) { 283 - case '-': 284 - break; 285 - case 'x': 286 - case 'X': 287 - rule->smk_access |= MAY_EXEC; 288 - break; 289 - default: 290 - return -1; 291 - } 292 - 293 - switch (data[SMK_LABELLEN + SMK_LABELLEN + 3]) { 294 - case '-': 295 - break; 296 - case 'a': 297 - case 'A': 298 - rule->smk_access |= MAY_APPEND; 299 - break; 300 - default: 301 - return -1; 302 - } 303 - 
304 - switch (data[SMK_LABELLEN + SMK_LABELLEN + 4]) { 305 - case '-': 306 - break; 307 - case 't': 308 - case 'T': 309 - rule->smk_access |= MAY_TRANSMUTE; 310 - break; 311 - default: 312 - return -1; 313 - } 314 - 315 - return 0; 256 + return rc; 316 257 } 317 258 318 259 /** 319 - * smk_write_load_list - write() for any /smack/load 260 + * smk_parse_rule - parse Smack rule from load string 261 + * @data: string to be parsed whose size is SMK_LOADLEN 262 + * @rule: Smack rule 263 + * @import: if non-zero, import labels 264 + * 265 + * Returns 0 on success, -1 on errors. 266 + */ 267 + static int smk_parse_rule(const char *data, struct smack_rule *rule, int import) 268 + { 269 + int rc; 270 + 271 + rc = smk_fill_rule(data, data + SMK_LABELLEN, 272 + data + SMK_LABELLEN + SMK_LABELLEN, rule, import); 273 + return rc; 274 + } 275 + 276 + /** 277 + * smk_parse_long_rule - parse Smack rule from rule string 278 + * @data: string to be parsed, null terminated 279 + * @rule: Smack rule 280 + * @import: if non-zero, import labels 281 + * 282 + * Returns 0 on success, -1 on failure 283 + */ 284 + static int smk_parse_long_rule(const char *data, struct smack_rule *rule, 285 + int import) 286 + { 287 + char *subject; 288 + char *object; 289 + char *access; 290 + int datalen; 291 + int rc = -1; 292 + 293 + /* 294 + * This is probably inefficient, but safe. 
295 + */ 296 + datalen = strlen(data); 297 + subject = kzalloc(datalen, GFP_KERNEL); 298 + if (subject == NULL) 299 + return -1; 300 + object = kzalloc(datalen, GFP_KERNEL); 301 + if (object == NULL) 302 + goto free_out_s; 303 + access = kzalloc(datalen, GFP_KERNEL); 304 + if (access == NULL) 305 + goto free_out_o; 306 + 307 + if (sscanf(data, "%s %s %s", subject, object, access) == 3) 308 + rc = smk_fill_rule(subject, object, access, rule, import); 309 + 310 + kfree(access); 311 + free_out_o: 312 + kfree(object); 313 + free_out_s: 314 + kfree(subject); 315 + return rc; 316 + } 317 + 318 + #define SMK_FIXED24_FMT 0 /* Fixed 24byte label format */ 319 + #define SMK_LONG_FMT 1 /* Variable long label format */ 320 + /** 321 + * smk_write_rules_list - write() for any /smack rule file 320 322 * @file: file pointer, not actually used 321 323 * @buf: where to get the data from 322 324 * @count: bytes sent 323 325 * @ppos: where to start - must be 0 324 326 * @rule_list: the list of rules to write to 325 327 * @rule_lock: lock for the rule list 328 + * @format: /smack/load or /smack/load2 format. 326 329 * 327 330 * Get one smack access rule from above. 328 - * The format is exactly: 329 - * char subject[SMK_LABELLEN] 330 - * char object[SMK_LABELLEN] 331 - * char access[SMK_ACCESSLEN] 332 - * 333 - * writes must be SMK_LABELLEN+SMK_LABELLEN+SMK_ACCESSLEN bytes. 
331 + * The format for SMK_LONG_FMT is: 332 + * "subject<whitespace>object<whitespace>access[<whitespace>...]" 333 + * The format for SMK_FIXED24_FMT is exactly: 334 + * "subject object rwxat" 334 335 */ 335 - static ssize_t smk_write_load_list(struct file *file, const char __user *buf, 336 - size_t count, loff_t *ppos, 337 - struct list_head *rule_list, 338 - struct mutex *rule_lock) 336 + static ssize_t smk_write_rules_list(struct file *file, const char __user *buf, 337 + size_t count, loff_t *ppos, 338 + struct list_head *rule_list, 339 + struct mutex *rule_lock, int format) 339 340 { 340 341 struct smack_master_list *smlp; 341 342 struct smack_known *skp; 342 343 struct smack_rule *rule; 343 344 char *data; 345 + int datalen; 344 346 int rc = -EINVAL; 345 347 int load = 0; 346 348 ··· 387 315 */ 388 316 if (*ppos != 0) 389 317 return -EINVAL; 390 - /* 391 - * Minor hack for backward compatibility 392 - */ 393 - if (count < (SMK_OLOADLEN) || count > SMK_LOADLEN) 394 - return -EINVAL; 395 318 396 - data = kzalloc(SMK_LOADLEN, GFP_KERNEL); 319 + if (format == SMK_FIXED24_FMT) { 320 + /* 321 + * Minor hack for backward compatibility 322 + */ 323 + if (count != SMK_OLOADLEN && count != SMK_LOADLEN) 324 + return -EINVAL; 325 + datalen = SMK_LOADLEN; 326 + } else 327 + datalen = count + 1; 328 + 329 + data = kzalloc(datalen, GFP_KERNEL); 397 330 if (data == NULL) 398 331 return -ENOMEM; 399 332 ··· 407 330 goto out; 408 331 } 409 332 410 - /* 411 - * More on the minor hack for backward compatibility 412 - */ 413 - if (count == (SMK_OLOADLEN)) 414 - data[SMK_OLOADLEN] = '-'; 415 - 416 333 rule = kzalloc(sizeof(*rule), GFP_KERNEL); 417 334 if (rule == NULL) { 418 335 rc = -ENOMEM; 419 336 goto out; 420 337 } 421 338 422 - if (smk_parse_rule(data, rule, 1)) 423 - goto out_free_rule; 339 + if (format == SMK_LONG_FMT) { 340 + /* 341 + * Be sure the data string is terminated. 
342 + */ 343 + data[count] = '\0'; 344 + if (smk_parse_long_rule(data, rule, 1)) 345 + goto out_free_rule; 346 + } else { 347 + /* 348 + * More on the minor hack for backward compatibility 349 + */ 350 + if (count == (SMK_OLOADLEN)) 351 + data[SMK_OLOADLEN] = '-'; 352 + if (smk_parse_rule(data, rule, 1)) 353 + goto out_free_rule; 354 + } 355 + 424 356 425 357 if (rule_list == NULL) { 426 358 load = 1; ··· 440 354 441 355 rc = count; 442 356 /* 443 - * If this is "load" as opposed to "load-self" and a new rule 357 + * If this is a global as opposed to self and a new rule 444 358 * it needs to get added for reporting. 445 359 * smk_set_access returns true if there was already a rule 446 360 * for the subject/object pair, and false if it was new. 447 361 */ 448 - if (load && !smk_set_access(rule, rule_list, rule_lock)) { 449 - smlp = kzalloc(sizeof(*smlp), GFP_KERNEL); 450 - if (smlp != NULL) { 451 - smlp->smk_rule = rule; 452 - list_add_rcu(&smlp->list, &smack_rule_list); 453 - } else 454 - rc = -ENOMEM; 362 + if (!smk_set_access(rule, rule_list, rule_lock)) { 363 + if (load) { 364 + smlp = kzalloc(sizeof(*smlp), GFP_KERNEL); 365 + if (smlp != NULL) { 366 + smlp->smk_rule = rule; 367 + list_add_rcu(&smlp->list, &smack_rule_list); 368 + } else 369 + rc = -ENOMEM; 370 + } 455 371 goto out; 456 372 } 457 373 ··· 509 421 /* No-op */ 510 422 } 511 423 512 - /* 513 - * Seq_file read operations for /smack/load 514 - */ 515 - 516 - static void *load_seq_start(struct seq_file *s, loff_t *pos) 424 + static void smk_rule_show(struct seq_file *s, struct smack_rule *srp, int max) 517 425 { 518 - return smk_seq_start(s, pos, &smack_rule_list); 519 - } 426 + /* 427 + * Don't show any rules with label names too long for 428 + * interface file (/smack/load or /smack/load2) 429 + * because you should expect to be able to write 430 + * anything you read back. 
431 + */ 432 + if (strlen(srp->smk_subject) >= max || strlen(srp->smk_object) >= max) 433 + return; 520 434 521 - static void *load_seq_next(struct seq_file *s, void *v, loff_t *pos) 522 - { 523 - return smk_seq_next(s, v, pos, &smack_rule_list); 524 - } 525 - 526 - static int load_seq_show(struct seq_file *s, void *v) 527 - { 528 - struct list_head *list = v; 529 - struct smack_master_list *smlp = 530 - list_entry(list, struct smack_master_list, list); 531 - struct smack_rule *srp = smlp->smk_rule; 532 - 533 - seq_printf(s, "%s %s", (char *)srp->smk_subject, 534 - (char *)srp->smk_object); 435 + seq_printf(s, "%s %s", srp->smk_subject, srp->smk_object); 535 436 536 437 seq_putc(s, ' '); 537 438 ··· 538 461 seq_putc(s, '-'); 539 462 540 463 seq_putc(s, '\n'); 464 + } 465 + 466 + /* 467 + * Seq_file read operations for /smack/load 468 + */ 469 + 470 + static void *load2_seq_start(struct seq_file *s, loff_t *pos) 471 + { 472 + return smk_seq_start(s, pos, &smack_rule_list); 473 + } 474 + 475 + static void *load2_seq_next(struct seq_file *s, void *v, loff_t *pos) 476 + { 477 + return smk_seq_next(s, v, pos, &smack_rule_list); 478 + } 479 + 480 + static int load_seq_show(struct seq_file *s, void *v) 481 + { 482 + struct list_head *list = v; 483 + struct smack_master_list *smlp = 484 + list_entry(list, struct smack_master_list, list); 485 + 486 + smk_rule_show(s, smlp->smk_rule, SMK_LABELLEN); 541 487 542 488 return 0; 543 489 } 544 490 545 491 static const struct seq_operations load_seq_ops = { 546 - .start = load_seq_start, 547 - .next = load_seq_next, 492 + .start = load2_seq_start, 493 + .next = load2_seq_next, 548 494 .show = load_seq_show, 549 495 .stop = smk_seq_stop, 550 496 }; ··· 604 504 if (!capable(CAP_MAC_ADMIN)) 605 505 return -EPERM; 606 506 607 - return smk_write_load_list(file, buf, count, ppos, NULL, NULL); 507 + return smk_write_rules_list(file, buf, count, ppos, NULL, NULL, 508 + SMK_FIXED24_FMT); 608 509 } 609 510 610 511 static const struct 
file_operations smk_load_ops = { ··· 675 574 printk(KERN_WARNING "%s:%d remove rc = %d\n", 676 575 __func__, __LINE__, rc); 677 576 } 577 + if (smack_net_ambient == NULL) 578 + smack_net_ambient = smack_known_floor.smk_known; 678 579 679 580 rc = netlbl_cfg_unlbl_map_add(smack_net_ambient, PF_INET, 680 581 NULL, NULL, &nai); ··· 708 605 struct list_head *list = v; 709 606 struct smack_known *skp = 710 607 list_entry(list, struct smack_known, list); 711 - struct smack_cipso *scp = skp->smk_cipso; 712 - char *cbp; 608 + struct netlbl_lsm_secattr_catmap *cmp = skp->smk_netlabel.attr.mls.cat; 713 609 char sep = '/'; 714 - int cat = 1; 715 610 int i; 716 - unsigned char m; 717 611 718 - if (scp == NULL) 612 + /* 613 + * Don't show a label that could not have been set using 614 + * /smack/cipso. This is in support of the notion that 615 + * anything read from /smack/cipso ought to be writeable 616 + * to /smack/cipso. 617 + * 618 + * /smack/cipso2 should be used instead. 619 + */ 620 + if (strlen(skp->smk_known) >= SMK_LABELLEN) 719 621 return 0; 720 622 721 - seq_printf(s, "%s %3d", (char *)&skp->smk_known, scp->smk_level); 623 + seq_printf(s, "%s %3d", skp->smk_known, skp->smk_netlabel.attr.mls.lvl); 722 624 723 - cbp = scp->smk_catset; 724 - for (i = 0; i < SMK_LABELLEN; i++) 725 - for (m = 0x80; m != 0; m >>= 1) { 726 - if (m & cbp[i]) { 727 - seq_printf(s, "%c%d", sep, cat); 728 - sep = ','; 729 - } 730 - cat++; 731 - } 625 + for (i = netlbl_secattr_catmap_walk(cmp, 0); i >= 0; 626 + i = netlbl_secattr_catmap_walk(cmp, i + 1)) { 627 + seq_printf(s, "%c%d", sep, i); 628 + sep = ','; 629 + } 732 630 733 631 seq_putc(s, '\n'); 734 632 ··· 757 653 } 758 654 759 655 /** 760 - * smk_write_cipso - write() for /smack/cipso 656 + * smk_set_cipso - do the work for write() for cipso and cipso2 761 657 * @file: file pointer, not actually used 762 658 * @buf: where to get the data from 763 659 * @count: bytes sent 764 660 * @ppos: where to start 661 + * @format: /smack/cipso or 
/smack/cipso2 765 662 * 766 663 * Accepts only one cipso rule per write call. 767 664 * Returns number of bytes written or error code, as appropriate 768 665 */ 769 - static ssize_t smk_write_cipso(struct file *file, const char __user *buf, 770 - size_t count, loff_t *ppos) 666 + static ssize_t smk_set_cipso(struct file *file, const char __user *buf, 667 + size_t count, loff_t *ppos, int format) 771 668 { 772 669 struct smack_known *skp; 773 - struct smack_cipso *scp = NULL; 774 - char mapcatset[SMK_LABELLEN]; 670 + struct netlbl_lsm_secattr ncats; 671 + char mapcatset[SMK_CIPSOLEN]; 775 672 int maplevel; 776 - int cat; 673 + unsigned int cat; 777 674 int catlen; 778 675 ssize_t rc = -EINVAL; 779 676 char *data = NULL; ··· 791 686 return -EPERM; 792 687 if (*ppos != 0) 793 688 return -EINVAL; 794 - if (count < SMK_CIPSOMIN || count > SMK_CIPSOMAX) 689 + if (format == SMK_FIXED24_FMT && 690 + (count < SMK_CIPSOMIN || count > SMK_CIPSOMAX)) 795 691 return -EINVAL; 796 692 797 693 data = kzalloc(count + 1, GFP_KERNEL); ··· 804 698 goto unlockedout; 805 699 } 806 700 807 - /* labels cannot begin with a '-' */ 808 - if (data[0] == '-') { 809 - rc = -EINVAL; 810 - goto unlockedout; 811 - } 812 701 data[count] = '\0'; 813 702 rule = data; 814 703 /* ··· 816 715 if (skp == NULL) 817 716 goto out; 818 717 819 - rule += SMK_LABELLEN; 718 + if (format == SMK_FIXED24_FMT) 719 + rule += SMK_LABELLEN; 720 + else 721 + rule += strlen(skp->smk_known); 722 + 820 723 ret = sscanf(rule, "%d", &maplevel); 821 724 if (ret != 1 || maplevel > SMACK_CIPSO_MAXLEVEL) 822 725 goto out; ··· 830 725 if (ret != 1 || catlen > SMACK_CIPSO_MAXCATNUM) 831 726 goto out; 832 727 833 - if (count != (SMK_CIPSOMIN + catlen * SMK_DIGITLEN)) 728 + if (format == SMK_FIXED24_FMT && 729 + count != (SMK_CIPSOMIN + catlen * SMK_DIGITLEN)) 834 730 goto out; 835 731 836 732 memset(mapcatset, 0, sizeof(mapcatset)); 837 733 838 734 for (i = 0; i < catlen; i++) { 839 735 rule += SMK_DIGITLEN; 840 - ret = 
sscanf(rule, "%d", &cat); 736 + ret = sscanf(rule, "%u", &cat); 841 737 if (ret != 1 || cat > SMACK_CIPSO_MAXCATVAL) 842 738 goto out; 843 739 844 740 smack_catset_bit(cat, mapcatset); 845 741 } 846 742 847 - if (skp->smk_cipso == NULL) { 848 - scp = kzalloc(sizeof(struct smack_cipso), GFP_KERNEL); 849 - if (scp == NULL) { 850 - rc = -ENOMEM; 851 - goto out; 852 - } 743 + rc = smk_netlbl_mls(maplevel, mapcatset, &ncats, SMK_CIPSOLEN); 744 + if (rc >= 0) { 745 + netlbl_secattr_catmap_free(skp->smk_netlabel.attr.mls.cat); 746 + skp->smk_netlabel.attr.mls.cat = ncats.attr.mls.cat; 747 + skp->smk_netlabel.attr.mls.lvl = ncats.attr.mls.lvl; 748 + rc = count; 853 749 } 854 750 855 - spin_lock_bh(&skp->smk_cipsolock); 856 - 857 - if (scp == NULL) 858 - scp = skp->smk_cipso; 859 - else 860 - skp->smk_cipso = scp; 861 - 862 - scp->smk_level = maplevel; 863 - memcpy(scp->smk_catset, mapcatset, sizeof(mapcatset)); 864 - 865 - spin_unlock_bh(&skp->smk_cipsolock); 866 - 867 - rc = count; 868 751 out: 869 752 mutex_unlock(&smack_cipso_lock); 870 753 unlockedout: ··· 860 767 return rc; 861 768 } 862 769 770 + /** 771 + * smk_write_cipso - write() for /smack/cipso 772 + * @file: file pointer, not actually used 773 + * @buf: where to get the data from 774 + * @count: bytes sent 775 + * @ppos: where to start 776 + * 777 + * Accepts only one cipso rule per write call. 
778 + * Returns number of bytes written or error code, as appropriate 779 + */ 780 + static ssize_t smk_write_cipso(struct file *file, const char __user *buf, 781 + size_t count, loff_t *ppos) 782 + { 783 + return smk_set_cipso(file, buf, count, ppos, SMK_FIXED24_FMT); 784 + } 785 + 863 786 static const struct file_operations smk_cipso_ops = { 864 787 .open = smk_open_cipso, 865 788 .read = seq_read, 866 789 .llseek = seq_lseek, 867 790 .write = smk_write_cipso, 791 + .release = seq_release, 792 + }; 793 + 794 + /* 795 + * Seq_file read operations for /smack/cipso2 796 + */ 797 + 798 + /* 799 + * Print cipso labels in format: 800 + * label level[/cat[,cat]] 801 + */ 802 + static int cipso2_seq_show(struct seq_file *s, void *v) 803 + { 804 + struct list_head *list = v; 805 + struct smack_known *skp = 806 + list_entry(list, struct smack_known, list); 807 + struct netlbl_lsm_secattr_catmap *cmp = skp->smk_netlabel.attr.mls.cat; 808 + char sep = '/'; 809 + int i; 810 + 811 + seq_printf(s, "%s %3d", skp->smk_known, skp->smk_netlabel.attr.mls.lvl); 812 + 813 + for (i = netlbl_secattr_catmap_walk(cmp, 0); i >= 0; 814 + i = netlbl_secattr_catmap_walk(cmp, i + 1)) { 815 + seq_printf(s, "%c%d", sep, i); 816 + sep = ','; 817 + } 818 + 819 + seq_putc(s, '\n'); 820 + 821 + return 0; 822 + } 823 + 824 + static const struct seq_operations cipso2_seq_ops = { 825 + .start = cipso_seq_start, 826 + .next = cipso_seq_next, 827 + .show = cipso2_seq_show, 828 + .stop = smk_seq_stop, 829 + }; 830 + 831 + /** 832 + * smk_open_cipso2 - open() for /smack/cipso2 833 + * @inode: inode structure representing file 834 + * @file: "cipso2" file pointer 835 + * 836 + * Connect our cipso_seq_* operations with /smack/cipso2 837 + * file_operations 838 + */ 839 + static int smk_open_cipso2(struct inode *inode, struct file *file) 840 + { 841 + return seq_open(file, &cipso2_seq_ops); 842 + } 843 + 844 + /** 845 + * smk_write_cipso2 - write() for /smack/cipso2 846 + * @file: file pointer, not actually 
used 847 + * @buf: where to get the data from 848 + * @count: bytes sent 849 + * @ppos: where to start 850 + * 851 + * Accepts only one cipso rule per write call. 852 + * Returns number of bytes written or error code, as appropriate 853 + */ 854 + static ssize_t smk_write_cipso2(struct file *file, const char __user *buf, 855 + size_t count, loff_t *ppos) 856 + { 857 + return smk_set_cipso(file, buf, count, ppos, SMK_LONG_FMT); 858 + } 859 + 860 + static const struct file_operations smk_cipso2_ops = { 861 + .open = smk_open_cipso2, 862 + .read = seq_read, 863 + .llseek = seq_lseek, 864 + .write = smk_write_cipso2, 868 865 .release = seq_release, 869 866 }; 870 867 ··· 1070 887 { 1071 888 struct smk_netlbladdr *skp; 1072 889 struct sockaddr_in newname; 1073 - char smack[SMK_LABELLEN]; 890 + char *smack; 1074 891 char *sp; 1075 - char data[SMK_NETLBLADDRMAX + 1]; 892 + char *data; 1076 893 char *host = (char *)&newname.sin_addr.s_addr; 1077 894 int rc; 1078 895 struct netlbl_audit audit_info; ··· 1094 911 return -EPERM; 1095 912 if (*ppos != 0) 1096 913 return -EINVAL; 1097 - if (count < SMK_NETLBLADDRMIN || count > SMK_NETLBLADDRMAX) 914 + if (count < SMK_NETLBLADDRMIN) 1098 915 return -EINVAL; 1099 - if (copy_from_user(data, buf, count) != 0) 1100 - return -EFAULT; 916 + 917 + data = kzalloc(count + 1, GFP_KERNEL); 918 + if (data == NULL) 919 + return -ENOMEM; 920 + 921 + if (copy_from_user(data, buf, count) != 0) { 922 + rc = -EFAULT; 923 + goto free_data_out; 924 + } 925 + 926 + smack = kzalloc(count + 1, GFP_KERNEL); 927 + if (smack == NULL) { 928 + rc = -ENOMEM; 929 + goto free_data_out; 930 + } 1101 931 1102 932 data[count] = '\0'; 1103 933 ··· 1119 923 if (rc != 6) { 1120 924 rc = sscanf(data, "%hhd.%hhd.%hhd.%hhd %s", 1121 925 &host[0], &host[1], &host[2], &host[3], smack); 1122 - if (rc != 5) 1123 - return -EINVAL; 926 + if (rc != 5) { 927 + rc = -EINVAL; 928 + goto free_out; 929 + } 1124 930 m = BEBITS; 1125 931 } 1126 - if (m > BEBITS) 1127 - return 
-EINVAL; 932 + if (m > BEBITS) { 933 + rc = -EINVAL; 934 + goto free_out; 935 + } 1128 936 1129 - /* if smack begins with '-', its an option, don't import it */ 937 + /* 938 + * If smack begins with '-', it is an option, don't import it 939 + */ 1130 940 if (smack[0] != '-') { 1131 941 sp = smk_import(smack, 0); 1132 - if (sp == NULL) 1133 - return -EINVAL; 942 + if (sp == NULL) { 943 + rc = -EINVAL; 944 + goto free_out; 945 + } 1134 946 } else { 1135 947 /* check known options */ 1136 948 if (strcmp(smack, smack_cipso_option) == 0) 1137 949 sp = (char *)smack_cipso_option; 1138 - else 1139 - return -EINVAL; 950 + else { 951 + rc = -EINVAL; 952 + goto free_out; 953 + } 1140 954 } 1141 955 1142 956 for (temp_mask = 0; m > 0; m--) { ··· 1211 1005 rc = count; 1212 1006 1213 1007 mutex_unlock(&smk_netlbladdr_lock); 1008 + 1009 + free_out: 1010 + kfree(smack); 1011 + free_data_out: 1012 + kfree(data); 1214 1013 1215 1014 return rc; 1216 1015 } ··· 1330 1119 static ssize_t smk_write_direct(struct file *file, const char __user *buf, 1331 1120 size_t count, loff_t *ppos) 1332 1121 { 1122 + struct smack_known *skp; 1333 1123 char temp[80]; 1334 1124 int i; 1335 1125 ··· 1348 1136 if (sscanf(temp, "%d", &i) != 1) 1349 1137 return -EINVAL; 1350 1138 1351 - smack_cipso_direct = i; 1139 + /* 1140 + * Don't do anything if the value hasn't actually changed. 1141 + * If it is changing reset the level on entries that were 1142 + * set up to be direct when they were created. 
1143 + */ 1144 + if (smack_cipso_direct != i) { 1145 + mutex_lock(&smack_known_lock); 1146 + list_for_each_entry_rcu(skp, &smack_known_list, list) 1147 + if (skp->smk_netlabel.attr.mls.lvl == 1148 + smack_cipso_direct) 1149 + skp->smk_netlabel.attr.mls.lvl = i; 1150 + smack_cipso_direct = i; 1151 + mutex_unlock(&smack_known_lock); 1152 + } 1352 1153 1353 1154 return count; 1354 1155 } ··· 1369 1144 static const struct file_operations smk_direct_ops = { 1370 1145 .read = smk_read_direct, 1371 1146 .write = smk_write_direct, 1147 + .llseek = default_llseek, 1148 + }; 1149 + 1150 + /** 1151 + * smk_read_mapped - read() for /smack/mapped 1152 + * @filp: file pointer, not actually used 1153 + * @buf: where to put the result 1154 + * @count: maximum to send along 1155 + * @ppos: where to start 1156 + * 1157 + * Returns number of bytes read or error code, as appropriate 1158 + */ 1159 + static ssize_t smk_read_mapped(struct file *filp, char __user *buf, 1160 + size_t count, loff_t *ppos) 1161 + { 1162 + char temp[80]; 1163 + ssize_t rc; 1164 + 1165 + if (*ppos != 0) 1166 + return 0; 1167 + 1168 + sprintf(temp, "%d", smack_cipso_mapped); 1169 + rc = simple_read_from_buffer(buf, count, ppos, temp, strlen(temp)); 1170 + 1171 + return rc; 1172 + } 1173 + 1174 + /** 1175 + * smk_write_mapped - write() for /smack/mapped 1176 + * @file: file pointer, not actually used 1177 + * @buf: where to get the data from 1178 + * @count: bytes sent 1179 + * @ppos: where to start 1180 + * 1181 + * Returns number of bytes written or error code, as appropriate 1182 + */ 1183 + static ssize_t smk_write_mapped(struct file *file, const char __user *buf, 1184 + size_t count, loff_t *ppos) 1185 + { 1186 + struct smack_known *skp; 1187 + char temp[80]; 1188 + int i; 1189 + 1190 + if (!capable(CAP_MAC_ADMIN)) 1191 + return -EPERM; 1192 + 1193 + if (count >= sizeof(temp) || count == 0) 1194 + return -EINVAL; 1195 + 1196 + if (copy_from_user(temp, buf, count) != 0) 1197 + return -EFAULT; 1198 + 1199 + 
temp[count] = '\0'; 1200 + 1201 + if (sscanf(temp, "%d", &i) != 1) 1202 + return -EINVAL; 1203 + 1204 + /* 1205 + * Don't do anything if the value hasn't actually changed. 1206 + * If it is changing reset the level on entries that were 1207 + * set up to be mapped when they were created. 1208 + */ 1209 + if (smack_cipso_mapped != i) { 1210 + mutex_lock(&smack_known_lock); 1211 + list_for_each_entry_rcu(skp, &smack_known_list, list) 1212 + if (skp->smk_netlabel.attr.mls.lvl == 1213 + smack_cipso_mapped) 1214 + skp->smk_netlabel.attr.mls.lvl = i; 1215 + smack_cipso_mapped = i; 1216 + mutex_unlock(&smack_known_lock); 1217 + } 1218 + 1219 + return count; 1220 + } 1221 + 1222 + static const struct file_operations smk_mapped_ops = { 1223 + .read = smk_read_mapped, 1224 + .write = smk_write_mapped, 1372 1225 .llseek = default_llseek, 1373 1226 }; 1374 1227 ··· 1498 1195 static ssize_t smk_write_ambient(struct file *file, const char __user *buf, 1499 1196 size_t count, loff_t *ppos) 1500 1197 { 1501 - char in[SMK_LABELLEN]; 1502 1198 char *oldambient; 1503 - char *smack; 1199 + char *smack = NULL; 1200 + char *data; 1201 + int rc = count; 1504 1202 1505 1203 if (!capable(CAP_MAC_ADMIN)) 1506 1204 return -EPERM; 1507 1205 1508 - if (count >= SMK_LABELLEN) 1509 - return -EINVAL; 1206 + data = kzalloc(count + 1, GFP_KERNEL); 1207 + if (data == NULL) 1208 + return -ENOMEM; 1510 1209 1511 - if (copy_from_user(in, buf, count) != 0) 1512 - return -EFAULT; 1210 + if (copy_from_user(data, buf, count) != 0) { 1211 + rc = -EFAULT; 1212 + goto out; 1213 + } 1513 1214 1514 - smack = smk_import(in, count); 1515 - if (smack == NULL) 1516 - return -EINVAL; 1215 + smack = smk_import(data, count); 1216 + if (smack == NULL) { 1217 + rc = -EINVAL; 1218 + goto out; 1219 + } 1517 1220 1518 1221 mutex_lock(&smack_ambient_lock); 1519 1222 ··· 1529 1220 1530 1221 mutex_unlock(&smack_ambient_lock); 1531 1222 1532 - return count; 1223 + out: 1224 + kfree(data); 1225 + return rc; 1533 1226 } 1534 
1227 1535 1228 static const struct file_operations smk_ambient_ops = { ··· 1582 1271 static ssize_t smk_write_onlycap(struct file *file, const char __user *buf, 1583 1272 size_t count, loff_t *ppos) 1584 1273 { 1585 - char in[SMK_LABELLEN]; 1274 + char *data; 1586 1275 char *sp = smk_of_task(current->cred->security); 1276 + int rc = count; 1587 1277 1588 1278 if (!capable(CAP_MAC_ADMIN)) 1589 1279 return -EPERM; ··· 1597 1285 if (smack_onlycap != NULL && smack_onlycap != sp) 1598 1286 return -EPERM; 1599 1287 1600 - if (count >= SMK_LABELLEN) 1601 - return -EINVAL; 1602 - 1603 - if (copy_from_user(in, buf, count) != 0) 1604 - return -EFAULT; 1288 + data = kzalloc(count, GFP_KERNEL); 1289 + if (data == NULL) 1290 + return -ENOMEM; 1605 1291 1606 1292 /* 1607 1293 * Should the null string be passed in unset the onlycap value. ··· 1607 1297 * smk_import only expects to return NULL for errors. It 1608 1298 * is usually the case that a nullstring or "\n" would be 1609 1299 * bad to pass to smk_import but in fact this is useful here. 1300 + * 1301 + * smk_import will also reject a label beginning with '-', 1302 + * so "-usecapabilities" will also work. 
1610 1303 */ 1611 - smack_onlycap = smk_import(in, count); 1304 + if (copy_from_user(data, buf, count) != 0) 1305 + rc = -EFAULT; 1306 + else 1307 + smack_onlycap = smk_import(data, count); 1612 1308 1613 - return count; 1309 + kfree(data); 1310 + return rc; 1614 1311 } 1615 1312 1616 1313 static const struct file_operations smk_onlycap_ops = { ··· 1715 1398 struct smack_rule *srp = 1716 1399 list_entry(list, struct smack_rule, list); 1717 1400 1718 - seq_printf(s, "%s %s", (char *)srp->smk_subject, 1719 - (char *)srp->smk_object); 1720 - 1721 - seq_putc(s, ' '); 1722 - 1723 - if (srp->smk_access & MAY_READ) 1724 - seq_putc(s, 'r'); 1725 - if (srp->smk_access & MAY_WRITE) 1726 - seq_putc(s, 'w'); 1727 - if (srp->smk_access & MAY_EXEC) 1728 - seq_putc(s, 'x'); 1729 - if (srp->smk_access & MAY_APPEND) 1730 - seq_putc(s, 'a'); 1731 - if (srp->smk_access & MAY_TRANSMUTE) 1732 - seq_putc(s, 't'); 1733 - if (srp->smk_access == 0) 1734 - seq_putc(s, '-'); 1735 - 1736 - seq_putc(s, '\n'); 1401 + smk_rule_show(s, srp, SMK_LABELLEN); 1737 1402 1738 1403 return 0; 1739 1404 } ··· 1729 1430 1730 1431 1731 1432 /** 1732 - * smk_open_load_self - open() for /smack/load-self 1433 + * smk_open_load_self - open() for /smack/load-self2 1733 1434 * @inode: inode structure representing file 1734 1435 * @file: "load" file pointer 1735 1436 * ··· 1753 1454 { 1754 1455 struct task_smack *tsp = current_security(); 1755 1456 1756 - return smk_write_load_list(file, buf, count, ppos, &tsp->smk_rules, 1757 - &tsp->smk_rules_lock); 1457 + return smk_write_rules_list(file, buf, count, ppos, &tsp->smk_rules, 1458 + &tsp->smk_rules_lock, SMK_FIXED24_FMT); 1758 1459 } 1759 1460 1760 1461 static const struct file_operations smk_load_self_ops = { ··· 1766 1467 }; 1767 1468 1768 1469 /** 1470 + * smk_user_access - handle access check transaction 1471 + * @file: file pointer 1472 + * @buf: data from user space 1473 + * @count: bytes sent 1474 + * @ppos: where to start - must be 0 1475 + */ 1476 + 
static ssize_t smk_user_access(struct file *file, const char __user *buf, 1477 + size_t count, loff_t *ppos, int format) 1478 + { 1479 + struct smack_rule rule; 1480 + char *data; 1481 + char *cod; 1482 + int res; 1483 + 1484 + data = simple_transaction_get(file, buf, count); 1485 + if (IS_ERR(data)) 1486 + return PTR_ERR(data); 1487 + 1488 + if (format == SMK_FIXED24_FMT) { 1489 + if (count < SMK_LOADLEN) 1490 + return -EINVAL; 1491 + res = smk_parse_rule(data, &rule, 0); 1492 + } else { 1493 + /* 1494 + * Copy the data to make sure the string is terminated. 1495 + */ 1496 + cod = kzalloc(count + 1, GFP_KERNEL); 1497 + if (cod == NULL) 1498 + return -ENOMEM; 1499 + memcpy(cod, data, count); 1500 + cod[count] = '\0'; 1501 + res = smk_parse_long_rule(cod, &rule, 0); 1502 + kfree(cod); 1503 + } 1504 + 1505 + if (res) 1506 + return -EINVAL; 1507 + 1508 + res = smk_access(rule.smk_subject, rule.smk_object, rule.smk_access, 1509 + NULL); 1510 + data[0] = res == 0 ? '1' : '0'; 1511 + data[1] = '\0'; 1512 + 1513 + simple_transaction_set(file, 2); 1514 + 1515 + if (format == SMK_FIXED24_FMT) 1516 + return SMK_LOADLEN; 1517 + return count; 1518 + } 1519 + 1520 + /** 1769 1521 * smk_write_access - handle access check transaction 1770 1522 * @file: file pointer 1771 1523 * @buf: data from user space ··· 1826 1476 static ssize_t smk_write_access(struct file *file, const char __user *buf, 1827 1477 size_t count, loff_t *ppos) 1828 1478 { 1829 - struct smack_rule rule; 1830 - char *data; 1831 - int res; 1832 - 1833 - data = simple_transaction_get(file, buf, count); 1834 - if (IS_ERR(data)) 1835 - return PTR_ERR(data); 1836 - 1837 - if (count < SMK_LOADLEN || smk_parse_rule(data, &rule, 0)) 1838 - return -EINVAL; 1839 - 1840 - res = smk_access(rule.smk_subject, rule.smk_object, rule.smk_access, 1841 - NULL); 1842 - data[0] = res == 0 ? 
'1' : '0'; 1843 - data[1] = '\0'; 1844 - 1845 - simple_transaction_set(file, 2); 1846 - return SMK_LOADLEN; 1479 + return smk_user_access(file, buf, count, ppos, SMK_FIXED24_FMT); 1847 1480 } 1848 1481 1849 1482 static const struct file_operations smk_access_ops = { 1850 1483 .write = smk_write_access, 1484 + .read = simple_transaction_read, 1485 + .release = simple_transaction_release, 1486 + .llseek = generic_file_llseek, 1487 + }; 1488 + 1489 + 1490 + /* 1491 + * Seq_file read operations for /smack/load2 1492 + */ 1493 + 1494 + static int load2_seq_show(struct seq_file *s, void *v) 1495 + { 1496 + struct list_head *list = v; 1497 + struct smack_master_list *smlp = 1498 + list_entry(list, struct smack_master_list, list); 1499 + 1500 + smk_rule_show(s, smlp->smk_rule, SMK_LONGLABEL); 1501 + 1502 + return 0; 1503 + } 1504 + 1505 + static const struct seq_operations load2_seq_ops = { 1506 + .start = load2_seq_start, 1507 + .next = load2_seq_next, 1508 + .show = load2_seq_show, 1509 + .stop = smk_seq_stop, 1510 + }; 1511 + 1512 + /** 1513 + * smk_open_load2 - open() for /smack/load2 1514 + * @inode: inode structure representing file 1515 + * @file: "load2" file pointer 1516 + * 1517 + * For reading, use load2_seq_* seq_file reading operations. 1518 + */ 1519 + static int smk_open_load2(struct inode *inode, struct file *file) 1520 + { 1521 + return seq_open(file, &load2_seq_ops); 1522 + } 1523 + 1524 + /** 1525 + * smk_write_load2 - write() for /smack/load2 1526 + * @file: file pointer, not actually used 1527 + * @buf: where to get the data from 1528 + * @count: bytes sent 1529 + * @ppos: where to start - must be 0 1530 + * 1531 + */ 1532 + static ssize_t smk_write_load2(struct file *file, const char __user *buf, 1533 + size_t count, loff_t *ppos) 1534 + { 1535 + /* 1536 + * Must have privilege. 
1537 + */ 1538 + if (!capable(CAP_MAC_ADMIN)) 1539 + return -EPERM; 1540 + 1541 + return smk_write_rules_list(file, buf, count, ppos, NULL, NULL, 1542 + SMK_LONG_FMT); 1543 + } 1544 + 1545 + static const struct file_operations smk_load2_ops = { 1546 + .open = smk_open_load2, 1547 + .read = seq_read, 1548 + .llseek = seq_lseek, 1549 + .write = smk_write_load2, 1550 + .release = seq_release, 1551 + }; 1552 + 1553 + /* 1554 + * Seq_file read operations for /smack/load-self2 1555 + */ 1556 + 1557 + static void *load_self2_seq_start(struct seq_file *s, loff_t *pos) 1558 + { 1559 + struct task_smack *tsp = current_security(); 1560 + 1561 + return smk_seq_start(s, pos, &tsp->smk_rules); 1562 + } 1563 + 1564 + static void *load_self2_seq_next(struct seq_file *s, void *v, loff_t *pos) 1565 + { 1566 + struct task_smack *tsp = current_security(); 1567 + 1568 + return smk_seq_next(s, v, pos, &tsp->smk_rules); 1569 + } 1570 + 1571 + static int load_self2_seq_show(struct seq_file *s, void *v) 1572 + { 1573 + struct list_head *list = v; 1574 + struct smack_rule *srp = 1575 + list_entry(list, struct smack_rule, list); 1576 + 1577 + smk_rule_show(s, srp, SMK_LONGLABEL); 1578 + 1579 + return 0; 1580 + } 1581 + 1582 + static const struct seq_operations load_self2_seq_ops = { 1583 + .start = load_self2_seq_start, 1584 + .next = load_self2_seq_next, 1585 + .show = load_self2_seq_show, 1586 + .stop = smk_seq_stop, 1587 + }; 1588 + 1589 + /** 1590 + * smk_open_load_self2 - open() for /smack/load-self2 1591 + * @inode: inode structure representing file 1592 + * @file: "load" file pointer 1593 + * 1594 + * For reading, use load_seq_* seq_file reading operations. 
1595 + */ 1596 + static int smk_open_load_self2(struct inode *inode, struct file *file) 1597 + { 1598 + return seq_open(file, &load_self2_seq_ops); 1599 + } 1600 + 1601 + /** 1602 + * smk_write_load_self2 - write() for /smack/load-self2 1603 + * @file: file pointer, not actually used 1604 + * @buf: where to get the data from 1605 + * @count: bytes sent 1606 + * @ppos: where to start - must be 0 1607 + * 1608 + */ 1609 + static ssize_t smk_write_load_self2(struct file *file, const char __user *buf, 1610 + size_t count, loff_t *ppos) 1611 + { 1612 + struct task_smack *tsp = current_security(); 1613 + 1614 + return smk_write_rules_list(file, buf, count, ppos, &tsp->smk_rules, 1615 + &tsp->smk_rules_lock, SMK_LONG_FMT); 1616 + } 1617 + 1618 + static const struct file_operations smk_load_self2_ops = { 1619 + .open = smk_open_load_self2, 1620 + .read = seq_read, 1621 + .llseek = seq_lseek, 1622 + .write = smk_write_load_self2, 1623 + .release = seq_release, 1624 + }; 1625 + 1626 + /** 1627 + * smk_write_access2 - handle access check transaction 1628 + * @file: file pointer 1629 + * @buf: data from user space 1630 + * @count: bytes sent 1631 + * @ppos: where to start - must be 0 1632 + */ 1633 + static ssize_t smk_write_access2(struct file *file, const char __user *buf, 1634 + size_t count, loff_t *ppos) 1635 + { 1636 + return smk_user_access(file, buf, count, ppos, SMK_LONG_FMT); 1637 + } 1638 + 1639 + static const struct file_operations smk_access2_ops = { 1640 + .write = smk_write_access2, 1851 1641 .read = simple_transaction_read, 1852 1642 .release = simple_transaction_release, 1853 1643 .llseek = generic_file_llseek, ··· 2029 1539 "load-self", &smk_load_self_ops, S_IRUGO|S_IWUGO}, 2030 1540 [SMK_ACCESSES] = { 2031 1541 "access", &smk_access_ops, S_IRUGO|S_IWUGO}, 1542 + [SMK_MAPPED] = { 1543 + "mapped", &smk_mapped_ops, S_IRUGO|S_IWUSR}, 1544 + [SMK_LOAD2] = { 1545 + "load2", &smk_load2_ops, S_IRUGO|S_IWUSR}, 1546 + [SMK_LOAD_SELF2] = { 1547 + "load-self2", 
&smk_load_self2_ops, S_IRUGO|S_IWUGO}, 1548 + [SMK_ACCESS2] = { 1549 + "access2", &smk_access2_ops, S_IRUGO|S_IWUGO}, 1550 + [SMK_CIPSO2] = { 1551 + "cipso2", &smk_cipso2_ops, S_IRUGO|S_IWUSR}, 2032 1552 /* last one */ 2033 1553 {""} 2034 1554 }; ··· 2081 1581 2082 1582 static struct vfsmount *smackfs_mount; 2083 1583 1584 + static int __init smk_preset_netlabel(struct smack_known *skp) 1585 + { 1586 + skp->smk_netlabel.domain = skp->smk_known; 1587 + skp->smk_netlabel.flags = 1588 + NETLBL_SECATTR_DOMAIN | NETLBL_SECATTR_MLS_LVL; 1589 + return smk_netlbl_mls(smack_cipso_direct, skp->smk_known, 1590 + &skp->smk_netlabel, strlen(skp->smk_known)); 1591 + } 1592 + 2084 1593 /** 2085 1594 * init_smk_fs - get the smackfs superblock 2086 1595 * ··· 2106 1597 static int __init init_smk_fs(void) 2107 1598 { 2108 1599 int err; 1600 + int rc; 2109 1601 2110 1602 if (!security_module_enable(&smack_ops)) 2111 1603 return 0; ··· 2123 1613 2124 1614 smk_cipso_doi(); 2125 1615 smk_unlbl_ambient(NULL); 1616 + 1617 + rc = smk_preset_netlabel(&smack_known_floor); 1618 + if (err == 0 && rc < 0) 1619 + err = rc; 1620 + rc = smk_preset_netlabel(&smack_known_hat); 1621 + if (err == 0 && rc < 0) 1622 + err = rc; 1623 + rc = smk_preset_netlabel(&smack_known_huh); 1624 + if (err == 0 && rc < 0) 1625 + err = rc; 1626 + rc = smk_preset_netlabel(&smack_known_invalid); 1627 + if (err == 0 && rc < 0) 1628 + err = rc; 1629 + rc = smk_preset_netlabel(&smack_known_star); 1630 + if (err == 0 && rc < 0) 1631 + err = rc; 1632 + rc = smk_preset_netlabel(&smack_known_web); 1633 + if (err == 0 && rc < 0) 1634 + err = rc; 2126 1635 2127 1636 return err; 2128 1637 }
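The smackfs changes above move category handling from a hand-walked fixed bitmap (the removed `for (m = 0x80; ...)` loop) to the netlabel catmap walker, while `smk_set_cipso` still builds the set with `smack_catset_bit`. A minimal userspace sketch of that bitmap convention — big-endian bit order, 1-based category numbers — follows; `SMK_CIPSOLEN` and the helper names here are local stand-ins for illustration, not the kernel's exported API:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define SMK_CIPSOLEN 24   /* assumed bitmap size, mirroring the kernel constant */

/* Sketch of smack_catset_bit(): set 1-based category `cat` in a
 * big-endian bitmap, the convention the removed seq_show loop decoded. */
static void catset_bit(unsigned int cat, unsigned char *catset)
{
	if (cat == 0 || cat > SMK_CIPSOLEN * 8)
		return;
	catset[(cat - 1) / 8] |= 0x80 >> ((cat - 1) % 8);
}

/* Userspace rewrite of the removed printing loop: walk the bitmap
 * high bit first and emit "/c1,c2,..." into `out`. */
static void catset_format(const unsigned char *catset, char *out)
{
	char sep = '/';
	int cat = 1;
	int n = 0;
	int i;

	for (i = 0; i < SMK_CIPSOLEN; i++) {
		unsigned char m;

		for (m = 0x80; m != 0; m >>= 1) {
			if (m & catset[i]) {
				n += sprintf(out + n, "%c%d", sep, cat);
				sep = ',';
			}
			cat++;
		}
	}
	out[n] = '\0';
}
```

Setting categories 5 and 17 and formatting the bitmap yields `/5,17`, matching the `label level/cat,cat` lines that `cipso_seq_show` and `cipso2_seq_show` print.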
+6 -20
security/tomoyo/common.c
··· 850 850 policy_list[TOMOYO_ID_MANAGER], 851 851 }; 852 852 int error = is_delete ? -ENOENT : -ENOMEM; 853 - if (tomoyo_domain_def(manager)) { 854 - if (!tomoyo_correct_domain(manager)) 855 - return -EINVAL; 856 - e.is_domain = true; 857 - } else { 858 - if (!tomoyo_correct_path(manager)) 859 - return -EINVAL; 860 - } 853 + if (!tomoyo_correct_domain(manager) && 854 + !tomoyo_correct_word(manager)) 855 + return -EINVAL; 861 856 e.manager = tomoyo_get_name(manager); 862 857 if (e.manager) { 863 858 error = tomoyo_update_policy(&e.head, sizeof(e), &param, ··· 927 932 return true; 928 933 if (!tomoyo_manage_by_non_root && (task->cred->uid || task->cred->euid)) 929 934 return false; 930 - list_for_each_entry_rcu(ptr, &tomoyo_kernel_namespace. 931 - policy_list[TOMOYO_ID_MANAGER], head.list) { 932 - if (!ptr->head.is_deleted && ptr->is_domain 933 - && !tomoyo_pathcmp(domainname, ptr->manager)) { 934 - found = true; 935 - break; 936 - } 937 - } 938 - if (found) 939 - return true; 940 935 exe = tomoyo_get_exe(); 941 936 if (!exe) 942 937 return false; 943 938 list_for_each_entry_rcu(ptr, &tomoyo_kernel_namespace. 944 939 policy_list[TOMOYO_ID_MANAGER], head.list) { 945 - if (!ptr->head.is_deleted && !ptr->is_domain 946 - && !strcmp(exe, ptr->manager->name)) { 940 + if (!ptr->head.is_deleted && 941 + (!tomoyo_pathcmp(domainname, ptr->manager) || 942 + !strcmp(exe, ptr->manager->name))) { 947 943 found = true; 948 944 break; 949 945 }
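The TOMOYO change above drops the `is_domain` split: a single manager list is kept, and an entry matches if it equals either the caller's domainname or its executable path. A simplified userspace model of the resulting check — the RCU list, `tomoyo_pathcmp`, and locking are reduced to a plain array and `strcmp`, and the paths used below are made-up examples:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

struct manager {
	const char *name;   /* a program path or a domainname */
};

/* Sketch of the unified test in tomoyo_manager(): one pass over one
 * list, accepting a match on either identity of the caller. */
static bool is_manager(const struct manager *list, int n,
		       const char *domainname, const char *exe)
{
	int i;

	for (i = 0; i < n; i++)
		if (!strcmp(domainname, list[i].name) ||
		    !strcmp(exe, list[i].name))
			return true;
	return false;
}
```

Before this commit the two kinds of entry were checked in two separate loops; folding them into one comparison is what lets `tomoyo_update_manager_entry` accept manager programs that do not start with `/`, as the shortlog notes.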
-1
security/tomoyo/common.h
··· 860 860 /* Structure for policy manager. */ 861 861 struct tomoyo_manager { 862 862 struct tomoyo_acl_head head; 863 - bool is_domain; /* True if manager is a domainname. */ 864 863 /* A path to program or a domainname. */ 865 864 const struct tomoyo_path_info *manager; 866 865 };
+3 -3
security/tomoyo/tomoyo.c
··· 319 319 } 320 320 321 321 /** 322 - * tomoyo_dentry_open - Target for security_dentry_open(). 322 + * tomoyo_file_open - Target for security_file_open(). 323 323 * 324 324 * @f: Pointer to "struct file". 325 325 * @cred: Pointer to "struct cred". 326 326 * 327 327 * Returns 0 on success, negative value otherwise. 328 328 */ 329 - static int tomoyo_dentry_open(struct file *f, const struct cred *cred) 329 + static int tomoyo_file_open(struct file *f, const struct cred *cred) 330 330 { 331 331 int flags = f->f_flags; 332 332 /* Don't check read permission here if called from do_execve(). */ ··· 510 510 .bprm_set_creds = tomoyo_bprm_set_creds, 511 511 .bprm_check_security = tomoyo_bprm_check_security, 512 512 .file_fcntl = tomoyo_file_fcntl, 513 - .dentry_open = tomoyo_dentry_open, 513 + .file_open = tomoyo_file_open, 514 514 .path_truncate = tomoyo_path_truncate, 515 515 .path_unlink = tomoyo_path_unlink, 516 516 .path_mkdir = tomoyo_path_mkdir,
+51 -12
security/yama/yama_lsm.c
··· 18 18 #include <linux/prctl.h> 19 19 #include <linux/ratelimit.h> 20 20 21 - static int ptrace_scope = 1; 21 + #define YAMA_SCOPE_DISABLED 0 22 + #define YAMA_SCOPE_RELATIONAL 1 23 + #define YAMA_SCOPE_CAPABILITY 2 24 + #define YAMA_SCOPE_NO_ATTACH 3 25 + 26 + static int ptrace_scope = YAMA_SCOPE_RELATIONAL; 22 27 23 28 /* describe a ptrace relationship for potential exception */ 24 29 struct ptrace_relation { ··· 256 251 return rc; 257 252 258 253 /* require ptrace target be a child of ptracer on attach */ 259 - if (mode == PTRACE_MODE_ATTACH && 260 - ptrace_scope && 261 - !task_is_descendant(current, child) && 262 - !ptracer_exception_found(current, child) && 263 - !capable(CAP_SYS_PTRACE)) 264 - rc = -EPERM; 254 + if (mode == PTRACE_MODE_ATTACH) { 255 + switch (ptrace_scope) { 256 + case YAMA_SCOPE_DISABLED: 257 + /* No additional restrictions. */ 258 + break; 259 + case YAMA_SCOPE_RELATIONAL: 260 + if (!task_is_descendant(current, child) && 261 + !ptracer_exception_found(current, child) && 262 + !ns_capable(task_user_ns(child), CAP_SYS_PTRACE)) 263 + rc = -EPERM; 264 + break; 265 + case YAMA_SCOPE_CAPABILITY: 266 + if (!ns_capable(task_user_ns(child), CAP_SYS_PTRACE)) 267 + rc = -EPERM; 268 + break; 269 + case YAMA_SCOPE_NO_ATTACH: 270 + default: 271 + rc = -EPERM; 272 + break; 273 + } 274 + } 265 275 266 276 if (rc) { 267 277 char name[sizeof(current->comm)]; 268 - printk_ratelimited(KERN_NOTICE "ptrace of non-child" 269 - " pid %d was attempted by: %s (pid %d)\n", 278 + printk_ratelimited(KERN_NOTICE 279 + "ptrace of pid %d was attempted by: %s (pid %d)\n", 270 280 child->pid, 271 281 get_task_comm(name, current), 272 282 current->pid); ··· 299 279 }; 300 280 301 281 #ifdef CONFIG_SYSCTL 282 + static int yama_dointvec_minmax(struct ctl_table *table, int write, 283 + void __user *buffer, size_t *lenp, loff_t *ppos) 284 + { 285 + int rc; 286 + 287 + if (write && !capable(CAP_SYS_PTRACE)) 288 + return -EPERM; 289 + 290 + rc = proc_dointvec_minmax(table, 
write, buffer, lenp, ppos); 291 + if (rc) 292 + return rc; 293 + 294 + /* Lock the max value if it ever gets set. */ 295 + if (write && *(int *)table->data == *(int *)table->extra2) 296 + table->extra1 = table->extra2; 297 + 298 + return rc; 299 + } 300 + 302 301 static int zero; 303 - static int one = 1; 302 + static int max_scope = YAMA_SCOPE_NO_ATTACH; 304 303 305 304 struct ctl_path yama_sysctl_path[] = { 306 305 { .procname = "kernel", }, ··· 333 294 .data = &ptrace_scope, 334 295 .maxlen = sizeof(int), 335 296 .mode = 0644, 336 - .proc_handler = proc_dointvec_minmax, 297 + .proc_handler = yama_dointvec_minmax, 337 298 .extra1 = &zero, 338 - .extra2 = &one, 299 + .extra2 = &max_scope, 339 300 }, 340 301 { } 341 302 };
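The four Yama modes introduced above form a strict decision table for `PTRACE_MODE_ATTACH`. A standalone sketch of that switch, with the kernel predicates (`task_is_descendant`, the exception list, `ns_capable(..., CAP_SYS_PTRACE)`) stubbed as booleans so the policy can be exercised in userspace:

```c
#include <assert.h>
#include <stdbool.h>

#define YAMA_SCOPE_DISABLED   0
#define YAMA_SCOPE_RELATIONAL 1
#define YAMA_SCOPE_CAPABILITY 2
#define YAMA_SCOPE_NO_ATTACH  3

/* Sketch of the PTRACE_MODE_ATTACH branch of
 * yama_ptrace_access_check(): returns true if the attach is allowed. */
static bool attach_allowed(int scope, bool is_descendant,
			   bool has_exception, bool has_cap_sys_ptrace)
{
	switch (scope) {
	case YAMA_SCOPE_DISABLED:
		/* No additional restrictions beyond the base checks. */
		return true;
	case YAMA_SCOPE_RELATIONAL:
		return is_descendant || has_exception || has_cap_sys_ptrace;
	case YAMA_SCOPE_CAPABILITY:
		return has_cap_sys_ptrace;
	case YAMA_SCOPE_NO_ATTACH:
	default:
		return false;
	}
}
```

Note the lock-in behavior in `yama_dointvec_minmax`: once `ptrace_scope` is written as its maximum (`YAMA_SCOPE_NO_ATTACH`), `extra1` is set to `extra2`, so the sysctl cannot be lowered again without a reboot.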