Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security

Pull security subsystem updates from James Morris:
"A quiet cycle for the security subsystem with just a few maintenance
updates."

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
Smack: create a sysfs mount point for smackfs
Smack: use select not depends in Kconfig
Yama: remove locking from delete path
Yama: add RCU to drop read locking
drivers/char/tpm: remove tasklet and cleanup
KEYS: Use keyring_alloc() to create special keyrings
KEYS: Reduce initial permissions on keys
KEYS: Make the session and process keyrings per-thread
seccomp: Make syscall skipping and nr changes more consistent
key: Fix resource leak
keys: Fix unreachable code
KEYS: Add payload preparsing opportunity prior to key instantiate or update

+369 -370
+68 -6
Documentation/prctl/seccomp_filter.txt
··· 95 95 96 96 SECCOMP_RET_TRAP: 97 97 Results in the kernel sending a SIGSYS signal to the triggering 98 - task without executing the system call. The kernel will 99 - rollback the register state to just before the system call 100 - entry such that a signal handler in the task will be able to 101 - inspect the ucontext_t->uc_mcontext registers and emulate 102 - system call success or failure upon return from the signal 103 - handler. 98 + task without executing the system call. siginfo->si_call_addr 99 + will show the address of the system call instruction, and 100 + siginfo->si_syscall and siginfo->si_arch will indicate which 101 + syscall was attempted. The program counter will be as though 102 + the syscall happened (i.e. it will not point to the syscall 103 + instruction). The return value register will contain an arch- 104 + dependent value -- if resuming execution, set it to something 105 + sensible. (The architecture dependency is because replacing 106 + it with -ENOSYS could overwrite some useful information.) 104 107 105 108 The SECCOMP_RET_DATA portion of the return value will be passed 106 109 as si_errno. ··· 125 122 of a PTRACE_EVENT_SECCOMP and the SECCOMP_RET_DATA portion of 126 123 the BPF program return value will be available to the tracer 127 124 via PTRACE_GETEVENTMSG. 125 + 126 + The tracer can skip the system call by changing the syscall number 127 + to -1. Alternatively, the tracer can change the system call 128 + requested by changing the system call to a valid syscall number. If 129 + the tracer asks to skip the system call, then the system call will 130 + appear to return the value that the tracer puts in the return value 131 + register. 132 + 133 + The seccomp check will not be run again after the tracer is 134 + notified. (This means that seccomp-based sandboxes MUST NOT 135 + allow use of ptrace, even of other sandboxed processes, without 136 + extreme care; ptracers can use this mechanism to escape.) 
128 137 129 138 SECCOMP_RET_ALLOW: 130 139 Results in the system call being executed. ··· 176 161 support seccomp filter with minor fixup: SIGSYS support and seccomp return 177 162 value checking. Then it must just add CONFIG_HAVE_ARCH_SECCOMP_FILTER 178 163 to its arch-specific Kconfig. 164 + 165 + 166 + 167 + Caveats 168 + ------- 169 + 170 + The vDSO can cause some system calls to run entirely in userspace, 171 + leading to surprises when you run programs on different machines that 172 + fall back to real syscalls. To minimize these surprises on x86, make 173 + sure you test with 174 + /sys/devices/system/clocksource/clocksource0/current_clocksource set to 175 + something like acpi_pm. 176 + 177 + On x86-64, vsyscall emulation is enabled by default. (vsyscalls are 178 + legacy variants on vDSO calls.) Currently, emulated vsyscalls will honor seccomp, with a few oddities: 179 + 180 + - A return value of SECCOMP_RET_TRAP will set a si_call_addr pointing to 181 + the vsyscall entry for the given call and not the address after the 182 + 'syscall' instruction. Any code which wants to restart the call 183 + should be aware that (a) a ret instruction has been emulated and (b) 184 + trying to resume the syscall will again trigger the standard vsyscall 185 + emulation security checks, making resuming the syscall mostly 186 + pointless. 187 + 188 + - A return value of SECCOMP_RET_TRACE will signal the tracer as usual, 189 + but the syscall may not be changed to another system call using the 190 + orig_rax register. It may only be changed to -1 in order to skip the 191 + currently emulated call. Any other change MAY terminate the process. 192 + The rip value seen by the tracer will be the syscall entry address; 193 + this is different from normal behavior. The tracer MUST NOT modify 194 + rip or rsp. (Do not rely on other changes terminating the process. 195 + They might work. 
For example, on some kernels, choosing a syscall 196 + that only exists in future kernels will be correctly emulated (by 197 + returning -ENOSYS). 198 + 199 + To detect this quirky behavior, check for addr & ~0x0C00 == 200 + 0xFFFFFFFFFF600000. (For SECCOMP_RET_TRACE, use rip. For 201 + SECCOMP_RET_TRAP, use siginfo->si_call_addr.) Do not check any other 202 + condition: future kernels may improve vsyscall emulation and current 203 + kernels in vsyscall=native mode will behave differently, but the 204 + instructions at 0xF...F600{0,4,8,C}00 will not be system calls in these 205 + cases. 206 + 207 + Note that modern systems are unlikely to use vsyscalls at all -- they 208 + are a legacy feature and they are considerably slower than standard 209 + syscalls. New code will use the vDSO, and vDSO-issued system calls 210 + are indistinguishable from normal system calls.
+17
Documentation/security/keys.txt
··· 994 994 reference pointer if successful. 995 995 996 996 997 + (*) A keyring can be created by: 998 + 999 + struct key *keyring_alloc(const char *description, uid_t uid, gid_t gid, 1000 + const struct cred *cred, 1001 + key_perm_t perm, 1002 + unsigned long flags, 1003 + struct key *dest); 1004 + 1005 + This creates a keyring with the given attributes and returns it. If dest 1006 + is not NULL, the new keyring will be linked into the keyring to which it 1007 + points. No permission checks are made upon the destination keyring. 1008 + 1009 + Error EDQUOT can be returned if the keyring would overload the quota (pass 1010 + KEY_ALLOC_NOT_IN_QUOTA in flags if the keyring shouldn't be accounted 1011 + towards the user's quota). Error ENOMEM can also be returned. 1012 + 1013 + 997 1014 (*) To check the validity of a key, this function can be called: 998 1015 999 1016 int validate_key(struct key *key);
+60 -52
arch/x86/kernel/vsyscall_64.c
··· 145 145 return nr; 146 146 } 147 147 148 - #ifdef CONFIG_SECCOMP 149 - static int vsyscall_seccomp(struct task_struct *tsk, int syscall_nr) 150 - { 151 - if (!seccomp_mode(&tsk->seccomp)) 152 - return 0; 153 - task_pt_regs(tsk)->orig_ax = syscall_nr; 154 - task_pt_regs(tsk)->ax = syscall_nr; 155 - return __secure_computing(syscall_nr); 156 - } 157 - #else 158 - #define vsyscall_seccomp(_tsk, _nr) 0 159 - #endif 160 - 161 148 static bool write_ok_or_segv(unsigned long ptr, size_t size) 162 149 { 163 150 /* ··· 177 190 { 178 191 struct task_struct *tsk; 179 192 unsigned long caller; 180 - int vsyscall_nr; 193 + int vsyscall_nr, syscall_nr, tmp; 181 194 int prev_sig_on_uaccess_error; 182 195 long ret; 183 - int skip; 184 196 185 197 /* 186 198 * No point in checking CS -- the only way to get here is a user mode ··· 211 225 } 212 226 213 227 tsk = current; 228 + 229 + /* 230 + * Check for access_ok violations and find the syscall nr. 231 + * 232 + * NULL is a valid user pointer (in the access_ok sense) on 32-bit and 233 + * 64-bit, so we don't need to special-case it here. For all the 234 + * vsyscalls, NULL means "don't write anything" not "write it at 235 + * address 0". 236 + */ 237 + switch (vsyscall_nr) { 238 + case 0: 239 + if (!write_ok_or_segv(regs->di, sizeof(struct timeval)) || 240 + !write_ok_or_segv(regs->si, sizeof(struct timezone))) { 241 + ret = -EFAULT; 242 + goto check_fault; 243 + } 244 + 245 + syscall_nr = __NR_gettimeofday; 246 + break; 247 + 248 + case 1: 249 + if (!write_ok_or_segv(regs->di, sizeof(time_t))) { 250 + ret = -EFAULT; 251 + goto check_fault; 252 + } 253 + 254 + syscall_nr = __NR_time; 255 + break; 256 + 257 + case 2: 258 + if (!write_ok_or_segv(regs->di, sizeof(unsigned)) || 259 + !write_ok_or_segv(regs->si, sizeof(unsigned))) { 260 + ret = -EFAULT; 261 + goto check_fault; 262 + } 263 + 264 + syscall_nr = __NR_getcpu; 265 + break; 266 + } 267 + 268 + /* 269 + * Handle seccomp. regs->ip must be the original value. 
270 + * See seccomp_send_sigsys and Documentation/prctl/seccomp_filter.txt. 271 + * 272 + * We could optimize the seccomp disabled case, but performance 273 + * here doesn't matter. 274 + */ 275 + regs->orig_ax = syscall_nr; 276 + regs->ax = -ENOSYS; 277 + tmp = secure_computing(syscall_nr); 278 + if ((!tmp && regs->orig_ax != syscall_nr) || regs->ip != address) { 279 + warn_bad_vsyscall(KERN_DEBUG, regs, 280 + "seccomp tried to change syscall nr or ip"); 281 + do_exit(SIGSYS); 282 + } 283 + if (tmp) 284 + goto do_ret; /* skip requested */ 285 + 214 286 /* 215 287 * With a real vsyscall, page faults cause SIGSEGV. We want to 216 288 * preserve that behavior to make writing exploits harder. ··· 276 232 prev_sig_on_uaccess_error = current_thread_info()->sig_on_uaccess_error; 277 233 current_thread_info()->sig_on_uaccess_error = 1; 278 234 279 - /* 280 - * NULL is a valid user pointer (in the access_ok sense) on 32-bit and 281 - * 64-bit, so we don't need to special-case it here. For all the 282 - * vsyscalls, NULL means "don't write anything" not "write it at 283 - * address 0". 
284 - */ 285 235 ret = -EFAULT; 286 - skip = 0; 287 236 switch (vsyscall_nr) { 288 237 case 0: 289 - skip = vsyscall_seccomp(tsk, __NR_gettimeofday); 290 - if (skip) 291 - break; 292 - 293 - if (!write_ok_or_segv(regs->di, sizeof(struct timeval)) || 294 - !write_ok_or_segv(regs->si, sizeof(struct timezone))) 295 - break; 296 - 297 238 ret = sys_gettimeofday( 298 239 (struct timeval __user *)regs->di, 299 240 (struct timezone __user *)regs->si); 300 241 break; 301 242 302 243 case 1: 303 - skip = vsyscall_seccomp(tsk, __NR_time); 304 - if (skip) 305 - break; 306 - 307 - if (!write_ok_or_segv(regs->di, sizeof(time_t))) 308 - break; 309 - 310 244 ret = sys_time((time_t __user *)regs->di); 311 245 break; 312 246 313 247 case 2: 314 - skip = vsyscall_seccomp(tsk, __NR_getcpu); 315 - if (skip) 316 - break; 317 - 318 - if (!write_ok_or_segv(regs->di, sizeof(unsigned)) || 319 - !write_ok_or_segv(regs->si, sizeof(unsigned))) 320 - break; 321 - 322 248 ret = sys_getcpu((unsigned __user *)regs->di, 323 249 (unsigned __user *)regs->si, 324 250 NULL); ··· 297 283 298 284 current_thread_info()->sig_on_uaccess_error = prev_sig_on_uaccess_error; 299 285 300 - if (skip) { 301 - if ((long)regs->ax <= 0L) /* seccomp errno emulation */ 302 - goto do_ret; 303 - goto done; /* seccomp trace/trap */ 304 - } 305 - 286 + check_fault: 306 287 if (ret == -EFAULT) { 307 288 /* Bad news -- userspace fed a bad pointer to a vsyscall. */ 308 289 warn_bad_vsyscall(KERN_INFO, regs, ··· 320 311 /* Emulate a ret instruction. */ 321 312 regs->ip = caller; 322 313 regs->sp += 8; 323 - done: 324 314 return true; 325 315 326 316 sigsegv:
+28 -53
drivers/char/tpm/tpm_ibmvtpm.c
··· 38 38 }; 39 39 MODULE_DEVICE_TABLE(vio, tpm_ibmvtpm_device_table); 40 40 41 - DECLARE_WAIT_QUEUE_HEAD(wq); 42 - 43 41 /** 44 42 * ibmvtpm_send_crq - Send a CRQ request 45 43 * @vdev: vio device struct ··· 81 83 { 82 84 struct ibmvtpm_dev *ibmvtpm; 83 85 u16 len; 86 + int sig; 84 87 85 88 ibmvtpm = (struct ibmvtpm_dev *)chip->vendor.data; 86 89 ··· 90 91 return 0; 91 92 } 92 93 93 - wait_event_interruptible(wq, ibmvtpm->crq_res.len != 0); 94 + sig = wait_event_interruptible(ibmvtpm->wq, ibmvtpm->res_len != 0); 95 + if (sig) 96 + return -EINTR; 94 97 95 - if (count < ibmvtpm->crq_res.len) { 98 + len = ibmvtpm->res_len; 99 + 100 + if (count < len) { 96 101 dev_err(ibmvtpm->dev, 97 102 "Invalid size in recv: count=%ld, crq_size=%d\n", 98 - count, ibmvtpm->crq_res.len); 103 + count, len); 99 104 return -EIO; 100 105 } 101 106 102 107 spin_lock(&ibmvtpm->rtce_lock); 103 - memcpy((void *)buf, (void *)ibmvtpm->rtce_buf, ibmvtpm->crq_res.len); 104 - memset(ibmvtpm->rtce_buf, 0, ibmvtpm->crq_res.len); 105 - ibmvtpm->crq_res.valid = 0; 106 - ibmvtpm->crq_res.msg = 0; 107 - len = ibmvtpm->crq_res.len; 108 - ibmvtpm->crq_res.len = 0; 108 + memcpy((void *)buf, (void *)ibmvtpm->rtce_buf, len); 109 + memset(ibmvtpm->rtce_buf, 0, len); 110 + ibmvtpm->res_len = 0; 109 111 spin_unlock(&ibmvtpm->rtce_lock); 110 112 return len; 111 113 } ··· 273 273 int rc = 0; 274 274 275 275 free_irq(vdev->irq, ibmvtpm); 276 - tasklet_kill(&ibmvtpm->tasklet); 277 276 278 277 do { 279 278 if (rc) ··· 371 372 static int tpm_ibmvtpm_resume(struct device *dev) 372 373 { 373 374 struct ibmvtpm_dev *ibmvtpm = ibmvtpm_get_data(dev); 374 - unsigned long flags; 375 375 int rc = 0; 376 376 377 377 do { ··· 385 387 return rc; 386 388 } 387 389 388 - spin_lock_irqsave(&ibmvtpm->lock, flags); 389 - vio_disable_interrupts(ibmvtpm->vdev); 390 - tasklet_schedule(&ibmvtpm->tasklet); 391 - spin_unlock_irqrestore(&ibmvtpm->lock, flags); 390 + rc = vio_enable_interrupts(ibmvtpm->vdev); 391 + if (rc) { 392 + 
dev_err(dev, "Error vio_enable_interrupts rc=%d\n", rc); 393 + return rc; 394 + } 392 395 393 396 rc = ibmvtpm_crq_send_init(ibmvtpm); 394 397 if (rc) ··· 466 467 if (crq->valid & VTPM_MSG_RES) { 467 468 if (++crq_q->index == crq_q->num_entry) 468 469 crq_q->index = 0; 469 - rmb(); 470 + smp_rmb(); 470 471 } else 471 472 crq = NULL; 472 473 return crq; ··· 534 535 ibmvtpm->vtpm_version = crq->data; 535 536 return; 536 537 case VTPM_TPM_COMMAND_RES: 537 - ibmvtpm->crq_res.valid = crq->valid; 538 - ibmvtpm->crq_res.msg = crq->msg; 539 - ibmvtpm->crq_res.len = crq->len; 540 - ibmvtpm->crq_res.data = crq->data; 541 - wake_up_interruptible(&wq); 538 + /* len of the data in rtce buffer */ 539 + ibmvtpm->res_len = crq->len; 540 + wake_up_interruptible(&ibmvtpm->wq); 542 541 return; 543 542 default: 544 543 return; ··· 556 559 static irqreturn_t ibmvtpm_interrupt(int irq, void *vtpm_instance) 557 560 { 558 561 struct ibmvtpm_dev *ibmvtpm = (struct ibmvtpm_dev *) vtpm_instance; 559 - unsigned long flags; 560 - 561 - spin_lock_irqsave(&ibmvtpm->lock, flags); 562 - vio_disable_interrupts(ibmvtpm->vdev); 563 - tasklet_schedule(&ibmvtpm->tasklet); 564 - spin_unlock_irqrestore(&ibmvtpm->lock, flags); 565 - 566 - return IRQ_HANDLED; 567 - } 568 - 569 - /** 570 - * ibmvtpm_tasklet - Interrupt handler tasklet 571 - * @data: ibm vtpm device struct 572 - * 573 - * Returns: 574 - * Nothing 575 - **/ 576 - static void ibmvtpm_tasklet(void *data) 577 - { 578 - struct ibmvtpm_dev *ibmvtpm = data; 579 562 struct ibmvtpm_crq *crq; 580 - unsigned long flags; 581 563 582 - spin_lock_irqsave(&ibmvtpm->lock, flags); 564 + /* while loop is needed for initial setup (get version and 565 + * get rtce_size). There should be only one tpm request at any 566 + * given time. 
567 + */ 583 568 while ((crq = ibmvtpm_crq_get_next(ibmvtpm)) != NULL) { 584 569 ibmvtpm_crq_process(crq, ibmvtpm); 585 570 crq->valid = 0; 586 - wmb(); 571 + smp_wmb(); 587 572 } 588 573 589 - vio_enable_interrupts(ibmvtpm->vdev); 590 - spin_unlock_irqrestore(&ibmvtpm->lock, flags); 574 + return IRQ_HANDLED; 591 575 } 592 576 593 577 /** ··· 628 650 goto reg_crq_cleanup; 629 651 } 630 652 631 - tasklet_init(&ibmvtpm->tasklet, (void *)ibmvtpm_tasklet, 632 - (unsigned long)ibmvtpm); 633 - 634 653 rc = request_irq(vio_dev->irq, ibmvtpm_interrupt, 0, 635 654 tpm_ibmvtpm_driver_name, ibmvtpm); 636 655 if (rc) { ··· 641 666 goto init_irq_cleanup; 642 667 } 643 668 669 + init_waitqueue_head(&ibmvtpm->wq); 670 + 644 671 crq_q->index = 0; 645 672 646 673 ibmvtpm->dev = dev; 647 674 ibmvtpm->vdev = vio_dev; 648 675 chip->vendor.data = (void *)ibmvtpm; 649 676 650 - spin_lock_init(&ibmvtpm->lock); 651 677 spin_lock_init(&ibmvtpm->rtce_lock); 652 678 653 679 rc = ibmvtpm_crq_send_init(ibmvtpm); ··· 665 689 666 690 return rc; 667 691 init_irq_cleanup: 668 - tasklet_kill(&ibmvtpm->tasklet); 669 692 do { 670 693 rc1 = plpar_hcall_norets(H_FREE_CRQ, vio_dev->unit_address); 671 694 } while (rc1 == H_BUSY || H_IS_LONG_BUSY(rc1));
+2 -3
drivers/char/tpm/tpm_ibmvtpm.h
··· 38 38 struct vio_dev *vdev; 39 39 struct ibmvtpm_crq_queue crq_queue; 40 40 dma_addr_t crq_dma_handle; 41 - spinlock_t lock; 42 - struct tasklet_struct tasklet; 43 41 u32 rtce_size; 44 42 void __iomem *rtce_buf; 45 43 dma_addr_t rtce_dma_handle; 46 44 spinlock_t rtce_lock; 47 - struct ibmvtpm_crq crq_res; 45 + wait_queue_head_t wq; 46 + u16 res_len; 48 47 u32 vtpm_version; 49 48 }; 50 49
+4 -8
fs/cifs/cifsacl.c
··· 346 346 if (!cred) 347 347 return -ENOMEM; 348 348 349 - keyring = key_alloc(&key_type_keyring, ".cifs_idmap", 0, 0, cred, 350 - (KEY_POS_ALL & ~KEY_POS_SETATTR) | 351 - KEY_USR_VIEW | KEY_USR_READ, 352 - KEY_ALLOC_NOT_IN_QUOTA); 349 + keyring = keyring_alloc(".cifs_idmap", 0, 0, cred, 350 + (KEY_POS_ALL & ~KEY_POS_SETATTR) | 351 + KEY_USR_VIEW | KEY_USR_READ, 352 + KEY_ALLOC_NOT_IN_QUOTA, NULL); 353 353 if (IS_ERR(keyring)) { 354 354 ret = PTR_ERR(keyring); 355 355 goto failed_put_cred; 356 356 } 357 - 358 - ret = key_instantiate_and_link(keyring, NULL, 0, NULL, NULL); 359 - if (ret < 0) 360 - goto failed_put_key; 361 357 362 358 ret = register_key_type(&cifs_idmap_key_type); 363 359 if (ret < 0)
+4 -8
fs/nfs/idmap.c
··· 193 193 if (!cred) 194 194 return -ENOMEM; 195 195 196 - keyring = key_alloc(&key_type_keyring, ".id_resolver", 0, 0, cred, 197 - (KEY_POS_ALL & ~KEY_POS_SETATTR) | 198 - KEY_USR_VIEW | KEY_USR_READ, 199 - KEY_ALLOC_NOT_IN_QUOTA); 196 + keyring = keyring_alloc(".id_resolver", 0, 0, cred, 197 + (KEY_POS_ALL & ~KEY_POS_SETATTR) | 198 + KEY_USR_VIEW | KEY_USR_READ, 199 + KEY_ALLOC_NOT_IN_QUOTA, NULL); 200 200 if (IS_ERR(keyring)) { 201 201 ret = PTR_ERR(keyring); 202 202 goto failed_put_cred; 203 203 } 204 - 205 - ret = key_instantiate_and_link(keyring, NULL, 0, NULL, NULL); 206 - if (ret < 0) 207 - goto failed_put_key; 208 204 209 205 ret = register_key_type(&key_type_id_resolver); 210 206 if (ret < 0)
+2 -15
include/linux/cred.h
··· 77 77 extern int in_egroup_p(kgid_t); 78 78 79 79 /* 80 - * The common credentials for a thread group 81 - * - shared by CLONE_THREAD 82 - */ 83 - #ifdef CONFIG_KEYS 84 - struct thread_group_cred { 85 - atomic_t usage; 86 - pid_t tgid; /* thread group process ID */ 87 - spinlock_t lock; 88 - struct key __rcu *session_keyring; /* keyring inherited over fork */ 89 - struct key *process_keyring; /* keyring private to this process */ 90 - struct rcu_head rcu; /* RCU deletion hook */ 91 - }; 92 - #endif 93 - 94 - /* 95 80 * The security context of a task 96 81 * 97 82 * The parts of the context break down into two categories: ··· 124 139 #ifdef CONFIG_KEYS 125 140 unsigned char jit_keyring; /* default keyring to attach requested 126 141 * keys to */ 142 + struct key __rcu *session_keyring; /* keyring inherited over fork */ 143 + struct key *process_keyring; /* keyring private to this process */ 127 144 struct key *thread_keyring; /* keyring private to this thread */ 128 145 struct key *request_key_auth; /* assumed request_key authority */ 129 146 struct thread_group_cred *tgcred; /* thread-group shared credentials */
+1
include/linux/key.h
··· 265 265 266 266 extern struct key *keyring_alloc(const char *description, kuid_t uid, kgid_t gid, 267 267 const struct cred *cred, 268 + key_perm_t perm, 268 269 unsigned long flags, 269 270 struct key *dest); 270 271
+15 -112
kernel/cred.c
··· 30 30 static struct kmem_cache *cred_jar; 31 31 32 32 /* 33 - * The common credentials for the initial task's thread group 34 - */ 35 - #ifdef CONFIG_KEYS 36 - static struct thread_group_cred init_tgcred = { 37 - .usage = ATOMIC_INIT(2), 38 - .tgid = 0, 39 - .lock = __SPIN_LOCK_UNLOCKED(init_cred.tgcred.lock), 40 - }; 41 - #endif 42 - 43 - /* 44 33 * The initial credentials for the initial task 45 34 */ 46 35 struct cred init_cred = { ··· 54 65 .user = INIT_USER, 55 66 .user_ns = &init_user_ns, 56 67 .group_info = &init_groups, 57 - #ifdef CONFIG_KEYS 58 - .tgcred = &init_tgcred, 59 - #endif 60 68 }; 61 69 62 70 static inline void set_cred_subscribers(struct cred *cred, int n) ··· 82 96 } 83 97 84 98 /* 85 - * Dispose of the shared task group credentials 86 - */ 87 - #ifdef CONFIG_KEYS 88 - static void release_tgcred_rcu(struct rcu_head *rcu) 89 - { 90 - struct thread_group_cred *tgcred = 91 - container_of(rcu, struct thread_group_cred, rcu); 92 - 93 - BUG_ON(atomic_read(&tgcred->usage) != 0); 94 - 95 - key_put(tgcred->session_keyring); 96 - key_put(tgcred->process_keyring); 97 - kfree(tgcred); 98 - } 99 - #endif 100 - 101 - /* 102 - * Release a set of thread group credentials. 
103 - */ 104 - static void release_tgcred(struct cred *cred) 105 - { 106 - #ifdef CONFIG_KEYS 107 - struct thread_group_cred *tgcred = cred->tgcred; 108 - 109 - if (atomic_dec_and_test(&tgcred->usage)) 110 - call_rcu(&tgcred->rcu, release_tgcred_rcu); 111 - #endif 112 - } 113 - 114 - /* 115 99 * The RCU callback to actually dispose of a set of credentials 116 100 */ 117 101 static void put_cred_rcu(struct rcu_head *rcu) ··· 106 150 #endif 107 151 108 152 security_cred_free(cred); 153 + key_put(cred->session_keyring); 154 + key_put(cred->process_keyring); 109 155 key_put(cred->thread_keyring); 110 156 key_put(cred->request_key_auth); 111 - release_tgcred(cred); 112 157 if (cred->group_info) 113 158 put_group_info(cred->group_info); 114 159 free_uid(cred->user); ··· 203 246 if (!new) 204 247 return NULL; 205 248 206 - #ifdef CONFIG_KEYS 207 - new->tgcred = kzalloc(sizeof(*new->tgcred), GFP_KERNEL); 208 - if (!new->tgcred) { 209 - kmem_cache_free(cred_jar, new); 210 - return NULL; 211 - } 212 - atomic_set(&new->tgcred->usage, 1); 213 - #endif 214 - 215 249 atomic_set(&new->usage, 1); 216 250 #ifdef CONFIG_DEBUG_CREDENTIALS 217 251 new->magic = CRED_MAGIC; ··· 256 308 get_user_ns(new->user_ns); 257 309 258 310 #ifdef CONFIG_KEYS 311 + key_get(new->session_keyring); 312 + key_get(new->process_keyring); 259 313 key_get(new->thread_keyring); 260 314 key_get(new->request_key_auth); 261 - atomic_inc(&new->tgcred->usage); 262 315 #endif 263 316 264 317 #ifdef CONFIG_SECURITY ··· 283 334 */ 284 335 struct cred *prepare_exec_creds(void) 285 336 { 286 - struct thread_group_cred *tgcred = NULL; 287 337 struct cred *new; 288 338 289 - #ifdef CONFIG_KEYS 290 - tgcred = kmalloc(sizeof(*tgcred), GFP_KERNEL); 291 - if (!tgcred) 292 - return NULL; 293 - #endif 294 - 295 339 new = prepare_creds(); 296 - if (!new) { 297 - kfree(tgcred); 340 + if (!new) 298 341 return new; 299 - } 300 342 301 343 #ifdef CONFIG_KEYS 302 344 /* newly exec'd tasks don't get a thread keyring */ 303 345 
key_put(new->thread_keyring); 304 346 new->thread_keyring = NULL; 305 347 306 - /* create a new per-thread-group creds for all this set of threads to 307 - * share */ 308 - memcpy(tgcred, new->tgcred, sizeof(struct thread_group_cred)); 309 - 310 - atomic_set(&tgcred->usage, 1); 311 - spin_lock_init(&tgcred->lock); 312 - 313 348 /* inherit the session keyring; new process keyring */ 314 - key_get(tgcred->session_keyring); 315 - tgcred->process_keyring = NULL; 316 - 317 - release_tgcred(new); 318 - new->tgcred = tgcred; 349 + key_put(new->process_keyring); 350 + new->process_keyring = NULL; 319 351 #endif 320 352 321 353 return new; ··· 313 383 */ 314 384 int copy_creds(struct task_struct *p, unsigned long clone_flags) 315 385 { 316 - #ifdef CONFIG_KEYS 317 - struct thread_group_cred *tgcred; 318 - #endif 319 386 struct cred *new; 320 387 int ret; 321 388 ··· 352 425 install_thread_keyring_to_cred(new); 353 426 } 354 427 355 - /* we share the process and session keyrings between all the threads in 356 - * a process - this is slightly icky as we violate COW credentials a 357 - * bit */ 428 + /* The process keyring is only shared between the threads in a process; 429 + * anything outside of those threads doesn't inherit. 
430 + */ 358 431 if (!(clone_flags & CLONE_THREAD)) { 359 - tgcred = kmalloc(sizeof(*tgcred), GFP_KERNEL); 360 - if (!tgcred) { 361 - ret = -ENOMEM; 362 - goto error_put; 363 - } 364 - atomic_set(&tgcred->usage, 1); 365 - spin_lock_init(&tgcred->lock); 366 - tgcred->process_keyring = NULL; 367 - tgcred->session_keyring = key_get(new->tgcred->session_keyring); 368 - 369 - release_tgcred(new); 370 - new->tgcred = tgcred; 432 + key_put(new->process_keyring); 433 + new->process_keyring = NULL; 371 434 } 372 435 #endif 373 436 ··· 560 643 */ 561 644 struct cred *prepare_kernel_cred(struct task_struct *daemon) 562 645 { 563 - #ifdef CONFIG_KEYS 564 - struct thread_group_cred *tgcred; 565 - #endif 566 646 const struct cred *old; 567 647 struct cred *new; 568 648 569 649 new = kmem_cache_alloc(cred_jar, GFP_KERNEL); 570 650 if (!new) 571 651 return NULL; 572 - 573 - #ifdef CONFIG_KEYS 574 - tgcred = kmalloc(sizeof(*tgcred), GFP_KERNEL); 575 - if (!tgcred) { 576 - kmem_cache_free(cred_jar, new); 577 - return NULL; 578 - } 579 - #endif 580 652 581 653 kdebug("prepare_kernel_cred() alloc %p", new); 582 654 ··· 584 678 get_group_info(new->group_info); 585 679 586 680 #ifdef CONFIG_KEYS 587 - atomic_set(&tgcred->usage, 1); 588 - spin_lock_init(&tgcred->lock); 589 - tgcred->process_keyring = NULL; 590 - tgcred->session_keyring = NULL; 591 - new->tgcred = tgcred; 592 - new->request_key_auth = NULL; 681 + new->session_keyring = NULL; 682 + new->process_keyring = NULL; 593 683 new->thread_keyring = NULL; 684 + new->request_key_auth = NULL; 594 685 new->jit_keyring = KEY_REQKEY_DEFL_THREAD_KEYRING; 595 686 #endif 596 687
+10 -3
kernel/seccomp.c
··· 396 396 #ifdef CONFIG_SECCOMP_FILTER 397 397 case SECCOMP_MODE_FILTER: { 398 398 int data; 399 + struct pt_regs *regs = task_pt_regs(current); 399 400 ret = seccomp_run_filters(this_syscall); 400 401 data = ret & SECCOMP_RET_DATA; 401 402 ret &= SECCOMP_RET_ACTION; 402 403 switch (ret) { 403 404 case SECCOMP_RET_ERRNO: 404 405 /* Set the low-order 16-bits as a errno. */ 405 - syscall_set_return_value(current, task_pt_regs(current), 406 + syscall_set_return_value(current, regs, 406 407 -data, 0); 407 408 goto skip; 408 409 case SECCOMP_RET_TRAP: 409 410 /* Show the handler the original registers. */ 410 - syscall_rollback(current, task_pt_regs(current)); 411 + syscall_rollback(current, regs); 411 412 /* Let the filter pass back 16 bits of data. */ 412 413 seccomp_send_sigsys(this_syscall, data); 413 414 goto skip; 414 415 case SECCOMP_RET_TRACE: 415 416 /* Skip these calls if there is no tracer. */ 416 - if (!ptrace_event_enabled(current, PTRACE_EVENT_SECCOMP)) 417 + if (!ptrace_event_enabled(current, PTRACE_EVENT_SECCOMP)) { 418 + syscall_set_return_value(current, regs, 419 + -ENOSYS, 0); 417 420 goto skip; 421 + } 418 422 /* Allow the BPF to provide the event message */ 419 423 ptrace_event(PTRACE_EVENT_SECCOMP, data); 420 424 /* ··· 429 425 */ 430 426 if (fatal_signal_pending(current)) 431 427 break; 428 + if (syscall_get_nr(current, regs) < 0) 429 + goto skip; /* Explicit request to skip. */ 430 + 432 431 return 0; 433 432 case SECCOMP_RET_ALLOW: 434 433 return 0;
+6 -9
net/dns_resolver/dns_key.c
··· 259 259 if (!cred) 260 260 return -ENOMEM; 261 261 262 - keyring = key_alloc(&key_type_keyring, ".dns_resolver", 263 - GLOBAL_ROOT_UID, GLOBAL_ROOT_GID, cred, 264 - (KEY_POS_ALL & ~KEY_POS_SETATTR) | 265 - KEY_USR_VIEW | KEY_USR_READ, 266 - KEY_ALLOC_NOT_IN_QUOTA); 262 + keyring = keyring_alloc(".dns_resolver", 263 + GLOBAL_ROOT_UID, GLOBAL_ROOT_GID, cred, 264 + (KEY_POS_ALL & ~KEY_POS_SETATTR) | 265 + KEY_USR_VIEW | KEY_USR_READ, 266 + KEY_ALLOC_NOT_IN_QUOTA, NULL); 267 267 if (IS_ERR(keyring)) { 268 268 ret = PTR_ERR(keyring); 269 269 goto failed_put_cred; 270 270 } 271 - 272 - ret = key_instantiate_and_link(keyring, NULL, 0, NULL, NULL); 273 - if (ret < 0) 274 - goto failed_put_key; 275 271 276 272 ret = register_key_type(&key_type_dns_resolver); 277 273 if (ret < 0) ··· 300 304 module_init(init_dns_resolver) 301 305 module_exit(exit_dns_resolver) 302 306 MODULE_LICENSE("GPL"); 307 +
+3 -3
security/keys/key.c
··· 854 854 /* if the client doesn't provide, decide on the permissions we want */ 855 855 if (perm == KEY_PERM_UNDEF) { 856 856 perm = KEY_POS_VIEW | KEY_POS_SEARCH | KEY_POS_LINK | KEY_POS_SETATTR; 857 - perm |= KEY_USR_VIEW | KEY_USR_SEARCH | KEY_USR_LINK | KEY_USR_SETATTR; 857 + perm |= KEY_USR_VIEW; 858 858 859 859 if (ktype->read) 860 - perm |= KEY_POS_READ | KEY_USR_READ; 860 + perm |= KEY_POS_READ; 861 861 862 862 if (ktype == &key_type_keyring || ktype->update) 863 - perm |= KEY_USR_WRITE; 863 + perm |= KEY_POS_WRITE; 864 864 } 865 865 866 866 /* allocate a new key */
+8 -7
security/keys/keyctl.c
··· 1132 1132 ret = rw_copy_check_uvector(WRITE, _payload_iov, ioc, 1133 1133 ARRAY_SIZE(iovstack), iovstack, &iov); 1134 1134 if (ret < 0) 1135 - return ret; 1135 + goto err; 1136 1136 if (ret == 0) 1137 1137 goto no_payload_free; 1138 1138 1139 1139 ret = keyctl_instantiate_key_common(id, iov, ioc, ret, ringid); 1140 - 1140 + err: 1141 1141 if (iov != iovstack) 1142 1142 kfree(iov); 1143 1143 return ret; ··· 1495 1495 goto error_keyring; 1496 1496 newwork = &cred->rcu; 1497 1497 1498 - cred->tgcred->session_keyring = key_ref_to_ptr(keyring_r); 1498 + cred->session_keyring = key_ref_to_ptr(keyring_r); 1499 + keyring_r = NULL; 1499 1500 init_task_work(newwork, key_change_session_keyring); 1500 1501 1501 1502 me = current; ··· 1520 1519 mycred = current_cred(); 1521 1520 pcred = __task_cred(parent); 1522 1521 if (mycred == pcred || 1523 - mycred->tgcred->session_keyring == pcred->tgcred->session_keyring) { 1522 + mycred->session_keyring == pcred->session_keyring) { 1524 1523 ret = 0; 1525 1524 goto unlock; 1526 1525 } ··· 1536 1535 goto unlock; 1537 1536 1538 1537 /* the keyrings must have the same UID */ 1539 - if ((pcred->tgcred->session_keyring && 1540 - !uid_eq(pcred->tgcred->session_keyring->uid, mycred->euid)) || 1541 - !uid_eq(mycred->tgcred->session_keyring->uid, mycred->euid)) 1538 + if ((pcred->session_keyring && 1539 + !uid_eq(pcred->session_keyring->uid, mycred->euid)) || 1540 + !uid_eq(mycred->session_keyring->uid, mycred->euid)) 1542 1541 goto unlock; 1543 1542 1544 1543 /* cancel an already pending keyring replacement */
+4 -6
security/keys/keyring.c
··· 257 257 * Allocate a keyring and link into the destination keyring. 258 258 */ 259 259 struct key *keyring_alloc(const char *description, kuid_t uid, kgid_t gid, 260 - const struct cred *cred, unsigned long flags, 261 - struct key *dest) 260 + const struct cred *cred, key_perm_t perm, 261 + unsigned long flags, struct key *dest) 262 262 { 263 263 struct key *keyring; 264 264 int ret; 265 265 266 266 keyring = key_alloc(&key_type_keyring, description, 267 - uid, gid, cred, 268 - (KEY_POS_ALL & ~KEY_POS_SETATTR) | KEY_USR_ALL, 269 - flags); 270 - 267 + uid, gid, cred, perm, flags); 271 268 if (!IS_ERR(keyring)) { 272 269 ret = key_instantiate_and_link(keyring, NULL, 0, dest, NULL); 273 270 if (ret < 0) { ··· 275 278 276 279 return keyring; 277 280 } 281 + EXPORT_SYMBOL(keyring_alloc); 278 282 279 283 /** 280 284 * keyring_search_aux - Search a keyring tree for a key matching some criteria
+38 -52
security/keys/process_keys.c
··· 45 45 struct user_struct *user;
46 46 const struct cred *cred;
47 47 struct key *uid_keyring, *session_keyring;
48 + key_perm_t user_keyring_perm;
48 49 char buf[20];
49 50 int ret;
50 51 uid_t uid;
51 52
53 + user_keyring_perm = (KEY_POS_ALL & ~KEY_POS_SETATTR) | KEY_USR_ALL;
52 54 cred = current_cred();
53 55 user = cred->user;
54 56 uid = from_kuid(cred->user_ns, user->uid);
··· 75 73 uid_keyring = find_keyring_by_name(buf, true);
76 74 if (IS_ERR(uid_keyring)) {
77 75 uid_keyring = keyring_alloc(buf, user->uid, INVALID_GID,
78 - cred, KEY_ALLOC_IN_QUOTA,
79 - NULL);
76 + cred, user_keyring_perm,
77 + KEY_ALLOC_IN_QUOTA, NULL);
80 78 if (IS_ERR(uid_keyring)) {
81 79 ret = PTR_ERR(uid_keyring);
82 80 goto error;
··· 91 89 if (IS_ERR(session_keyring)) {
92 90 session_keyring =
93 91 keyring_alloc(buf, user->uid, INVALID_GID,
94 - cred, KEY_ALLOC_IN_QUOTA, NULL);
92 + cred, user_keyring_perm,
93 + KEY_ALLOC_IN_QUOTA, NULL);
95 94 if (IS_ERR(session_keyring)) {
96 95 ret = PTR_ERR(session_keyring);
97 96 goto error_release;
··· 133 130 struct key *keyring;
134 131
135 132 keyring = keyring_alloc("_tid", new->uid, new->gid, new,
133 + KEY_POS_ALL | KEY_USR_VIEW,
136 134 KEY_ALLOC_QUOTA_OVERRUN, NULL);
137 135 if (IS_ERR(keyring))
138 136 return PTR_ERR(keyring);
··· 174 170 int install_process_keyring_to_cred(struct cred *new)
175 171 {
176 172 struct key *keyring;
177 - int ret;
178 173
179 - if (new->tgcred->process_keyring)
174 + if (new->process_keyring)
180 175 return -EEXIST;
181 176
182 - keyring = keyring_alloc("_pid", new->uid, new->gid,
183 - new, KEY_ALLOC_QUOTA_OVERRUN, NULL);
177 + keyring = keyring_alloc("_pid", new->uid, new->gid, new,
178 + KEY_POS_ALL | KEY_USR_VIEW,
179 + KEY_ALLOC_QUOTA_OVERRUN, NULL);
184 180 if (IS_ERR(keyring))
185 181 return PTR_ERR(keyring);
186 182
187 - spin_lock_irq(&new->tgcred->lock);
188 - if (!new->tgcred->process_keyring) {
189 - new->tgcred->process_keyring = keyring;
190 - keyring = NULL;
191 - ret = 0;
192 - } else {
193 - ret = -EEXIST;
194 - }
195 - spin_unlock_irq(&new->tgcred->lock);
196 - key_put(keyring);
197 - return ret;
183 + new->process_keyring = keyring;
184 + return 0;
198 185 }
199 186
200 187 /*
··· 226 231 /* create an empty session keyring */
227 232 if (!keyring) {
228 233 flags = KEY_ALLOC_QUOTA_OVERRUN;
229 - if (cred->tgcred->session_keyring)
234 + if (cred->session_keyring)
230 235 flags = KEY_ALLOC_IN_QUOTA;
231 236
232 - keyring = keyring_alloc("_ses", cred->uid, cred->gid,
233 - cred, flags, NULL);
237 + keyring = keyring_alloc("_ses", cred->uid, cred->gid, cred,
238 + KEY_POS_ALL | KEY_USR_VIEW | KEY_USR_READ,
239 + flags, NULL);
234 240 if (IS_ERR(keyring))
235 241 return PTR_ERR(keyring);
236 242 } else {
··· 239 243 }
240 244
241 245 /* install the keyring */
242 - spin_lock_irq(&cred->tgcred->lock);
243 - old = cred->tgcred->session_keyring;
244 - rcu_assign_pointer(cred->tgcred->session_keyring, keyring);
245 - spin_unlock_irq(&cred->tgcred->lock);
246 + old = cred->session_keyring;
247 + rcu_assign_pointer(cred->session_keyring, keyring);
246 248
247 - /* we're using RCU on the pointer, but there's no point synchronising
248 - * on it if it didn't previously point to anything */
249 - if (old) {
250 - synchronize_rcu();
249 + if (old)
251 250 key_put(old);
252 - }
253 251
254 252 return 0;
255 253 }
··· 358 368 }
359 369
360 370 /* search the process keyring second */
361 - if (cred->tgcred->process_keyring) {
371 + if (cred->process_keyring) {
362 372 key_ref = keyring_search_aux(
363 - make_key_ref(cred->tgcred->process_keyring, 1),
373 + make_key_ref(cred->process_keyring, 1),
364 374 cred, type, description, match, no_state_check);
365 375 if (!IS_ERR(key_ref))
366 376 goto found;
··· 379 389 }
380 390
381 391 /* search the session keyring */
382 - if (cred->tgcred->session_keyring) {
392 + if (cred->session_keyring) {
383 393 rcu_read_lock();
384 394 key_ref = keyring_search_aux(
385 - make_key_ref(rcu_dereference(
386 - cred->tgcred->session_keyring),
387 - 1),
395 + make_key_ref(rcu_dereference(cred->session_keyring), 1),
388 396 cred, type, description, match, no_state_check);
389 397 rcu_read_unlock();
390 398
··· 552 564 break;
553 565
554 566 case KEY_SPEC_PROCESS_KEYRING:
555 - if (!cred->tgcred->process_keyring) {
567 + if (!cred->process_keyring) {
556 568 if (!(lflags & KEY_LOOKUP_CREATE))
557 569 goto error;
558 570
··· 564 576 goto reget_creds;
565 577 }
566 578
567 - key = cred->tgcred->process_keyring;
579 + key = cred->process_keyring;
568 580 atomic_inc(&key->usage);
569 581 key_ref = make_key_ref(key, 1);
570 582 break;
571 583
572 584 case KEY_SPEC_SESSION_KEYRING:
573 - if (!cred->tgcred->session_keyring) {
585 + if (!cred->session_keyring) {
574 586 /* always install a session keyring upon access if one
575 587 * doesn't exist yet */
576 588 ret = install_user_keyrings();
··· 585 597 if (ret < 0)
586 598 goto error;
587 599 goto reget_creds;
588 - } else if (cred->tgcred->session_keyring ==
600 + } else if (cred->session_keyring ==
589 601 cred->user->session_keyring &&
590 602 lflags & KEY_LOOKUP_CREATE) {
591 603 ret = join_session_keyring(NULL);
··· 595 607 }
596 608
597 609 rcu_read_lock();
598 - key = rcu_dereference(cred->tgcred->session_keyring);
610 + key = rcu_dereference(cred->session_keyring);
599 611 atomic_inc(&key->usage);
600 612 rcu_read_unlock();
601 613 key_ref = make_key_ref(key, 1);
··· 755 767 struct key *keyring;
756 768 long ret, serial;
757 769
758 - /* only permit this if there's a single thread in the thread group -
759 - * this avoids us having to adjust the creds on all threads and risking
760 - * ENOMEM */
761 - if (!current_is_single_threaded())
762 - return -EMLINK;
763 -
764 770 new = prepare_creds();
765 771 if (!new)
766 772 return -ENOMEM;
··· 766 784 if (ret < 0)
767 785 goto error;
768 786
769 - serial = new->tgcred->session_keyring->serial;
787 + serial = new->session_keyring->serial;
770 788 ret = commit_creds(new);
771 789 if (ret == 0)
772 790 ret = serial;
··· 780 798 keyring = find_keyring_by_name(name, false);
781 799 if (PTR_ERR(keyring) == -ENOKEY) {
782 800 /* not found - try and create a new one */
783 - keyring = keyring_alloc(name, old->uid, old->gid, old,
784 - KEY_ALLOC_IN_QUOTA, NULL);
801 + keyring = keyring_alloc(
802 + name, old->uid, old->gid, old,
803 + KEY_POS_ALL | KEY_USR_VIEW | KEY_USR_READ | KEY_USR_LINK,
804 + KEY_ALLOC_IN_QUOTA, NULL);
785 805 if (IS_ERR(keyring)) {
786 806 ret = PTR_ERR(keyring);
787 807 goto error2;
788 808 }
789 809 } else if (IS_ERR(keyring)) {
790 810 ret = PTR_ERR(keyring);
811 + goto error2;
812 + } else if (keyring == new->session_keyring) {
813 + ret = 0;
791 814 goto error2;
792 815 }
793 816
··· 850 863
851 864 new->jit_keyring = old->jit_keyring;
852 865 new->thread_keyring = key_get(old->thread_keyring);
853 - new->tgcred->tgid = old->tgcred->tgid;
854 - new->tgcred->process_keyring = key_get(old->tgcred->process_keyring);
866 + new->process_keyring = key_get(old->process_keyring);
855 867
856 868 security_transfer_creds(new, old);
857 869
+15 -6
security/keys/request_key.c
··· 126 126
127 127 cred = get_current_cred();
128 128 keyring = keyring_alloc(desc, cred->fsuid, cred->fsgid, cred,
129 + KEY_POS_ALL | KEY_USR_VIEW | KEY_USR_READ,
129 130 KEY_ALLOC_QUOTA_OVERRUN, NULL);
130 131 put_cred(cred);
131 132 if (IS_ERR(keyring)) {
··· 151 150 cred->thread_keyring ? cred->thread_keyring->serial : 0);
152 151
153 152 prkey = 0;
154 - if (cred->tgcred->process_keyring)
155 - prkey = cred->tgcred->process_keyring->serial;
153 + if (cred->process_keyring)
154 + prkey = cred->process_keyring->serial;
156 155 sprintf(keyring_str[1], "%d", prkey);
157 156
158 157 rcu_read_lock();
159 - session = rcu_dereference(cred->tgcred->session_keyring);
158 + session = rcu_dereference(cred->session_keyring);
160 159 if (!session)
161 160 session = cred->user->session_keyring;
162 161 sskey = session->serial;
··· 298 297 break;
299 298
300 299 case KEY_REQKEY_DEFL_PROCESS_KEYRING:
301 - dest_keyring = key_get(cred->tgcred->process_keyring);
300 + dest_keyring = key_get(cred->process_keyring);
302 301 if (dest_keyring)
303 302 break;
304 303
305 304 case KEY_REQKEY_DEFL_SESSION_KEYRING:
306 305 rcu_read_lock();
307 306 dest_keyring = key_get(
308 - rcu_dereference(cred->tgcred->session_keyring));
307 + rcu_dereference(cred->session_keyring));
309 308 rcu_read_unlock();
310 309
311 310 if (dest_keyring)
··· 348 347 const struct cred *cred = current_cred();
349 348 unsigned long prealloc;
350 349 struct key *key;
350 + key_perm_t perm;
351 351 key_ref_t key_ref;
352 352 int ret;
353 353
··· 357 355 *_key = NULL;
358 356 mutex_lock(&user->cons_lock);
359 357
358 + perm = KEY_POS_VIEW | KEY_POS_SEARCH | KEY_POS_LINK | KEY_POS_SETATTR;
359 + perm |= KEY_USR_VIEW;
360 + if (type->read)
361 + perm |= KEY_POS_READ;
362 + if (type == &key_type_keyring || type->update)
363 + perm |= KEY_POS_WRITE;
364 +
360 365 key = key_alloc(type, description, cred->fsuid, cred->fsgid, cred,
361 - perm, flags);
362 367 if (IS_ERR(key))
363 368 goto alloc_failed;
364 369
+5 -1
security/smack/Kconfig
··· 1 1 config SECURITY_SMACK 2 2 bool "Simplified Mandatory Access Control Kernel Support" 3 - depends on NETLABEL && SECURITY_NETWORK 3 + depends on NET 4 + depends on INET 5 + depends on SECURITY 6 + select NETLABEL 7 + select SECURITY_NETWORK 4 8 default n 5 9 help 6 10 This selects the Simplified Mandatory Access Control Kernel.
+17
security/smack/smackfs.c
··· 2063 2063 .llseek = generic_file_llseek, 2064 2064 }; 2065 2065 2066 + static struct kset *smackfs_kset; 2067 + /** 2068 + * smk_init_sysfs - initialize /sys/fs/smackfs 2069 + * 2070 + */ 2071 + static int smk_init_sysfs(void) 2072 + { 2073 + smackfs_kset = kset_create_and_add("smackfs", NULL, fs_kobj); 2074 + if (!smackfs_kset) 2075 + return -ENOMEM; 2076 + return 0; 2077 + } 2078 + 2066 2079 /** 2067 2080 * smk_fill_super - fill the /smackfs superblock 2068 2081 * @sb: the empty superblock ··· 2195 2182 2196 2183 if (!security_module_enable(&smack_ops)) 2197 2184 return 0; 2185 + 2186 + err = smk_init_sysfs(); 2187 + if (err) 2188 + printk(KERN_ERR "smackfs: sysfs mountpoint problem.\n"); 2198 2189 2199 2190 err = register_filesystem(&smk_fs_type); 2200 2191 if (!err) {
+62 -26
security/yama/yama_lsm.c
··· 17 17 #include <linux/ptrace.h>
18 18 #include <linux/prctl.h>
19 19 #include <linux/ratelimit.h>
20 + #include <linux/workqueue.h>
20 21
21 22 #define YAMA_SCOPE_DISABLED 0
22 23 #define YAMA_SCOPE_RELATIONAL 1
··· 30 29 struct ptrace_relation {
31 30 struct task_struct *tracer;
32 31 struct task_struct *tracee;
32 + bool invalid;
33 33 struct list_head node;
34 + struct rcu_head rcu;
34 35 };
35 36
36 37 static LIST_HEAD(ptracer_relations);
37 38 static DEFINE_SPINLOCK(ptracer_relations_lock);
39 +
40 + static void yama_relation_cleanup(struct work_struct *work);
41 + static DECLARE_WORK(yama_relation_work, yama_relation_cleanup);
42 +
43 + /**
44 + * yama_relation_cleanup - remove invalid entries from the relation list
45 + *
46 + */
47 + static void yama_relation_cleanup(struct work_struct *work)
48 + {
49 + struct ptrace_relation *relation;
50 +
51 + spin_lock(&ptracer_relations_lock);
52 + rcu_read_lock();
53 + list_for_each_entry_rcu(relation, &ptracer_relations, node) {
54 + if (relation->invalid) {
55 + list_del_rcu(&relation->node);
56 + kfree_rcu(relation, rcu);
57 + }
58 + }
59 + rcu_read_unlock();
60 + spin_unlock(&ptracer_relations_lock);
61 + }
38 62
39 63 /**
40 64 * yama_ptracer_add - add/replace an exception for this tracer/tracee pair
··· 74 48 static int yama_ptracer_add(struct task_struct *tracer,
75 49 struct task_struct *tracee)
76 50 {
77 - int rc = 0;
78 - struct ptrace_relation *added;
79 - struct ptrace_relation *entry, *relation = NULL;
51 + struct ptrace_relation *relation, *added;
80 52
81 53 added = kmalloc(sizeof(*added), GFP_KERNEL);
82 54 if (!added)
83 55 return -ENOMEM;
84 56
85 - spin_lock_bh(&ptracer_relations_lock);
86 - list_for_each_entry(entry, &ptracer_relations, node)
87 - if (entry->tracee == tracee) {
88 - relation = entry;
89 - break;
57 + added->tracee = tracee;
58 + added->tracer = tracer;
59 + added->invalid = false;
60 +
61 + spin_lock(&ptracer_relations_lock);
62 + rcu_read_lock();
63 + list_for_each_entry_rcu(relation, &ptracer_relations, node) {
64 + if (relation->invalid)
65 + continue;
66 + if (relation->tracee == tracee) {
67 + list_replace_rcu(&relation->node, &added->node);
68 + kfree_rcu(relation, rcu);
69 + goto out;
90 70 }
91 - if (!relation) {
92 - relation = added;
93 - relation->tracee = tracee;
94 - list_add(&relation->node, &ptracer_relations);
95 71 }
96 - relation->tracer = tracer;
97 72
98 - spin_unlock_bh(&ptracer_relations_lock);
99 - if (added != relation)
100 - kfree(added);
73 + list_add_rcu(&added->node, &ptracer_relations);
101 74
102 - return rc;
75 + out:
76 + rcu_read_unlock();
77 + spin_unlock(&ptracer_relations_lock);
78 + return 0;
103 79 }
104 80
105 81 /**
··· 112 84 static void yama_ptracer_del(struct task_struct *tracer,
113 85 struct task_struct *tracee)
114 86 {
115 - struct ptrace_relation *relation, *safe;
87 + struct ptrace_relation *relation;
88 + bool marked = false;
116 89
117 - spin_lock_bh(&ptracer_relations_lock);
118 - list_for_each_entry_safe(relation, safe, &ptracer_relations, node)
90 + rcu_read_lock();
91 + list_for_each_entry_rcu(relation, &ptracer_relations, node) {
92 + if (relation->invalid)
93 + continue;
119 94 if (relation->tracee == tracee ||
120 95 (tracer && relation->tracer == tracer)) {
121 - list_del(&relation->node);
122 - kfree(relation);
96 + relation->invalid = true;
97 + marked = true;
123 98 }
124 - spin_unlock_bh(&ptracer_relations_lock);
99 + }
100 + rcu_read_unlock();
101 +
102 + if (marked)
103 + schedule_work(&yama_relation_work);
125 104 }
126 105
127 106 /**
··· 252 217 struct task_struct *parent = NULL;
253 218 bool found = false;
254 219
255 - spin_lock_bh(&ptracer_relations_lock);
256 220 rcu_read_lock();
257 221 if (!thread_group_leader(tracee))
258 222 tracee = rcu_dereference(tracee->group_leader);
259 - list_for_each_entry(relation, &ptracer_relations, node)
223 + list_for_each_entry_rcu(relation, &ptracer_relations, node) {
224 + if (relation->invalid)
225 + continue;
260 226 if (relation->tracee == tracee) {
261 227 parent = relation->tracer;
262 228 found = true;
263 229 break;
264 230 }
231 + }
265 232
266 233 if (found && (parent == NULL || task_is_descendant(parent, tracer)))
267 234 rc = 1;
268 235 rcu_read_unlock();
269 - spin_unlock_bh(&ptracer_relations_lock);
270 236
271 237 return rc;
272 238 }