Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'fork-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux

Pull fork cleanups from Christian Brauner:
"This is cleanup series from when we reworked a chunk of the process
creation paths in the kernel and switched to struct
{kernel_}clone_args.

High-level this does two main things:

- Remove the double export of both do_fork() and _do_fork() where
do_fork() used the inconsistent legacy clone calling convention.

Now we only export _do_fork() which is based on struct
kernel_clone_args.

- Remove the copy_thread_tls()/copy_thread() split, making the
architecture-specific HAVE_COPY_THREAD_TLS config option obsolete.

This switches all remaining architectures to select
HAVE_COPY_THREAD_TLS and thus to the copy_thread_tls() calling
convention. The current split makes the process creation codepaths
more convoluted than they need to be: each architecture has its own
copy_thread() function unless it selects HAVE_COPY_THREAD_TLS, in
which case it has a copy_thread_tls() function instead.

The split is no longer needed: all architectures support
CLONE_SETTLS, but quite a few of them never bothered to select
HAVE_COPY_THREAD_TLS and instead simply continued to use copy_thread()
with the old calling convention. Removing this split cleans up the
process creation codepaths and paves the way for implementing clone3()
on such architectures, since clone3() requires the copy_thread_tls()
calling convention.

After making each architecture support copy_thread_tls(), this
series simply renames that function back to copy_thread(). It also
switches all architectures that call do_fork() directly over to
_do_fork() and the struct kernel_clone_args calling convention. This
is a corollary of switching the architectures that did not yet support
copy_thread_tls() over to it, since do_fork() is conditional on not
supporting copy_thread_tls() (mostly because it lacks a separate
argument for tls, which would be trivial to fix, but there is no need
for this function to exist).

The do_fork() removal is in itself already useful, as it allows us to
remove the export of both do_fork() and _do_fork() that we currently
have in favor of only _do_fork(). This was already discussed back when
we added clone3(). The legacy clone() calling convention is - as is
probably well-known - somewhat odd:

#
# ABI hall of shame
#
config CLONE_BACKWARDS
config CLONE_BACKWARDS2
config CLONE_BACKWARDS3

that is aggravated by the fact that some architectures such as sparc
follow the CLONE_BACKWARDSx calling convention but don't really select
the corresponding config option since they call do_fork() directly.

So do_fork() enforces a somewhat arbitrary calling convention in the
first place, one that doesn't really help the individual architectures
that deviate from it. They can thus simply be switched to _do_fork(),
enforcing a single calling convention. (I really hope that any new
architectures will __not__ try to implement their own calling
conventions...)

Most architectures already have made a similar switch (m68k comes to
mind).

Overall this removes more code than it adds, even with a good portion
of added comments. It simplifies a chunk of arch-specific assembly,
either by moving the code into C or by simply rewriting the assembly.

Architectures that have been touched in non-trivial ways have all been
boot and stress tested: sparc and ia64, the two architectures touched
the most, were tested with Debian 9 images; nios2 was tested with a
custom-built buildroot image. For h8300 I couldn't get anything
bootable to test on, but the changes have been fairly automatic and
I'm sure we'll hear people yell if I broke something there. All
non-trivial changes to architectures have seen acks from the relevant
maintainers.

All other architectures, which have only been touched in trivial ways,
have been compile tested for each single patch of the series via git
rebase -x "make ..." v5.8-rc2. arm{64} and x86{_64} have additionally
been boot tested even though they were only trivially touched (removal
of the HAVE_COPY_THREAD_TLS select from their Kconfig), because they
are basically "core architectures" and it is trivial to get your hands
on a usable image"

* tag 'fork-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux:
arch: rename copy_thread_tls() back to copy_thread()
arch: remove HAVE_COPY_THREAD_TLS
unicore: switch to copy_thread_tls()
sh: switch to copy_thread_tls()
nds32: switch to copy_thread_tls()
microblaze: switch to copy_thread_tls()
hexagon: switch to copy_thread_tls()
c6x: switch to copy_thread_tls()
alpha: switch to copy_thread_tls()
fork: remove do_fork()
h8300: select HAVE_COPY_THREAD_TLS, switch to kernel_clone_args
nios2: enable HAVE_COPY_THREAD_TLS, switch to kernel_clone_args
ia64: enable HAVE_COPY_THREAD_TLS, switch to kernel_clone_args
sparc: unconditionally enable HAVE_COPY_THREAD_TLS
sparc: share process creation helpers between sparc and sparc64
sparc64: enable HAVE_COPY_THREAD_TLS
fork: fold legacy_clone_args_valid() into _do_fork()

+274 -286
-7
arch/Kconfig
··· 754 754 depends on MMU 755 755 select ARCH_HAS_ELF_RANDOMIZE 756 756 757 - config HAVE_COPY_THREAD_TLS 758 - bool 759 - help 760 - Architecture provides copy_thread_tls to accept tls argument via 761 - normal C parameter passing, rather than extracting the syscall 762 - argument from pt_regs. 763 - 764 757 config HAVE_STACK_VALIDATION 765 758 bool 766 759 help
+4 -5
arch/alpha/kernel/process.c
··· 233 233 /* 234 234 * Copy architecture-specific thread state 235 235 */ 236 - int 237 - copy_thread(unsigned long clone_flags, unsigned long usp, 238 - unsigned long kthread_arg, 239 - struct task_struct *p) 236 + int copy_thread(unsigned long clone_flags, unsigned long usp, 237 + unsigned long kthread_arg, struct task_struct *p, 238 + unsigned long tls) 240 239 { 241 240 extern void ret_from_fork(void); 242 241 extern void ret_from_kernel_thread(void); ··· 266 267 required for proper operation in the case of a threaded 267 268 application calling fork. */ 268 269 if (clone_flags & CLONE_SETTLS) 269 - childti->pcb.unique = regs->r20; 270 + childti->pcb.unique = tls; 270 271 else 271 272 regs->r20 = 0; /* OSF/1 has some strange fork() semantics. */ 272 273 childti->pcb.usp = usp ?: rdusp();
-1
arch/arc/Kconfig
··· 29 29 select GENERIC_SMP_IDLE_THREAD 30 30 select HAVE_ARCH_KGDB 31 31 select HAVE_ARCH_TRACEHOOK 32 - select HAVE_COPY_THREAD_TLS 33 32 select HAVE_DEBUG_STACKOVERFLOW 34 33 select HAVE_DEBUG_KMEMLEAK 35 34 select HAVE_FUTEX_CMPXCHG if FUTEX
+3 -2
arch/arc/kernel/process.c
··· 173 173 * | user_r25 | 174 174 * ------------------ <===== END of PAGE 175 175 */ 176 - int copy_thread_tls(unsigned long clone_flags, unsigned long usp, 177 - unsigned long kthread_arg, struct task_struct *p, unsigned long tls) 176 + int copy_thread(unsigned long clone_flags, unsigned long usp, 177 + unsigned long kthread_arg, struct task_struct *p, 178 + unsigned long tls) 178 179 { 179 180 struct pt_regs *c_regs; /* child's pt_regs */ 180 181 unsigned long *childksp; /* to unwind out of __switch_to() */
-1
arch/arm/Kconfig
··· 72 72 select HAVE_ARM_SMCCC if CPU_V7 73 73 select HAVE_EBPF_JIT if !CPU_ENDIAN_BE32 74 74 select HAVE_CONTEXT_TRACKING 75 - select HAVE_COPY_THREAD_TLS 76 75 select HAVE_C_RECORDMCOUNT 77 76 select HAVE_DEBUG_KMEMLEAK if !XIP_KERNEL 78 77 select HAVE_DMA_CONTIGUOUS if MMU
+2 -3
arch/arm/kernel/process.c
··· 225 225 226 226 asmlinkage void ret_from_fork(void) __asm__("ret_from_fork"); 227 227 228 - int 229 - copy_thread_tls(unsigned long clone_flags, unsigned long stack_start, 230 - unsigned long stk_sz, struct task_struct *p, unsigned long tls) 228 + int copy_thread(unsigned long clone_flags, unsigned long stack_start, 229 + unsigned long stk_sz, struct task_struct *p, unsigned long tls) 231 230 { 232 231 struct thread_info *thread = task_thread_info(p); 233 232 struct pt_regs *childregs = task_pt_regs(p);
-1
arch/arm64/Kconfig
··· 149 149 select HAVE_CMPXCHG_DOUBLE 150 150 select HAVE_CMPXCHG_LOCAL 151 151 select HAVE_CONTEXT_TRACKING 152 - select HAVE_COPY_THREAD_TLS 153 152 select HAVE_DEBUG_BUGVERBOSE 154 153 select HAVE_DEBUG_KMEMLEAK 155 154 select HAVE_DMA_CONTIGUOUS
+1 -1
arch/arm64/kernel/process.c
··· 375 375 376 376 asmlinkage void ret_from_fork(void) asm("ret_from_fork"); 377 377 378 - int copy_thread_tls(unsigned long clone_flags, unsigned long stack_start, 378 + int copy_thread(unsigned long clone_flags, unsigned long stack_start, 379 379 unsigned long stk_sz, struct task_struct *p, unsigned long tls) 380 380 { 381 381 struct pt_regs *childregs = task_pt_regs(p);
+2 -2
arch/c6x/kernel/process.c
··· 105 105 * Copy a new thread context in its stack. 106 106 */ 107 107 int copy_thread(unsigned long clone_flags, unsigned long usp, 108 - unsigned long ustk_size, 109 - struct task_struct *p) 108 + unsigned long ustk_size, struct task_struct *p, 109 + unsigned long tls) 110 110 { 111 111 struct pt_regs *childregs; 112 112
-1
arch/csky/Kconfig
··· 38 38 select GX6605S_TIMER if CPU_CK610 39 39 select HAVE_ARCH_TRACEHOOK 40 40 select HAVE_ARCH_AUDITSYSCALL 41 - select HAVE_COPY_THREAD_TLS 42 41 select HAVE_DEBUG_BUGVERBOSE 43 42 select HAVE_DYNAMIC_FTRACE 44 43 select HAVE_DYNAMIC_FTRACE_WITH_REGS
+1 -1
arch/csky/kernel/process.c
··· 40 40 return sw->r15; 41 41 } 42 42 43 - int copy_thread_tls(unsigned long clone_flags, 43 + int copy_thread(unsigned long clone_flags, 44 44 unsigned long usp, 45 45 unsigned long kthread_arg, 46 46 struct task_struct *p,
+12 -5
arch/h8300/kernel/process.c
··· 105 105 { 106 106 } 107 107 108 - int copy_thread(unsigned long clone_flags, 109 - unsigned long usp, unsigned long topstk, 110 - struct task_struct *p) 108 + int copy_thread(unsigned long clone_flags, unsigned long usp, 109 + unsigned long topstk, struct task_struct *p, unsigned long tls) 111 110 { 112 111 struct pt_regs *childregs; 113 112 ··· 158 159 unsigned long newsp; 159 160 uintptr_t parent_tidptr; 160 161 uintptr_t child_tidptr; 162 + struct kernel_clone_args kargs = {}; 161 163 162 164 get_user(clone_flags, &args[0]); 163 165 get_user(newsp, &args[1]); 164 166 get_user(parent_tidptr, &args[2]); 165 167 get_user(child_tidptr, &args[3]); 166 - return do_fork(clone_flags, newsp, 0, 167 - (int __user *)parent_tidptr, (int __user *)child_tidptr); 168 + 169 + kargs.flags = (lower_32_bits(clone_flags) & ~CSIGNAL); 170 + kargs.pidfd = (int __user *)parent_tidptr; 171 + kargs.child_tid = (int __user *)child_tidptr; 172 + kargs.parent_tid = (int __user *)parent_tidptr; 173 + kargs.exit_signal = (lower_32_bits(clone_flags) & CSIGNAL); 174 + kargs.stack = newsp; 175 + 176 + return _do_fork(&kargs); 168 177 }
+3 -3
arch/hexagon/kernel/process.c
··· 50 50 /* 51 51 * Copy architecture-specific thread state 52 52 */ 53 - int copy_thread(unsigned long clone_flags, unsigned long usp, 54 - unsigned long arg, struct task_struct *p) 53 + int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg, 54 + struct task_struct *p, unsigned long tls) 55 55 { 56 56 struct thread_info *ti = task_thread_info(p); 57 57 struct hexagon_switch_stack *ss; ··· 100 100 * ugp is used to provide TLS support. 101 101 */ 102 102 if (clone_flags & CLONE_SETTLS) 103 - childregs->ugp = childregs->r04; 103 + childregs->ugp = tls; 104 104 105 105 /* 106 106 * Parent sees new pid -- not necessary, not even possible at
+13 -19
arch/ia64/kernel/entry.S
··· 112 112 .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8) 113 113 alloc r16=ar.pfs,8,2,6,0 114 114 DO_SAVE_SWITCH_STACK 115 - adds r2=PT(R16)+IA64_SWITCH_STACK_SIZE+16,sp 116 115 mov loc0=rp 117 - mov loc1=r16 // save ar.pfs across do_fork 116 + mov loc1=r16 // save ar.pfs across ia64_clone 118 117 .body 118 + mov out0=in0 119 119 mov out1=in1 120 120 mov out2=in2 121 - tbit.nz p6,p0=in0,CLONE_SETTLS_BIT 122 - mov out3=in3 // parent_tidptr: valid only w/CLONE_PARENT_SETTID 123 - ;; 124 - (p6) st8 [r2]=in5 // store TLS in r16 for copy_thread() 125 - mov out4=in4 // child_tidptr: valid only w/CLONE_CHILD_SETTID or CLONE_CHILD_CLEARTID 126 - mov out0=in0 // out0 = clone_flags 127 - br.call.sptk.many rp=do_fork 121 + mov out3=in3 122 + mov out4=in4 123 + mov out5=in5 124 + br.call.sptk.many rp=ia64_clone 128 125 .ret1: .restore sp 129 126 adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack 130 127 mov ar.pfs=loc1 ··· 140 143 .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8) 141 144 alloc r16=ar.pfs,8,2,6,0 142 145 DO_SAVE_SWITCH_STACK 143 - adds r2=PT(R16)+IA64_SWITCH_STACK_SIZE+16,sp 144 146 mov loc0=rp 145 147 mov loc1=r16 // save ar.pfs across ia64_clone 146 148 .body 149 + mov out0=in0 147 150 mov out1=in1 148 151 mov out2=16 // stacksize (compensates for 16-byte scratch area) 149 - tbit.nz p6,p0=in0,CLONE_SETTLS_BIT 150 - mov out3=in2 // parent_tidptr: valid only w/CLONE_PARENT_SETTID 151 - ;; 152 - (p6) st8 [r2]=in4 // store TLS in r13 (tp) 153 - mov out4=in3 // child_tidptr: valid only w/CLONE_CHILD_SETTID or CLONE_CHILD_CLEARTID 154 - mov out0=in0 // out0 = clone_flags 155 - br.call.sptk.many rp=do_fork 152 + mov out3=in3 153 + mov out4=in4 154 + mov out5=in5 155 + br.call.sptk.many rp=ia64_clone 156 156 .ret2: .restore sp 157 157 adds sp=IA64_SWITCH_STACK_SIZE,sp // pop the switch stack 158 158 mov ar.pfs=loc1 ··· 584 590 nop.i 0 585 591 /* 586 592 * We need to call schedule_tail() to complete the scheduling process. 587 - * Called by ia64_switch_to() after do_fork()->copy_thread(). r8 contains the 593 + * Called by ia64_switch_to() after ia64_clone()->copy_thread(). r8 contains the 588 594 * address of the previously executing task. 589 595 */ 590 596 br.call.sptk.many rp=ia64_invoke_schedule_tail
+23 -6
arch/ia64/kernel/process.c
··· 296 296 pfm_load_regs(task); 297 297 298 298 info = __this_cpu_read(pfm_syst_info); 299 - if (info & PFM_CPUINFO_SYST_WIDE) 299 + if (info & PFM_CPUINFO_SYST_WIDE) 300 300 pfm_syst_wide_update_task(task, info, 1); 301 301 #endif 302 302 } ··· 310 310 * 311 311 * <clone syscall> <some kernel call frames> 312 312 * sys_clone : 313 - * do_fork do_fork 313 + * _do_fork _do_fork 314 314 * copy_thread copy_thread 315 315 * 316 316 * This means that the stack layout is as follows: ··· 333 333 * so there is nothing to worry about. 334 334 */ 335 335 int 336 - copy_thread(unsigned long clone_flags, 337 - unsigned long user_stack_base, unsigned long user_stack_size, 338 - struct task_struct *p) 336 + copy_thread(unsigned long clone_flags, unsigned long user_stack_base, 337 + unsigned long user_stack_size, struct task_struct *p, unsigned long tls) 339 338 { 340 339 extern char ia64_ret_from_clone; 341 340 struct switch_stack *child_stack, *stack; ··· 415 416 rbs_size = stack->ar_bspstore - rbs; 416 417 memcpy((void *) child_rbs, (void *) rbs, rbs_size); 417 418 if (clone_flags & CLONE_SETTLS) 418 - child_ptregs->r13 = regs->r16; /* see sys_clone2() in entry.S */ 419 + child_ptregs->r13 = tls; 419 420 if (user_stack_base) { 420 421 child_ptregs->r12 = user_stack_base + user_stack_size - 16; 421 422 child_ptregs->ar_bspstore = user_stack_base; ··· 438 439 pfm_inherit(p, child_ptregs); 439 440 #endif 440 441 return retval; 442 + } 443 + 444 + asmlinkage long ia64_clone(unsigned long clone_flags, unsigned long stack_start, 445 + unsigned long stack_size, unsigned long parent_tidptr, 446 + unsigned long child_tidptr, unsigned long tls) 447 + { 448 + struct kernel_clone_args args = { 449 + .flags = (lower_32_bits(clone_flags) & ~CSIGNAL), 450 + .pidfd = (int __user *)parent_tidptr, 451 + .child_tid = (int __user *)child_tidptr, 452 + .parent_tid = (int __user *)parent_tidptr, 453 + .exit_signal = (lower_32_bits(clone_flags) & CSIGNAL), 454 + .stack = stack_start, 455 + 
.stack_size = stack_size, 456 + .tls = tls, 457 + }; 458 + 459 + return _do_fork(&args); 441 460 } 442 461 443 462 static void
-1
arch/m68k/Kconfig
··· 14 14 select HAVE_AOUT if MMU 15 15 select HAVE_ASM_MODVERSIONS 16 16 select HAVE_DEBUG_BUGVERBOSE 17 - select HAVE_COPY_THREAD_TLS 18 17 select GENERIC_IRQ_SHOW 19 18 select GENERIC_ATOMIC64 20 19 select HAVE_UID16
+2 -6
arch/m68k/kernel/process.c
··· 125 125 .tls = regs->d5, 126 126 }; 127 127 128 - if (!legacy_clone_args_valid(&args)) 129 - return -EINVAL; 130 - 131 128 return _do_fork(&args); 132 129 } 133 130 ··· 138 141 return sys_clone3((struct clone_args __user *)regs->d1, regs->d2); 139 142 } 140 143 141 - int copy_thread_tls(unsigned long clone_flags, unsigned long usp, 142 - unsigned long arg, struct task_struct *p, 143 - unsigned long tls) 144 + int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg, 145 + struct task_struct *p, unsigned long tls) 144 146 { 145 147 struct fork_frame { 146 148 struct switch_stack sw;
+3 -3
arch/microblaze/kernel/process.c
··· 54 54 { 55 55 } 56 56 57 - int copy_thread(unsigned long clone_flags, unsigned long usp, 58 - unsigned long arg, struct task_struct *p) 57 + int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg, 58 + struct task_struct *p, unsigned long tls) 59 59 { 60 60 struct pt_regs *childregs = task_pt_regs(p); 61 61 struct thread_info *ti = task_thread_info(p); ··· 114 114 * which contains TLS area 115 115 */ 116 116 if (clone_flags & CLONE_SETTLS) 117 - childregs->r21 = childregs->r10; 117 + childregs->r21 = tls; 118 118 119 119 return 0; 120 120 }
-1
arch/mips/Kconfig
··· 51 51 select HAVE_CBPF_JIT if !64BIT && !CPU_MICROMIPS 52 52 select HAVE_CONTEXT_TRACKING 53 53 select HAVE_TIF_NOHZ 54 - select HAVE_COPY_THREAD_TLS 55 54 select HAVE_C_RECORDMCOUNT 56 55 select HAVE_DEBUG_KMEMLEAK 57 56 select HAVE_DEBUG_STACKOVERFLOW
+3 -2
arch/mips/kernel/process.c
··· 119 119 /* 120 120 * Copy architecture-specific thread state 121 121 */ 122 - int copy_thread_tls(unsigned long clone_flags, unsigned long usp, 123 - unsigned long kthread_arg, struct task_struct *p, unsigned long tls) 122 + int copy_thread(unsigned long clone_flags, unsigned long usp, 123 + unsigned long kthread_arg, struct task_struct *p, 124 + unsigned long tls) 124 125 { 125 126 struct thread_info *ti = task_thread_info(p); 126 127 struct pt_regs *childregs, *regs = current_pt_regs();
+2 -2
arch/nds32/kernel/process.c
··· 150 150 151 151 asmlinkage void ret_from_fork(void) __asm__("ret_from_fork"); 152 152 int copy_thread(unsigned long clone_flags, unsigned long stack_start, 153 - unsigned long stk_sz, struct task_struct *p) 153 + unsigned long stk_sz, struct task_struct *p, unsigned long tls) 154 154 { 155 155 struct pt_regs *childregs = task_pt_regs(p); 156 156 ··· 170 170 childregs->uregs[0] = 0; 171 171 childregs->osp = 0; 172 172 if (clone_flags & CLONE_SETTLS) 173 - childregs->uregs[25] = childregs->uregs[3]; 173 + childregs->uregs[25] = tls; 174 174 } 175 175 /* cpu context switching */ 176 176 p->thread.cpu_context.pc = (unsigned long)ret_from_fork;
+1 -6
arch/nios2/kernel/entry.S
··· 389 389 */ 390 390 ENTRY(sys_clone) 391 391 SAVE_SWITCH_STACK 392 - addi sp, sp, -4 393 - stw r7, 0(sp) /* Pass 5th arg thru stack */ 394 - mov r7, r6 /* 4th arg is 3rd of clone() */ 395 - mov r6, zero /* 3rd arg always 0 */ 396 - call do_fork 397 - addi sp, sp, 4 392 + call nios2_clone 398 393 RESTORE_SWITCH_STACK 399 394 ret 400 395
+20 -3
arch/nios2/kernel/process.c
··· 100 100 { 101 101 } 102 102 103 - int copy_thread(unsigned long clone_flags, 104 - unsigned long usp, unsigned long arg, struct task_struct *p) 103 + int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg, 104 + struct task_struct *p, unsigned long tls) 105 105 { 106 106 struct pt_regs *childregs = task_pt_regs(p); 107 107 struct pt_regs *regs; ··· 140 140 141 141 /* Initialize tls register. */ 142 142 if (clone_flags & CLONE_SETTLS) 143 - childstack->r23 = regs->r8; 143 + childstack->r23 = tls; 144 144 145 145 return 0; 146 146 } ··· 258 258 int dump_fpu(struct pt_regs *regs, elf_fpregset_t *r) 259 259 { 260 260 return 0; /* Nios2 has no FPU and thus no FPU registers */ 261 + } 262 + 263 + asmlinkage int nios2_clone(unsigned long clone_flags, unsigned long newsp, 264 + int __user *parent_tidptr, int __user *child_tidptr, 265 + unsigned long tls) 266 + { 267 + struct kernel_clone_args args = { 268 + .flags = (lower_32_bits(clone_flags) & ~CSIGNAL), 269 + .pidfd = parent_tidptr, 270 + .child_tid = child_tidptr, 271 + .parent_tid = parent_tidptr, 272 + .exit_signal = (lower_32_bits(clone_flags) & CSIGNAL), 273 + .stack = newsp, 274 + .tls = tls, 275 + }; 276 + 277 + return _do_fork(&args); 261 278 }
-1
arch/openrisc/Kconfig
··· 16 16 select HANDLE_DOMAIN_IRQ 17 17 select GPIOLIB 18 18 select HAVE_ARCH_TRACEHOOK 19 - select HAVE_COPY_THREAD_TLS 20 19 select SPARSE_IRQ 21 20 select GENERIC_IRQ_CHIP 22 21 select GENERIC_IRQ_PROBE
+3 -3
arch/openrisc/kernel/process.c
··· 116 116 extern asmlinkage void ret_from_fork(void); 117 117 118 118 /* 119 - * copy_thread_tls 119 + * copy_thread 120 120 * @clone_flags: flags 121 121 * @usp: user stack pointer or fn for kernel thread 122 122 * @arg: arg to fn for kernel thread; always NULL for userspace thread ··· 147 147 */ 148 148 149 149 int 150 - copy_thread_tls(unsigned long clone_flags, unsigned long usp, 151 - unsigned long arg, struct task_struct *p, unsigned long tls) 150 + copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg, 151 + struct task_struct *p, unsigned long tls) 152 152 { 153 153 struct pt_regs *userregs; 154 154 struct pt_regs *kregs;
-1
arch/parisc/Kconfig
··· 62 62 select HAVE_FTRACE_MCOUNT_RECORD if HAVE_DYNAMIC_FTRACE 63 63 select HAVE_KPROBES_ON_FTRACE 64 64 select HAVE_DYNAMIC_FTRACE_WITH_REGS 65 - select HAVE_COPY_THREAD_TLS 66 65 67 66 help 68 67 The PA-RISC microprocessor is designed by Hewlett-Packard and used
+1 -1
arch/parisc/kernel/process.c
··· 208 208 * Copy architecture-specific thread state 209 209 */ 210 210 int 211 - copy_thread_tls(unsigned long clone_flags, unsigned long usp, 211 + copy_thread(unsigned long clone_flags, unsigned long usp, 212 212 unsigned long kthread_arg, struct task_struct *p, unsigned long tls) 213 213 { 214 214 struct pt_regs *cregs = &(p->thread.regs);
-1
arch/powerpc/Kconfig
··· 186 186 select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r2) 187 187 select HAVE_CONTEXT_TRACKING if PPC64 188 188 select HAVE_TIF_NOHZ if PPC64 189 - select HAVE_COPY_THREAD_TLS 190 189 select HAVE_DEBUG_KMEMLEAK 191 190 select HAVE_DEBUG_STACKOVERFLOW 192 191 select HAVE_DYNAMIC_FTRACE
+1 -1
arch/powerpc/kernel/process.c
··· 1593 1593 /* 1594 1594 * Copy architecture-specific thread state 1595 1595 */ 1596 - int copy_thread_tls(unsigned long clone_flags, unsigned long usp, 1596 + int copy_thread(unsigned long clone_flags, unsigned long usp, 1597 1597 unsigned long kthread_arg, struct task_struct *p, 1598 1598 unsigned long tls) 1599 1599 {
-1
arch/riscv/Kconfig
··· 54 54 select HAVE_ARCH_SECCOMP_FILTER 55 55 select HAVE_ARCH_TRACEHOOK 56 56 select HAVE_ASM_MODVERSIONS 57 - select HAVE_COPY_THREAD_TLS 58 57 select HAVE_DMA_CONTIGUOUS if MMU 59 58 select HAVE_EBPF_JIT if MMU 60 59 select HAVE_FUTEX_CMPXCHG if FUTEX
+2 -2
arch/riscv/kernel/process.c
··· 101 101 return 0; 102 102 } 103 103 104 - int copy_thread_tls(unsigned long clone_flags, unsigned long usp, 105 - unsigned long arg, struct task_struct *p, unsigned long tls) 104 + int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg, 105 + struct task_struct *p, unsigned long tls) 106 106 { 107 107 struct pt_regs *childregs = task_pt_regs(p); 108 108
-1
arch/s390/Kconfig
··· 136 136 select HAVE_EBPF_JIT if PACK_STACK && HAVE_MARCH_Z196_FEATURES 137 137 select HAVE_CMPXCHG_DOUBLE 138 138 select HAVE_CMPXCHG_LOCAL 139 - select HAVE_COPY_THREAD_TLS 140 139 select HAVE_DEBUG_KMEMLEAK 141 140 select HAVE_DMA_CONTIGUOUS 142 141 select HAVE_DYNAMIC_FTRACE
+2 -2
arch/s390/kernel/process.c
··· 80 80 return 0; 81 81 } 82 82 83 - int copy_thread_tls(unsigned long clone_flags, unsigned long new_stackp, 84 - unsigned long arg, struct task_struct *p, unsigned long tls) 83 + int copy_thread(unsigned long clone_flags, unsigned long new_stackp, 84 + unsigned long arg, struct task_struct *p, unsigned long tls) 85 85 { 86 86 struct fake_frame 87 87 {
+3 -3
arch/sh/kernel/process_32.c
··· 115 115 asmlinkage void ret_from_fork(void); 116 116 asmlinkage void ret_from_kernel_thread(void); 117 117 118 - int copy_thread(unsigned long clone_flags, unsigned long usp, 119 - unsigned long arg, struct task_struct *p) 118 + int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg, 119 + struct task_struct *p, unsigned long tls) 120 120 { 121 121 struct thread_info *ti = task_thread_info(p); 122 122 struct pt_regs *childregs; ··· 158 158 ti->addr_limit = USER_DS; 159 159 160 160 if (clone_flags & CLONE_SETTLS) 161 - childregs->gbr = childregs->regs[0]; 161 + childregs->gbr = tls; 162 162 163 163 childregs->regs[0] = 0; /* Set return value for child */ 164 164 p->thread.pc = (unsigned long) ret_from_fork;
+3 -4
arch/sparc/include/asm/syscalls.h
··· 4 4 5 5 struct pt_regs; 6 6 7 - asmlinkage long sparc_do_fork(unsigned long clone_flags, 8 - unsigned long stack_start, 9 - struct pt_regs *regs, 10 - unsigned long stack_size); 7 + asmlinkage long sparc_fork(struct pt_regs *regs); 8 + asmlinkage long sparc_vfork(struct pt_regs *regs); 9 + asmlinkage long sparc_clone(struct pt_regs *regs); 11 10 12 11 #endif /* _SPARC64_SYSCALLS_H */
+1
arch/sparc/kernel/Makefile
··· 33 33 obj-$(CONFIG_SPARC32) += sun4m_irq.o sun4d_irq.o 34 34 35 35 obj-y += process_$(BITS).o 36 + obj-y += process.o 36 37 obj-y += signal_$(BITS).o 37 38 obj-y += sigutil_$(BITS).o 38 39 obj-$(CONFIG_SPARC32) += ioport.o
+7 -22
arch/sparc/kernel/entry.S
··· 869 869 ld [%curptr + TI_TASK], %o4 870 870 rd %psr, %g4 871 871 WRITE_PAUSE 872 - mov SIGCHLD, %o0 ! arg0: clone flags 873 872 rd %wim, %g5 874 873 WRITE_PAUSE 875 - mov %fp, %o1 ! arg1: usp 876 874 std %g4, [%o4 + AOFF_task_thread + AOFF_thread_fork_kpsr] 877 - add %sp, STACKFRAME_SZ, %o2 ! arg2: pt_regs ptr 878 - mov 0, %o3 879 - call sparc_do_fork 875 + add %sp, STACKFRAME_SZ, %o0 876 + call sparc_fork 880 877 mov %l5, %o7 881 878 882 879 /* Whee, kernel threads! */ ··· 885 888 ld [%curptr + TI_TASK], %o4 886 889 rd %psr, %g4 887 890 WRITE_PAUSE 888 - 889 - /* arg0,1: flags,usp -- loaded already */ 890 - cmp %o1, 0x0 ! Is new_usp NULL? 891 891 rd %wim, %g5 892 892 WRITE_PAUSE 893 - be,a 1f 894 - mov %fp, %o1 ! yes, use callers usp 895 - andn %o1, 7, %o1 ! no, align to 8 bytes 896 - 1: 897 893 std %g4, [%o4 + AOFF_task_thread + AOFF_thread_fork_kpsr] 898 - add %sp, STACKFRAME_SZ, %o2 ! arg2: pt_regs ptr 899 - mov 0, %o3 900 - call sparc_do_fork 894 + add %sp, STACKFRAME_SZ, %o0 895 + call sparc_clone 901 896 mov %l5, %o7 902 897 903 898 /* Whee, real vfork! */ ··· 903 914 rd %wim, %g5 904 915 WRITE_PAUSE 905 916 std %g4, [%o4 + AOFF_task_thread + AOFF_thread_fork_kpsr] 906 - sethi %hi(0x4000 | 0x0100 | SIGCHLD), %o0 907 - mov %fp, %o1 908 - or %o0, %lo(0x4000 | 0x0100 | SIGCHLD), %o0 909 - sethi %hi(sparc_do_fork), %l1 910 - mov 0, %o3 911 - jmpl %l1 + %lo(sparc_do_fork), %g0 912 - add %sp, STACKFRAME_SZ, %o2 917 + sethi %hi(sparc_vfork), %l1 918 + jmpl %l1 + %lo(sparc_vfork), %g0 919 + add %sp, STACKFRAME_SZ, %o0 913 920 914 921 .align 4 915 922 linux_sparc_ni_syscall:
+5 -6
arch/sparc/kernel/kernel.h
··· 14 14 extern unsigned int fsr_storage; 15 15 extern int ncpus_probed; 16 16 17 + /* process{_32,_64}.c */ 18 + asmlinkage long sparc_clone(struct pt_regs *regs); 19 + asmlinkage long sparc_fork(struct pt_regs *regs); 20 + asmlinkage long sparc_vfork(struct pt_regs *regs); 21 + 17 22 #ifdef CONFIG_SPARC64 18 23 /* setup_64.c */ 19 24 struct seq_file; ··· 157 152 /* trampoline_32.S */ 158 153 extern unsigned long sun4m_cpu_startup; 159 154 extern unsigned long sun4d_cpu_startup; 160 - 161 - /* process_32.c */ 162 - asmlinkage int sparc_do_fork(unsigned long clone_flags, 163 - unsigned long stack_start, 164 - struct pt_regs *regs, 165 - unsigned long stack_size); 166 155 167 156 /* signal_32.c */ 168 157 asmlinkage void do_sigreturn(struct pt_regs *regs);
+110
arch/sparc/kernel/process.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + /* 4 + * This file handles the architecture independent parts of process handling.. 5 + */ 6 + 7 + #include <linux/compat.h> 8 + #include <linux/errno.h> 9 + #include <linux/kernel.h> 10 + #include <linux/ptrace.h> 11 + #include <linux/sched.h> 12 + #include <linux/sched/task.h> 13 + #include <linux/sched/task_stack.h> 14 + #include <linux/signal.h> 15 + 16 + #include "kernel.h" 17 + 18 + asmlinkage long sparc_fork(struct pt_regs *regs) 19 + { 20 + unsigned long orig_i1 = regs->u_regs[UREG_I1]; 21 + long ret; 22 + struct kernel_clone_args args = { 23 + .exit_signal = SIGCHLD, 24 + /* Reuse the parent's stack for the child. */ 25 + .stack = regs->u_regs[UREG_FP], 26 + }; 27 + 28 + ret = _do_fork(&args); 29 + 30 + /* If we get an error and potentially restart the system 31 + * call, we're screwed because copy_thread() clobbered 32 + * the parent's %o1. So detect that case and restore it 33 + * here. 34 + */ 35 + if ((unsigned long)ret >= -ERESTART_RESTARTBLOCK) 36 + regs->u_regs[UREG_I1] = orig_i1; 37 + 38 + return ret; 39 + } 40 + 41 + asmlinkage long sparc_vfork(struct pt_regs *regs) 42 + { 43 + unsigned long orig_i1 = regs->u_regs[UREG_I1]; 44 + long ret; 45 + 46 + struct kernel_clone_args args = { 47 + .flags = CLONE_VFORK | CLONE_VM, 48 + .exit_signal = SIGCHLD, 49 + /* Reuse the parent's stack for the child. */ 50 + .stack = regs->u_regs[UREG_FP], 51 + }; 52 + 53 + ret = _do_fork(&args); 54 + 55 + /* If we get an error and potentially restart the system 56 + * call, we're screwed because copy_thread() clobbered 57 + * the parent's %o1. So detect that case and restore it 58 + * here. 
59 + */ 60 + if ((unsigned long)ret >= -ERESTART_RESTARTBLOCK) 61 + regs->u_regs[UREG_I1] = orig_i1; 62 + 63 + return ret; 64 + } 65 + 66 + asmlinkage long sparc_clone(struct pt_regs *regs) 67 + { 68 + unsigned long orig_i1 = regs->u_regs[UREG_I1]; 69 + unsigned int flags = lower_32_bits(regs->u_regs[UREG_I0]); 70 + long ret; 71 + 72 + struct kernel_clone_args args = { 73 + .flags = (flags & ~CSIGNAL), 74 + .exit_signal = (flags & CSIGNAL), 75 + .tls = regs->u_regs[UREG_I3], 76 + }; 77 + 78 + #ifdef CONFIG_COMPAT 79 + if (test_thread_flag(TIF_32BIT)) { 80 + args.pidfd = compat_ptr(regs->u_regs[UREG_I2]); 81 + args.child_tid = compat_ptr(regs->u_regs[UREG_I4]); 82 + args.parent_tid = compat_ptr(regs->u_regs[UREG_I2]); 83 + } else 84 + #endif 85 + { 86 + args.pidfd = (int __user *)regs->u_regs[UREG_I2]; 87 + args.child_tid = (int __user *)regs->u_regs[UREG_I4]; 88 + args.parent_tid = (int __user *)regs->u_regs[UREG_I2]; 89 + } 90 + 91 + /* Did userspace give setup a separate stack for the child or are we 92 + * reusing the parent's? 93 + */ 94 + if (regs->u_regs[UREG_I1]) 95 + args.stack = regs->u_regs[UREG_I1]; 96 + else 97 + args.stack = regs->u_regs[UREG_FP]; 98 + 99 + ret = _do_fork(&args); 100 + 101 + /* If we get an error and potentially restart the system 102 + * call, we're screwed because copy_thread() clobbered 103 + * the parent's %o1. So detect that case and restore it 104 + * here. 105 + */ 106 + if ((unsigned long)ret >= -ERESTART_RESTARTBLOCK) 107 + regs->u_regs[UREG_I1] = orig_i1; 108 + 109 + return ret; 110 + }
+3 -30
arch/sparc/kernel/process_32.c
··· 257 257 return sp; 258 258 } 259 259 260 - asmlinkage int sparc_do_fork(unsigned long clone_flags, 261 - unsigned long stack_start, 262 - struct pt_regs *regs, 263 - unsigned long stack_size) 264 - { 265 - unsigned long parent_tid_ptr, child_tid_ptr; 266 - unsigned long orig_i1 = regs->u_regs[UREG_I1]; 267 - long ret; 268 - 269 - parent_tid_ptr = regs->u_regs[UREG_I2]; 270 - child_tid_ptr = regs->u_regs[UREG_I4]; 271 - 272 - ret = do_fork(clone_flags, stack_start, stack_size, 273 - (int __user *) parent_tid_ptr, 274 - (int __user *) child_tid_ptr); 275 - 276 - /* If we get an error and potentially restart the system 277 - * call, we're screwed because copy_thread() clobbered 278 - * the parent's %o1. So detect that case and restore it 279 - * here. 280 - */ 281 - if ((unsigned long)ret >= -ERESTART_RESTARTBLOCK) 282 - regs->u_regs[UREG_I1] = orig_i1; 283 - 284 - return ret; 285 - } 286 - 287 260 /* Copy a Sparc thread. The fork() return value conventions 288 261 * under SunOS are nothing short of bletcherous: 289 262 * Parent --> %o0 == childs pid, %o1 == 0 ··· 273 300 extern void ret_from_fork(void); 274 301 extern void ret_from_kernel_thread(void); 275 302 276 - int copy_thread(unsigned long clone_flags, unsigned long sp, 277 - unsigned long arg, struct task_struct *p) 303 + int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg, 304 + struct task_struct *p, unsigned long tls) 278 305 { 279 306 struct thread_info *ti = task_thread_info(p); 280 307 struct pt_regs *childregs, *regs = current_pt_regs(); ··· 376 403 regs->u_regs[UREG_I1] = 0; 377 404 378 405 if (clone_flags & CLONE_SETTLS) 379 - childregs->u_regs[UREG_G7] = regs->u_regs[UREG_I3]; 406 + childregs->u_regs[UREG_G7] = tls; 380 407 381 408 return 0; 382 409 }
+3 -37
arch/sparc/kernel/process_64.c
···
 		force_sig(SIGSEGV);
 }
 
-asmlinkage long sparc_do_fork(unsigned long clone_flags,
-			      unsigned long stack_start,
-			      struct pt_regs *regs,
-			      unsigned long stack_size)
-{
-	int __user *parent_tid_ptr, *child_tid_ptr;
-	unsigned long orig_i1 = regs->u_regs[UREG_I1];
-	long ret;
-
-#ifdef CONFIG_COMPAT
-	if (test_thread_flag(TIF_32BIT)) {
-		parent_tid_ptr = compat_ptr(regs->u_regs[UREG_I2]);
-		child_tid_ptr = compat_ptr(regs->u_regs[UREG_I4]);
-	} else
-#endif
-	{
-		parent_tid_ptr = (int __user *) regs->u_regs[UREG_I2];
-		child_tid_ptr = (int __user *) regs->u_regs[UREG_I4];
-	}
-
-	ret = do_fork(clone_flags, stack_start, stack_size,
-		      parent_tid_ptr, child_tid_ptr);
-
-	/* If we get an error and potentially restart the system
-	 * call, we're screwed because copy_thread() clobbered
-	 * the parent's %o1. So detect that case and restore it
-	 * here.
-	 */
-	if ((unsigned long)ret >= -ERESTART_RESTARTBLOCK)
-		regs->u_regs[UREG_I1] = orig_i1;
-
-	return ret;
-}
-
 /* Copy a Sparc thread. The fork() return value conventions
  * under SunOS are nothing short of bletcherous:
  * Parent --> %o0 == childs pid, %o1 == 0
  * Child  --> %o0 == parents pid, %o1 == 1
  */
-int copy_thread(unsigned long clone_flags, unsigned long sp,
-		unsigned long arg, struct task_struct *p)
+int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
+		struct task_struct *p, unsigned long tls)
 {
 	struct thread_info *t = task_thread_info(p);
 	struct pt_regs *regs = current_pt_regs();
···
 	regs->u_regs[UREG_I1] = 0;
 
 	if (clone_flags & CLONE_SETTLS)
-		t->kregs->u_regs[UREG_G7] = regs->u_regs[UREG_I3];
+		t->kregs->u_regs[UREG_G7] = tls;
 
 	return 0;
 }
+13 -10
arch/sparc/kernel/syscalls.S
···
 	 * during system calls...
 	 */
 	.align	32
-sys_vfork: /* Under Linux, vfork and fork are just special cases of clone. */
-	sethi	%hi(0x4000 | 0x0100 | SIGCHLD), %o0
-	or	%o0, %lo(0x4000 | 0x0100 | SIGCHLD), %o0
-	ba,pt	%xcc, sys_clone
+sys_vfork:
+	flushw
+	ba,pt	%xcc, sparc_vfork
+	 add	%sp, PTREGS_OFF, %o0
+
+	.align	32
 sys_fork:
-	clr	%o1
-	mov	SIGCHLD, %o0
+	flushw
+	ba,pt	%xcc, sparc_fork
+	 add	%sp, PTREGS_OFF, %o0
+
+	.align	32
 sys_clone:
 	flushw
-	movrz	%o1, %fp, %o1
-	mov	0, %o3
-	ba,pt	%xcc, sparc_do_fork
-	add	%sp, PTREGS_OFF, %o2
+	ba,pt	%xcc, sparc_clone
+	add	%sp, PTREGS_OFF, %o0
 
 	.globl	ret_from_fork
 ret_from_fork:
-1
arch/um/Kconfig
···
 	select HAVE_FUTEX_CMPXCHG if FUTEX
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DEBUG_BUGVERBOSE
-	select HAVE_COPY_THREAD_TLS
 	select GENERIC_IRQ_SHOW
 	select GENERIC_CPU_DEVICES
 	select GENERIC_CLOCKEVENTS
+1 -1
arch/um/kernel/process.c
···
 	userspace(&current->thread.regs.regs, current_thread_info()->aux_fp_regs);
 }
 
-int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
+int copy_thread(unsigned long clone_flags, unsigned long sp,
 		unsigned long arg, struct task_struct * p, unsigned long tls)
 {
 	void (*handler)(void);
-1
arch/x86/Kconfig
···
 	select HAVE_CMPXCHG_DOUBLE
 	select HAVE_CMPXCHG_LOCAL
 	select HAVE_CONTEXT_TRACKING		if X86_64
-	select HAVE_COPY_THREAD_TLS
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DMA_CONTIGUOUS
+2 -2
arch/x86/kernel/process.c
···
 	return do_set_thread_area_64(p, ARCH_SET_FS, tls);
 }
 
-int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
-		    unsigned long arg, struct task_struct *p, unsigned long tls)
+int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
+		struct task_struct *p, unsigned long tls)
 {
 	struct inactive_task_frame *frame;
 	struct fork_frame *fork_frame;
-3
arch/x86/kernel/sys_ia32.c
···
 		.tls		= tls_val,
 	};
 
-	if (!legacy_clone_args_valid(&args))
-		return -EINVAL;
-
 	return _do_fork(&args);
 }
 #endif /* CONFIG_IA32_EMULATION */
+1 -1
arch/x86/kernel/unwind_frame.c
···
 	/*
 	 * kthreads (other than the boot CPU's idle thread) have some
 	 * partial regs at the end of their stack which were placed
-	 * there by copy_thread_tls(). But the regs don't have any
+	 * there by copy_thread(). But the regs don't have any
 	 * useful information, so we can skip them.
 	 *
 	 * This user_mode() check is slightly broader than a PF_KTHREAD
-1
arch/xtensa/Kconfig
···
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL
 	select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
 	select HAVE_ARCH_TRACEHOOK
-	select HAVE_COPY_THREAD_TLS
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_EXIT_THREAD
+1 -1
arch/xtensa/kernel/process.c
···
 * involved. Much simpler to just not copy those live frames across.
 */

-int copy_thread_tls(unsigned long clone_flags, unsigned long usp_thread_fn,
+int copy_thread(unsigned long clone_flags, unsigned long usp_thread_fn,
 		unsigned long thread_fn_arg, struct task_struct *p,
 		unsigned long tls)
 {
+1 -16
include/linux/sched/task.h
···
 
 extern void release_task(struct task_struct * p);
 
-#ifdef CONFIG_HAVE_COPY_THREAD_TLS
-extern int copy_thread_tls(unsigned long, unsigned long, unsigned long,
-			   struct task_struct *, unsigned long);
-#else
 extern int copy_thread(unsigned long, unsigned long, unsigned long,
-		       struct task_struct *);
+		       struct task_struct *, unsigned long);
 
-/* Architectures that haven't opted into copy_thread_tls get the tls argument
- * via pt_regs, so ignore the tls argument passed via C. */
-static inline int copy_thread_tls(
-		unsigned long clone_flags, unsigned long sp, unsigned long arg,
-		struct task_struct *p, unsigned long tls)
-{
-	return copy_thread(clone_flags, sp, arg, p);
-}
-#endif
 extern void flush_thread(void);
 
 #ifdef CONFIG_HAVE_EXIT_THREAD
···
 extern void exit_itimers(struct signal_struct *);
 
 extern long _do_fork(struct kernel_clone_args *kargs);
-extern bool legacy_clone_args_valid(const struct kernel_clone_args *kargs);
-extern long do_fork(unsigned long, unsigned long, unsigned long, int __user *, int __user *);
 struct task_struct *fork_idle(int);
 struct mm_struct *copy_init_mm(void);
 extern pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
+16 -51
kernel/fork.c
···
 	retval = copy_io(clone_flags, p);
 	if (retval)
 		goto bad_fork_cleanup_namespaces;
-	retval = copy_thread_tls(clone_flags, args->stack, args->stack_size, p,
-				 args->tls);
+	retval = copy_thread(clone_flags, args->stack, args->stack_size, p, args->tls);
 	if (retval)
 		goto bad_fork_cleanup_io;
···
 	long nr;
 
 	/*
+	 * For legacy clone() calls, CLONE_PIDFD uses the parent_tid argument
+	 * to return the pidfd. Hence, CLONE_PIDFD and CLONE_PARENT_SETTID are
+	 * mutually exclusive. With clone3() CLONE_PIDFD has grown a separate
+	 * field in struct clone_args and it still doesn't make sense to have
+	 * them both point at the same memory location. Performing this check
+	 * here has the advantage that we don't need to have a separate helper
+	 * to check for legacy clone().
+	 */
+	if ((args->flags & CLONE_PIDFD) &&
+	    (args->flags & CLONE_PARENT_SETTID) &&
+	    (args->pidfd == args->parent_tid))
+		return -EINVAL;
+
+	/*
 	 * Determine whether and which event to report to ptracer. When
 	 * called from kernel_thread or CLONE_UNTRACED is explicitly
 	 * requested, no event is reported; otherwise, report if the event
···
 	put_pid(pid);
 	return nr;
 }
-
-bool legacy_clone_args_valid(const struct kernel_clone_args *kargs)
-{
-	/* clone(CLONE_PIDFD) uses parent_tidptr to return a pidfd */
-	if ((kargs->flags & CLONE_PIDFD) &&
-	    (kargs->flags & CLONE_PARENT_SETTID))
-		return false;
-
-	return true;
-}
-
-#ifndef CONFIG_HAVE_COPY_THREAD_TLS
-/* For compatibility with architectures that call do_fork directly rather than
- * using the syscall entry points below. */
-long do_fork(unsigned long clone_flags,
-	     unsigned long stack_start,
-	     unsigned long stack_size,
-	     int __user *parent_tidptr,
-	     int __user *child_tidptr)
-{
-	struct kernel_clone_args args = {
-		.flags		= (lower_32_bits(clone_flags) & ~CSIGNAL),
-		.pidfd		= parent_tidptr,
-		.child_tid	= child_tidptr,
-		.parent_tid	= parent_tidptr,
-		.exit_signal	= (lower_32_bits(clone_flags) & CSIGNAL),
-		.stack		= stack_start,
-		.stack_size	= stack_size,
-	};
-
-	if (!legacy_clone_args_valid(&args))
-		return -EINVAL;
-
-	return _do_fork(&args);
-}
-#endif
 
 /*
  * Create a kernel thread.
···
 		.tls		= tls,
 	};
 
-	if (!legacy_clone_args_valid(&args))
-		return -EINVAL;
-
 	return _do_fork(&args);
 }
 #endif
 
 #ifdef __ARCH_WANT_SYS_CLONE3
-
-/*
- * copy_thread implementations handle CLONE_SETTLS by reading the TLS value from
- * the registers containing the syscall arguments for clone. This doesn't work
- * with clone3 since the TLS value is passed in clone_args instead.
- */
-#ifndef CONFIG_HAVE_COPY_THREAD_TLS
-#error clone3 requires copy_thread_tls support in arch
-#endif
 
 noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs,
 					      struct clone_args __user *uargs,
···
 /*
  * unshare allows a process to 'unshare' part of the process
  * context which was originally shared using clone. copy_*
- * functions used by do_fork() cannot be used here directly
+ * functions used by _do_fork() cannot be used here directly
  * because they modify an inactive task_struct that is being
  * constructed. Here we are modifying the current, active,
  * task_struct.