
sched: Rename sched.c as sched/core.c in comments and Documentation

Most of the code from kernel/sched.c was moved to kernel/sched/core.c a
long time back, but the comments and Documentation were never updated.

I noticed this while going through sched-domains.txt and decided to fix
it tree-wide.

I haven't cross-checked whether the code these files reference is still
present in sched/core.c and unchanged, as that wasn't the goal of this
patch.
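A tree-wide fix-up of this sort can be sketched with grep and sed. This is a hypothetical illustration against a scratch directory, not the commands actually used; the real patch needed case-by-case judgment (for instance, the gdb backtraces in the UML HOWTO refer to the bare file name, so "sched.c:777" becomes "core.c:777" rather than "kernel/sched/core.c:777"):

```shell
# Set up a scratch file mimicking a stale comment (demo path, not a
# real kernel source file).
mkdir -p /tmp/sched-rename-demo
printf 'See kernel/sched.c for the load balancer.\n' \
    > /tmp/sched-rename-demo/example.txt

# Find every file still referencing the old path and rewrite the
# reference in place.
grep -rl 'kernel/sched\.c' /tmp/sched-rename-demo \
    | xargs sed -i 's|kernel/sched\.c|kernel/sched/core.c|g'

cat /tmp/sched-rename-demo/example.txt
```

Note that `sed -i` with no suffix argument is GNU sed syntax, which matches the kernel-development context here.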

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/cdff76a265326ab8d71922a1db5be599f20aad45.1370329560.git.viresh.kumar@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Authored by Viresh Kumar, committed by Ingo Molnar
0a0fca9d 8404c90d
+27 -26
Documentation/cgroups/cpusets.txt | +1 -1

@@ -373,7 +373,7 @@
 1.7 What is sched_load_balance ?
 --------------------------------
 
-The kernel scheduler (kernel/sched.c) automatically load balances
+The kernel scheduler (kernel/sched/core.c) automatically load balances
 tasks. If one CPU is underutilized, kernel code running on that
 CPU will look for tasks on other more overloaded CPUs and move those
 tasks to itself, within the constraints of such placement mechanisms
Documentation/rt-mutex-design.txt | +1 -1

@@ -384,7 +384,7 @@
 __rt_mutex_adjust_prio examines the result of rt_mutex_getprio, and if the
 result does not equal the task's current priority, then rt_mutex_setprio
 is called to adjust the priority of the task to the new priority.
-Note that rt_mutex_setprio is defined in kernel/sched.c to implement the
+Note that rt_mutex_setprio is defined in kernel/sched/core.c to implement the
 actual change in priority.
 
 It is interesting to note that __rt_mutex_adjust_prio can either increase
Documentation/scheduler/sched-domains.txt | +2 -2

@@ -25,7 +25,7 @@
 load of each of its member CPUs, and only when the load of a group becomes
 out of balance are tasks moved between groups.
 
-In kernel/sched.c, trigger_load_balance() is run periodically on each CPU
+In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
 through scheduler_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
 balancing workhorse, run_rebalance_domains()->rebalance_domains(), is then run
@@ -62,7 +62,7 @@
 the specifics and what to tune.
 
 Architectures may retain the regular override the default SD_*_INIT flags
-while using the generic domain builder in kernel/sched.c if they wish to
+while using the generic domain builder in kernel/sched/core.c if they wish to
 retain the traditional SMT->SMP->NUMA topology (or some subset of that). This
 can be done by #define'ing ARCH_HASH_SCHED_TUNE.
Documentation/spinlocks.txt | +1 -1

@@ -137,7 +137,7 @@
 But when you do the write-lock, you have to use the irq-safe version.
 
 For an example of being clever with rw-locks, see the "waitqueue_lock"
-handling in kernel/sched.c - nothing ever _changes_ a wait-queue from
+handling in kernel/sched/core.c - nothing ever _changes_ a wait-queue from
 within an interrupt, they only read the queue in order to know whom to
 wake up. So read-locks are safe (which is good: they are very common
 indeed), while write-locks need to protect themselves against interrupts.
Documentation/virtual/uml/UserModeLinux-HOWTO.txt | +2 -2

@@ -3127,7 +3127,7 @@
   at process_kern.c:156
 #3 0x1006a052 in switch_to (prev=0x50072000, next=0x507e8000, last=0x50072000)
   at process_kern.c:161
-#4 0x10001d12 in schedule () at sched.c:777
+#4 0x10001d12 in schedule () at core.c:777
 #5 0x1006a744 in __down (sem=0x507d241c) at semaphore.c:71
 #6 0x1006aa10 in __down_failed () at semaphore.c:157
 #7 0x1006c5d8 in segv_handler (sc=0x5006e940) at trap_user.c:174
@@ -3191,7 +3191,7 @@
   at process_kern.c:161
 161        _switch_to(prev, next);
 (gdb)
-#4 0x10001d12 in schedule () at sched.c:777
+#4 0x10001d12 in schedule () at core.c:777
 777        switch_to(prev, next, prev);
 (gdb)
 #5 0x1006a744 in __down (sem=0x507d241c) at semaphore.c:71
arch/avr32/kernel/process.c | +1 -1

@@ -341,7 +341,7 @@
	 * is actually quite ugly. It might be possible to
	 * determine the frame size automatically at build
	 * time by doing this:
-	 *   - compile sched.c
+	 *   - compile sched/core.c
	 *   - disassemble the resulting sched.o
	 *   - look for 'sub sp,??' shortly after '<schedule>:'
	 */
arch/cris/include/arch-v10/arch/bitops.h | +1 -1

@@ -17,7 +17,7 @@
    in another register:
    !  __asm__ ("swapnwbr %2\n\tlz %2,%0"
    !	      : "=r,r" (res), "=r,X" (dummy) : "1,0" (w));
-   confuses gcc (sched.c, gcc from cris-dist-1.14). */
+   confuses gcc (core.c, gcc from cris-dist-1.14). */
 
	unsigned long res;
	__asm__ ("swapnwbr %0 \n\t"
arch/ia64/kernel/head.S | +1 -1

@@ -1035,7 +1035,7 @@
  * Return a CPU-local timestamp in nano-seconds. This timestamp is
  * NOT synchronized across CPUs its return value must never be
  * compared against the values returned on another CPU. The usage in
- * kernel/sched.c ensures that.
+ * kernel/sched/core.c ensures that.
  *
  * The return-value of sched_clock() is NOT supposed to wrap-around.
  * If it did, it would cause some scheduling hiccups (at the worst).
arch/mips/kernel/mips-mt-fpaff.c | +2 -2

@@ -27,12 +27,12 @@
  * FPU affinity with the user's requested processor affinity.
  * This code is 98% identical with the sys_sched_setaffinity()
  * and sys_sched_getaffinity() system calls, and should be
- * updated when kernel/sched.c changes.
+ * updated when kernel/sched/core.c changes.
  */
 
 /*
  * find_process_by_pid - find a process with a matching PID value.
- * used in sys_sched_set/getaffinity() in kernel/sched.c, so
+ * used in sys_sched_set/getaffinity() in kernel/sched/core.c, so
  * cloned here.
  */
 static inline struct task_struct *find_process_by_pid(pid_t pid)
arch/mips/kernel/scall32-o32.S | +3 -2

@@ -476,8 +476,9 @@
 /*
  * For FPU affinity scheduling on MIPS MT processors, we need to
  * intercept sys_sched_xxxaffinity() calls until we get a proper hook
- * in kernel/sched.c. Considered only temporary we only support these
- * hooks for the 32-bit kernel - there is no MIPS64 MT processor atm.
+ * in kernel/sched/core.c. Considered only temporary we only support
+ * these hooks for the 32-bit kernel - there is no MIPS64 MT processor
+ * atm.
  */
	sys	mipsmt_sys_sched_setaffinity	3
	sys	mipsmt_sys_sched_getaffinity	3
arch/powerpc/include/asm/mmu_context.h | +1 -1

@@ -38,7 +38,7 @@
 
 /*
  * switch_mm is the entry point called from the architecture independent
- * code in kernel/sched.c
+ * code in kernel/sched/core.c
  */
 static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
			     struct task_struct *tsk)
arch/tile/include/asm/processor.h | +1 -1

@@ -225,7 +225,7 @@
 
 /*
  * Return saved (kernel) PC of a blocked thread.
- * Only used in a printk() in kernel/sched.c, so don't work too hard.
+ * Only used in a printk() in kernel/sched/core.c, so don't work too hard.
  */
 #define thread_saved_pc(t)	((t)->thread.pc)
 
arch/tile/kernel/stack.c | +1 -1

@@ -442,7 +442,7 @@
		       regs_to_pt_regs(&regs, pc, lr, sp, r52));
 }
 
-/* This is called only from kernel/sched.c, with esp == NULL */
+/* This is called only from kernel/sched/core.c, with esp == NULL */
 void show_stack(struct task_struct *task, unsigned long *esp)
 {
	struct KBacktraceIterator kbt;
arch/um/kernel/sysrq.c | +1 -1

@@ -39,7 +39,7 @@
 static const int kstack_depth_to_print = 24;
 
 /* This recently started being used in arch-independent code too, as in
- * kernel/sched.c.*/
+ * kernel/sched/core.c.*/
 void show_stack(struct task_struct *task, unsigned long *esp)
 {
	unsigned long *stack;
include/linux/completion.h | +1 -1

@@ -5,7 +5,7 @@
  * (C) Copyright 2001 Linus Torvalds
  *
  * Atomic wait-for-completion handler data structures.
- * See kernel/sched.c for details.
+ * See kernel/sched/core.c for details.
  */
 
 #include <linux/wait.h>
include/linux/perf_event.h | +1 -1

@@ -803,7 +803,7 @@
 #define perf_output_put(handle, x) perf_output_copy((handle), &(x), sizeof(x))
 
 /*
- * This has to have a higher priority than migration_notifier in sched.c.
+ * This has to have a higher priority than migration_notifier in sched/core.c.
  */
 #define perf_cpu_notifier(fn)						\
 do {									\
include/linux/spinlock_up.h | +1 -1

@@ -67,7 +67,7 @@
 
 #else /* DEBUG_SPINLOCK */
 #define arch_spin_is_locked(lock)	((void)(lock), 0)
-/* for sched.c and kernel_lock.c: */
+/* for sched/core.c and kernel_lock.c: */
 # define arch_spin_lock(lock)		do { barrier(); (void)(lock); } while (0)
 # define arch_spin_lock_flags(lock, flags)	do { barrier(); (void)(lock); } while (0)
 # define arch_spin_unlock(lock)	do { barrier(); (void)(lock); } while (0)
include/uapi/asm-generic/unistd.h | +1 -1

@@ -361,7 +361,7 @@
 #define __NR_ptrace 117
 __SYSCALL(__NR_ptrace, sys_ptrace)
 
-/* kernel/sched.c */
+/* kernel/sched/core.c */
 #define __NR_sched_setparam 118
 __SYSCALL(__NR_sched_setparam, sys_sched_setparam)
 #define __NR_sched_setscheduler 119
kernel/cpuset.c | +2 -2

@@ -540,7 +540,7 @@
  * This function builds a partial partition of the systems CPUs
  * A 'partial partition' is a set of non-overlapping subsets whose
  * union is a subset of that set.
- * The output of this function needs to be passed to kernel/sched.c
+ * The output of this function needs to be passed to kernel/sched/core.c
  * partition_sched_domains() routine, which will rebuild the scheduler's
  * load balancing domains (sched domains) as specified by that partial
  * partition.
@@ -569,7 +569,7 @@
  *	   is a subset of one of these domains, while there are as
  *	   many such domains as possible, each as small as possible.
  * doms  - Conversion of 'csa' to an array of cpumasks, for passing to
- *	   the kernel/sched.c routine partition_sched_domains() in a
+ *	   the kernel/sched/core.c routine partition_sched_domains() in a
  *	   convenient format, that can be easily compared to the prior
  *	   value to determine what partition elements (sched domains)
  *	   were changed (added or removed.)
kernel/time.c | +1 -1

@@ -11,7 +11,7 @@
  * Modification history kernel/time.c
  *
  * 1993-09-02	Philip Gladstone
- *	Created file with time related functions from sched.c and adjtimex()
+ *	Created file with time related functions from sched/core.c and adjtimex()
  * 1993-10-08	Torsten Duwe
  *	adjtime interface update and CMOS clock write code
  * 1995-08-13	Torsten Duwe
kernel/workqueue_internal.h | +1 -1

@@ -64,7 +64,7 @@
 
 /*
  * Scheduler hooks for concurrency managed workqueue. Only to be used from
- * sched.c and workqueue.c.
+ * sched/core.c and workqueue.c.
  */
 void wq_worker_waking_up(struct task_struct *task, int cpu);
 struct task_struct *wq_worker_sleeping(struct task_struct *task, int cpu);