Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

sched_ext: Rename scx_bpf_dispatch[_vtime]() to scx_bpf_dsq_insert[_vtime]()

In sched_ext API, a repeatedly reported pain point is the overuse of the
verb "dispatch" and confusion around "consume":

- ops.dispatch()
- scx_bpf_dispatch[_vtime]()
- scx_bpf_consume()
- scx_bpf_dispatch[_vtime]_from_dsq*()

This overloading of the term is historical. Originally, there were only
built-in DSQs and moving a task into a DSQ always dispatched it for
execution. Using the verb "dispatch" for the kfuncs to move tasks into these
DSQs made sense.

Later, user DSQs were added and scx_bpf_dispatch[_vtime]() was updated to be
able to insert tasks into any DSQ. The only allowed DSQ-to-DSQ transfer was
from a non-local DSQ to a local DSQ, and this operation was named "consume".
This was already confusing as a task could be dispatched to a user DSQ from
ops.enqueue() and then the DSQ would have to be consumed in ops.dispatch().
The later addition of scx_bpf_dispatch_from_dsq*() made the confusion even
worse, as "dispatch" in that context meant moving a task from a user DSQ to an
arbitrary DSQ.

Clean up the API with the following renames:

1. scx_bpf_dispatch[_vtime]() -> scx_bpf_dsq_insert[_vtime]()
2. scx_bpf_consume() -> scx_bpf_dsq_move_to_local()
3. scx_bpf_dispatch[_vtime]_from_dsq*() -> scx_bpf_dsq_move[_vtime]*()

This patch performs the first set of renames. Compatibility is maintained
by:

- The previous kfunc names are still provided by the kernel so that old
binaries can run. The kernel generates a warning when the old names are used.

- compat.bpf.h provides wrappers for the new names which automatically fall
back to the old names when running on older kernels. They also trigger a
build error if the old names are used in new builds.

The compat features will be dropped after v6.15.

v2: Documentation updates.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Andrea Righi <arighi@nvidia.com>
Acked-by: Changwoo Min <changwoo@igalia.com>
Acked-by: Johannes Bechberger <me@mostlynerdless.de>
Acked-by: Giovanni Gherdovich <ggherdovich@suse.com>
Cc: Dan Schatzberg <dschatzberg@meta.com>
Cc: Ming Yang <yougmark94@gmail.com>

+144 -97
+25 -25
Documentation/scheduler/sched-ext.rst
··· 130 130 * Decide which CPU a task should be migrated to before being 131 131 * enqueued (either at wakeup, fork time, or exec time). If an 132 132 * idle core is found by the default ops.select_cpu() implementation, 133 - * then dispatch the task directly to SCX_DSQ_LOCAL and skip the 133 + * then insert the task directly into SCX_DSQ_LOCAL and skip the 134 134 * ops.enqueue() callback. 135 135 * 136 136 * Note that this implementation has exactly the same behavior as the ··· 148 148 cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &direct); 149 149 150 150 if (direct) 151 - scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0); 151 + scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0); 152 152 153 153 return cpu; 154 154 } 155 155 156 156 /* 157 - * Do a direct dispatch of a task to the global DSQ. This ops.enqueue() 158 - * callback will only be invoked if we failed to find a core to dispatch 159 - * to in ops.select_cpu() above. 157 + * Do a direct insertion of a task to the global DSQ. This ops.enqueue() 158 + * callback will only be invoked if we failed to find a core to insert 159 + * into in ops.select_cpu() above. 160 160 * 161 161 * Note that this implementation has exactly the same behavior as the 162 162 * default ops.enqueue implementation, which just dispatches the task ··· 166 166 */ 167 167 void BPF_STRUCT_OPS(simple_enqueue, struct task_struct *p, u64 enq_flags) 168 168 { 169 - scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); 169 + scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); 170 170 } 171 171 172 172 s32 BPF_STRUCT_OPS_SLEEPABLE(simple_init) ··· 202 202 an arbitrary number of dsq's using ``scx_bpf_create_dsq()`` and 203 203 ``scx_bpf_destroy_dsq()``. 204 204 205 - A CPU always executes a task from its local DSQ. A task is "dispatched" to a 205 + A CPU always executes a task from its local DSQ. A task is "inserted" into a 206 206 DSQ. 
A non-local DSQ is "consumed" to transfer a task to the consuming CPU's 207 207 local DSQ. 208 208 ··· 229 229 scheduler can wake up any cpu using the ``scx_bpf_kick_cpu()`` helper, 230 230 using ``ops.select_cpu()`` judiciously can be simpler and more efficient. 231 231 232 - A task can be immediately dispatched to a DSQ from ``ops.select_cpu()`` by 233 - calling ``scx_bpf_dispatch()``. If the task is dispatched to 234 - ``SCX_DSQ_LOCAL`` from ``ops.select_cpu()``, it will be dispatched to the 232 + A task can be immediately inserted into a DSQ from ``ops.select_cpu()`` 233 + by calling ``scx_bpf_dsq_insert()``. If the task is inserted into 234 + ``SCX_DSQ_LOCAL`` from ``ops.select_cpu()``, it will be inserted into the 235 235 local DSQ of whichever CPU is returned from ``ops.select_cpu()``. 236 - Additionally, dispatching directly from ``ops.select_cpu()`` will cause the 236 + Additionally, inserting directly from ``ops.select_cpu()`` will cause the 237 237 ``ops.enqueue()`` callback to be skipped. 238 238 239 239 Note that the scheduler core will ignore an invalid CPU selection, for 240 240 example, if it's outside the allowed cpumask of the task. 241 241 242 242 2. Once the target CPU is selected, ``ops.enqueue()`` is invoked (unless the 243 - task was dispatched directly from ``ops.select_cpu()``). ``ops.enqueue()`` 243 + task was inserted directly from ``ops.select_cpu()``). ``ops.enqueue()`` 244 244 can make one of the following decisions: 245 245 246 - * Immediately dispatch the task to either the global or local DSQ by 247 - calling ``scx_bpf_dispatch()`` with ``SCX_DSQ_GLOBAL`` or 246 + * Immediately insert the task into either the global or local DSQ by 247 + calling ``scx_bpf_dsq_insert()`` with ``SCX_DSQ_GLOBAL`` or 248 248 ``SCX_DSQ_LOCAL``, respectively. 249 249 250 - * Immediately dispatch the task to a custom DSQ by calling 251 - ``scx_bpf_dispatch()`` with a DSQ ID which is smaller than 2^63. 
250 + * Immediately insert the task into a custom DSQ by calling 251 + ``scx_bpf_dsq_insert()`` with a DSQ ID which is smaller than 2^63. 252 252 253 253 * Queue the task on the BPF side. 254 254 ··· 257 257 run, ``ops.dispatch()`` is invoked which can use the following two 258 258 functions to populate the local DSQ. 259 259 260 - * ``scx_bpf_dispatch()`` dispatches a task to a DSQ. Any target DSQ can 261 - be used - ``SCX_DSQ_LOCAL``, ``SCX_DSQ_LOCAL_ON | cpu``, 262 - ``SCX_DSQ_GLOBAL`` or a custom DSQ. While ``scx_bpf_dispatch()`` 260 + * ``scx_bpf_dsq_insert()`` inserts a task to a DSQ. Any target DSQ can be 261 + used - ``SCX_DSQ_LOCAL``, ``SCX_DSQ_LOCAL_ON | cpu``, 262 + ``SCX_DSQ_GLOBAL`` or a custom DSQ. While ``scx_bpf_dsq_insert()`` 263 263 currently can't be called with BPF locks held, this is being worked on 264 - and will be supported. ``scx_bpf_dispatch()`` schedules dispatching 264 + and will be supported. ``scx_bpf_dsq_insert()`` schedules insertion 265 265 rather than performing them immediately. There can be up to 266 266 ``ops.dispatch_max_batch`` pending tasks. 267 267 ··· 288 288 a task is never queued on the BPF scheduler and both the local and global 289 289 DSQs are consumed automatically. 290 290 291 - ``scx_bpf_dispatch()`` queues the task on the FIFO of the target DSQ. Use 292 - ``scx_bpf_dispatch_vtime()`` for the priority queue. Internal DSQs such as 291 + ``scx_bpf_dsq_insert()`` inserts the task on the FIFO of the target DSQ. Use 292 + ``scx_bpf_dsq_insert_vtime()`` for the priority queue. Internal DSQs such as 293 293 ``SCX_DSQ_LOCAL`` and ``SCX_DSQ_GLOBAL`` do not support priority-queue 294 - dispatching, and must be dispatched to with ``scx_bpf_dispatch()``. See the 295 - function documentation and usage in ``tools/sched_ext/scx_simple.bpf.c`` for 296 - more information. 294 + dispatching, and must be dispatched to with ``scx_bpf_dsq_insert()``. 
See 295 + the function documentation and usage in ``tools/sched_ext/scx_simple.bpf.c`` 296 + for more information. 297 297 298 298 Where to Look 299 299 =============
+65 -46
kernel/sched/ext.c
··· 220 220 * dispatch. While an explicit custom mechanism can be added, 221 221 * select_cpu() serves as the default way to wake up idle CPUs. 222 222 * 223 - * @p may be dispatched directly by calling scx_bpf_dispatch(). If @p 224 - * is dispatched, the ops.enqueue() callback will be skipped. Finally, 225 - * if @p is dispatched to SCX_DSQ_LOCAL, it will be dispatched to the 226 - * local DSQ of whatever CPU is returned by this callback. 223 + * @p may be inserted into a DSQ directly by calling 224 + * scx_bpf_dsq_insert(). If so, the ops.enqueue() will be skipped. 225 + * Directly inserting into %SCX_DSQ_LOCAL will put @p in the local DSQ 226 + * of the CPU returned by this operation. 227 227 * 228 228 * Note that select_cpu() is never called for tasks that can only run 229 229 * on a single CPU or tasks with migration disabled, as they don't have ··· 237 237 * @p: task being enqueued 238 238 * @enq_flags: %SCX_ENQ_* 239 239 * 240 - * @p is ready to run. Dispatch directly by calling scx_bpf_dispatch() 241 - * or enqueue on the BPF scheduler. If not directly dispatched, the bpf 242 - * scheduler owns @p and if it fails to dispatch @p, the task will 243 - * stall. 240 + * @p is ready to run. Insert directly into a DSQ by calling 241 + * scx_bpf_dsq_insert() or enqueue on the BPF scheduler. If not directly 242 + * inserted, the bpf scheduler owns @p and if it fails to dispatch @p, 243 + * the task will stall. 244 244 * 245 - * If @p was dispatched from ops.select_cpu(), this callback is 245 + * If @p was inserted into a DSQ from ops.select_cpu(), this callback is 246 246 * skipped. 247 247 */ 248 248 void (*enqueue)(struct task_struct *p, u64 enq_flags); ··· 270 270 * 271 271 * Called when a CPU's local dsq is empty. The operation should dispatch 272 272 * one or more tasks from the BPF scheduler into the DSQs using 273 - * scx_bpf_dispatch() and/or consume user DSQs into the local DSQ using 274 - * scx_bpf_consume(). 
273 + * scx_bpf_dsq_insert() and/or consume user DSQs into the local DSQ 274 + * using scx_bpf_consume(). 275 275 * 276 - * The maximum number of times scx_bpf_dispatch() can be called without 277 - * an intervening scx_bpf_consume() is specified by 276 + * The maximum number of times scx_bpf_dsq_insert() can be called 277 + * without an intervening scx_bpf_consume() is specified by 278 278 * ops.dispatch_max_batch. See the comments on top of the two functions 279 279 * for more details. 280 280 * ··· 714 714 715 715 /* 716 716 * Set the following to trigger preemption when calling 717 - * scx_bpf_dispatch() with a local dsq as the target. The slice of the 717 + * scx_bpf_dsq_insert() with a local dsq as the target. The slice of the 718 718 * current task is cleared to zero and the CPU is kicked into the 719 719 * scheduling path. Implies %SCX_ENQ_HEAD. 720 720 */ ··· 2322 2322 /* 2323 2323 * We don't require the BPF scheduler to avoid dispatching to offline 2324 2324 * CPUs mostly for convenience but also because CPUs can go offline 2325 - * between scx_bpf_dispatch() calls and here. Trigger error iff the 2325 + * between scx_bpf_dsq_insert() calls and here. Trigger error iff the 2326 2326 * picked CPU is outside the allowed mask. 2327 2327 */ 2328 2328 if (!task_allowed_on_cpu(p, cpu)) { ··· 2658 2658 * Dispatching to local DSQs may need to wait for queueing to complete or 2659 2659 * require rq lock dancing. As we don't wanna do either while inside 2660 2660 * ops.dispatch() to avoid locking order inversion, we split dispatching into 2661 - * two parts. scx_bpf_dispatch() which is called by ops.dispatch() records the 2661 + * two parts. scx_bpf_dsq_insert() which is called by ops.dispatch() records the 2662 2662 * task and its qseq. Once ops.dispatch() returns, this function is called to 2663 2663 * finish up. 
2664 2664 * ··· 2690 2690 /* 2691 2691 * If qseq doesn't match, @p has gone through at least one 2692 2692 * dispatch/dequeue and re-enqueue cycle between 2693 - * scx_bpf_dispatch() and here and we have no claim on it. 2693 + * scx_bpf_dsq_insert() and here and we have no claim on it. 2694 2694 */ 2695 2695 if ((opss & SCX_OPSS_QSEQ_MASK) != qseq_at_dispatch) 2696 2696 return; ··· 6258 6258 .set = &scx_kfunc_ids_select_cpu, 6259 6259 }; 6260 6260 6261 - static bool scx_dispatch_preamble(struct task_struct *p, u64 enq_flags) 6261 + static bool scx_dsq_insert_preamble(struct task_struct *p, u64 enq_flags) 6262 6262 { 6263 6263 if (!scx_kf_allowed(SCX_KF_ENQUEUE | SCX_KF_DISPATCH)) 6264 6264 return false; ··· 6278 6278 return true; 6279 6279 } 6280 6280 6281 - static void scx_dispatch_commit(struct task_struct *p, u64 dsq_id, u64 enq_flags) 6281 + static void scx_dsq_insert_commit(struct task_struct *p, u64 dsq_id, 6282 + u64 enq_flags) 6282 6283 { 6283 6284 struct scx_dsp_ctx *dspc = this_cpu_ptr(scx_dsp_ctx); 6284 6285 struct task_struct *ddsp_task; ··· 6306 6305 __bpf_kfunc_start_defs(); 6307 6306 6308 6307 /** 6309 - * scx_bpf_dispatch - Dispatch a task into the FIFO queue of a DSQ 6310 - * @p: task_struct to dispatch 6311 - * @dsq_id: DSQ to dispatch to 6308 + * scx_bpf_dsq_insert - Insert a task into the FIFO queue of a DSQ 6309 + * @p: task_struct to insert 6310 + * @dsq_id: DSQ to insert into 6312 6311 * @slice: duration @p can run for in nsecs, 0 to keep the current value 6313 6312 * @enq_flags: SCX_ENQ_* 6314 6313 * 6315 - * Dispatch @p into the FIFO queue of the DSQ identified by @dsq_id. It is safe 6316 - * to call this function spuriously. Can be called from ops.enqueue(), 6314 + * Insert @p into the FIFO queue of the DSQ identified by @dsq_id. It is safe to 6315 + * call this function spuriously. Can be called from ops.enqueue(), 6317 6316 * ops.select_cpu(), and ops.dispatch(). 
6318 6317 * 6319 6318 * When called from ops.select_cpu() or ops.enqueue(), it's for direct dispatch ··· 6322 6321 * ops.select_cpu() to be on the target CPU in the first place. 6323 6322 * 6324 6323 * When called from ops.select_cpu(), @enq_flags and @dsp_id are stored, and @p 6325 - * will be directly dispatched to the corresponding dispatch queue after 6326 - * ops.select_cpu() returns. If @p is dispatched to SCX_DSQ_LOCAL, it will be 6327 - * dispatched to the local DSQ of the CPU returned by ops.select_cpu(). 6324 + * will be directly inserted into the corresponding dispatch queue after 6325 + * ops.select_cpu() returns. If @p is inserted into SCX_DSQ_LOCAL, it will be 6326 + * inserted into the local DSQ of the CPU returned by ops.select_cpu(). 6328 6327 * @enq_flags are OR'd with the enqueue flags on the enqueue path before the 6329 - * task is dispatched. 6328 + * task is inserted. 6330 6329 * 6331 6330 * When called from ops.dispatch(), there are no restrictions on @p or @dsq_id 6332 - * and this function can be called upto ops.dispatch_max_batch times to dispatch 6331 + * and this function can be called upto ops.dispatch_max_batch times to insert 6333 6332 * multiple tasks. scx_bpf_dispatch_nr_slots() returns the number of the 6334 6333 * remaining slots. scx_bpf_consume() flushes the batch and resets the counter. 6335 6334 * ··· 6341 6340 * %SCX_SLICE_INF, @p never expires and the BPF scheduler must kick the CPU with 6342 6341 * scx_bpf_kick_cpu() to trigger scheduling. 
6343 6342 */ 6344 - __bpf_kfunc void scx_bpf_dispatch(struct task_struct *p, u64 dsq_id, u64 slice, 6345 - u64 enq_flags) 6343 + __bpf_kfunc void scx_bpf_dsq_insert(struct task_struct *p, u64 dsq_id, u64 slice, 6344 + u64 enq_flags) 6346 6345 { 6347 - if (!scx_dispatch_preamble(p, enq_flags)) 6346 + if (!scx_dsq_insert_preamble(p, enq_flags)) 6348 6347 return; 6349 6348 6350 6349 if (slice) ··· 6352 6351 else 6353 6352 p->scx.slice = p->scx.slice ?: 1; 6354 6353 6355 - scx_dispatch_commit(p, dsq_id, enq_flags); 6354 + scx_dsq_insert_commit(p, dsq_id, enq_flags); 6355 + } 6356 + 6357 + /* for backward compatibility, will be removed in v6.15 */ 6358 + __bpf_kfunc void scx_bpf_dispatch(struct task_struct *p, u64 dsq_id, u64 slice, 6359 + u64 enq_flags) 6360 + { 6361 + printk_deferred_once(KERN_WARNING "sched_ext: scx_bpf_dispatch() renamed to scx_bpf_dsq_insert()"); 6362 + scx_bpf_dsq_insert(p, dsq_id, slice, enq_flags); 6356 6363 } 6357 6364 6358 6365 /** 6359 - * scx_bpf_dispatch_vtime - Dispatch a task into the vtime priority queue of a DSQ 6360 - * @p: task_struct to dispatch 6361 - * @dsq_id: DSQ to dispatch to 6366 + * scx_bpf_dsq_insert_vtime - Insert a task into the vtime priority queue of a DSQ 6367 + * @p: task_struct to insert 6368 + * @dsq_id: DSQ to insert into 6362 6369 * @slice: duration @p can run for in nsecs, 0 to keep the current value 6363 6370 * @vtime: @p's ordering inside the vtime-sorted queue of the target DSQ 6364 6371 * @enq_flags: SCX_ENQ_* 6365 6372 * 6366 - * Dispatch @p into the vtime priority queue of the DSQ identified by @dsq_id. 6373 + * Insert @p into the vtime priority queue of the DSQ identified by @dsq_id. 6367 6374 * Tasks queued into the priority queue are ordered by @vtime and always 6368 6375 * consumed after the tasks in the FIFO queue. All other aspects are identical 6369 - * to scx_bpf_dispatch(). 6376 + * to scx_bpf_dsq_insert(). 
6370 6377 * 6371 6378 * @vtime ordering is according to time_before64() which considers wrapping. A 6372 6379 * numerically larger vtime may indicate an earlier position in the ordering and 6373 6380 * vice-versa. 6374 6381 */ 6375 - __bpf_kfunc void scx_bpf_dispatch_vtime(struct task_struct *p, u64 dsq_id, 6376 - u64 slice, u64 vtime, u64 enq_flags) 6382 + __bpf_kfunc void scx_bpf_dsq_insert_vtime(struct task_struct *p, u64 dsq_id, 6383 + u64 slice, u64 vtime, u64 enq_flags) 6377 6384 { 6378 - if (!scx_dispatch_preamble(p, enq_flags)) 6385 + if (!scx_dsq_insert_preamble(p, enq_flags)) 6379 6386 return; 6380 6387 6381 6388 if (slice) ··· 6393 6384 6394 6385 p->scx.dsq_vtime = vtime; 6395 6386 6396 - scx_dispatch_commit(p, dsq_id, enq_flags | SCX_ENQ_DSQ_PRIQ); 6387 + scx_dsq_insert_commit(p, dsq_id, enq_flags | SCX_ENQ_DSQ_PRIQ); 6388 + } 6389 + 6390 + /* for backward compatibility, will be removed in v6.15 */ 6391 + __bpf_kfunc void scx_bpf_dispatch_vtime(struct task_struct *p, u64 dsq_id, 6392 + u64 slice, u64 vtime, u64 enq_flags) 6393 + { 6394 + printk_deferred_once(KERN_WARNING "sched_ext: scx_bpf_dispatch_vtime() renamed to scx_bpf_dsq_insert_vtime()"); 6395 + scx_bpf_dsq_insert_vtime(p, dsq_id, slice, vtime, enq_flags); 6397 6396 } 6398 6397 6399 6398 __bpf_kfunc_end_defs(); 6400 6399 6401 6400 BTF_KFUNCS_START(scx_kfunc_ids_enqueue_dispatch) 6401 + BTF_ID_FLAGS(func, scx_bpf_dsq_insert, KF_RCU) 6402 + BTF_ID_FLAGS(func, scx_bpf_dsq_insert_vtime, KF_RCU) 6402 6403 BTF_ID_FLAGS(func, scx_bpf_dispatch, KF_RCU) 6403 6404 BTF_ID_FLAGS(func, scx_bpf_dispatch_vtime, KF_RCU) 6404 6405 BTF_KFUNCS_END(scx_kfunc_ids_enqueue_dispatch) ··· 6546 6527 * to the current CPU's local DSQ for execution. Can only be called from 6547 6528 * ops.dispatch(). 6548 6529 * 6549 - * This function flushes the in-flight dispatches from scx_bpf_dispatch() before 6550 - * trying to consume the specified DSQ. It may also grab rq locks and thus can't 6551 - * be called under any BPF locks. 
6530 + * This function flushes the in-flight dispatches from scx_bpf_dsq_insert() 6531 + * before trying to consume the specified DSQ. It may also grab rq locks and 6532 + * thus can't be called under any BPF locks. 6552 6533 * 6553 6534 * Returns %true if a task has been consumed, %false if there isn't any task to 6554 6535 * consume. ··· 6669 6650 * scx_bpf_dispatch_from_dsq_set_vtime() to update. 6670 6651 * 6671 6652 * All other aspects are identical to scx_bpf_dispatch_from_dsq(). See 6672 - * scx_bpf_dispatch_vtime() for more information on @vtime. 6653 + * scx_bpf_dsq_insert_vtime() for more information on @vtime. 6673 6654 */ 6674 6655 __bpf_kfunc bool scx_bpf_dispatch_vtime_from_dsq(struct bpf_iter_scx_dsq *it__iter, 6675 6656 struct task_struct *p, u64 dsq_id,
+2 -2
tools/sched_ext/include/scx/common.bpf.h
··· 36 36
 37 37 s32 scx_bpf_create_dsq(u64 dsq_id, s32 node) __ksym;
 38 38 s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool *is_idle) __ksym;
 39 - void scx_bpf_dispatch(struct task_struct *p, u64 dsq_id, u64 slice, u64 enq_flags) __ksym;
 40 - void scx_bpf_dispatch_vtime(struct task_struct *p, u64 dsq_id, u64 slice, u64 vtime, u64 enq_flags) __ksym;
 39 + void scx_bpf_dsq_insert(struct task_struct *p, u64 dsq_id, u64 slice, u64 enq_flags) __ksym __weak;
 40 + void scx_bpf_dsq_insert_vtime(struct task_struct *p, u64 dsq_id, u64 slice, u64 vtime, u64 enq_flags) __ksym __weak;
 41 41 u32 scx_bpf_dispatch_nr_slots(void) __ksym;
 42 42 void scx_bpf_dispatch_cancel(void) __ksym;
 43 43 bool scx_bpf_consume(u64 dsq_id) __ksym;
+26
tools/sched_ext/include/scx/compat.bpf.h
··· 35 35 	scx_bpf_dispatch_vtime_from_dsq((it), (p), (dsq_id), (enq_flags)) : false)
 36 36
 37 37 /*
 38 + * v6.13: The verb `dispatch` was too overloaded and confusing. kfuncs are
 39 + * renamed to unload the verb.
 40 + *
 41 + * Build error is triggered if old names are used. New binaries work with both
 42 + * new and old names. The compat macros will be removed on v6.15 release.
 43 + */
 44 + void scx_bpf_dispatch___compat(struct task_struct *p, u64 dsq_id, u64 slice, u64 enq_flags) __ksym __weak;
 45 + void scx_bpf_dispatch_vtime___compat(struct task_struct *p, u64 dsq_id, u64 slice, u64 vtime, u64 enq_flags) __ksym __weak;
 46 +
 47 + #define scx_bpf_dsq_insert(p, dsq_id, slice, enq_flags) \
 48 + 	(bpf_ksym_exists(scx_bpf_dsq_insert) ? \
 49 + 	 scx_bpf_dsq_insert((p), (dsq_id), (slice), (enq_flags)) : \
 50 + 	 scx_bpf_dispatch___compat((p), (dsq_id), (slice), (enq_flags)))
 51 +
 52 + #define scx_bpf_dsq_insert_vtime(p, dsq_id, slice, vtime, enq_flags) \
 53 + 	(bpf_ksym_exists(scx_bpf_dsq_insert_vtime) ? \
 54 + 	 scx_bpf_dsq_insert_vtime((p), (dsq_id), (slice), (vtime), (enq_flags)) : \
 55 + 	 scx_bpf_dispatch_vtime___compat((p), (dsq_id), (slice), (vtime), (enq_flags)))
 56 +
 57 + #define scx_bpf_dispatch(p, dsq_id, slice, enq_flags) \
 58 + 	_Static_assert(false, "scx_bpf_dispatch() renamed to scx_bpf_dsq_insert()")
 59 +
 60 + #define scx_bpf_dispatch_vtime(p, dsq_id, slice, vtime, enq_flags) \
 61 + 	_Static_assert(false, "scx_bpf_dispatch_vtime() renamed to scx_bpf_dsq_insert_vtime()")
 62 +
 63 + /*
 38 64  * Define sched_ext_ops. This may be expanded to define multiple variants for
 39 65  * backward compatibility. See compat.h::SCX_OPS_LOAD/ATTACH().
 40 66  */
+5 -5
tools/sched_ext/scx_central.bpf.c
··· 118 118 	 */
 119 119 	if ((p->flags & PF_KTHREAD) && p->nr_cpus_allowed == 1) {
 120 120 		__sync_fetch_and_add(&nr_locals, 1);
 121 - 		scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_INF,
 122 - 				 enq_flags | SCX_ENQ_PREEMPT);
 121 + 		scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_INF,
 122 + 				   enq_flags | SCX_ENQ_PREEMPT);
 123 123 		return;
 124 124 	}
 125 125
 126 126 	if (bpf_map_push_elem(&central_q, &pid, 0)) {
 127 127 		__sync_fetch_and_add(&nr_overflows, 1);
 128 - 		scx_bpf_dispatch(p, FALLBACK_DSQ_ID, SCX_SLICE_INF, enq_flags);
 128 + 		scx_bpf_dsq_insert(p, FALLBACK_DSQ_ID, SCX_SLICE_INF, enq_flags);
 129 129 		return;
 130 130 	}
 131 131
··· 158 158 	 */
 159 159 	if (!bpf_cpumask_test_cpu(cpu, p->cpus_ptr)) {
 160 160 		__sync_fetch_and_add(&nr_mismatches, 1);
 161 - 		scx_bpf_dispatch(p, FALLBACK_DSQ_ID, SCX_SLICE_INF, 0);
 161 + 		scx_bpf_dsq_insert(p, FALLBACK_DSQ_ID, SCX_SLICE_INF, 0);
 162 162 		bpf_task_release(p);
 163 163 		/*
 164 164 		 * We might run out of dispatch buffer slots if we continue dispatching
··· 172 172 	}
 173 173
 174 174 	/* dispatch to local and mark that @cpu doesn't need more */
 175 - 	scx_bpf_dispatch(p, SCX_DSQ_LOCAL_ON | cpu, SCX_SLICE_INF, 0);
 175 + 	scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | cpu, SCX_SLICE_INF, 0);
 176 176
 177 177 	if (cpu != central_cpu)
 178 178 		scx_bpf_kick_cpu(cpu, SCX_KICK_IDLE);
+8 -6
tools/sched_ext/scx_flatcg.bpf.c
··· 341 341 if (is_idle) { 342 342 set_bypassed_at(p, taskc); 343 343 stat_inc(FCG_STAT_LOCAL); 344 - scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0); 344 + scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0); 345 345 } 346 346 347 347 return cpu; ··· 377 377 */ 378 378 if (p->nr_cpus_allowed == 1 && (p->flags & PF_KTHREAD)) { 379 379 stat_inc(FCG_STAT_LOCAL); 380 - scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, enq_flags); 380 + scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 381 + enq_flags); 381 382 } else { 382 383 stat_inc(FCG_STAT_GLOBAL); 383 - scx_bpf_dispatch(p, FALLBACK_DSQ, SCX_SLICE_DFL, enq_flags); 384 + scx_bpf_dsq_insert(p, FALLBACK_DSQ, SCX_SLICE_DFL, 385 + enq_flags); 384 386 } 385 387 return; 386 388 } ··· 393 391 goto out_release; 394 392 395 393 if (fifo_sched) { 396 - scx_bpf_dispatch(p, cgrp->kn->id, SCX_SLICE_DFL, enq_flags); 394 + scx_bpf_dsq_insert(p, cgrp->kn->id, SCX_SLICE_DFL, enq_flags); 397 395 } else { 398 396 u64 tvtime = p->scx.dsq_vtime; 399 397 ··· 404 402 if (vtime_before(tvtime, cgc->tvtime_now - SCX_SLICE_DFL)) 405 403 tvtime = cgc->tvtime_now - SCX_SLICE_DFL; 406 404 407 - scx_bpf_dispatch_vtime(p, cgrp->kn->id, SCX_SLICE_DFL, 408 - tvtime, enq_flags); 405 + scx_bpf_dsq_insert_vtime(p, cgrp->kn->id, SCX_SLICE_DFL, 406 + tvtime, enq_flags); 409 407 } 410 408 411 409 cgrp_enqueued(cgrp, cgc);
+6 -6
tools/sched_ext/scx_qmap.bpf.c
··· 226 226 */ 227 227 if (tctx->force_local) { 228 228 tctx->force_local = false; 229 - scx_bpf_dispatch(p, SCX_DSQ_LOCAL, slice_ns, enq_flags); 229 + scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, slice_ns, enq_flags); 230 230 return; 231 231 } 232 232 ··· 234 234 if (!(enq_flags & SCX_ENQ_CPU_SELECTED) && 235 235 (cpu = pick_direct_dispatch_cpu(p, scx_bpf_task_cpu(p))) >= 0) { 236 236 __sync_fetch_and_add(&nr_ddsp_from_enq, 1); 237 - scx_bpf_dispatch(p, SCX_DSQ_LOCAL_ON | cpu, slice_ns, enq_flags); 237 + scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | cpu, slice_ns, enq_flags); 238 238 return; 239 239 } 240 240 ··· 247 247 if (enq_flags & SCX_ENQ_REENQ) { 248 248 s32 cpu; 249 249 250 - scx_bpf_dispatch(p, SHARED_DSQ, 0, enq_flags); 250 + scx_bpf_dsq_insert(p, SHARED_DSQ, 0, enq_flags); 251 251 cpu = scx_bpf_pick_idle_cpu(p->cpus_ptr, 0); 252 252 if (cpu >= 0) 253 253 scx_bpf_kick_cpu(cpu, SCX_KICK_IDLE); ··· 262 262 263 263 /* Queue on the selected FIFO. If the FIFO overflows, punt to global. */ 264 264 if (bpf_map_push_elem(ring, &pid, 0)) { 265 - scx_bpf_dispatch(p, SHARED_DSQ, slice_ns, enq_flags); 265 + scx_bpf_dsq_insert(p, SHARED_DSQ, slice_ns, enq_flags); 266 266 return; 267 267 } 268 268 ··· 385 385 */ 386 386 p = bpf_task_from_pid(2); 387 387 if (p) { 388 - scx_bpf_dispatch(p, SCX_DSQ_LOCAL, slice_ns, 0); 388 + scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, slice_ns, 0); 389 389 bpf_task_release(p); 390 390 return; 391 391 } ··· 431 431 update_core_sched_head_seq(p); 432 432 __sync_fetch_and_add(&nr_dispatched, 1); 433 433 434 - scx_bpf_dispatch(p, SHARED_DSQ, slice_ns, 0); 434 + scx_bpf_dsq_insert(p, SHARED_DSQ, slice_ns, 0); 435 435 bpf_task_release(p); 436 436 437 437 batch--;
+7 -7
tools/sched_ext/scx_simple.bpf.c
··· 31 31
 32 32 /*
 33 33  * Built-in DSQs such as SCX_DSQ_GLOBAL cannot be used as priority queues
 34 -  * (meaning, cannot be dispatched to with scx_bpf_dispatch_vtime()). We
 34 +  * (meaning, cannot be dispatched to with scx_bpf_dsq_insert_vtime()). We
 35 35  * therefore create a separate DSQ with ID 0 that we dispatch to and consume
 36 -  * from. If scx_simple only supported global FIFO scheduling, then we could
 37 -  * just use SCX_DSQ_GLOBAL.
 36 +  * from. If scx_simple only supported global FIFO scheduling, then we could just
 37 +  * use SCX_DSQ_GLOBAL.
 38 38  */
 39 39 #define SHARED_DSQ 0
 40 40
··· 65 65 	cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
 66 66 	if (is_idle) {
 67 67 		stat_inc(0); /* count local queueing */
 68 - 		scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
 68 + 		scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
 69 69 	}
 70 70
 71 71 	return cpu;
··· 76 76 	stat_inc(1); /* count global queueing */
 77 77
 78 78 	if (fifo_sched) {
 79 - 		scx_bpf_dispatch(p, SHARED_DSQ, SCX_SLICE_DFL, enq_flags);
 79 + 		scx_bpf_dsq_insert(p, SHARED_DSQ, SCX_SLICE_DFL, enq_flags);
 80 80 	} else {
 81 81 		u64 vtime = p->scx.dsq_vtime;
 82 82
··· 87 87 		if (vtime_before(vtime, vtime_now - SCX_SLICE_DFL))
 88 88 			vtime = vtime_now - SCX_SLICE_DFL;
 89 89
 90 - 		scx_bpf_dispatch_vtime(p, SHARED_DSQ, SCX_SLICE_DFL, vtime,
 91 - 				       enq_flags);
 90 + 		scx_bpf_dsq_insert_vtime(p, SHARED_DSQ, SCX_SLICE_DFL, vtime,
 91 + 					 enq_flags);
 92 92 	}
 93 93 }
 94 94