Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'dma-fence-deadline' of https://gitlab.freedesktop.org/drm/msm into drm-next

This series adds a deadline hint to fences, so realtime deadlines
such as vblank can be communicated to the fence signaller for power/
frequency management decisions.

This is partially inspired by a trick i915 does, but implemented
via dma-fence for a couple of reasons:

1) To continue to be able to use the atomic helpers
2) To support cases where display and gpu are different drivers

See https://patchwork.freedesktop.org/series/93035/

This does not yet add any UAPI, although this will be needed in
a number of cases:

1) Workloads "ping-ponging" between CPU and GPU, where we don't
want the GPU freq governor to interpret time stalled waiting
for GPU as "idle" time
2) Cases where the compositor is waiting for fences to be signaled
before issuing the atomic ioctl, for example to maintain 60fps
cursor updates even when the GPU is not able to maintain that
framerate.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
From: Rob Clark <robdclark@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/CAF6AEGt5nDQpa6J86V1oFKPA30YcJzPhAVpmF7N1K1g2N3c=Zg@mail.gmail.com

+303 -34
+14 -2
Documentation/driver-api/dma-buf.rst
··· 164 164 .. kernel-doc:: drivers/dma-buf/dma-fence.c 165 165 :doc: fence signalling annotation 166 166 167 + DMA Fence Deadline Hints 168 + ~~~~~~~~~~~~~~~~~~~~~~~~ 169 + 170 + .. kernel-doc:: drivers/dma-buf/dma-fence.c 171 + :doc: deadline hints 172 + 167 173 DMA Fences Functions Reference 168 174 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 169 175 ··· 203 197 .. kernel-doc:: include/linux/dma-fence-unwrap.h 204 198 :internal: 205 199 206 - DMA Fence uABI/Sync File 207 - ~~~~~~~~~~~~~~~~~~~~~~~~ 200 + DMA Fence Sync File 201 + ~~~~~~~~~~~~~~~~~~~ 208 202 209 203 .. kernel-doc:: drivers/dma-buf/sync_file.c 210 204 :export: 211 205 212 206 .. kernel-doc:: include/linux/sync_file.h 207 + :internal: 208 + 209 + DMA Fence Sync File uABI 210 + ~~~~~~~~~~~~~~~~~~~~~~~~ 211 + 212 + .. kernel-doc:: include/uapi/linux/sync_file.h 213 213 :internal: 214 214 215 215 Indefinite DMA Fences
+11
drivers/dma-buf/dma-fence-array.c
··· 123 123 dma_fence_free(fence); 124 124 } 125 125 126 + static void dma_fence_array_set_deadline(struct dma_fence *fence, 127 + ktime_t deadline) 128 + { 129 + struct dma_fence_array *array = to_dma_fence_array(fence); 130 + unsigned i; 131 + 132 + for (i = 0; i < array->num_fences; ++i) 133 + dma_fence_set_deadline(array->fences[i], deadline); 134 + } 135 + 126 136 const struct dma_fence_ops dma_fence_array_ops = { 127 137 .get_driver_name = dma_fence_array_get_driver_name, 128 138 .get_timeline_name = dma_fence_array_get_timeline_name, 129 139 .enable_signaling = dma_fence_array_enable_signaling, 130 140 .signaled = dma_fence_array_signaled, 131 141 .release = dma_fence_array_release, 142 + .set_deadline = dma_fence_array_set_deadline, 132 143 }; 133 144 EXPORT_SYMBOL(dma_fence_array_ops); 134 145
+12
drivers/dma-buf/dma-fence-chain.c
··· 206 206 dma_fence_free(fence); 207 207 } 208 208 209 + 210 + static void dma_fence_chain_set_deadline(struct dma_fence *fence, 211 + ktime_t deadline) 212 + { 213 + dma_fence_chain_for_each(fence, fence) { 214 + struct dma_fence *f = dma_fence_chain_contained(fence); 215 + 216 + dma_fence_set_deadline(f, deadline); 217 + } 218 + } 219 + 209 220 const struct dma_fence_ops dma_fence_chain_ops = { 210 221 .use_64bit_seqno = true, 211 222 .get_driver_name = dma_fence_chain_get_driver_name, ··· 224 213 .enable_signaling = dma_fence_chain_enable_signaling, 225 214 .signaled = dma_fence_chain_signaled, 226 215 .release = dma_fence_chain_release, 216 + .set_deadline = dma_fence_chain_set_deadline, 227 217 }; 228 218 EXPORT_SYMBOL(dma_fence_chain_ops); 229 219
+59
drivers/dma-buf/dma-fence.c
··· 913 913 EXPORT_SYMBOL(dma_fence_wait_any_timeout); 914 914 915 915 /** 916 + * DOC: deadline hints 917 + * 918 + * In an ideal world, it would be possible to pipeline a workload sufficiently 919 + * that a utilization based device frequency governor could arrive at a minimum 920 + * frequency that meets the requirements of the use-case, in order to minimize 921 + * power consumption. But in the real world there are many workloads which 922 + * defy this ideal. For example, but not limited to: 923 + * 924 + * * Workloads that ping-pong between device and CPU, with alternating periods 925 + * of CPU waiting for device, and device waiting on CPU. This can result in 926 + * devfreq and cpufreq seeing idle time in their respective domains and as a 927 + * result reduce frequency. 928 + * 929 + * * Workloads that interact with a periodic time based deadline, such as double 930 + * buffered GPU rendering vs vblank sync'd page flipping. In this scenario, 931 + * missing a vblank deadline results in an *increase* in idle time on the GPU 932 + * (since it has to wait an additional vblank period), sending a signal to 933 + * the GPU's devfreq to reduce frequency, when in fact the opposite is what is 934 + * needed. 935 + * 936 + * To this end, deadline hint(s) can be set on a &dma_fence via &dma_fence_set_deadline. 937 + * The deadline hint provides a way for the waiting driver, or userspace, to 938 + * convey an appropriate sense of urgency to the signaling driver. 939 + * 940 + * A deadline hint is given in absolute ktime (CLOCK_MONOTONIC for userspace 941 + * facing APIs). The time could either be some point in the future (such as 942 + * the vblank based deadline for page-flipping, or the start of a compositor's 943 + * composition cycle), or the current time to indicate an immediate deadline 944 + * hint (i.e. forward progress cannot be made until this fence is signaled). 945 + * 946 + * Multiple deadlines may be set on a given fence, even in parallel. See the 947 + * documentation for &dma_fence_ops.set_deadline. 948 + * 949 + * The deadline hint is just that, a hint. The driver that created the fence 950 + * may react by increasing frequency, making different scheduling choices, etc. 951 + * Or doing nothing at all. 952 + */ 953 +
954 + /** 955 + * dma_fence_set_deadline - set desired fence-wait deadline hint 956 + * @fence: the fence that is to be waited on 957 + * @deadline: the time by which the waiter hopes for the fence to be 958 + * signaled 959 + * 960 + * Give the fence signaler a hint about an upcoming deadline, such as 961 + * vblank, by which point the waiter would prefer the fence to be 962 + * signaled. This is intended to give feedback to the fence signaler 963 + * to aid in power management decisions, such as boosting GPU frequency 964 + * if a periodic vblank deadline is approaching but the fence is not 965 + * yet signaled. 966 + */ 967 + void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline) 968 + { 969 + if (fence->ops->set_deadline && !dma_fence_is_signaled(fence)) 970 + fence->ops->set_deadline(fence, deadline); 971 + } 972 + EXPORT_SYMBOL(dma_fence_set_deadline); 973 + 974 + /** 916 975 * dma_fence_describe - Dump fence description into seq_file 917 976 * @fence: the fence to describe 918 977 * @seq: the seq_file to put the textual description into
+22
drivers/dma-buf/dma-resv.c
··· 684 684 } 685 685 EXPORT_SYMBOL_GPL(dma_resv_wait_timeout); 686 686 687 + /** 688 + * dma_resv_set_deadline - Set a deadline on reservation's objects fences 689 + * @obj: the reservation object 690 + * @usage: controls which fences to include, see enum dma_resv_usage. 691 + * @deadline: the requested deadline (MONOTONIC) 692 + * 693 + * May be called without holding the dma_resv lock. Sets @deadline on 694 + * all fences filtered by @usage. 695 + */ 696 + void dma_resv_set_deadline(struct dma_resv *obj, enum dma_resv_usage usage, 697 + ktime_t deadline) 698 + { 699 + struct dma_resv_iter cursor; 700 + struct dma_fence *fence; 701 + 702 + dma_resv_iter_begin(&cursor, obj, usage); 703 + dma_resv_for_each_fence_unlocked(&cursor, fence) { 704 + dma_fence_set_deadline(fence, deadline); 705 + } 706 + dma_resv_iter_end(&cursor); 707 + } 708 + EXPORT_SYMBOL_GPL(dma_resv_set_deadline); 687 709 688 710 /** 689 711 * dma_resv_test_signaled - Test if a reservation object's fences have been
+37
drivers/gpu/drm/drm_atomic_helper.c
··· 1511 1511 } 1512 1512 EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_enables); 1513 1513 1514 + /* 1515 + * For atomic updates which touch just a single CRTC, calculate the time of the 1516 + * next vblank, and inform all the fences of the deadline. 1517 + */ 1518 + static void set_fence_deadline(struct drm_device *dev, 1519 + struct drm_atomic_state *state) 1520 + { 1521 + struct drm_crtc *crtc; 1522 + struct drm_crtc_state *new_crtc_state; 1523 + struct drm_plane *plane; 1524 + struct drm_plane_state *new_plane_state; 1525 + ktime_t vbltime = 0; 1526 + int i; 1527 + 1528 + for_each_new_crtc_in_state (state, crtc, new_crtc_state, i) { 1529 + ktime_t v; 1530 + 1531 + if (drm_crtc_next_vblank_start(crtc, &v)) 1532 + continue; 1533 + 1534 + if (!vbltime || ktime_before(v, vbltime)) 1535 + vbltime = v; 1536 + } 1537 + 1538 + /* If no CRTCs updated, then nothing to do: */ 1539 + if (!vbltime) 1540 + return; 1541 + 1542 + for_each_new_plane_in_state (state, plane, new_plane_state, i) { 1543 + if (!new_plane_state->fence) 1544 + continue; 1545 + dma_fence_set_deadline(new_plane_state->fence, vbltime); 1546 + } 1547 + } 1548 + 1514 1549 /** 1515 1550 * drm_atomic_helper_wait_for_fences - wait for fences stashed in plane state 1516 1551 * @dev: DRM device ··· 1574 1539 struct drm_plane *plane; 1575 1540 struct drm_plane_state *new_plane_state; 1576 1541 int i, ret; 1542 + 1543 + set_fence_deadline(dev, state); 1577 1544 1578 1545 for_each_new_plane_in_state(state, plane, new_plane_state, i) { 1579 1546 if (!new_plane_state->fence)
+44 -9
drivers/gpu/drm/drm_vblank.c
··· 844 844 EXPORT_SYMBOL(drm_crtc_vblank_helper_get_vblank_timestamp); 845 845 846 846 /** 847 - * drm_get_last_vbltimestamp - retrieve raw timestamp for the most recent 848 - * vblank interval 849 - * @dev: DRM device 850 - * @pipe: index of CRTC whose vblank timestamp to retrieve 847 + * drm_crtc_get_last_vbltimestamp - retrieve raw timestamp for the most 848 + * recent vblank interval 849 + * @crtc: CRTC whose vblank timestamp to retrieve 851 850 * @tvblank: Pointer to target time which should receive the timestamp 852 851 * @in_vblank_irq: 853 852 * True when called from drm_crtc_handle_vblank(). Some drivers ··· 864 865 * True if timestamp is considered to be very precise, false otherwise. 865 866 */ 866 867 static bool 867 - drm_get_last_vbltimestamp(struct drm_device *dev, unsigned int pipe, 868 - ktime_t *tvblank, bool in_vblank_irq) 868 + drm_crtc_get_last_vbltimestamp(struct drm_crtc *crtc, ktime_t *tvblank, 869 + bool in_vblank_irq) 869 870 { 870 - struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe); 871 871 bool ret = false; 872 872 873 873 /* Define requested maximum error on timestamps (nanoseconds). */ ··· 874 876 875 877 /* Query driver if possible and precision timestamping enabled. */
876 878 if (crtc && crtc->funcs->get_vblank_timestamp && max_error > 0) { 877 - struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe); 878 - 879 879 ret = crtc->funcs->get_vblank_timestamp(crtc, &max_error, 880 880 tvblank, in_vblank_irq); 881 881 } ··· 885 889 *tvblank = ktime_get(); 886 890 887 891 return ret; 892 + } 893 + 894 + static bool 895 + drm_get_last_vbltimestamp(struct drm_device *dev, unsigned int pipe, 896 + ktime_t *tvblank, bool in_vblank_irq) 897 + { 898 + struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe); 899 + 900 + return drm_crtc_get_last_vbltimestamp(crtc, tvblank, in_vblank_irq); 888 901 } 889 902 890 903 /** ··· 984 979 vblanktime); 985 980 } 986 981 EXPORT_SYMBOL(drm_crtc_vblank_count_and_time); 982 + 983 + /** 984 + * drm_crtc_next_vblank_start - calculate the time of the next vblank 985 + * @crtc: the crtc for which to calculate next vblank time 986 + * @vblanktime: pointer to time to receive the next vblank timestamp. 987 + * 988 + * Calculate the expected time of the start of the next vblank period, 989 + * based on time of previous vblank and frame duration 990 + */ 991 + int drm_crtc_next_vblank_start(struct drm_crtc *crtc, ktime_t *vblanktime) 992 + { 993 + unsigned int pipe = drm_crtc_index(crtc); 994 + struct drm_vblank_crtc *vblank = &crtc->dev->vblank[pipe]; 995 + struct drm_display_mode *mode = &vblank->hwmode; 996 + u64 vblank_start; 997 + 998 + if (!vblank->framedur_ns || !vblank->linedur_ns) 999 + return -EINVAL; 1000 + 1001 + if (!drm_crtc_get_last_vbltimestamp(crtc, vblanktime, false)) 1002 + return -EINVAL; 1003 + 1004 + vblank_start = DIV_ROUND_DOWN_ULL( 1005 + (u64)vblank->framedur_ns * mode->crtc_vblank_start, 1006 + mode->crtc_vtotal); 1007 + *vblanktime = ktime_add(*vblanktime, ns_to_ktime(vblank_start)); 1008 + 1009 + return 0; 1010 + } 1011 + EXPORT_SYMBOL(drm_crtc_next_vblank_start); 987 1012 988 1013 static void send_vblank_event(struct drm_device *dev, 989 1014 struct drm_pending_vblank_event *e,
+46
drivers/gpu/drm/scheduler/sched_fence.c
··· 123 123 dma_fence_put(&fence->scheduled); 124 124 } 125 125 126 + static void drm_sched_fence_set_deadline_finished(struct dma_fence *f, 127 + ktime_t deadline) 128 + { 129 + struct drm_sched_fence *fence = to_drm_sched_fence(f); 130 + struct dma_fence *parent; 131 + unsigned long flags; 132 + 133 + spin_lock_irqsave(&fence->lock, flags); 134 + 135 + /* If we already have an earlier deadline, keep it: */ 136 + if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) && 137 + ktime_before(fence->deadline, deadline)) { 138 + spin_unlock_irqrestore(&fence->lock, flags); 139 + return; 140 + } 141 + 142 + fence->deadline = deadline; 143 + set_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags); 144 + 145 + spin_unlock_irqrestore(&fence->lock, flags); 146 + 147 + /* 148 + * smp_load_acquire() to ensure that if we are racing another 149 + * thread calling drm_sched_fence_set_parent(), that we see 150 + * the parent set before it calls test_bit(HAS_DEADLINE_BIT) 151 + */ 152 + parent = smp_load_acquire(&fence->parent); 153 + if (parent) 154 + dma_fence_set_deadline(parent, deadline); 155 + } 156 + 126 157 const struct dma_fence_ops drm_sched_fence_ops_scheduled = { 127 158 .get_driver_name = drm_sched_fence_get_driver_name, 128 159 .get_timeline_name = drm_sched_fence_get_timeline_name, ··· 164 133 .get_driver_name = drm_sched_fence_get_driver_name, 165 134 .get_timeline_name = drm_sched_fence_get_timeline_name, 166 135 .release = drm_sched_fence_release_finished, 136 + .set_deadline = drm_sched_fence_set_deadline_finished, 167 137 }; 168 138 169 139 struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f) ··· 178 146 return NULL; 179 147 } 180 148 EXPORT_SYMBOL(to_drm_sched_fence); 149 +
150 + void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence, 151 + struct dma_fence *fence) 152 + { 153 + /* 154 + * smp_store_release() to ensure another thread racing us 155 + * in drm_sched_fence_set_deadline_finished() sees the 156 + * fence's parent set before test_bit() 157 + */ 158 + smp_store_release(&s_fence->parent, dma_fence_get(fence)); 159 + if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, 160 + &s_fence->finished.flags)) 161 + dma_fence_set_deadline(fence, s_fence->deadline); 162 + } 181 163 182 164 struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity, 183 165 void *owner)
+1 -1
drivers/gpu/drm/scheduler/sched_main.c
··· 1048 1048 drm_sched_fence_scheduled(s_fence); 1049 1049 1050 1050 if (!IS_ERR_OR_NULL(fence)) { 1051 - s_fence->parent = dma_fence_get(fence); 1051 + drm_sched_fence_set_parent(s_fence, fence); 1052 1052 /* Drop for original kref_init of the fence */ 1053 1053 dma_fence_put(fence); 1054 1054
+1
include/drm/drm_vblank.h
··· 230 230 u64 drm_crtc_vblank_count(struct drm_crtc *crtc); 231 231 u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc, 232 232 ktime_t *vblanktime); 233 + int drm_crtc_next_vblank_start(struct drm_crtc *crtc, ktime_t *vblanktime); 233 234 void drm_crtc_send_vblank_event(struct drm_crtc *crtc, 234 235 struct drm_pending_vblank_event *e); 235 236 void drm_crtc_arm_vblank_event(struct drm_crtc *crtc,
+17
include/drm/gpu_scheduler.h
··· 41 41 */ 42 42 #define DRM_SCHED_FENCE_DONT_PIPELINE DMA_FENCE_FLAG_USER_BITS 43 43 44 + /** 45 + * DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT - A fence deadline hint has been set 46 + * 47 + * Because a deadline hint can be set before the backing hw 48 + * fence is created, we need to keep track of whether a deadline has already 49 + * been set. 50 + */ 51 + #define DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT (DMA_FENCE_FLAG_USER_BITS + 1) 52 + 44 53 enum dma_resv_usage; 45 54 struct dma_resv; 46 55 struct drm_gem_object; ··· 290 281 * resolved. 291 282 */ 292 283 struct dma_fence finished; 284 + 285 + /** 286 + * @deadline: deadline set on &drm_sched_fence.finished which 287 + * potentially needs to be propagated to &drm_sched_fence.parent 288 + */ 289 + ktime_t deadline; 293 290 294 291 /** 295 292 * @parent: the fence returned by &drm_sched_backend_ops.run_job ··· 589 574 enum drm_sched_priority priority); 590 575 bool drm_sched_entity_is_ready(struct drm_sched_entity *entity); 591 576 577 + void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence, 578 + struct dma_fence *fence); 592 579 struct drm_sched_fence *drm_sched_fence_alloc( 593 580 struct drm_sched_entity *s_entity, void *owner); 594 581 void drm_sched_fence_init(struct drm_sched_fence *fence,
+22
include/linux/dma-fence.h
··· 257 257 */ 258 258 void (*timeline_value_str)(struct dma_fence *fence, 259 259 char *str, int size); 260 + 261 + /** 262 + * @set_deadline: 263 + * 264 + * Callback to allow a fence waiter to inform the fence signaler of 265 + * an upcoming deadline, such as vblank, by which point the waiter 266 + * would prefer the fence to be signaled. This is intended to 267 + * give feedback to the fence signaler to aid in power management 268 + * decisions, such as boosting GPU frequency. 269 + * 270 + * This is called without &dma_fence.lock held; it can be called 271 + * multiple times and from any context. Locking is up to the callee 272 + * if it has some state to manage. If multiple deadlines are set, 273 + * the expectation is to track the soonest one. If the deadline is 274 + * before the current time, it should be interpreted as an immediate 275 + * deadline. 276 + * 277 + * This callback is optional. 278 + */ 279 + void (*set_deadline)(struct dma_fence *fence, ktime_t deadline); 260 280 }; 261 281 262 282 void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops, ··· 602 582 603 583 return ret < 0 ? ret : 0; 604 584 } 585 + 586 + void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline); 605 587 606 588 struct dma_fence *dma_fence_get_stub(void); 607 589 struct dma_fence *dma_fence_allocate_private_stub(void);
+2
include/linux/dma-resv.h
··· 479 479 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src); 480 480 long dma_resv_wait_timeout(struct dma_resv *obj, enum dma_resv_usage usage, 481 481 bool intr, unsigned long timeout); 482 + void dma_resv_set_deadline(struct dma_resv *obj, enum dma_resv_usage usage, 483 + ktime_t deadline); 482 484 bool dma_resv_test_signaled(struct dma_resv *obj, enum dma_resv_usage usage); 483 485 void dma_resv_describe(struct dma_resv *obj, struct seq_file *seq); 484 486
+15 -22
include/uapi/linux/sync_file.h
··· 16 16 #include <linux/types.h> 17 17 18 18 /** 19 - * struct sync_merge_data - data passed to merge ioctl 19 + * struct sync_merge_data - SYNC_IOC_MERGE: merge two fences 20 20 * @name: name of new fence 21 21 * @fd2: file descriptor of second fence 22 22 * @fence: returns the fd of the new fence to userspace 23 23 * @flags: merge_data flags 24 24 * @pad: padding for 64-bit alignment, should always be zero 25 + * 26 + * Creates a new fence containing copies of the sync_pts in both 27 + * the calling fd and sync_merge_data.fd2. Returns the new fence's 28 + * fd in sync_merge_data.fence 25 29 */ 26 30 struct sync_merge_data { 27 31 char name[32]; ··· 38 34 /** 39 35 * struct sync_fence_info - detailed fence information 40 36 * @obj_name: name of parent sync_timeline 41 - * @driver_name: name of driver implementing the parent 42 - * @status: status of the fence 0:active 1:signaled <0:error 37 + * @driver_name: name of driver implementing the parent 38 + * @status: status of the fence 0:active 1:signaled <0:error 43 39 * @flags: fence_info flags 44 40 * @timestamp_ns: timestamp of status change in nanoseconds 45 41 */ ··· 52 48 }; 53 49 54 50 /** 55 - * struct sync_file_info - data returned from fence info ioctl 51 + * struct sync_file_info - SYNC_IOC_FILE_INFO: get detailed information on a sync_file 56 52 * @name: name of fence 57 53 * @status: status of fence. 1: signaled 0:active <0:error 58 54 * @flags: sync_file_info flags 59 55 * @num_fences number of fences in the sync_file 60 56 * @pad: padding for 64-bit alignment, should always be zero 61 - * @sync_fence_info: pointer to array of structs sync_fence_info with all 57 + * @sync_fence_info: pointer to array of struct &sync_fence_info with all 62 58 * fences in the sync_file 59 + * 60 + * Takes a struct sync_file_info. If num_fences is 0, the field is updated 61 + * with the actual number of fences. If num_fences is > 0, the system will 62 + * use the pointer provided on sync_fence_info to return up to num_fences of 63 + * struct sync_fence_info, with detailed fence information. 63 64 */ 64 65 struct sync_file_info { 65 66 char name[32]; ··· 78 69
79 70 #define SYNC_IOC_MAGIC '>' 80 71 81 - /** 72 + /* 82 73 * Opcodes 0, 1 and 2 were burned during a API change to avoid users of the 83 74 * old API to get weird errors when trying to handling sync_files. The API 84 75 * change happened during the de-stage of the Sync Framework when there was 85 76 * no upstream users available. 86 77 */ 87 78 88 - /** 89 - * DOC: SYNC_IOC_MERGE - merge two fences 90 - * 91 - * Takes a struct sync_merge_data. Creates a new fence containing copies of 92 - * the sync_pts in both the calling fd and sync_merge_data.fd2. Returns the 93 - * new fence's fd in sync_merge_data.fence 94 - */ 95 79 #define SYNC_IOC_MERGE _IOWR(SYNC_IOC_MAGIC, 3, struct sync_merge_data) 96 - 97 - /** 98 - * DOC: SYNC_IOC_FILE_INFO - get detailed information on a sync_file 99 - * 100 - * Takes a struct sync_file_info. If num_fences is 0, the field is updated 101 - * with the actual number of fences. If num_fences is > 0, the system will 102 - * use the pointer provided on sync_fence_info to return up to num_fences of 103 - * struct sync_fence_info, with detailed fence information. 104 - */ 105 80 #define SYNC_IOC_FILE_INFO _IOWR(SYNC_IOC_MAGIC, 4, struct sync_file_info) 106 81 107 82 #endif /* _UAPI_LINUX_SYNC_H */