Linux kernel mirror: git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

xsk: new descriptor addressing scheme

Currently, AF_XDP only supports a fixed frame-size memory scheme where
each frame is referenced via an index (idx). A user passes the frame
index to the kernel, and the kernel acts upon the data. Some NICs,
however, do not have a fixed frame-size model; instead, they have a
model where a memory window is passed to the hardware and multiple
frames are filled into that window (referred to as the "type-writer"
model).

By changing the descriptor format from the current frame index
addressing scheme, AF_XDP can in the future be extended to support
these kinds of NICs.

In the index-based model, an idx refers to a frame of size
frame_size. Addressing a frame in the UMEM is done by offsetting the
UMEM starting address by idx * frame_size + offset. Communication via
the fill- and completion-rings is done by means of idx.

In this commit, the idx is removed in favor of an address (addr),
which is a relative address ranging over the UMEM. Converting an
idx-based reference to the new addr is simple: addr = idx * frame_size
+ offset.
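For illustration, the conversion can be sketched as a small user-space helper. The function name xsk_idx_to_addr is hypothetical and not part of the kernel sources; it merely restates the formula above.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: maps an old-style frame index plus an
 * intra-frame offset onto the new relative UMEM address. frame_size
 * is the fixed frame size the UMEM was registered with.
 */
static uint64_t xsk_idx_to_addr(uint32_t idx, uint32_t frame_size,
				uint16_t offset)
{
	return (uint64_t)idx * frame_size + offset;
}
```

For example, with 2k frames, idx 3 at offset 128 becomes addr 6272.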

We also stop referring to the UMEM unit as a "frame". Instead it is
simply called a chunk.

To transfer ownership of a chunk to the kernel, the addr of the chunk
is passed in the fill-ring. Note that the kernel will mask addr to
make it chunk aligned, so there is no need for userspace to do
that. E.g., for a chunk size of 2k, passing an addr of 2048, 2050 or
3000 to the fill-ring will refer to the same chunk.
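That masking can be sketched as follows, assuming a power-of-two chunk size (the kernel keeps the mask itself, as ~(chunk_size - 1); the helper name here is made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helper: chunk-aligns an addr the way the kernel masks
 * fill-ring entries. chunk_size must be a power of two.
 */
static uint64_t xsk_chunk_align(uint64_t addr, uint64_t chunk_size)
{
	return addr & ~(chunk_size - 1);
}
```

With a 2k chunk size, addrs 2048, 2050 and 3000 all align down to 2048 and thus name the same chunk.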

On the completion-ring, the addr will match that of the Tx descriptor
passed to the kernel.

Changing the descriptor format to use chunks/addr will allow for
future changes to move to a type-writer based model, where multiple
frames can reside in one chunk. In this model, passing a single chunk
into the fill-ring could potentially result in multiple Rx
descriptors.

This commit changes the uapi of AF_XDP sockets, and updates the
documentation.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

Authored by Björn Töpel, committed by Daniel Borkmann
bbff2f32 a509a955

+123 -129 (total)

Documentation/networking/af_xdp.rst (+58 -43)
···
 
 This document assumes that the reader is familiar with BPF and XDP. If
 not, the Cilium project has an excellent reference guide at
-http://cilium.readthedocs.io/en/doc-1.0/bpf/.
+http://cilium.readthedocs.io/en/latest/bpf/.
 
 Using the XDP_REDIRECT action from an XDP program, the program can
 redirect ingress frames to other XDP enabled netdevs, using the
···
 to that packet can be changed to point to another and reused right
 away. This again avoids copying data.
 
-The UMEM consists of a number of equally size frames and each frame
-has a unique frame id. A descriptor in one of the rings references a
-frame by referencing its frame id. The user space allocates memory for
-this UMEM using whatever means it feels is most appropriate (malloc,
-mmap, huge pages, etc). This memory area is then registered with the
-kernel using the new setsockopt XDP_UMEM_REG. The UMEM also has two
-rings: the FILL ring and the COMPLETION ring. The fill ring is used by
-the application to send down frame ids for the kernel to fill in with
-RX packet data. References to these frames will then appear in the RX
-ring once each packet has been received. The completion ring, on the
-other hand, contains frame ids that the kernel has transmitted
-completely and can now be used again by user space, for either TX or
-RX. Thus, the frame ids appearing in the completion ring are ids that
-were previously transmitted using the TX ring. In summary, the RX and
-FILL rings are used for the RX path and the TX and COMPLETION rings
-are used for the TX path.
+The UMEM consists of a number of equally sized chunks. A descriptor in
+one of the rings references a frame by referencing its addr. The addr
+is simply an offset within the entire UMEM region. The user space
+allocates memory for this UMEM using whatever means it feels is most
+appropriate (malloc, mmap, huge pages, etc). This memory area is then
+registered with the kernel using the new setsockopt XDP_UMEM_REG. The
+UMEM also has two rings: the FILL ring and the COMPLETION ring. The
+fill ring is used by the application to send down addr for the kernel
+to fill in with RX packet data. References to these frames will then
+appear in the RX ring once each packet has been received. The
+completion ring, on the other hand, contains frame addr that the
+kernel has transmitted completely and can now be used again by user
+space, for either TX or RX. Thus, the frame addrs appearing in the
+completion ring are addrs that were previously transmitted using the
+TX ring. In summary, the RX and FILL rings are used for the RX path
+and the TX and COMPLETION rings are used for the TX path.
 
 The socket is then finally bound with a bind() call to a device and a
 specific queue id on that device, and it is not until bind is
···
 corresponding two rings, sets the XDP_SHARED_UMEM flag in the bind
 call and submits the XSK of the process it would like to share UMEM
 with as well as its own newly created XSK socket. The new process will
-then receive frame id references in its own RX ring that point to this
-shared UMEM. Note that since the ring structures are single-consumer /
-single-producer (for performance reasons), the new process has to
-create its own socket with associated RX and TX rings, since it cannot
-share this with the other process. This is also the reason that there
-is only one set of FILL and COMPLETION rings per UMEM. It is the
-responsibility of a single process to handle the UMEM.
+then receive frame addr references in its own RX ring that point to
+this shared UMEM. Note that since the ring structures are
+single-consumer / single-producer (for performance reasons), the new
+process has to create its own socket with associated RX and TX rings,
+since it cannot share this with the other process. This is also the
+reason that there is only one set of FILL and COMPLETION rings per
+UMEM. It is the responsibility of a single process to handle the UMEM.
 
 How is then packets distributed from an XDP program to the XSKs? There
 is a BPF map called XSKMAP (or BPF_MAP_TYPE_XSKMAP in full). The
···
 
 UMEM is a region of virtual contiguous memory, divided into
 equal-sized frames. An UMEM is associated to a netdev and a specific
-queue id of that netdev. It is created and configured (frame size,
-frame headroom, start address and size) by using the XDP_UMEM_REG
-setsockopt system call. A UMEM is bound to a netdev and queue id, via
-the bind() system call.
+queue id of that netdev. It is created and configured (chunk size,
+headroom, start address and size) by using the XDP_UMEM_REG setsockopt
+system call. A UMEM is bound to a netdev and queue id, via the bind()
+system call.
 
 An AF_XDP is socket linked to a single UMEM, but one UMEM can have
 multiple AF_XDP sockets. To share an UMEM created via one socket A,
···
 ~~~~~~~~~~~~~~
 
 The Fill ring is used to transfer ownership of UMEM frames from
-user-space to kernel-space. The UMEM indicies are passed in the
-ring. As an example, if the UMEM is 64k and each frame is 4k, then the
-UMEM has 16 frames and can pass indicies between 0 and 15.
+user-space to kernel-space. The UMEM addrs are passed in the ring. As
+an example, if the UMEM is 64k and each chunk is 4k, then the UMEM has
+16 chunks and can pass addrs between 0 and 64k.
 
 Frames passed to the kernel are used for the ingress path (RX rings).
 
-The user application produces UMEM indicies to this ring.
+The user application produces UMEM addrs to this ring. Note that the
+kernel will mask the incoming addr. E.g. for a chunk size of 2k, the
+log2(2048) LSB of the addr will be masked off, meaning that 2048, 2050
+and 3000 refers to the same chunk.
+
 
 UMEM Completetion Ring
 ~~~~~~~~~~~~~~~~~~~~~~
···
 Frames passed from the kernel to user-space are frames that has been
 sent (TX ring) and can be used by user-space again.
 
-The user application consumes UMEM indicies from this ring.
+The user application consumes UMEM addrs from this ring.
 
 
 RX Ring
 ~~~~~~~
 
 The RX ring is the receiving side of a socket. Each entry in the ring
-is a struct xdp_desc descriptor. The descriptor contains UMEM index
-(idx), the length of the data (len), the offset into the frame
-(offset).
+is a struct xdp_desc descriptor. The descriptor contains UMEM offset
+(addr) and the length of the data (len).
 
 If no frames have been passed to kernel via the Fill ring, no
 descriptors will (or can) appear on the RX ring.
···
 
 Naive ring dequeue and enqueue could look like this::
 
+    // struct xdp_rxtx_ring {
+    //     __u32 *producer;
+    //     __u32 *consumer;
+    //     struct xdp_desc *desc;
+    // };
+
+    // struct xdp_umem_ring {
+    //     __u32 *producer;
+    //     __u32 *consumer;
+    //     __u64 *desc;
+    // };
+
     // typedef struct xdp_rxtx_ring RING;
     // typedef struct xdp_umem_ring RING;
 
     // typedef struct xdp_desc RING_TYPE;
-    // typedef __u32 RING_TYPE;
+    // typedef __u64 RING_TYPE;
 
     int dequeue_one(RING *ring, RING_TYPE *item)
     {
-        __u32 entries = ring->ptrs.producer - ring->ptrs.consumer;
+        __u32 entries = *ring->producer - *ring->consumer;
 
         if (entries == 0)
             return -1;
 
         // read-barrier!
 
-        *item = ring->desc[ring->ptrs.consumer & (RING_SIZE - 1)];
-        ring->ptrs.consumer++;
+        *item = ring->desc[*ring->consumer & (RING_SIZE - 1)];
+        (*ring->consumer)++;
         return 0;
     }
 
     int enqueue_one(RING *ring, const RING_TYPE *item)
     {
-        u32 free_entries = RING_SIZE - (ring->ptrs.producer - ring->ptrs.consumer);
+        u32 free_entries = RING_SIZE - (*ring->producer - *ring->consumer);
 
         if (free_entries == 0)
             return -1;
 
-        ring->desc[ring->ptrs.producer & (RING_SIZE - 1)] = *item;
+        ring->desc[*ring->producer & (RING_SIZE - 1)] = *item;
 
         // write-barrier!
 
-        ring->ptrs.producer++;
+        (*ring->producer)++;
         return 0;
     }
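To exercise the naive dequeue_one()/enqueue_one() logic from the documentation, here is a self-contained, single-threaded toy version. The embedded-field ring struct is a user-space stand-in of my own (the real rings are mmap()ed from the kernel and reference producer/consumer through pointers), and the read/write barriers noted in the docs are deliberately omitted since nothing is shared here.

```c
#include <assert.h>
#include <stdint.h>

#define RING_SIZE 4 /* must be a power of two */

/* Toy UMEM-style ring: one u64 addr per slot, free-running indices. */
typedef uint64_t RING_TYPE;

typedef struct {
	uint32_t producer;
	uint32_t consumer;
	RING_TYPE desc[RING_SIZE];
} RING;

static int enqueue_one(RING *ring, const RING_TYPE *item)
{
	uint32_t free_entries = RING_SIZE - (ring->producer - ring->consumer);

	if (free_entries == 0)
		return -1;

	ring->desc[ring->producer & (RING_SIZE - 1)] = *item;

	/* write barrier would go here on a real shared ring */

	ring->producer++;
	return 0;
}

static int dequeue_one(RING *ring, RING_TYPE *item)
{
	uint32_t entries = ring->producer - ring->consumer;

	if (entries == 0)
		return -1;

	/* read barrier would go here on a real shared ring */

	*item = ring->desc[ring->consumer & (RING_SIZE - 1)];
	ring->consumer++;
	return 0;
}
```

Because producer and consumer are free-running u32 counters, the masked index wraps naturally and the full/empty tests stay correct across counter overflow.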
include/uapi/linux/if_xdp.h (+5 -7)
···
 struct xdp_umem_reg {
 	__u64 addr; /* Start of packet data area */
 	__u64 len; /* Length of packet data area */
-	__u32 frame_size; /* Frame size */
-	__u32 frame_headroom; /* Frame head room */
+	__u32 chunk_size;
+	__u32 headroom;
 };
 
 struct xdp_statistics {
···
 
 /* Rx/Tx descriptor */
 struct xdp_desc {
-	__u32 idx;
+	__u64 addr;
 	__u32 len;
-	__u16 offset;
-	__u8 flags;
-	__u8 padding[5];
+	__u32 options;
 };
 
-/* UMEM descriptor is __u32 */
+/* UMEM descriptor is __u64 */
 
 #endif /* _LINUX_IF_XDP_H */
net/xdp/xdp_umem.c (+15 -18)
···
 
 #include "xdp_umem.h"
 
-#define XDP_UMEM_MIN_FRAME_SIZE 2048
+#define XDP_UMEM_MIN_CHUNK_SIZE 2048
 
 static void xdp_umem_unpin_pages(struct xdp_umem *umem)
 {
···
 
 static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
 {
-	u32 frame_size = mr->frame_size, frame_headroom = mr->frame_headroom;
+	u32 chunk_size = mr->chunk_size, headroom = mr->headroom;
+	unsigned int chunks, chunks_per_page;
 	u64 addr = mr->addr, size = mr->len;
-	unsigned int nframes, nfpp;
 	int size_chk, err;
 
-	if (frame_size < XDP_UMEM_MIN_FRAME_SIZE || frame_size > PAGE_SIZE) {
+	if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) {
 		/* Strictly speaking we could support this, if:
 		 * - huge pages, or*
 		 * - using an IOMMU, or
···
 		return -EINVAL;
 	}
 
-	if (!is_power_of_2(frame_size))
+	if (!is_power_of_2(chunk_size))
 		return -EINVAL;
 
 	if (!PAGE_ALIGNED(addr)) {
···
 	if ((addr + size) < addr)
 		return -EINVAL;
 
-	nframes = (unsigned int)div_u64(size, frame_size);
-	if (nframes == 0 || nframes > UINT_MAX)
+	chunks = (unsigned int)div_u64(size, chunk_size);
+	if (chunks == 0)
 		return -EINVAL;
 
-	nfpp = PAGE_SIZE / frame_size;
-	if (nframes < nfpp || nframes % nfpp)
+	chunks_per_page = PAGE_SIZE / chunk_size;
+	if (chunks < chunks_per_page || chunks % chunks_per_page)
 		return -EINVAL;
 
-	frame_headroom = ALIGN(frame_headroom, 64);
+	headroom = ALIGN(headroom, 64);
 
-	size_chk = frame_size - frame_headroom - XDP_PACKET_HEADROOM;
+	size_chk = chunk_size - headroom - XDP_PACKET_HEADROOM;
 	if (size_chk < 0)
 		return -EINVAL;
 
 	umem->pid = get_task_pid(current, PIDTYPE_PID);
-	umem->size = (size_t)size;
 	umem->address = (unsigned long)addr;
-	umem->props.frame_size = frame_size;
-	umem->props.nframes = nframes;
-	umem->frame_headroom = frame_headroom;
+	umem->props.chunk_mask = ~((u64)chunk_size - 1);
+	umem->props.size = size;
+	umem->headroom = headroom;
+	umem->chunk_size_nohr = chunk_size - headroom;
 	umem->npgs = size / PAGE_SIZE;
 	umem->pgs = NULL;
 	umem->user = NULL;
 
-	umem->frame_size_log2 = ilog2(frame_size);
-	umem->nfpp_mask = nfpp - 1;
-	umem->nfpplog2 = ilog2(nfpp);
 	refcount_set(&umem->users, 1);
 
 	err = xdp_umem_account_pages(umem);
net/xdp/xdp_umem.h (+6 -21)
···
 	struct xsk_queue *cq;
 	struct page **pgs;
 	struct xdp_umem_props props;
-	u32 npgs;
-	u32 frame_headroom;
-	u32 nfpp_mask;
-	u32 nfpplog2;
-	u32 frame_size_log2;
+	u32 headroom;
+	u32 chunk_size_nohr;
 	struct user_struct *user;
 	struct pid *pid;
 	unsigned long address;
-	size_t size;
 	refcount_t users;
 	struct work_struct work;
+	u32 npgs;
 };
 
-static inline char *xdp_umem_get_data(struct xdp_umem *umem, u32 idx)
+static inline char *xdp_umem_get_data(struct xdp_umem *umem, u64 addr)
 {
-	u64 pg, off;
-	char *data;
-
-	pg = idx >> umem->nfpplog2;
-	off = (idx & umem->nfpp_mask) << umem->frame_size_log2;
-
-	data = page_address(umem->pgs[pg]);
-	return data + off;
-}
-
-static inline char *xdp_umem_get_data_with_headroom(struct xdp_umem *umem,
-						    u32 idx)
-{
-	return xdp_umem_get_data(umem, idx) + umem->frame_headroom;
+	return page_address(umem->pgs[addr >> PAGE_SHIFT]) +
+		(addr & (PAGE_SIZE - 1));
 }
 
 bool xdp_umem_validate_queues(struct xdp_umem *umem);
net/xdp/xdp_umem_props.h (+2 -2)
···
 #define XDP_UMEM_PROPS_H_
 
 struct xdp_umem_props {
-	u32 frame_size;
-	u32 nframes;
+	u64 chunk_mask;
+	u64 size;
 };
 
 #endif /* XDP_UMEM_PROPS_H_ */
net/xdp/xsk.c (+17 -13)
···
 
 static int __xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
 {
-	u32 id, len = xdp->data_end - xdp->data;
+	u32 len = xdp->data_end - xdp->data;
 	void *buffer;
+	u64 addr;
 	int err;
 
 	if (xs->dev != xdp->rxq->dev || xs->queue_id != xdp->rxq->queue_index)
 		return -EINVAL;
 
-	if (!xskq_peek_id(xs->umem->fq, &id)) {
+	if (!xskq_peek_addr(xs->umem->fq, &addr) ||
+	    len > xs->umem->chunk_size_nohr) {
 		xs->rx_dropped++;
 		return -ENOSPC;
 	}
 
-	buffer = xdp_umem_get_data_with_headroom(xs->umem, id);
+	addr += xs->umem->headroom;
+
+	buffer = xdp_umem_get_data(xs->umem, addr);
 	memcpy(buffer, xdp->data, len);
-	err = xskq_produce_batch_desc(xs->rx, id, len,
-				      xs->umem->frame_headroom);
+	err = xskq_produce_batch_desc(xs->rx, addr, len);
 	if (!err)
-		xskq_discard_id(xs->umem->fq);
+		xskq_discard_addr(xs->umem->fq);
 	else
 		xs->rx_dropped++;
 
···
 
 static void xsk_destruct_skb(struct sk_buff *skb)
 {
-	u32 id = (u32)(long)skb_shinfo(skb)->destructor_arg;
+	u64 addr = (u64)(long)skb_shinfo(skb)->destructor_arg;
 	struct xdp_sock *xs = xdp_sk(skb->sk);
 
-	WARN_ON_ONCE(xskq_produce_id(xs->umem->cq, id));
+	WARN_ON_ONCE(xskq_produce_addr(xs->umem->cq, addr));
 
 	sock_wfree(skb);
 }
···
 
 	while (xskq_peek_desc(xs->tx, &desc)) {
 		char *buffer;
-		u32 id, len;
+		u64 addr;
+		u32 len;
 
 		if (max_batch-- == 0) {
 			err = -EAGAIN;
 			goto out;
 		}
 
-		if (xskq_reserve_id(xs->umem->cq)) {
+		if (xskq_reserve_addr(xs->umem->cq)) {
 			err = -EAGAIN;
 			goto out;
 		}
···
 		}
 
 		skb_put(skb, len);
-		id = desc.idx;
-		buffer = xdp_umem_get_data(xs->umem, id) + desc.offset;
+		addr = desc.addr;
+		buffer = xdp_umem_get_data(xs->umem, addr);
 		err = skb_store_bits(skb, 0, buffer, len);
 		if (unlikely(err)) {
 			kfree_skb(skb);
···
 		skb->dev = xs->dev;
 		skb->priority = sk->sk_priority;
 		skb->mark = sk->sk_mark;
-		skb_shinfo(skb)->destructor_arg = (void *)(long)id;
+		skb_shinfo(skb)->destructor_arg = (void *)(long)addr;
 		skb->destructor = xsk_destruct_skb;
 
 		err = dev_direct_xmit(skb, xs->queue_id);
net/xdp/xsk_queue.c (+1 -1)
···
 
 static u32 xskq_umem_get_ring_size(struct xsk_queue *q)
 {
-	return sizeof(struct xdp_umem_ring) + q->nentries * sizeof(u32);
+	return sizeof(struct xdp_umem_ring) + q->nentries * sizeof(u64);
 }
 
 static u32 xskq_rxtx_get_ring_size(struct xsk_queue *q)
net/xdp/xsk_queue.h (+19 -24)
···
 /* Used for the fill and completion queues for buffers */
 struct xdp_umem_ring {
 	struct xdp_ring ptrs;
-	u32 desc[0] ____cacheline_aligned_in_smp;
+	u64 desc[0] ____cacheline_aligned_in_smp;
 };
 
 struct xsk_queue {
···
 
 /* UMEM queue */
 
-static inline bool xskq_is_valid_id(struct xsk_queue *q, u32 idx)
+static inline bool xskq_is_valid_addr(struct xsk_queue *q, u64 addr)
 {
-	if (unlikely(idx >= q->umem_props.nframes)) {
+	if (addr >= q->umem_props.size) {
 		q->invalid_descs++;
 		return false;
 	}
+
 	return true;
 }
 
-static inline u32 *xskq_validate_id(struct xsk_queue *q, u32 *id)
+static inline u64 *xskq_validate_addr(struct xsk_queue *q, u64 *addr)
 {
 	while (q->cons_tail != q->cons_head) {
 		struct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring;
 		unsigned int idx = q->cons_tail & q->ring_mask;
 
-		*id = READ_ONCE(ring->desc[idx]);
-		if (xskq_is_valid_id(q, *id))
-			return id;
+		*addr = READ_ONCE(ring->desc[idx]) & q->umem_props.chunk_mask;
+		if (xskq_is_valid_addr(q, *addr))
+			return addr;
 
 		q->cons_tail++;
 	}
···
 	return NULL;
 }
 
-static inline u32 *xskq_peek_id(struct xsk_queue *q, u32 *id)
+static inline u64 *xskq_peek_addr(struct xsk_queue *q, u64 *addr)
 {
 	if (q->cons_tail == q->cons_head) {
 		WRITE_ONCE(q->ring->consumer, q->cons_tail);
···
 		smp_rmb();
 	}
 
-	return xskq_validate_id(q, id);
+	return xskq_validate_addr(q, addr);
 }
 
-static inline void xskq_discard_id(struct xsk_queue *q)
+static inline void xskq_discard_addr(struct xsk_queue *q)
 {
 	q->cons_tail++;
 }
 
-static inline int xskq_produce_id(struct xsk_queue *q, u32 id)
+static inline int xskq_produce_addr(struct xsk_queue *q, u64 addr)
 {
 	struct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring;
 
-	ring->desc[q->prod_tail++ & q->ring_mask] = id;
+	ring->desc[q->prod_tail++ & q->ring_mask] = addr;
 
 	/* Order producer and data */
 	smp_wmb();
···
 	return 0;
 }
 
-static inline int xskq_reserve_id(struct xsk_queue *q)
+static inline int xskq_reserve_addr(struct xsk_queue *q)
 {
 	if (xskq_nb_free(q, q->prod_head, 1) == 0)
 		return -ENOSPC;
···
 
 static inline bool xskq_is_valid_desc(struct xsk_queue *q, struct xdp_desc *d)
 {
-	u32 buff_len;
-
-	if (unlikely(d->idx >= q->umem_props.nframes)) {
-		q->invalid_descs++;
+	if (!xskq_is_valid_addr(q, d->addr))
 		return false;
-	}
 
-	buff_len = q->umem_props.frame_size;
-	if (unlikely(d->len > buff_len || d->len == 0 ||
-		     d->offset > buff_len || d->offset + d->len > buff_len)) {
+	if (((d->addr + d->len) & q->umem_props.chunk_mask) !=
+	    (d->addr & q->umem_props.chunk_mask)) {
 		q->invalid_descs++;
 		return false;
 	}
···
 }
 
 static inline int xskq_produce_batch_desc(struct xsk_queue *q,
-					  u32 id, u32 len, u16 offset)
+					  u64 addr, u32 len)
 {
 	struct xdp_rxtx_ring *ring = (struct xdp_rxtx_ring *)q->ring;
 	unsigned int idx;
···
 		return -ENOSPC;
 
 	idx = (q->prod_head++) & q->ring_mask;
-	ring->desc[idx].idx = id;
+	ring->desc[idx].addr = addr;
 	ring->desc[idx].len = len;
-	ring->desc[idx].offset = offset;
 
 	return 0;
 }