Asynchronous Transfers/Transforms API

1 INTRODUCTION

2 GENEALOGY

3 USAGE
3.1 General format of the API
3.2 Supported operations
3.3 Descriptor management
3.4 When does the operation execute?
3.5 When does the operation complete?
3.6 Constraints
3.7 Example

4 DRIVER DEVELOPMENT NOTES
4.1 Conformance points
4.2 "My application needs finer control of hardware channels"

5 SOURCE

---

1 INTRODUCTION

The async_tx API provides methods for describing a chain of asynchronous
bulk memory transfers/transforms with support for inter-transactional
dependencies. It is implemented as a dmaengine client that smooths over
the details of different hardware offload engine implementations. Code
that is written to the API can optimize for asynchronous operation and
the API will fit the chain of operations to the available offload
resources.

2 GENEALOGY

The API was initially designed to offload the memory copy and
xor-parity-calculations of the md-raid5 driver using the offload engines
present in the Intel(R) Xscale series of I/O processors. It also built
on the 'dmaengine' layer developed for offloading memory copies in the
network stack using Intel(R) I/OAT engines. The following design
features surfaced as a result:
1/ implicit synchronous path: users of the API do not need to know if
   the platform they are running on has offload capabilities. The
   operation will be offloaded when an engine is available and carried
   out in software otherwise.
2/ cross channel dependency chains: the API allows a chain of dependent
   operations to be submitted, like xor->copy->xor in the raid5 case.
   The API automatically handles cases where the transition from one
   operation to another implies a hardware channel switch.
3/ dmaengine extensions to support multiple clients and operation types
   beyond 'memcpy'

3 USAGE

3.1 General format of the API:
struct dma_async_tx_descriptor *
async_<operation>(<op specific parameters>,
                  enum async_tx_flags flags,
                  struct dma_async_tx_descriptor *dependency,
                  dma_async_tx_callback callback_routine,
                  void *callback_parameter);

3.2 Supported operations:
memcpy       - memory copy between a source and a destination buffer
memset       - fill a destination buffer with a byte value
xor          - xor a series of source buffers and write the result to a
               destination buffer
xor_zero_sum - xor a series of source buffers and set a flag if the
               result is zero. The implementation attempts to prevent
               writes to memory

3.3 Descriptor management:
The return value is non-NULL and points to a 'descriptor' when the operation
has been queued to execute asynchronously. Descriptors are recycled
resources, under control of the offload engine driver, to be reused as
operations complete. When an application needs to submit a chain of
operations it must guarantee that the descriptor is not automatically recycled
before the dependency is submitted. This requires that all descriptors be
acknowledged by the application before the offload engine driver is allowed to
recycle (or free) the descriptor. A descriptor can be acked by one of the
following methods:
1/ setting the ASYNC_TX_ACK flag if no child operations are to be submitted
2/ setting the ASYNC_TX_DEP_ACK flag to acknowledge the parent
   descriptor of a new operation.
3/ calling async_tx_ack() on the descriptor.
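
As a brief illustration of these rules, here is a minimal sketch of a
two-copy chain. The function and buffer names are hypothetical; only the
flags and the async_memcpy signature come from this document. The first
descriptor is left unacknowledged so it can serve as a dependency; the
second operation acknowledges its parent and marks itself acked since no
child will follow:

/* sketch: a dependent pair of copies and the ack rules above */
void run_copy_chain(struct page *src, struct page *mid,
                    struct page *dst, size_t len)
{
        struct dma_async_tx_descriptor *tx;

        /* no ASYNC_TX_ACK: this descriptor must stay valid until the
         * dependent operation below has been submitted (method 1/)
         */
        tx = async_memcpy(mid, src, 0, 0, len, 0, NULL, NULL, NULL);

        /* ASYNC_TX_DEP_ACK acknowledges the parent descriptor
         * (method 2/); ASYNC_TX_ACK marks this descriptor for
         * recycling since no child will be submitted against it
         * (method 1/)
         */
        tx = async_memcpy(dst, mid, 0, 0, len,
                          ASYNC_TX_DEP_ACK | ASYNC_TX_ACK, tx,
                          NULL, NULL);

        /* kick the engines; see section 3.4 */
        async_tx_issue_pending_all();
}

A descriptor that was submitted without ASYNC_TX_ACK, and that ends up
with no children after all, can still be released after the fact with
method 3/, async_tx_ack(tx).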

3.4 When does the operation execute?
Operations do not immediately issue after return from the
async_<operation> call. Offload engine drivers batch operations to
improve performance by reducing the number of mmio cycles needed to
manage the channel. Once a driver-specific threshold is met the driver
automatically issues pending operations. An application can force this
event by calling async_tx_issue_pending_all(). This operates on all
channels since the application has no knowledge of channel to operation
mapping.

3.5 When does the operation complete?
There are two methods for an application to learn about the completion
of an operation.
1/ Call dma_wait_for_async_tx(). This call causes the CPU to spin while
   it polls for the completion of the operation. It handles dependency
   chains and issuing pending operations.
2/ Specify a completion callback. The callback routine runs in tasklet
   context if the offload engine driver supports interrupts, or it is
   called in application context if the operation is carried out
   synchronously in software. The callback can be set in the call to
   async_<operation>, or when the application needs to submit a chain of
   unknown length it can use the async_trigger_callback() routine to set
   a completion interrupt/callback at the end of the chain.

3.6 Constraints:
1/ Calls to async_<operation> are not permitted in IRQ context. Other
   contexts are permitted provided constraint #2 is not violated.
2/ Completion callback routines cannot submit new operations. This
   results in recursion in the synchronous case and spin_locks being
   acquired twice in the asynchronous case.

3.7 Example:
Perform a xor->copy->xor operation where each operation depends on the
result from the previous operation:

void complete_xor_copy_xor(void *param)
{
        printk("complete\n");
}

void run_xor_copy_xor(struct page **xor_srcs,
                      int xor_src_cnt,
                      struct page *xor_dest,
                      size_t xor_len,
                      struct page *copy_src,
                      struct page *copy_dest,
                      size_t copy_len)
{
        struct dma_async_tx_descriptor *tx;

        /* first xor: no dependency, leave the descriptor un-acked */
        tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,
                       ASYNC_TX_XOR_DROP_DST, NULL, NULL, NULL);
        /* copy: depends on the first xor, acks it */
        tx = async_memcpy(copy_dest, copy_src, 0, 0, copy_len,
                          ASYNC_TX_DEP_ACK, tx, NULL, NULL);
        /* final xor: ack the chain and request a completion callback */
        tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,
                       ASYNC_TX_XOR_DROP_DST | ASYNC_TX_DEP_ACK |
                       ASYNC_TX_ACK,
                       tx, complete_xor_copy_xor, NULL);

        async_tx_issue_pending_all();
}

See include/linux/async_tx.h for more information on the flags. See the
ops_run_* and ops_complete_* routines in drivers/md/raid5.c for more
implementation examples.

4 DRIVER DEVELOPMENT NOTES

4.1 Conformance points:
There are a few conformance points required in dmaengine drivers to
accommodate assumptions made by applications using the async_tx API:
1/ Completion callbacks are expected to happen in tasklet context
2/ dma_async_tx_descriptor fields are never manipulated in IRQ context
3/ Use async_tx_run_dependencies() in the descriptor clean up path to
   handle submission of dependent operations
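
As a sketch of points 1/ and 3/, the fragment below shows where a
driver's descriptor cleanup, run from its channel tasklet, would invoke
the dependency machinery. The my_adma_* names are hypothetical, and
async_tx_run_dependencies() is assumed here to take only the completed
descriptor:

/* hypothetical driver descriptor wrapping the generic one */
struct my_adma_desc {
        /* ... hardware descriptor fields ... */
        struct dma_async_tx_descriptor async_tx;
};

/* cleanup helper, called from the channel's tasklet
 * (conformance point 1/)
 */
static void my_adma_clean_descriptor(struct my_adma_desc *desc)
{
        struct dma_async_tx_descriptor *tx = &desc->async_tx;

        /* notify the client; we are in tasklet context, as
         * applications written to the async_tx API expect
         */
        if (tx->callback)
                tx->callback(tx->callback_param);

        /* hand any operations that were waiting on this descriptor
         * to their channels (conformance point 3/)
         */
        async_tx_run_dependencies(tx);
}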

4.2 "My application needs finer control of hardware channels"
This requirement seems to arise from cases where a DMA engine driver is
trying to support device-to-memory DMA. The dmaengine and async_tx
implementations were designed for offloading memory-to-memory
operations; however, there are some capabilities of the dmaengine layer
that can be used for platform-specific channel management.
Platform-specific constraints can be handled by registering the
application as a 'dma_client' and implementing a 'dma_event_callback' to
apply a filter to the available channels in the system. Before showing
how to implement a custom dma_event callback, some background on
dmaengine's client support is required.

The following routines in dmaengine support multiple clients requesting
use of a channel:
- dma_async_client_register(struct dma_client *client)
- dma_async_client_chan_request(struct dma_client *client)

dma_async_client_register takes a pointer to an initialized dma_client
structure. It expects that the 'event_callback' and 'cap_mask' fields
are already initialized.

dma_async_client_chan_request triggers dmaengine to notify the client of
all channels that satisfy the capability mask. It is up to the client's
event_callback routine to track how many channels the client needs and
how many it is currently using. The dma_event_callback routine returns a
dma_state_client code to let dmaengine know the status of the
allocation.

Below is an example of how to extend this functionality for
platform-specific filtering of the available channels beyond the
standard capability mask (a sketch of the corresponding registration
sequence follows at the end of this document):

static enum dma_state_client
my_dma_client_callback(struct dma_client *client,
                       struct dma_chan *chan, enum dma_state state)
{
        struct dma_device *dma_dev;
        struct my_platform_specific_dma *plat_dma_dev;

        dma_dev = chan->device;
        plat_dma_dev = container_of(dma_dev,
                                    struct my_platform_specific_dma,
                                    dma_dev);

        if (!plat_dma_dev->platform_specific_capability)
                return DMA_DUP;

        . . .
}

5 SOURCE
include/linux/dmaengine.h: core header file for DMA drivers and clients
drivers/dma/dmaengine.c: offload engine channel management routines
drivers/dma/: location for offload engine drivers
include/linux/async_tx.h: core header file for the async_tx api
crypto/async_tx/async_tx.c: async_tx interface to dmaengine and common code
crypto/async_tx/async_memcpy.c: copy offload
crypto/async_tx/async_memset.c: memory fill offload
crypto/async_tx/async_xor.c: xor and xor zero sum offload
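
For reference, here is the registration sequence promised in section
4.2. It is a minimal sketch, not a definitive recipe: the my_* names are
hypothetical, it reuses the my_dma_client_callback example from that
section, and it assumes the client only wants memcpy-capable channels:

static struct dma_client my_dma_client = {
        .event_callback = my_dma_client_callback,
};

static int __init my_client_init(void)
{
        /* ask only for channels that advertise memcpy capability;
         * other capability bits can be set in cap_mask as needed
         */
        dma_cap_set(DMA_MEMCPY, my_dma_client.cap_mask);

        /* register, then have dmaengine start offering channels;
         * my_dma_client_callback accepts or rejects each one
         */
        dma_async_client_register(&my_dma_client);
        dma_async_client_chan_request(&my_dma_client);
        return 0;
}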