1 Dynamic DMA mapping Guide
2 =========================
4 David S. Miller <davem@redhat.com>
5 Richard Henderson <rth@cygnus.com>
6 Jakub Jelinek <jakub@redhat.com>
8 This is a guide to device driver writers on how to use the DMA API
with example pseudo-code. For a concise description of the API, see
DMA-API.txt.
12 Most of the 64bit platforms have special hardware that translates bus
13 addresses (DMA addresses) into physical addresses. This is similar to
14 how page tables and/or a TLB translates virtual addresses to physical
15 addresses on a CPU. This is needed so that e.g. PCI devices can
16 access with a Single Address Cycle (32bit DMA address) any page in the
17 64bit physical address space. Previously in Linux those 64bit
18 platforms had to set artificial limits on the maximum RAM size in the
19 system, so that the virt_to_bus() static scheme works (the DMA address
20 translation tables were simply filled on bootup to map each bus
21 address to the physical page __pa(bus_to_virt())).
23 So that Linux can use the dynamic DMA mapping, it needs some help from the
24 drivers, namely it has to take into account that DMA addresses should be
mapped only for the time they are actually used and unmapped after the DMA
transfer.
The following API will work of course even on platforms where no such
hardware exists.
31 Note that the DMA API works with any bus independent of the underlying
32 microprocessor architecture. You should use the DMA API rather than
33 the bus specific DMA API (e.g. pci_dma_*).
35 First of all, you should make sure
37 #include <linux/dma-mapping.h>
39 is in your driver. This file will obtain for you the definition of the
40 dma_addr_t (which can hold any valid DMA address for the platform)
41 type which should be used everywhere you hold a DMA (bus) address
42 returned from the DMA mapping functions.
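For instance, a driver will typically keep both addresses of a mapped
buffer in its private state. A minimal sketch (the struct and field
names here are hypothetical):

struct mydev_state {
        void *tx_buf;           /* CPU (virtual) address of the buffer */
        size_t tx_len;          /* length of the mapped region */
        dma_addr_t tx_dma;      /* DMA (bus) address returned by the
                                 * DMA mapping functions */
};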
44 What memory is DMA'able?
46 The first piece of information you must know is what kernel memory can
47 be used with the DMA mapping facilities. There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.
51 If you acquired your memory via the page allocator
52 (i.e. __get_free_page*()) or the generic memory allocators
53 (i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
54 that memory using the addresses returned from those routines.
56 This means specifically that you may _not_ use the memory/addresses
57 returned from vmalloc() for DMA. It is possible to DMA to the
58 _underlying_ memory mapped into a vmalloc() area, but this requires
59 walking page tables to get the physical addresses, and then
60 translating each of those pages back to a kernel address using
61 something like __va(). [ EDIT: Update this when we integrate
62 Gerd Knorr's generic code which does this. ]
64 This rule also means that you may use neither kernel image addresses
65 (items in data/text/bss segments), nor module image addresses, nor
66 stack addresses for DMA. These could all be mapped somewhere entirely
67 different than the rest of physical memory. Even if those classes of
68 memory could physically work with DMA, you'd need to ensure the I/O
69 buffers were cacheline-aligned. Without that, you'd see cacheline
70 sharing problems (data corruption) on CPUs with DMA-incoherent caches.
71 (The CPU could write to one word, DMA would write to a different one
72 in the same cache line, and one of them could be overwritten.)
74 Also, this means that you cannot take the return of a kmap()
75 call and DMA to/from that. This is similar to vmalloc().
77 What about block I/O and networking buffers? The block I/O and
78 networking subsystems make sure that the buffers they use are valid
79 for you to DMA from/to.
81 DMA addressing limitations
83 Does your device have any DMA addressing limitations? For example, is
84 your device only capable of driving the low order 24-bits of address?
85 If so, you need to inform the kernel of this fact.
87 By default, the kernel assumes that your device can address the full
88 32-bits. For a 64-bit capable device, this needs to be increased.
89 And for a device with limitations, as discussed in the previous
90 paragraph, it needs to be decreased.
92 Special note about PCI: PCI-X specification requires PCI-X devices to
93 support 64-bit addressing (DAC) for all transactions. And at least
94 one platform (SGI SN2) requires 64-bit consistent allocations to
95 operate correctly when the IO bus is in PCI-X mode.
97 For correct operation, you must interrogate the kernel in your device
98 probe routine to see if the DMA controller on the machine can properly
99 support the DMA addressing limitation your device has. It is good
100 style to do this even if your device holds the default setting,
because this shows that you did think about these issues wrt. your
device.
104 The query is performed via a call to dma_set_mask_and_coherent():
106 int dma_set_mask_and_coherent(struct device *dev, u64 mask);
108 which will query the mask for both streaming and coherent APIs together.
109 If you have some special requirements, then the following two separate
110 queries can be used instead:
The query for streaming mappings is performed via a call to
dma_set_mask():
115 int dma_set_mask(struct device *dev, u64 mask);
117 The query for consistent allocations is performed via a call
118 to dma_set_coherent_mask():
120 int dma_set_coherent_mask(struct device *dev, u64 mask);
122 Here, dev is a pointer to the device struct of your device, and mask
123 is a bit mask describing which bits of an address your device
124 supports. It returns zero if your card can perform DMA properly on
125 the machine given the address mask you provided. In general, the
126 device struct of your device is embedded in the bus specific device
127 struct of your device. For example, a pointer to the device struct of
128 your PCI device is pdev->dev (pdev is a pointer to the PCI device
129 struct of your device).
131 If it returns non-zero, your device cannot perform DMA properly on
132 this platform, and attempting to do so will result in undefined
133 behavior. You must either use a different mask, or not use DMA.
135 This means that in the failure case, you have three options:
137 1) Use another DMA mask, if possible (see below).
138 2) Use some non-DMA mode for data transfer, if possible.
139 3) Ignore this device and do not initialize it.
141 It is recommended that your driver print a kernel KERN_WARNING message
142 when you end up performing either #2 or #3. In this manner, if a user
143 of your driver reports that performance is bad or that the device is not
even detected, you can ask them for the kernel messages to find out
exactly why.
147 The standard 32-bit addressing device would do something like this:
if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
        printk(KERN_WARNING
               "mydev: No suitable DMA available.\n");
        goto ignore_this_device;
}
155 Another common scenario is a 64-bit capable device. The approach here
156 is to try for 64-bit addressing, but back down to a 32-bit mask that
157 should not fail. The kernel may fail the 64-bit mask not because the
158 platform is not capable of 64-bit addressing. Rather, it may fail in
159 this case simply because 32-bit addressing is done more efficiently
160 than 64-bit addressing. For example, Sparc64 PCI SAC addressing is
161 more efficient than DAC addressing.
163 Here is how you would handle a 64-bit capable device which can drive
164 all 64-bits when accessing streaming DMA:
int using_dac;

if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
        using_dac = 1;
} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
        using_dac = 0;
} else {
        printk(KERN_WARNING
               "mydev: No suitable DMA available.\n");
        goto ignore_this_device;
}
178 If a card is capable of using 64-bit consistent allocations as well,
179 the case would look like this:
int using_dac, consistent_using_dac;

if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
        using_dac = 1;
        consistent_using_dac = 1;
} else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
        using_dac = 0;
        consistent_using_dac = 0;
} else {
        printk(KERN_WARNING
               "mydev: No suitable DMA available.\n");
        goto ignore_this_device;
}
The coherent mask will always be able to set the same or a
smaller mask as the streaming mask. However, for the rare case that a
device driver only uses consistent allocations, one would have to
check the return value from dma_set_coherent_mask().
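For instance, a driver that only ever uses consistent allocations
might do something like this (a minimal sketch; the mask width and
message are illustrative):

if (dma_set_coherent_mask(dev, DMA_BIT_MASK(64))) {
        printk(KERN_WARNING
               "mydev: No suitable coherent DMA available.\n");
        goto ignore_this_device;
}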
200 Finally, if your device can only drive the low 24-bits of
201 address you might do something like:
if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
        printk(KERN_WARNING
               "mydev: 24-bit DMA addressing not available.\n");
        goto ignore_this_device;
}
209 When dma_set_mask() or dma_set_mask_and_coherent() is successful, and
210 returns zero, the kernel saves away this mask you have provided. The
211 kernel will use this information later when you make DMA mappings.
213 There is a case which we are aware of at this time, which is worth
214 mentioning in this documentation. If your device supports multiple
215 functions (for example a sound card provides playback and record
216 functions) and the various different functions have _different_
217 DMA addressing limitations, you may wish to probe each mask and
218 only provide the functionality which the machine can handle. It
is important that the last call to dma_set_mask() be for the most
specific mask.
222 Here is pseudo-code showing how this might be done:
224 #define PLAYBACK_ADDRESS_BITS DMA_BIT_MASK(32)
225 #define RECORD_ADDRESS_BITS DMA_BIT_MASK(24)
struct my_sound_card *card;
struct device *dev;

...
if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
        card->playback_enabled = 1;
} else {
        card->playback_enabled = 0;
        printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
               card->name);
}
if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
        card->record_enabled = 1;
} else {
        card->record_enabled = 0;
        printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
               card->name);
}
246 A sound card was used as an example here because this genre of PCI
247 devices seems to be littered with ISA chips given a PCI front end,
248 and thus retaining the 16MB DMA addressing limitations of ISA.
250 Types of DMA mappings
252 There are two types of DMA mappings:
254 - Consistent DMA mappings which are usually mapped at driver
255 initialization, unmapped at the end and for which the hardware should
256 guarantee that the device and the CPU can access the data
257 in parallel and will see updates made by each other without any
258 explicit software flushing.
260 Think of "consistent" as "synchronous" or "coherent".
262 The current default is to return consistent memory in the low 32
263 bits of the bus space. However, for future compatibility you should
set the consistent mask even if this default is fine for your
driver.
267 Good examples of what to use consistent mappings for are:
269 - Network card DMA ring descriptors.
270 - SCSI adapter mailbox command data structures.
- Device firmware microcode executed out of main memory.
274 The invariant these examples all require is that any CPU store
275 to memory is immediately visible to the device, and vice
276 versa. Consistent mappings guarantee this.
278 IMPORTANT: Consistent DMA memory does not preclude the usage of
279 proper memory barriers. The CPU may reorder stores to
280 consistent memory just as it may normal memory. Example:
281 if it is important for the device to see the first word
of a descriptor updated before the second, you must do something like:
desc->word0 = address;
wmb();
desc->word1 = DESC_VALID;
289 in order to get correct behavior on all platforms.
291 Also, on some platforms your driver may need to flush CPU write
292 buffers in much the same way as it needs to flush write buffers
found in PCI bridges (such as by reading a register's value
after writing it).
296 - Streaming DMA mappings which are usually mapped for one DMA
297 transfer, unmapped right after it (unless you use dma_sync_* below)
298 and for which hardware can optimize for sequential accesses.
Think of "streaming" as "asynchronous" or "outside the coherency
domain".
303 Good examples of what to use streaming mappings for are:
305 - Networking buffers transmitted/received by a device.
306 - Filesystem buffers written/read by a SCSI device.
308 The interfaces for using this type of mapping were designed in
309 such a way that an implementation can make whatever performance
310 optimizations the hardware allows. To this end, when using
311 such mappings you must be explicit about what you want to happen.
313 Neither type of DMA mapping has alignment restrictions that come from
314 the underlying bus, although some devices may have such restrictions.
315 Also, systems with caches that aren't DMA-coherent will work better
316 when the underlying buffers don't share cache lines with other data.
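Where the driver lays out such a buffer inside one of its own
structures, one way to avoid cache line sharing is the kernel's cache
alignment annotation. A sketch (the struct and its fields are
hypothetical):

#include <linux/cache.h>

struct mydev_state {
        spinlock_t lock;        /* CPU-only data */
        /* Keep the DMA buffer on its own cache line(s), so CPU
         * stores to 'lock' never dirty a line the device is
         * DMA'ing into. */
        unsigned char rx_buf[256] ____cacheline_aligned;
};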
319 Using Consistent DMA mappings.
To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do:
324 dma_addr_t dma_handle;
326 cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);
where dev is a struct device *. This may be called in interrupt
329 context with the GFP_ATOMIC flag.
331 Size is the length of the region you want to allocate, in bytes.
333 This routine will allocate RAM for that region, so it acts similarly to
334 __get_free_pages (but takes size instead of a page order). If your
335 driver needs regions sized smaller than a page, you may prefer using
336 the dma_pool interface, described below.
338 The consistent DMA mapping interfaces, for non-NULL dev, will by
339 default return a DMA address which is 32-bit addressable. Even if the
340 device indicates (via DMA mask) that it may address the upper 32-bits,
341 consistent allocation will only return > 32-bit addresses for DMA if
342 the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask(). This is true of the dma_pool interface as
well.
346 dma_alloc_coherent returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.
350 The cpu return address and the DMA bus master address are both
351 guaranteed to be aligned to the smallest PAGE_SIZE order which
352 is greater than or equal to the requested size. This invariant
353 exists (for example) to guarantee that if you allocate a chunk
354 which is smaller than or equal to 64 kilobytes, the extent of the
355 buffer you receive will not cross a 64K boundary.
357 To unmap and free such a DMA region, you call:
359 dma_free_coherent(dev, size, cpu_addr, dma_handle);
361 where dev, size are the same as in the above call and cpu_addr and
362 dma_handle are the values dma_alloc_coherent returned to you.
363 This function may not be called in interrupt context.
365 If your driver needs lots of smaller memory regions, you can write
366 custom code to subdivide pages returned by dma_alloc_coherent,
367 or you can use the dma_pool API to do that. A dma_pool is like
368 a kmem_cache, but it uses dma_alloc_coherent not __get_free_pages.
369 Also, it understands common hardware constraints for alignment,
370 like queue heads needing to be aligned on N byte boundaries.
372 Create a dma_pool like this:
374 struct dma_pool *pool;
pool = dma_pool_create(name, dev, size, align, boundary);
378 The "name" is for diagnostics (like a kmem_cache name); dev and size
379 are as above. The device's hardware alignment requirement for this
380 type of data is "align" (which is expressed in bytes, and must be a
power of two). If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
use dma_alloc_coherent() directly instead).
386 Allocate memory from a dma pool like this:
388 cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);
flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise. Like dma_alloc_coherent,
392 this returns two values, cpu_addr and dma_handle.
394 Free memory that was allocated from a dma_pool like this:
396 dma_pool_free(pool, cpu_addr, dma_handle);
398 where pool is what you passed to dma_pool_alloc, and cpu_addr and
399 dma_handle are the values dma_pool_alloc returned. This function
400 may be called in interrupt context.
402 Destroy a dma_pool by calling:
404 dma_pool_destroy(pool);
406 Make sure you've called dma_pool_free for all memory allocated
407 from a pool before you destroy the pool. This function may not
408 be called in interrupt context.
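Putting the pool calls together, a minimal sketch of the whole
lifecycle might look like this (the name, block size and alignment are
illustrative):

struct dma_pool *pool;
dma_addr_t dma_handle;
void *cpu_addr;

/* 64-byte blocks, aligned to 16 bytes, no boundary restriction. */
pool = dma_pool_create("mydev_descs", dev, 64, 16, 0);
if (!pool)
        goto out_err;

cpu_addr = dma_pool_alloc(pool, GFP_KERNEL, &dma_handle);
if (!cpu_addr)
        goto out_destroy;

/* ... hand dma_handle to the device, use cpu_addr from the CPU ... */

dma_pool_free(pool, cpu_addr, dma_handle);
dma_pool_destroy(pool);         /* only after all allocations are freed */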
DMA Direction

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values:

 DMA_BIDIRECTIONAL
 DMA_TO_DEVICE
 DMA_FROM_DEVICE
 DMA_NONE

You should provide the exact DMA direction if you know it.
423 DMA_TO_DEVICE means "from main memory to the device"
424 DMA_FROM_DEVICE means "from the device to main memory"
It is the direction in which the data moves during the DMA
transfer.
You are _strongly_ encouraged to specify this as precisely
as you possibly can.
431 If you absolutely cannot know the direction of the DMA transfer,
432 specify DMA_BIDIRECTIONAL. It means that the DMA can go in
433 either direction. The platform guarantees that you may legally
434 specify this, and that it will work, but this may be at the
435 cost of performance for example.
The value DMA_NONE is to be used for debugging. You can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
440 direction tracking logic has failed to set things up properly.
442 Another advantage of specifying this value precisely (outside of
443 potential platform-specific optimizations of such) is for debugging.
444 Some platforms actually have a write permission boolean which DMA
445 mappings can be marked with, much like page protections in the user
446 program address space. Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.
450 Only streaming mappings specify a direction, consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.
454 The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
examining.
458 For Networking drivers, it's a rather simple affair. For transmit
459 packets, map/unmap them with the DMA_TO_DEVICE direction
460 specifier. For receive packets, just the opposite, map/unmap them
461 with the DMA_FROM_DEVICE direction specifier.
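A minimal sketch of that convention (skb->data and skb->len are the
usual socket buffer fields; the receive buffer variables are
hypothetical):

/* Transmit: the device reads the packet out of main memory. */
dma_addr_t tx_dma = dma_map_single(dev, skb->data, skb->len,
                                   DMA_TO_DEVICE);

/* Receive: the device writes incoming data into the buffer. */
dma_addr_t rx_dma = dma_map_single(dev, rx_buf, rx_buf_len,
                                   DMA_FROM_DEVICE);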
463 Using Streaming DMA mappings
465 The streaming DMA mapping routines can be called from interrupt
466 context. There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.
470 To map a single region, you do:
472 struct device *dev = &my_dev->dev;
473 dma_addr_t dma_handle;
474 void *addr = buffer->ptr;
475 size_t size = buffer->len;
477 dma_handle = dma_map_single(dev, addr, size, direction);
if (dma_mapping_error(dev, dma_handle)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling;
}

and to unmap it:
489 dma_unmap_single(dev, dma_handle, size, direction);
You should call dma_mapping_error() as dma_map_single() could fail and return
an error. Not all DMA implementations support the dma_mapping_error()
interface, but it is good practice to call it anyway: it invokes the generic
mapping error check, which ensures that the mapping code will work correctly
on all DMA implementations without any dependency on the specifics of the
underlying implementation. Using the returned address without checking for
errors could result in failures ranging from panics to silent data
corruption. A couple of examples of incorrect ways to check for errors, which
make assumptions about the underlying DMA implementation, follow; these
apply to dma_map_page() as well.
dma_addr_t dma_handle;

dma_handle = dma_map_single(dev, addr, size, direction);
if ((dma_handle & 0xffff != 0) || (dma_handle >= 0x1000000)) {
        goto map_error;
}
dma_addr_t dma_handle;

dma_handle = dma_map_single(dev, addr, size, direction);
if (dma_handle == DMA_ERROR_CODE) {
        goto map_error;
}
519 You should call dma_unmap_single when the DMA activity is finished, e.g.
520 from the interrupt which told you that the DMA transfer is done.
Using cpu pointers like this for single mappings has a disadvantage:
523 you cannot reference HIGHMEM memory in this way. Thus, there is a
524 map/unmap interface pair akin to dma_{map,unmap}_single. These
interfaces deal with page/offset pairs instead of cpu pointers.
Specifically:
528 struct device *dev = &my_dev->dev;
529 dma_addr_t dma_handle;
530 struct page *page = buffer->page;
531 unsigned long offset = buffer->offset;
532 size_t size = buffer->len;
534 dma_handle = dma_map_page(dev, page, offset, size, direction);
if (dma_mapping_error(dev, dma_handle)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling;
}

...
546 dma_unmap_page(dev, dma_handle, size, direction);
548 Here, "offset" means byte offset within the given page.
You should call dma_mapping_error() as dma_map_page() could fail and return
an error, as outlined under the dma_map_single() discussion.
553 You should call dma_unmap_page when the DMA activity is finished, e.g.
554 from the interrupt which told you that the DMA transfer is done.
556 With scatterlists, you map a region gathered from several regions by:
558 int i, count = dma_map_sg(dev, sglist, nents, direction);
559 struct scatterlist *sg;
for_each_sg(sglist, sg, count, i) {
        hw_address[i] = sg_dma_address(sg);
        hw_len[i] = sg_dma_len(sg);
}
566 where nents is the number of entries in the sglist.
568 The implementation is free to merge several consecutive sglist entries
569 into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
570 consecutive sglist entries can be merged into one provided the first one
571 ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a
very limited number of scatter-gather entries) and returns the actual number
574 of sg entries it mapped them to. On failure 0 is returned.
576 Then you should loop count times (note: this can be less than nents times)
577 and use sg_dma_address() and sg_dma_len() macros where you previously
578 accessed sg->address and sg->length as shown above.
580 To unmap a scatterlist, just call:
582 dma_unmap_sg(dev, sglist, nents, direction);
584 Again, make sure DMA activity has already finished.
586 PLEASE NOTE: The 'nents' argument to the dma_unmap_sg call must be
587 the _same_ one you passed into the dma_map_sg call,
it should _NOT_ be the 'count' value _returned_ from the
dma_map_sg call.
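A minimal sketch of the correct pairing (error handling elided except
for the failure check):

int count;

count = dma_map_sg(dev, sglist, nents, direction);
if (count == 0)
        goto map_error_handling;

/* ... program the device using the 'count' merged entries ... */

/* Unmap with the original 'nents', NOT the returned 'count'. */
dma_unmap_sg(dev, sglist, nents, direction);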
591 Every dma_map_{single,sg} call should have its dma_unmap_{single,sg}
counterpart, because the bus address space is a shared resource (although
in some ports the mapping is per bus, so fewer devices contend for the
same bus address space) and you could render the machine unusable by
consuming all bus addresses.
597 If you need to use the same streaming DMA region multiple times and touch
598 the data in between the DMA transfers, the buffer needs to be synced
properly in order for the cpu and device to see the most up-to-date and
600 correct copy of the DMA buffer.
602 So, firstly, just map it with dma_map_{single,sg}, and after each DMA
603 transfer call either:
dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or:

dma_sync_sg_for_cpu(dev, sglist, nents, direction);
613 Then, if you wish to let the device get at the DMA area again,
614 finish accessing the data with the cpu, and then before actually
615 giving the buffer to the hardware call either:
dma_sync_single_for_device(dev, dma_handle, size, direction);

or:

dma_sync_sg_for_device(dev, sglist, nents, direction);
625 After the last DMA transfer call one of the DMA unmap routines
626 dma_unmap_{single,sg}. If you don't touch the data from the first dma_map_*
call till dma_unmap_*, then you don't have to call the dma_sync_*
routines at all.
630 Here is pseudo code which shows a situation in which you would need
631 to use the dma_sync_*() interfaces.
my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
{
        dma_addr_t mapping;

        mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
        if (dma_mapping_error(cp->dev, mapping)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

        cp->rx_buf = buffer;
        cp->rx_len = len;
        cp->rx_dma = mapping;

        give_rx_buf_to_card(cp);
}
my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
{
        struct my_card *cp = devid;

        ...
        if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
                struct my_card_header *hp;

                /* Examine the header to see if we wish
                 * to accept the data.  But synchronize
                 * the DMA transfer with the CPU first
                 * so that we see updated contents.
                 */
                dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
                                        cp->rx_len,
                                        DMA_FROM_DEVICE);

                /* Now it is safe to examine the buffer. */
                hp = (struct my_card_header *) cp->rx_buf;
                if (header_is_ok(hp)) {
                        dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
                                         DMA_FROM_DEVICE);
                        pass_to_upper_layers(cp->rx_buf);
                        make_and_setup_new_rx_buf(cp);
                } else {
                        /* CPU should not write to
                         * DMA_FROM_DEVICE-mapped area,
                         * so dma_sync_single_for_device() is
                         * not needed here. It would be required
                         * for a DMA_BIDIRECTIONAL mapping if
                         * the memory was modified.
                         */
                        give_rx_buf_to_card(cp);
                }
        }
}
693 Drivers converted fully to this interface should not use virt_to_bus any
694 longer, nor should they use bus_to_virt. Some drivers have to be changed a
695 little bit, because there is no longer an equivalent to bus_to_virt in the
696 dynamic DMA mapping scheme - you have to always store the DMA addresses
697 returned by the dma_alloc_coherent, dma_pool_alloc, and dma_map_single
698 calls (dma_map_sg stores them in the scatterlist itself if the platform
699 supports dynamic DMA mapping in hardware) in your driver structures and/or
700 in the card registers.
702 All drivers should be using these interfaces with no exceptions. It
703 is planned to completely remove virt_to_bus() and bus_to_virt() as
704 they are entirely deprecated. Some ports already do not provide these
705 as it is impossible to correctly support them.
Handling Errors

DMA address space is limited on some architectures and an allocation
failure can be determined by:
712 - checking if dma_alloc_coherent returns NULL or dma_map_sg returns 0
714 - checking the returned dma_addr_t of dma_map_single and dma_map_page
715 by using dma_mapping_error():
717 dma_addr_t dma_handle;
719 dma_handle = dma_map_single(dev, addr, size, direction);
720 if (dma_mapping_error(dev, dma_handle)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling;
}
- unmap pages that are already mapped, when a mapping error occurs in the
  middle of a multiple-page mapping attempt. These examples are applicable
  to dma_map_page() as well.

Example 1:
734 dma_addr_t dma_handle1;
735 dma_addr_t dma_handle2;
737 dma_handle1 = dma_map_single(dev, addr, size, direction);
738 if (dma_mapping_error(dev, dma_handle1)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling1;
}
dma_handle2 = dma_map_single(dev, addr, size, direction);
if (dma_mapping_error(dev, dma_handle2)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling2;
}

...

map_error_handling2:
        dma_unmap_single(dev, dma_handle1, size, direction);
map_error_handling1:
762 Example 2: (if buffers are allocated in a loop, unmap all mapped buffers when
763 mapping error is detected in the middle)
dma_addr_t dma_addr;
dma_addr_t array[DMA_BUFFERS];
int save_index = 0;

for (i = 0; i < DMA_BUFFERS; i++) {
        ...
        dma_addr = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_addr)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }
        array[i] = dma_addr;
        save_index++;
}

...

map_error_handling:

for (i = 0; i < save_index; i++) {
        ...
        dma_unmap_single(dev, array[i], size, direction);
}
797 Networking drivers must call dev_kfree_skb to free the socket buffer
798 and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit). This means that the socket buffer is just dropped in
the failure case.
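A sketch of that transmit hook (the private struct and its fields are
hypothetical):

static netdev_tx_t mydev_start_xmit(struct sk_buff *skb,
                                    struct net_device *ndev)
{
        struct mydev_priv *priv = netdev_priv(ndev);
        dma_addr_t mapping;

        mapping = dma_map_single(priv->dev, skb->data, skb->len,
                                 DMA_TO_DEVICE);
        if (dma_mapping_error(priv->dev, mapping)) {
                dev_kfree_skb(skb);     /* drop the packet ... */
                return NETDEV_TX_OK;    /* ... but report success */
        }

        /* ... hand 'mapping' to the hardware ... */
        return NETDEV_TX_OK;
}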
802 SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
803 fails in the queuecommand hook. This means that the SCSI subsystem
804 passes the command to the driver again later.
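Sketched for the queuecommand hook, where mydev_map_command_dma() is a
hypothetical helper that performs the driver's mappings:

static int mydev_queuecommand(struct Scsi_Host *host,
                              struct scsi_cmnd *cmd)
{
        if (mydev_map_command_dma(cmd))
                /* The SCSI subsystem will retry the command later. */
                return SCSI_MLQUEUE_HOST_BUSY;

        /* ... issue the command to the hardware ... */
        return 0;
}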
806 Optimizing Unmap State Space Consumption
808 On many platforms, dma_unmap_{single,page}() is simply a nop.
809 Therefore, keeping track of the mapping address and length is a waste
810 of space. Instead of filling your drivers up with ifdefs and the like
811 to "work around" this (which would defeat the whole purpose of a
812 portable API) the following facilities are provided.
814 Actually, instead of describing the macros one by one, we'll
815 transform some example code.
1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before:

        struct ring_state {
                struct sk_buff *skb;
                dma_addr_t mapping;
                __u32 len;
        };

   after:

        struct ring_state {
                struct sk_buff *skb;
                DEFINE_DMA_UNMAP_ADDR(mapping);
                DEFINE_DMA_UNMAP_LEN(len);
        };
2) Use dma_unmap_{addr,len}_set to set these values.
   Example, before:

        ringp->mapping = FOO;
        ringp->len = BAR;

   after:

        dma_unmap_addr_set(ringp, mapping, FOO);
        dma_unmap_len_set(ringp, len, BAR);
3) Use dma_unmap_{addr,len} to access these values.
   Example, before:

        dma_unmap_single(dev, ringp->mapping, ringp->len,
                         DMA_FROM_DEVICE);

   after:

        dma_unmap_single(dev,
                         dma_unmap_addr(ringp, mapping),
                         dma_unmap_len(ringp, len),
                         DMA_FROM_DEVICE);
858 It really should be self-explanatory. We treat the ADDR and LEN
859 separately, because it is possible for an implementation to only
860 need the address in order to perform the unmap operation.
Platform Issues

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".
868 1) Struct scatterlist requirements.
870 Don't invent the architecture specific struct scatterlist; just use
871 <asm-generic/scatterlist.h>. You need to enable
872 CONFIG_NEED_SG_DMA_LENGTH if the architecture supports IOMMUs
873 (including software IOMMU).
2) ARCH_DMA_MINALIGN

Architectures must ensure that kmalloc'ed buffers are
DMA-safe. Drivers and subsystems depend on it. If an architecture
879 isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
880 the CPU cache is identical to data in main memory),
881 ARCH_DMA_MINALIGN must be set so that the memory allocator
882 makes sure that kmalloc'ed buffer doesn't share a cache line with
883 the others. See arch/arm/include/asm/cache.h as an example.
885 Note that ARCH_DMA_MINALIGN is about DMA memory alignment
886 constraints. You don't need to worry about the architecture data
alignment constraints (e.g. the alignment constraints about 64-bit
objects).
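As an illustration, an architecture with 64-byte cache lines might
define, in its asm/cache.h (a sketch modeled on the ARM example cited
above):

#define L1_CACHE_SHIFT          6
#define L1_CACHE_BYTES          (1 << L1_CACHE_SHIFT)

/* Force the minimum kmalloc() alignment up to a full cache line so
 * that a kmalloc'ed buffer never shares a cache line with other data. */
#define ARCH_DMA_MINALIGN       L1_CACHE_BYTES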
890 3) Supporting multiple types of IOMMUs
892 If your architecture needs to support multiple types of IOMMUs, you
can use include/asm-generic/dma-mapping-common.h. It's a
894 library to support the DMA API with multiple types of IOMMUs. Lots
895 of architectures (x86, powerpc, sh, alpha, ia64, microblaze and
896 sparc) use it. Choose one to see how it can be used. If you need to
897 support multiple types of IOMMUs in a single system, the example of
898 x86 or powerpc helps.
Closing

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people:
907 Russell King <rmk@arm.linux.org.uk>
908 Leo Dagum <dagum@barrel.engr.sgi.com>
909 Ralf Baechle <ralf@oss.sgi.com>
910 Grant Grundler <grundler@cup.hp.com>
911 Jay Estabrook <Jay.Estabrook@compaq.com>
912 Thomas Sailer <sailer@ife.ee.ethz.ch>
913 Andrea Arcangeli <andrea@suse.de>
914 Jens Axboe <jens.axboe@oracle.com>
915 David Mosberger-Tang <davidm@hpl.hp.com>