LIBARCHIVE_INTERNALS(3) manual page
== NAME ==
'''libarchive_internals'''
- description of libarchive internal interfaces
== OVERVIEW ==
The
'''libarchive'''
library provides a flexible interface for reading and writing
streaming archive files such as tar and cpio.
Internally, it follows a modular layered design that should
make it easy to add new archive and compression formats.
== GENERAL ARCHITECTURE ==
Externally, libarchive exposes most operations through an
opaque, object-style interface.
The
[[ManPageArchiveEntry3]]
objects store information about a single filesystem object.
The rest of the library provides facilities to write
[[ManPageArchiveEntry3]]
objects to archive files,
read them from archive files,
and write them to disk.
(There are plans to add a facility to read
[[ManPageArchiveEntry3]]
objects from disk as well.)

The read and write APIs each have four layers: a public API
layer, a format layer that understands the archive file format,
a compression layer, and an I/O layer.
The I/O layer is completely exposed to clients, who can replace
it entirely with their own functions.

In order to provide as much consistency as possible for clients,
some public functions are virtualized.
Eventually, it should be possible for clients to open
an archive or disk writer, and then use a single set of
code to select and write entries, regardless of the target.
== READ ARCHITECTURE ==
From the outside, clients use the
[[ManPageArchiveRead3]]
API to manipulate an
'''archive'''
object to read entries and bodies from an archive stream.
Internally, the
'''archive'''
object is cast to an
'''archive_read'''
object, which holds all read-specific data.
The API has four layers:
The lowest layer is the I/O layer.
This layer can be overridden by clients, but most clients use
the packaged I/O callbacks provided, for example, by
[[ManPageArchiveReadOpenMemory3]],
and
[[ManPageArchiveReadOpenFd3]].
The compression layer calls the I/O layer to
read bytes and decompresses them for the format layer.
The format layer unpacks a stream of uncompressed bytes and
creates
'''archive_entry'''
objects from the incoming data.
The API layer tracks overall state
(for example, it prevents clients from reading data before reading a header)
and invokes the format and compression layer operations
through registered function pointers.
In particular, the API layer drives the format-detection process:
when opening the archive, it reads an initial block of data
and offers it to each registered compression handler.
The one with the highest bid is initialized with the first block.
Similarly, the format handlers are polled to see which handler
is the best for each archive.
(Prior to 2.4.0, the format bidders were invoked for each
entry, but this design hindered error recovery.)
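The bidding pass described above can be sketched as follows. This is an illustrative simplification, not libarchive's actual code: the `bidder_t` type, the `best_bidder` helper, and the sample gzip/none bidders are invented names for this example.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of the bidding pass; bidder_t, best_bidder, and
 * the sample bid functions are illustrative, not libarchive's internals. */
typedef struct {
    const char *name;
    int (*bid)(const void *buf, size_t len);
} bidder_t;

/* A gzip-style bidder: two magic bytes checked = 16 bits of confidence. */
int bid_gzip(const void *buf, size_t len)
{
    const unsigned char *p = buf;
    if (len < 2)
        return 0;
    return (p[0] == 0x1f && p[1] == 0x8b) ? 16 : 0;
}

/* The "none" (uncompressed) handler bids low, so it wins only by default. */
int bid_none(const void *buf, size_t len)
{
    (void)buf; (void)len;
    return 1;
}

/* Offer the initial block to every registered handler and return the
 * index of the highest bidder, or -1 if every bid is zero. */
int best_bidder(const bidder_t *bidders, int n,
                const void *initial_block, size_t len)
{
    int best = -1, best_bid = 0;
    for (int i = 0; i < n; i++) {
        int b = bidders[i].bid(initial_block, len);
        if (b > best_bid) {
            best_bid = b;
            best = i;
        }
    }
    return best;
}
```

The winning handler would then be initialized with the same first block, as described above.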
=== I/O Layer and Client Callbacks ===
The read API goes to some lengths to be nice to clients.
As a result, there are few restrictions on the behavior of
the client callbacks.

The client read callback is expected to provide a block
of data.
A zero-length return does indicate end of file, but otherwise
blocks may be as small as one byte or as large as the entire file.
In particular, blocks may be of different sizes.

The client skip callback returns the number of bytes actually
skipped, which may be much smaller than the skip requested.
The only requirement is that the skip not be larger.
In particular, clients are allowed to return zero for any
skip that they don't want to handle.
The skip callback is never invoked with a negative value.
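A minimal skip callback obeying these rules might look like the sketch below. The `seek_source` type and the simplified signature are assumptions; the real callback also receives the archive object and a client-data pointer.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical in-memory source; the real client skip callback also
 * receives a struct archive * and a client-data pointer. */
typedef struct {
    size_t size;
    size_t offset;
} seek_source;

/* Skip up to `request` bytes.  The return value may be smaller than
 * the request (here: clamped to the bytes remaining) but never larger,
 * and the request is never negative. */
long client_skip(seek_source *s, long request)
{
    size_t remaining = s->size - s->offset;
    size_t n = (size_t)request;
    if (n > remaining)
        n = remaining;
    s->offset += n;
    return (long)n;
}
```

A client that cannot seek at all could simply return zero and let libarchive read and discard the data instead.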

Keep in mind that not all clients are reading from disk:
clients reading from networks may provide different-sized
blocks on every request and cannot skip at all;
advanced clients may use
[[mmap(2)|http://www.freebsd.org/cgi/man.cgi?query=mmap&sektion=2]]
to read the entire file into memory at once and return the
entire file to libarchive as a single block;
other clients may begin asynchronous I/O operations for the
next block on each request.
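For instance, an mmap-style client that already has the whole file in memory can hand libarchive the entire remainder as one block. A sketch, with a hypothetical `memory_source` type and a simplified signature:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical client state for a fully in-memory file (e.g. mmap'd). */
typedef struct {
    const unsigned char *data;
    size_t size;
    size_t offset;
} memory_source;

/* Read callback sketch: return everything that is left as one block.
 * A zero return signals end of file. */
size_t memory_read(memory_source *src, const void **buf)
{
    size_t avail = src->size - src->offset;
    *buf = src->data + src->offset;
    src->offset = src->size;
    return avail;
}
```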
=== Decompression Layer ===
The decompression layer not only handles decompression,
it also buffers data so that the format handlers see a
much nicer I/O model.
The decompression API is a two-stage peek/consume model.
A read_ahead request specifies a minimum read amount;
the decompression layer must provide a pointer to at least
that much data.
If more data is immediately available, it should return more:
the format layer handles bulk data reads by asking for a minimum
of one byte and then copying as much data as is available.

A subsequent call to the
'''consume'''()
function advances the read pointer.
Note that data returned from a
'''read_ahead'''()
call is guaranteed to remain in place until
it is consumed; in particular, repeated calls to
'''read_ahead'''()
should not cause the data to move.

Skip requests must always be handled exactly.
Decompression handlers that cannot seek forward should
not register a skip handler;
the API layer fills in a generic skip handler that reads and discards data.
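The peek/consume contract above can be sketched with a toy buffer standing in for the decompression layer. The names and the in-memory simplification are mine, not libarchive's.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-in for the decompression layer's buffer: the whole
 * (already decompressed) stream is assumed to be in memory. */
typedef struct {
    const unsigned char *buf;
    size_t len;
    size_t next;               /* current read pointer */
} peek_stream;

/* Peek: return a pointer to at least `min` bytes without advancing the
 * read pointer, reporting the full amount available in *avail.
 * Returns NULL if fewer than `min` bytes remain. */
const void *read_ahead(peek_stream *s, size_t min, size_t *avail)
{
    size_t remaining = s->len - s->next;
    if (remaining < min)
        return NULL;
    *avail = remaining;        /* offer more when more is available */
    return s->buf + s->next;
}

/* Consume: advance the read pointer past bytes the caller has used. */
void consume(peek_stream *s, size_t n)
{
    s->next += n;
}
```

A format handler doing a bulk copy would call `read_ahead(s, 1, &avail)`, copy up to `avail` bytes, then consume exactly what it copied.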

A decompression handler has a specific lifecycle:
<dl>
<dt>Registration/Configuration</dt><dd>
When the client invokes the public support function,
the decompression handler invokes the internal
'''__archive_read_register_compression'''()
function to provide bid and initialization functions.
This function returns
NULL
on error or else a pointer to a
'''struct''' decompressor_t.
This structure contains a
''config''
slot that can be used for storing any customization information.
</dd><dt>Bid</dt><dd>
The bid function is invoked with a pointer and size of a block of data.
The decompressor can access its config data
through the
''config''
slot of its
'''struct''' decompressor_t
object.
The bid function is otherwise stateless.
In particular, it must not perform any I/O operations.

The value returned by the bid function indicates its suitability
for handling this data stream.
A bid of zero will ensure that this decompressor is never invoked.
Return zero if magic number checks fail.
Otherwise, your initial implementation should return the number of bits
actually checked.
For example, if you verify two full bytes and three bits of another
byte, return 19.
Note that the initial block may be very short;
be careful to only inspect the data you are given.
(The current decompressors require two bytes for correct bidding.)
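As an illustration of the bit-counting convention, here is a bid function for an imaginary format whose files begin with 0xAB 0xCD followed by a byte whose top three bits are 101. The format and the function name are invented for this example.

```c
#include <assert.h>
#include <stddef.h>

/* Bid function for an invented format: magic bytes 0xAB 0xCD, then a
 * byte whose top three bits are 0b101.  Returns the number of bits
 * verified (8 + 8 + 3 = 19) or zero, and never reads past `len`. */
int example_bid(const void *buf, size_t len)
{
    const unsigned char *p = buf;
    if (len < 3)                   /* initial block may be very short */
        return 0;
    if (p[0] != 0xAB || p[1] != 0xCD)
        return 0;
    if ((p[2] & 0xE0) != 0xA0)     /* top three bits must be 101 */
        return 0;
    return 19;
}
```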
</dd><dt>Initialize</dt><dd>
The winning bidder will have its init function called.
This function should initialize the remaining slots of the
''struct'' decompressor_t
object pointed to by the
''decompressor''
element of the
'''archive_read'''
object.
In particular, it should allocate any working data it needs
and store a pointer to that data in the
''data''
slot of that structure.
The init function is called with the block of data that
was used for tasting.
At this point, the decompressor is responsible for all I/O
requests to the client callbacks.
The decompressor is free to read more data as and when
necessary.
</dd><dt>Satisfy I/O requests</dt><dd>
The format handler will invoke the
'''read_ahead'''(),
'''consume'''(),
and
'''skip'''()
functions as needed.
</dd><dt>Finish</dt><dd>
The finish method is called only once when the archive is closed.
It should release anything stored in the
''data''
and
''config''
slots of the
'''struct''' decompressor_t
structure.
It should not invoke the client close callback.
</dd>
</dl>
=== Format Layer ===
The read formats have a similar lifecycle to the decompression handlers:
<dl>
<dt>Registration</dt><dd>
Allocate your private data and initialize your pointers.
</dd><dt>Bid</dt><dd>
Formats bid by invoking the
'''read_ahead'''()
decompression method but not calling the
'''consume'''()
method.
This allows each bidder to look ahead in the input stream.
Bidders should not look further ahead than necessary, as long
look aheads put pressure on the decompression layer to buffer
lots of data.
Most formats only require a few hundred bytes of look ahead;
look aheads of a few kilobytes are reasonable.
(The ISO9660 reader sometimes looks ahead by 48k, which
should be considered an upper limit.)
</dd><dt>Read header</dt><dd>
The header read is usually the most complex part of any format.
There are a few strategies worth mentioning:
For formats such as tar or cpio, reading and parsing the header is
straightforward, since headers alternate with data.
For formats that store all header data at the beginning of the file,
the first header read request may have to read all headers into
memory and store that data, sorted by the location of the file
data.
Subsequent header read requests will skip forward to the
beginning of the file data and return the corresponding header.
</dd><dt>Read Data</dt><dd>
The read data interface supports sparse files; this requires that
each call return a block of data specifying the file offset and
size.
This may require you to carefully track the location so that you
can return accurate file offsets for each read.
Remember that the decompressor will return as much data as it has.
Generally, you will want to request one byte,
examine the return value to see how much data is available, and
possibly trim that to the amount you can use.
You should invoke consume for each block just before you return it.
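The strategy above can be sketched with a toy reader. Here `avail` plays the role of the read_ahead result, trimmed to the bytes left in the current entry, and the handler tracks the file offset itself. All names are illustrative, and a flat buffer stands in for the decompressor.

```c
#include <assert.h>
#include <stddef.h>

/* Toy sketch of a format handler's read-data step; a flat in-memory
 * buffer stands in for the decompression layer. */
struct entry_reader {
    const unsigned char *stream;  /* decompressed archive bytes */
    size_t stream_len;
    size_t next;                  /* read pointer into the stream */
    size_t entry_remaining;       /* file bytes left in this entry */
    long long offset;             /* file offset reported to the caller */
};

/* Return a block of entry data in *buf with its file offset in *offset;
 * the return value is the block size, zero at end of entry. */
size_t format_read_data(struct entry_reader *r,
                        const void **buf, long long *offset)
{
    size_t avail = r->stream_len - r->next;   /* "ask for one byte" */
    if (r->entry_remaining == 0 || avail == 0)
        return 0;
    if (avail > r->entry_remaining)
        avail = r->entry_remaining;           /* trim to usable amount */
    *buf = r->stream + r->next;
    *offset = r->offset;
    r->next += avail;                         /* consume before returning */
    r->entry_remaining -= avail;
    r->offset += (long long)avail;
    return avail;
}
```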
</dd><dt>Skip All Data</dt><dd>
The skip data call should skip over all file data and trailing padding.
This is called automatically by the API layer just before each
new header read.
It is also called in response to the client calling the public
'''archive_read_data_skip'''()
function.
</dd><dt>Cleanup</dt><dd>
On cleanup, the format should release all of its allocated memory.
</dd>
</dl>
== WRITE ARCHITECTURE ==
The write API has a similar set of four layers:
an API layer, a format layer, a compression layer, and an I/O layer.
The registration here is much simpler because only
one format and one compression can be registered at a time.
=== I/O Layer and Client Callbacks ===
XXX To be written XXX
=== Compression Layer ===
XXX To be written XXX
=== Format Layer ===
XXX To be written XXX
=== API Layer ===
XXX To be written XXX
== WRITE_DISK ARCHITECTURE ==
The write_disk API is intended to look just like the write API
to clients.
Since it does not handle multiple formats or compression, it
is not layered internally.
== GENERAL SERVICES ==
The
'''archive_read''',
'''archive_write''',
and
'''archive_write_disk'''
objects all contain an initial
'''archive'''
object which provides common support for a set of standard services.
(Recall that ANSI/ISO C90 guarantees that you can cast freely between
a pointer to a structure and a pointer to the first element of that
structure.)
This
'''archive'''
object has a magic value that indicates which API this object
is associated with,
slots for storing error information,
and function pointers for virtualized API functions.
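The cast-to-first-member technique can be sketched as follows. The names (`base_archive`, `reader`, `READER_MAGIC`) are hypothetical; libarchive's real structures carry much more state.

```c
#include <assert.h>

/* Sketch of the cast-to-first-member technique; all names here are
 * hypothetical, not libarchive's own. */
#define READER_MAGIC 0xb0c5c0deU

struct base_archive {
    unsigned int magic;           /* identifies the owning API */
    int error_number;             /* slot for error information */
};

struct reader {
    struct base_archive base;     /* must be the first member */
    int entries_read;             /* API-specific state follows */
};

/* Any API-specific object can be passed where a base pointer is
 * expected, because C guarantees the first member starts at offset 0. */
unsigned int archive_magic(const struct base_archive *a)
{
    return a->magic;
}
```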
== MISCELLANEOUS NOTES ==
Connecting existing archiving libraries into libarchive is generally
quite difficult.
In particular, many existing libraries strongly assume that you
are reading from a file; they seek forwards and backwards as necessary
to locate various pieces of information.
In contrast, libarchive never seeks backwards in its input, which
sometimes requires very different approaches.

For example, libarchive's ISO9660 support operates very differently
from most ISO9660 readers.
The libarchive support utilizes a work-queue design that
keeps a list of known entries sorted by their location in the input.
Whenever libarchive's ISO9660 implementation is asked for the next
header, it checks this list to find the next item on the disk.
Directories are parsed when they are encountered and new
items are added to the list.
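The work-queue idea can be sketched as a linked list kept sorted by disk location: newly discovered entries are inserted in order, and each header request pops the lowest location. All names here are illustrative, not libarchive's.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Illustrative work-queue entry: a file or directory known to exist,
 * keyed by its location on the ISO9660 image. */
struct work_item {
    long location;
    struct work_item *next;
};

/* Insert an item so the list stays sorted by location. */
void insert_sorted(struct work_item **head, long location)
{
    struct work_item *item = malloc(sizeof(*item));
    struct work_item **p = head;
    item->location = location;
    while (*p != NULL && (*p)->location < location)
        p = &(*p)->next;
    item->next = *p;
    *p = item;
}

/* Pop the next item on the disk (lowest location); -1 when empty. */
long pop_next(struct work_item **head)
{
    struct work_item *item = *head;
    long location;
    if (item == NULL)
        return -1;
    location = item->location;
    *head = item->next;
    free(item);
    return location;
}
```

Parsing a directory adds its children with `insert_sorted`; the "next header" request is just `pop_next`.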
This design relies heavily on the ISO9660 image being optimized so that
directories always occur earlier on the disk than the files they
describe.

Depending on the specific format, such approaches may not be possible.
The ZIP format specification, for example, allows archivers to store
key information only at the end of the file.
In theory, it is possible to create ZIP archives that cannot
be read without seeking.
Fortunately, such archives are very rare, and libarchive can read
most ZIP archives, though it cannot always extract as much information
as a dedicated ZIP program.
== SEE ALSO ==
[[ManPageArchiveEntry3]],
[[ManPageArchiveRead3]],
[[ManPageArchiveWrite3]],
[[ManPageArchiveWriteDisk3]],
[[ManPageLibarchive3]]
== HISTORY ==
The
'''libarchive'''
library first appeared in
FreeBSD 5.3.
== AUTHORS ==
The
'''libarchive'''
library was written by
Tim Kientzle <kientzle@acm.org>.