optimistically pulled out of the source, so that the stream can be
passed on to some other party.
+ If you find that you must often call `stream.unshift(chunk)` in your
+ programs, consider implementing a [Transform][] stream instead. (See API
+ for Stream Implementors, below.)
+
+ ```javascript
+ // Pull off a header delimited by \n\n
+ // use unshift() if we get too much
+ // Call the callback with (error, header, stream)
+ var StringDecoder = require('string_decoder').StringDecoder;
+ function parseHeader(stream, callback) {
+ stream.on('error', callback);
+ stream.on('readable', onReadable);
+ var decoder = new StringDecoder('utf8');
+ var header = '';
+ function onReadable() {
+ var chunk;
+ while (null !== (chunk = stream.read())) {
+ var str = decoder.write(chunk);
+ if (str.match(/\n\n/)) {
+ // found the header boundary
+ var split = str.split(/\n\n/);
+ header += split.shift();
+ var remaining = split.join('\n\n');
+ var buf = new Buffer(remaining, 'utf8');
+ if (buf.length)
+ stream.unshift(buf);
+ stream.removeListener('error', callback);
+ stream.removeListener('readable', onReadable);
+ // now the body of the message can be read from the stream.
+ callback(null, header, stream);
+ } else {
+ // still reading the header.
+ header += str;
+ }
+ }
+ }
+ }
+ ```
+
+ #### readable.wrap(stream)
+
+ * `stream` {Stream} An "old style" readable stream
+
+ Versions of Node prior to v0.10 had streams that did not implement the
+ entire Streams API as it is today. (See "Compatibility" below for
+ more information.)
+
+ If you are using an older Node library that emits `'data'` events and
+ has a `pause()` method that is advisory only, then you can use the
+ `wrap()` method to create a [Readable][] stream that uses the old stream
+ as its data source.
+
+ You will very rarely ever need to call this function, but it exists
+ as a convenience for interacting with old Node programs and libraries.
+
+ For example:
+
+ ```javascript
+ var OldReader = require('./old-api-module.js').OldReader;
+ var oreader = new OldReader();
+ var Readable = require('stream').Readable;
+ var myReader = new Readable().wrap(oreader);
+
+ myReader.on('readable', function() {
+ myReader.read(); // etc.
+ });
+ ```
+
+
+ ### Class: stream.Writable
+
+ <!--type=class-->
+
+ The Writable stream interface is an abstraction for a *destination*
+ that you are writing data *to*.
+
+ Examples of writable streams include:
+
+ * [http requests, on the client](http.html#http_class_http_clientrequest)
+ * [http responses, on the server](http.html#http_class_http_serverresponse)
+ * [fs write streams](fs.html#fs_class_fs_writestream)
+ * [zlib streams][]
+ * [crypto streams][]
+ * [tcp sockets][]
+ * [child process stdin](child_process.html#child_process_child_stdin)
+ * [process.stdout][], [process.stderr][]
+
+ #### writable.write(chunk, [encoding], [callback])
+
+ * `chunk` {String | Buffer} The data to write
+ * `encoding` {String} The encoding, if `chunk` is a String
+ * `callback` {Function} Callback for when this chunk of data is flushed
+ * Returns: {Boolean} True if the data was handled completely.
+
+ This method writes some data to the underlying system, and calls the
+ supplied callback once the data has been fully handled.
+
+ The return value indicates if you should continue writing right now.
+ If the data had to be buffered internally, then it will return
+ `false`. Otherwise, it will return `true`.
+
+ This return value is strictly advisory. You MAY continue to write,
+ even if it returns `false`. However, writes will be buffered in
+ memory, so it is best not to do this excessively. Instead, wait for
+ the `drain` event before writing more data.
+
+ #### Event: 'drain'
+
+ If a [`writable.write(chunk)`][] call returns false, then the `drain`
+ event will indicate when it is appropriate to begin writing more data
+ to the stream.
+
+ ```javascript
+ // Write the data to the supplied writable stream one million times.
+ // Be attentive to back-pressure.
+ function writeOneMillionTimes(writer, data, encoding, callback) {
+ var i = 1000000;
+ write();
+ function write() {
+ var ok = true;
+ do {
+ i -= 1;
+ if (i === 0) {
+ // last time!
+ writer.write(data, encoding, callback);
+ } else {
+ // see if we should continue, or wait
+ // don't pass the callback, because we're not done yet.
+ ok = writer.write(data, encoding);
+ }
+ } while (i > 0 && ok);
+ if (i > 0) {
+ // had to stop early!
+ // write some more once it drains
+ writer.once('drain', write);
+ }
+ }
+ }
+ ```
+
++#### writable.cork()
++
++Forces buffering of all writes.
++
++Buffered data will be flushed when either `.uncork()` or `.end()` is
++called.
++
++#### writable.uncork()
++
++Flushes all data buffered since the `.cork()` call.
++
+ #### writable.end([chunk], [encoding], [callback])
+
+ * `chunk` {String | Buffer} Optional data to write
+ * `encoding` {String} The encoding, if `chunk` is a String
+ * `callback` {Function} Optional callback for when the stream is finished
+
+ Call this method when no more data will be written to the stream. If
+ supplied, the callback is attached as a listener on the `finish` event.
+
+ Calling [`write()`][] after calling [`end()`][] will raise an error.
+
+ ```javascript
+ // write 'hello, ' and then end with 'world!'
+ http.createServer(function (req, res) {
+ res.write('hello, ');
+ res.end('world!');
+ // writing more now is not allowed!
+ });
+ ```
+
+ #### Event: 'finish'
+
+ When the [`end()`][] method has been called, and all data has been flushed
+ to the underlying system, this event is emitted.
+
+ ```javascript
+ var writer = getWritableStreamSomehow();
+ for (var i = 0; i < 100; i++) {
+ writer.write('hello, #' + i + '!\n');
+ }
+ writer.end('this is the end\n');
+ writer.on('finish', function() {
+ console.error('all writes are now complete.');
+ });
+ ```
+
+ #### Event: 'pipe'
+
+ * `src` {[Readable][] Stream} source stream that is piping to this writable
+
+ This is emitted whenever the `pipe()` method is called on a readable
+ stream, adding this writable to its set of destinations.
+
+ ```javascript
+ var writer = getWritableStreamSomehow();
+ var reader = getReadableStreamSomehow();
+ writer.on('pipe', function(src) {
+ console.error('something is piping into the writer');
+ assert.equal(src, reader);
+ });
+ reader.pipe(writer);
+ ```
+
+ #### Event: 'unpipe'
+
+ * `src` {[Readable][] Stream} The source stream that [unpiped][] this writable
+
+ This is emitted whenever the [`unpipe()`][] method is called on a
+ readable stream, removing this writable from its set of destinations.
+
+ ```javascript
+ var writer = getWritableStreamSomehow();
+ var reader = getReadableStreamSomehow();
+ writer.on('unpipe', function(src) {
+ console.error('something has stopped piping into the writer');
+ assert.equal(src, reader);
+ });
+ reader.pipe(writer);
+ reader.unpipe(writer);
+ ```
+
+ ### Class: stream.Duplex
+
+ Duplex streams are streams that implement both the [Readable][] and
+ [Writable][] interfaces. See above for usage.
+
+ Examples of Duplex streams include:
+
+ * [tcp sockets][]
+ * [zlib streams][]
+ * [crypto streams][]
+
+
+ ### Class: stream.Transform
+
+ Transform streams are [Duplex][] streams where the output is in some way
+ computed from the input. They implement both the [Readable][] and
+ [Writable][] interfaces. See above for usage.
+
+ Examples of Transform streams include:
+
+ * [zlib streams][]
+ * [crypto streams][]
+
+
+ ## API for Stream Implementors
+
+ <!--type=misc-->
+
+ To implement any sort of stream, the pattern is the same:
+
+ 1. Extend the appropriate parent class in your own subclass. (The
+ [`util.inherits`][] method is particularly helpful for this.)
+ 2. Call the appropriate parent class constructor in your constructor,
+ to be sure that the internal mechanisms are set up properly.
+ 3. Implement one or more specific methods, as detailed below.
+
+ The class to extend and the method(s) to implement depend on the sort
+ of stream class you are writing:
+
+ <table>
+ <thead>
+ <tr>
+ <th>
+ <p>Use-case</p>
+ </th>
+ <th>
+ <p>Class</p>
+ </th>
+ <th>
+ <p>Method(s) to implement</p>
+ </th>
+ </tr>
+ </thead>
+ <tr>
+ <td>
+ <p>Reading only</p>
+ </td>
+ <td>
+ <p>[Readable](#stream_class_stream_readable_1)</p>
+ </td>
+ <td>
+ <p><code>[_read][]</code></p>
+ </td>
+ </tr>
+ <tr>
+ <td>
+ <p>Writing only</p>
+ </td>
+ <td>
+ <p>[Writable](#stream_class_stream_writable_1)</p>
+ </td>
+ <td>
+ <p><code>[_write][]</code></p>
+ </td>
+ </tr>
+ <tr>
+ <td>
+ <p>Reading and writing</p>
+ </td>
+ <td>
+ <p>[Duplex](#stream_class_stream_duplex_1)</p>
+ </td>
+ <td>
+ <p><code>[_read][]</code>, <code>[_write][]</code></p>
+ </td>
+ </tr>
+ <tr>
+ <td>
+ <p>Operate on written data, then read the result</p>
+ </td>
+ <td>
+ <p>[Transform](#stream_class_stream_transform_1)</p>
+ </td>
+ <td>
+ <p><code>_transform</code>, <code>_flush</code></p>
+ </td>
+ </tr>
+ </table>
+
+ In your implementation code, it is very important to never call the
+ methods described in [API for Stream Consumers][] above. Otherwise, you
+ can potentially cause adverse side effects in programs that consume
+ your streaming interfaces.
+
+ ### Class: stream.Readable
+
+ <!--type=class-->
+
+ `stream.Readable` is an abstract class designed to be extended with an
+ underlying implementation of the [`_read(size)`][] method.
+
+ Please see above under [API for Stream Consumers][] for how to consume
+ streams in your programs. What follows is an explanation of how to
+ implement Readable streams in your programs.
+
+ #### Example: A Counting Stream
+
+ <!--type=example-->
+
+ This is a basic example of a Readable stream. It emits the numerals
+ from 1 to 1,000,000 in ascending order, and then ends.
+
+ ```javascript
+ var Readable = require('stream').Readable;
+ var util = require('util');
+ util.inherits(Counter, Readable);
+
+ function Counter(opt) {
+ Readable.call(this, opt);
+ this._max = 1000000;
+ this._index = 1;
+ }
+
+ Counter.prototype._read = function() {
+ var i = this._index++;
+ if (i > this._max)
+ this.push(null);
+ else {
+ var str = '' + i;
+ var buf = new Buffer(str, 'ascii');
+ this.push(buf);
+ }
+ };
+ ```
+
+ #### Example: SimpleProtocol v1 (Sub-optimal)
+
+ This is similar to the `parseHeader` function described above, but
+ implemented as a custom stream. Also, note that this implementation
+ does not convert the incoming data to a string.
+
+ However, this would be better implemented as a [Transform][] stream. See
+ below for a better implementation.
+
```javascript
// A parser for a simple data protocol.
// The "header" is a JSON object, followed by 2 \n characters, and
programs. However, you **are** expected to override this method in
your own extension classes.
- This function is completely optional to implement. In the most cases
- it is unnecessary. If implemented, it will be called with all the
- chunks that are buffered in the write queue.
-
- ### writable.write(chunk, [encoding], [callback])
-
- * `chunk` {Buffer | String} Data to be written
- * `encoding` {String} Optional. If `chunk` is a string, then encoding
- defaults to `'utf8'`
- * `callback` {Function} Optional. Called when this chunk is
- successfully written.
- * Returns {Boolean}
-
- Writes `chunk` to the stream. Returns `true` if the data has been
- flushed to the underlying resource. Returns `false` to indicate that
- the buffer is full, and the data will be sent out in the future. The
- `'drain'` event will indicate when the buffer is empty again.
-
- The specifics of when `write()` will return false, is determined by
- the `highWaterMark` option provided to the constructor.
-
- ### writable.cork()
-
- Forces buffering of all writes.
-
- Buffered data will be flushed either at `.uncork()` or at `.end()` call.
-
- ### writable.uncork()
-
- Flush all data, buffered since `.cork()` call.
-
- ### writable.end([chunk], [encoding], [callback])
-
- * `chunk` {Buffer | String} Optional final data to be written
- * `encoding` {String} Optional. If `chunk` is a string, then encoding
- defaults to `'utf8'`
- * `callback` {Function} Optional. Called when the final chunk is
- successfully written.
-
- Call this method to signal the end of the data being written to the
- stream.
-
- ### Event: 'drain'
-
- Emitted when the stream's write queue empties and it's safe to write
- without buffering again. Listen for it when `stream.write()` returns
- `false`.
-
- ### Event: 'error'
-
- Emitted if there was an error receiving data.
-
- ### Event: 'close'
-
- Emitted when the underlying resource (for example, the backing file
- descriptor) has been closed. Not all streams will emit this.
-
- ### Event: 'finish'
-
- When `end()` is called and there are no more chunks to write, this
- event is emitted.
-
- ### Event: 'pipe'
-
- * `source` {Readable Stream}
-
- Emitted when the stream is passed to a readable stream's pipe method.
-
- ### Event 'unpipe'
+### writable.\_writev(chunks, callback)
+
+* `chunks` {Array} The chunks to be written. Each chunk has the
+ following format: `{ chunk: ..., encoding: ... }`.
+* `callback` {Function} Call this function (optionally with an error
+ argument) when you are done processing the supplied chunks.
+
- * `source` {Readable Stream}
++Note: **This function MUST NOT be called directly.** It may be
++implemented by child classes, and called by the internal Writable
++class methods only.
+
- Emitted when a previously established `pipe()` is removed using the
- source Readable stream's `unpipe()` method.
++This function is completely optional to implement. In most cases it is
++unnecessary. If implemented, it will be called with all the chunks
++that are buffered in the write queue.
+
- ## Class: stream.Duplex
+ ### Class: stream.Duplex
<!--type=class-->