diff --git a/doc/api/buffer.markdown b/doc/api/buffer.markdown
index 92b7f2ba93b288..94f3e3d8e164f5 100644
--- a/doc/api/buffer.markdown
+++ b/doc/api/buffer.markdown
@@ -43,7 +43,7 @@ Creating a typed array from a `Buffer` works with the following caveats:
 
 2. The buffer's memory is interpreted as an array, not a byte array. That is,
    `new Uint32Array(new Buffer([1,2,3,4]))` creates a 4-element `Uint32Array`
-   with elements `[1,2,3,4]`, not an `Uint32Array` with a single element
+   with elements `[1,2,3,4]`, not a `Uint32Array` with a single element
    `[0x1020304]` or `[0x4030201]`.
 
 NOTE: Node.js v0.8 simply retained a reference to the buffer in `array.buffer`
@@ -67,6 +67,10 @@ Allocates a new buffer of `size` bytes.
 `size` must be less than 2,147,483,648 bytes (2 GB) on 64 bits architectures,
 otherwise a `RangeError` is thrown.
 
+Unlike `ArrayBuffers`, the underlying memory for buffers is not initialized. So
+the contents of a newly created `Buffer` are unknown. Use `buf.fill(0)` to
+initialize a buffer to zeroes.
+
 ### new Buffer(array)
 
 * `array` Array
diff --git a/doc/api/child_process.markdown b/doc/api/child_process.markdown
index 3d6e5f5fe2f048..4acd61bfa6a97c 100644
--- a/doc/api/child_process.markdown
+++ b/doc/api/child_process.markdown
@@ -279,7 +279,7 @@ Here is an example of sending a server:
       child.send('server', server);
     });
 
-And the child would the receive the server object as:
+And the child would then receive the server object as:
 
     process.on('message', function(m, server) {
       if (m === 'server') {
diff --git a/doc/api/cluster.markdown b/doc/api/cluster.markdown
index b7e76fcb8ffa0e..abf05bf9f40c82 100644
--- a/doc/api/cluster.markdown
+++ b/doc/api/cluster.markdown
@@ -121,7 +121,7 @@ values are `"rr"` and `"none"`.
 ## cluster.settings
 
 * {Object}
-  * `execArgv` {Array} list of string arguments passed to the io.js executable. 
+  * `execArgv` {Array} list of string arguments passed to the io.js executable.
    (Default=`process.execArgv`)
   * `exec` {String} file path to worker file. (Default=`process.argv[1]`)
   * `args` {Array} string arguments passed to worker.
@@ -613,7 +613,7 @@ It is not emitted in the worker.
 
 ### Event: 'disconnect'
 
-Similar to the `cluster.on('disconnect')` event, but specfic to this worker.
+Similar to the `cluster.on('disconnect')` event, but specific to this worker.
 
     cluster.fork().on('disconnect', function() {
      // Worker has disconnected
diff --git a/doc/api/dns.markdown b/doc/api/dns.markdown
index d8ed53e3fa0f79..7c9f419ce08ae7 100644
--- a/doc/api/dns.markdown
+++ b/doc/api/dns.markdown
@@ -85,7 +85,7 @@ All properties are optional. An example usage of options is shown below.
 ```
 
 The callback has arguments `(err, address, family)`. `address` is a string
-representation of a IP v4 or v6 address. `family` is either the integer 4 or 6
+representation of an IP v4 or v6 address. `family` is either the integer 4 or 6
 and denotes the family of `address` (not necessarily the value initially
 passed to `lookup`).
@@ -163,7 +163,7 @@ attribute (e.g. `[{'priority': 10, 'exchange': 'mx.example.com'},...]`).
 ## dns.resolveTxt(hostname, callback)
 
 The same as `dns.resolve()`, but only for text queries (`TXT` records).
-`addresses` is an 2-d array of the text records available for `hostname` (e.g.,
+`addresses` is a 2-d array of the text records available for `hostname` (e.g.,
 `[ ['v=spf1 ip4:0.0.0.0 ', '~all' ] ]`). Each sub-array contains TXT chunks of
 one record. Depending on the use case, these could be either joined together or
 treated separately.
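As a quick illustration of the `dns.resolveTxt()` wording above, here is a minimal sketch (not part of the patch) that joins the chunks of each record; the hostname `'example.org'` is a placeholder:

```javascript
var dns = require('dns');

dns.resolveTxt('example.org', function (err, records) {
  if (err) throw err;
  // `records` is the 2-d array described above: one sub-array per TXT record,
  // each sub-array holding the chunks of that record.
  var joined = records.map(function (chunks) {
    return chunks.join('');  // treat the chunks of one record as a single string
  });
  console.log(joined);
});
```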
diff --git a/doc/api/events.markdown b/doc/api/events.markdown
index fbc04a96239b0f..a51905ab4da110 100644
--- a/doc/api/events.markdown
+++ b/doc/api/events.markdown
@@ -122,7 +122,7 @@ Note that `emitter.setMaxListeners(n)` still has precedence over
 
 ### emitter.listeners(event)
 
-Returns an array of listeners for the specified event.
+Returns a copy of the array of listeners for the specified event.
 
     server.on('connection', function (stream) {
       console.log('someone connected!');
diff --git a/doc/api/fs.markdown b/doc/api/fs.markdown
index 5f96b10763fe45..ebc2046d76ac1e 100644
--- a/doc/api/fs.markdown
+++ b/doc/api/fs.markdown
@@ -801,6 +801,10 @@ on Unix systems, it never was.
 
 Returns a new ReadStream object (See `Readable Stream`).
 
+Be aware that, unlike the default value set for `highWaterMark` on a
+readable stream (16 kb), the stream returned by this method has a
+default value of 64 kb for the same parameter.
+
 `options` is an object or string with the following defaults:
 
     { flags: 'r',
@@ -823,6 +827,9 @@ there's no file descriptor leak.
 If `autoClose` is set to true (default behavior), on `error` or `end` the file
 descriptor will be closed automatically.
 
+`mode` sets the file mode (permission and sticky bits), but only if the
+file was created.
+
 An example to read the last 10 bytes of a file which is 100 bytes long:
 
     fs.createReadStream('sample.txt', {start: 90, end: 99});
@@ -847,14 +854,14 @@ Returns a new WriteStream object (See `Writable Stream`).
 `options` is an object or string with the following defaults:
 
     { flags: 'w',
-      encoding: null,
+      defaultEncoding: 'utf8',
       fd: null,
       mode: 0o666 }
 
 `options` may also include a `start` option to allow writing data at some
 position past the beginning of the file. Modifying a file rather than
 replacing it may require a `flags` mode of `r+` rather than the
-default mode `w`. The `encoding` can be `'utf8'`, `'ascii'`, `binary`,
+default mode `w`. The `defaultEncoding` can be `'utf8'`, `'ascii'`, `binary`,
 or `'base64'`.
 
 Like `ReadStream` above, if `fd` is specified, `WriteStream` will ignore the
diff --git a/doc/api/http.markdown b/doc/api/http.markdown
index 5cf3b07a9cd778..966b304b56d282 100644
--- a/doc/api/http.markdown
+++ b/doc/api/http.markdown
@@ -462,6 +462,7 @@ automatically parsed with [url.parse()][].
 
 Options:
 
+- `protocol`: Protocol to use. Defaults to `'http'`.
 - `host`: A domain name or IP address of the server to issue the request to.
   Defaults to `'localhost'`.
 - `hostname`: Alias for `host`. To support `url.parse()` `hostname` is
@@ -911,7 +912,8 @@ is finished.
 
 ### request.abort()
 
-Aborts a request. (New since v0.3.8.)
+Marks the request as aborting. Calling this will cause remaining data
+in the response to be dropped and the socket to be destroyed.
 
 ### request.setTimeout(timeout[, callback])
diff --git a/doc/api/stream.markdown b/doc/api/stream.markdown
index a7a78f229edb57..ffad1717f75096 100644
--- a/doc/api/stream.markdown
+++ b/doc/api/stream.markdown
@@ -164,6 +164,34 @@ readable.on('readable', function() {
 Once the internal buffer is drained, a `readable` event will fire again when
 more data is available.
 
+The `readable` event is not emitted in the "flowing" mode with the
+sole exception of the last one, on end-of-stream.
+
+The 'readable' event indicates that the stream has new information:
+either new data is available or the end of the stream has been reached.
+In the former case, `.read()` will return that data. In the latter case,
+`.read()` will return null. For instance, in the following example, `foo.txt`
+is an empty file:
+
+```javascript
+var fs = require('fs');
+var rr = fs.createReadStream('foo.txt');
+rr.on('readable', function() {
+  console.log('readable:', rr.read());
+});
+rr.on('end', function() {
+  console.log('end');
+});
+```
+
+The output of running this script is:
+
+```
+bash-3.2$ node test.js
+readable: null
+end
+```
+
 #### Event: 'data'
 
 * `chunk` {Buffer | String} The chunk of data.
@@ -221,7 +249,9 @@ returns it. If there is no data available, then it will return `null`.
 
 If you pass in a `size` argument, then it will return that many
-bytes. If `size` bytes are not available, then it will return `null`.
+bytes. If `size` bytes are not available, then it will return `null`,
+unless we've ended, in which case it will return the data remaining
+in the buffer.
 
 If you do not specify a `size` argument, then it will return all the
 data in the internal buffer.
@@ -243,6 +273,9 @@ readable.on('readable', function() {
 
 If this method returns a data chunk, then it will also trigger the
 emission of a [`'data'` event][].
+Note that calling `readable.read([size])` after the `end` event has been
+triggered will return `null`. No runtime error will be raised.
+
 #### readable.setEncoding(encoding)
 
 * `encoding` {String} The encoding to use.
@@ -414,6 +447,9 @@ parser, which needs to "un-consume" some data that it has optimistically
 pulled out of the source, so that the stream can be passed on to some
 other party.
+Note that `stream.unshift(chunk)` cannot be called after the `end` event
+has been triggered; a runtime error will be raised.
+
 If you find that you must often call `stream.unshift(chunk)` in your
 programs, consider implementing a [Transform][] stream instead. (See API
 for Stream Implementors, below.)
@@ -452,6 +488,13 @@ function parseHeader(stream, callback) {
   }
 }
 ```
+Note that, unlike `stream.push(chunk)`, `stream.unshift(chunk)` will not
+end the reading process by resetting the internal reading state of the
+stream. This can cause unexpected results if `unshift` is called during a
+read (i.e. from within a `_read` implementation on a custom stream). Following
+the call to `unshift` with an immediate `stream.push('')` will reset the
+reading state appropriately; however, it is best to simply avoid calling
+`unshift` while in the process of performing a read.
 
 #### readable.wrap(stream)
 
@@ -883,6 +926,10 @@ SimpleProtocol.prototype._read = function(n) {
       // back into the read queue so that our consumer will see it.
       var b = chunk.slice(split);
       this.unshift(b);
+      // calling unshift by itself does not reset the reading state
+      // of the stream; since we're inside _read, doing an additional
+      // push('') will reset the state appropriately.
+      this.push('');
 
       // and let them know that we are done parsing the header.
       this.emit('header', this.header);
@@ -922,24 +969,22 @@ initialized.
 
 * `size` {Number} Number of bytes to read asynchronously
 
-Note: **Implement this function, but do NOT call it directly.**
+Note: **Implement this method, but do NOT call it directly.**
 
-This function should NOT be called directly. It should be implemented
-by child classes, and only called by the internal Readable class
-methods.
+This method is prefixed with an underscore because it is internal to the
+class that defines it and should only be called by the internal Readable
+class methods. All Readable stream implementations must provide a `_read`
+method to fetch data from the underlying resource.
 
-All Readable stream implementations must provide a `_read` method to
-fetch data from the underlying resource.
-
-This method is prefixed with an underscore because it is internal to
-the class that defines it, and should not be called directly by user
-programs. However, you **are** expected to override this method in
-your own extension classes.
+When `_read` is called, if data is available from the resource, `_read` should
+start pushing that data into the read queue by calling `this.push(dataChunk)`.
+`_read` should continue reading from the resource and pushing data until `push`
+returns `false`, at which point it should stop reading from the resource. Only
+when `_read` is called again after it has stopped should it start reading
+more data from the resource and pushing that data onto the queue.
 
-When data is available, put it into the read queue by calling
-`readable.push(chunk)`. If `push` returns false, then you should stop
-reading. When `_read` is called again, you should start pushing more
-data.
+Note: once the `_read()` method is called, it will not be called again until
+the `push` method is called.
 
 The `size` argument is advisory. Implementations where a "read" is a
 single call that returns data can use this to know how much data to
@@ -955,19 +1000,16 @@ becomes available. There is no need, for example to "wait" until
   Buffer encoding, such as `'utf8'` or `'ascii'`
 * return {Boolean} Whether or not more pushes should be performed
 
-Note: **This function should be called by Readable implementors, NOT
+Note: **This method should be called by Readable implementors, NOT
 by consumers of Readable streams.**
 
-The `_read()` function will not be called again until at least one
-`push(chunk)` call is made.
-
-The `Readable` class works by putting data into a read queue to be
-pulled out later by calling the `read()` method when the `'readable'`
-event fires.
+If a value other than `null` is passed, the `push()` method adds a chunk of data
+into the queue for subsequent stream processors to consume. If `null` is
+passed, it signals the end of the stream (EOF), after which no more data
+can be written.
 
-The `push()` method will explicitly insert some data into the read
-queue. If it is called with `null` then it will signal the end of the
-data (EOF).
+The data added with `push` can be pulled out by calling the `read()` method
+when the `'readable'` event fires.
 
 This API is designed to be as flexible as possible. For example,
 you may be wrapping a lower-level source which has some sort of
@@ -1315,7 +1357,7 @@ for examples and testing, but there are occasionally use cases where it
 can come in handy as a building block for novel sorts of streams.
 
-## Simplified Constructor API
+## Simplified Constructor API
diff --git a/lib/_http_client.js b/lib/_http_client.js
index a7d714f7e0b0b2..50d1052b44e15c 100644
--- a/lib/_http_client.js
+++ b/lib/_http_client.js
@@ -359,7 +359,7 @@ function parserOnIncomingClient(res, shouldKeepAlive) {
   var req = socket._httpMessage;
 
-  // propogate "domain" setting...
+  // propagate "domain" setting...
   if (req.domain && !res.domain) {
     debug('setting "res.domain"');
     res.domain = req.domain;
@@ -465,7 +465,7 @@ function tickOnSocket(req, socket) {
   socket.parser = parser;
   socket._httpMessage = req;
 
-  // Setup "drain" propogation.
+  // Setup "drain" propagation.
   httpSocketSetup(socket);
 
   // Propagate headers limit from request object to parser
diff --git a/lib/url.js b/lib/url.js
index 55c5248e4751dd..45155fee936bbf 100644
--- a/lib/url.js
+++ b/lib/url.js
@@ -587,7 +587,7 @@ Url.prototype.resolveObject = function(relative) {
   if (psychotic) {
     result.hostname = result.host = srcPath.shift();
     //occationaly the auth can get stuck only in host
-    //this especialy happens in cases like
+    //this especially happens in cases like
     //url.resolveObject('mailto:local1@domain1', 'local2@domain2')
     var authInHost = result.host && result.host.indexOf('@') > 0 ?
       result.host.split('@') : false;
@@ -669,7 +669,7 @@ Url.prototype.resolveObject = function(relative) {
     result.hostname = result.host = isAbsolute ? '' :
       srcPath.length ? srcPath.shift() : '';
     //occationaly the auth can get stuck only in host
-    //this especialy happens in cases like
+    //this especially happens in cases like
     //url.resolveObject('mailto:local1@domain1', 'local2@domain2')
     var authInHost = result.host && result.host.indexOf('@') > 0 ?
       result.host.split('@') : false;
diff --git a/src/node.cc b/src/node.cc
index 7c0a80ec31462b..c272139e5518cd 100644
--- a/src/node.cc
+++ b/src/node.cc
@@ -2184,7 +2184,7 @@ static void OnFatalError(const char* location, const char* message) {
 
 NO_RETURN void FatalError(const char* location, const char* message) {
   OnFatalError(location, message);
-  // to supress compiler warning
+  // to suppress compiler warning
   abort();
 }
diff --git a/src/node_object_wrap.h b/src/node_object_wrap.h
index d00e1484b7c10c..f0226622272a5c 100644
--- a/src/node_object_wrap.h
+++ b/src/node_object_wrap.h
@@ -80,7 +80,7 @@ class ObjectWrap {
    * attached to detached state it will be freed. Be careful not to access
    * the object after making this call as it might be gone!
    * (A "weak reference" means an object that only has a
-   * persistant handle.)
+   * persistent handle.)
    *
    * DO NOT CALL THIS FROM DESTRUCTOR
    */
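For readers curious about the `mailto:` case mentioned in the `lib/url.js` comments above, here is a small usage-level sketch (not part of the patch; the addresses are placeholders and the noted result is the expected behavior, not captured output):

```javascript
var url = require('url');

// Resolving one mailto: URL against a bare user@host target exercises the
// auth/host special-casing referred to in the comments above.
// Expected to print something like: mailto:local2@domain2
console.log(url.resolve('mailto:local1@domain1', 'local2@domain2'));
```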