doc: multiple documentation updates cherry picked from v0.12
 * doc: improve http.abort description
 * doc: mention that mode is ignored if file exists
 * docs: Fix default options for fs.createWriteStream()
 * Documentation update about Buffer initialization
 * doc: add a note about readable in flowing mode
 * doc: Document http.request protocol option
 * doc, comments: Grammar and spelling fixes
 * updated documentation for fs.createReadStream
 * Update child_process.markdown, spelling
 * doc: Clarified read method with specified size argument.
 * docs:events clarify emitter.listener() behavior
 * doc: two minor stream doc improvements
 * doc: clarify Readable._read and Readable.push
 * doc: stream.unshift does not reset reading state
 * doc: readable event clarification
 * doc: additional refinement to readable event

Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Ben Noordhuis <[email protected]>
PR-URL: #2302
jasnell committed Aug 5, 2015
1 parent d88194d commit 936c9ff
Showing 12 changed files with 98 additions and 43 deletions.
6 changes: 5 additions & 1 deletion doc/api/buffer.markdown
@@ -43,7 +43,7 @@ Creating a typed array from a `Buffer` works with the following caveats:

2. The buffer's memory is interpreted as an array, not a byte array. That is,
`new Uint32Array(new Buffer([1,2,3,4]))` creates a 4-element `Uint32Array`
-with elements `[1,2,3,4]`, not an `Uint32Array` with a single element
+with elements `[1,2,3,4]`, not a `Uint32Array` with a single element
`[0x1020304]` or `[0x4030201]`.
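
For instance, a minimal sketch of this caveat (each byte becomes its own
element, rather than four bytes packing into one 32-bit integer):

    var buf = new Buffer([1, 2, 3, 4]);
    // the buffer is treated as an array-like of four byte values
    var arr = new Uint32Array(buf);
    console.log(arr.length);  // 4
    console.log(arr[0]);      // 1, not 0x1020304 or 0x4030201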

NOTE: Node.js v0.8 simply retained a reference to the buffer in `array.buffer`
@@ -67,6 +67,10 @@ Allocates a new buffer of `size` bytes. `size` must be less than
2,147,483,648 bytes (2 GB) on 64-bit architectures,
otherwise a `RangeError` is thrown.

+Unlike `ArrayBuffers`, the underlying memory for buffers is not initialized. So
+the contents of a newly created `Buffer` are unknown. Use `buf.fill(0)` to
+initialize a buffer to zeroes.
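
A short sketch of why the `buf.fill(0)` call matters (illustrative; the
uninitialized bytes will vary from run to run):

    var buf = new Buffer(8); // memory is allocated but not initialized
    console.log(buf);        // eight unpredictable bytes
    buf.fill(0);             // zero the memory explicitly
    console.log(buf);        // <Buffer 00 00 00 00 00 00 00 00>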

### new Buffer(array)

* `array` Array
2 changes: 1 addition & 1 deletion doc/api/child_process.markdown
@@ -279,7 +279,7 @@ Here is an example of sending a server:
child.send('server', server);
});

-And the child would the receive the server object as:
+And the child would then receive the server object as:

process.on('message', function(m, server) {
if (m === 'server') {
4 changes: 2 additions & 2 deletions doc/api/cluster.markdown
@@ -121,7 +121,7 @@ values are `"rr"` and `"none"`.
## cluster.settings

* {Object}
-* `execArgv` {Array} list of string arguments passed to the io.js executable.
+* `execArgv` {Array} list of string arguments passed to the io.js executable.
(Default=`process.execArgv`)
* `exec` {String} file path to worker file. (Default=`process.argv[1]`)
* `args` {Array} string arguments passed to worker.
@@ -613,7 +613,7 @@ It is not emitted in the worker.

### Event: 'disconnect'

-Similar to the `cluster.on('disconnect')` event, but specfic to this worker.
+Similar to the `cluster.on('disconnect')` event, but specific to this worker.

cluster.fork().on('disconnect', function() {
// Worker has disconnected
4 changes: 2 additions & 2 deletions doc/api/dns.markdown
@@ -85,7 +85,7 @@ All properties are optional. An example usage of options is shown below.
```

The callback has arguments `(err, address, family)`. `address` is a string
-representation of a IP v4 or v6 address. `family` is either the integer 4 or 6
+representation of an IP v4 or v6 address. `family` is either the integer 4 or 6
and denotes the family of `address` (not necessarily the value initially passed
to `lookup`).
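
A minimal usage sketch (illustrative; the address returned naturally depends
on the resolver):

    var dns = require('dns');
    dns.lookup('iojs.org', function(err, address, family) {
      if (err) throw err;
      // family is the integer 4 or 6, matching the address actually returned
      console.log('address: %s, family: IPv%d', address, family);
    });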

@@ -163,7 +163,7 @@ attribute (e.g. `[{'priority': 10, 'exchange': 'mx.example.com'},...]`).
## dns.resolveTxt(hostname, callback)

The same as `dns.resolve()`, but only for text queries (`TXT` records).
-`addresses` is an 2-d array of the text records available for `hostname` (e.g.,
+`addresses` is a 2-d array of the text records available for `hostname` (e.g.,
`[ ['v=spf1 ip4:0.0.0.0 ', '~all' ] ]`). Each sub-array contains TXT chunks of
one record. Depending on the use case, they could be either joined together or
treated separately.
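
For example, joining the chunks of each record back into one string (a
sketch; the records returned depend entirely on the zone queried):

    var dns = require('dns');
    dns.resolveTxt('example.com', function(err, addresses) {
      if (err) throw err;
      // each sub-array holds the TXT chunks of a single record
      var records = addresses.map(function(chunks) {
        return chunks.join('');
      });
      console.log(records); // e.g. [ 'v=spf1 ip4:0.0.0.0 ~all' ]
    });
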
2 changes: 1 addition & 1 deletion doc/api/events.markdown
@@ -122,7 +122,7 @@ Note that `emitter.setMaxListeners(n)` still has precedence over

### emitter.listeners(event)

-Returns an array of listeners for the specified event.
+Returns a copy of the array of listeners for the specified event.

server.on('connection', function (stream) {
console.log('someone connected!');
11 changes: 9 additions & 2 deletions doc/api/fs.markdown
@@ -801,6 +801,10 @@ on Unix systems, it never was.

Returns a new ReadStream object (See `Readable Stream`).

+Be aware that, unlike the default value set for `highWaterMark` on a
+readable stream (16 kb), the stream returned by this method has a
+default value of 64 kb for the same parameter.
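
If 64 kb chunks are not appropriate, the default can be overridden through
the same option (a sketch; the value is in bytes):

    var fs = require('fs');
    // read in 16 kb chunks instead of the 64 kb default
    var rs = fs.createReadStream('sample.txt', { highWaterMark: 16 * 1024 });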

`options` is an object or string with the following defaults:

{ flags: 'r',
@@ -823,6 +827,9 @@ there's no file descriptor leak. If `autoClose` is set to true (default
behavior), on `error` or `end` the file descriptor will be closed
automatically.

+`mode` sets the file mode (permission and sticky bits), but only if the
+file was created.

An example to read the last 10 bytes of a file which is 100 bytes long:

fs.createReadStream('sample.txt', {start: 90, end: 99});
@@ -847,14 +854,14 @@ Returns a new WriteStream object (See `Writable Stream`).
`options` is an object or string with the following defaults:

{ flags: 'w',
-  encoding: null,
+  defaultEncoding: 'utf8',
  fd: null,
  mode: 0o666 }

`options` may also include a `start` option to allow writing data at
some position past the beginning of the file. Modifying a file rather
than replacing it may require a `flags` mode of `r+` rather than the
-default mode `w`. The `encoding` can be `'utf8'`, `'ascii'`, `binary`,
+default mode `w`. The `defaultEncoding` can be `'utf8'`, `'ascii'`, `binary`,
or `'base64'`.
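
For instance, overwriting four bytes in the middle of an existing file (a
sketch; `sample.txt` must already exist, since `r+`, unlike the default `w`,
neither creates nor truncates the file):

    var fs = require('fs');
    var ws = fs.createWriteStream('sample.txt', { flags: 'r+', start: 4 });
    ws.end('data'); // replaces bytes 4 through 7, leaving the rest intact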

Like `ReadStream` above, if `fd` is specified, `WriteStream` will ignore the
4 changes: 3 additions & 1 deletion doc/api/http.markdown
@@ -462,6 +462,7 @@ automatically parsed with [url.parse()][].

Options:

+- `protocol`: Protocol to use. Defaults to `'http'`.
- `host`: A domain name or IP address of the server to issue the request to.
Defaults to `'localhost'`.
- `hostname`: Alias for `host`. To support `url.parse()` `hostname` is
@@ -911,7 +912,8 @@ is finished.

### request.abort()

-Aborts a request. (New since v0.3.8.)
+Marks the request as aborting. Calling this will cause remaining data
+in the response to be dropped and the socket to be destroyed.
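
For example, a slow request might be aborted once a deadline passes (a
sketch; the URL is a placeholder):

    var http = require('http');
    var req = http.get('http://example.com/big-download', function(res) {
      res.on('data', function() { /* consume the response */ });
    });
    setTimeout(function() {
      // any remaining response data is dropped, the socket destroyed
      req.abort();
    }, 1000);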

### request.setTimeout(timeout[, callback])

96 changes: 69 additions & 27 deletions doc/api/stream.markdown
@@ -164,6 +164,34 @@ readable.on('readable', function() {
Once the internal buffer is drained, a `readable` event will fire
again when more data is available.

+The `readable` event is not emitted in the "flowing" mode with the
+sole exception of the last one, on end-of-stream.
+
+The 'readable' event indicates that the stream has new information:
+either new data is available or the end of the stream has been reached.
+In the former case, `.read()` will return that data. In the latter case,
+`.read()` will return null. For instance, in the following example, `foo.txt`
+is an empty file:
+
+```javascript
+var fs = require('fs');
+var rr = fs.createReadStream('foo.txt');
+rr.on('readable', function() {
+  console.log('readable:', rr.read());
+});
+rr.on('end', function() {
+  console.log('end');
+});
+```
+
+The output of running this script is:
+
+```
+bash-3.2$ node test.js
+readable: null
+end
+```

#### Event: 'data'

* `chunk` {Buffer | String} The chunk of data.
@@ -221,7 +249,9 @@ returns it. If there is no data available, then it will return
`null`.

If you pass in a `size` argument, then it will return that many
-bytes. If `size` bytes are not available, then it will return `null`.
+bytes. If `size` bytes are not available, then it will return `null`,
+unless we've ended, in which case it will return the data remaining
+in the buffer.

If you do not specify a `size` argument, then it will return all the
data in the internal buffer.
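
A sketch of the end-of-stream case described above (illustrative):

```javascript
var Readable = require('stream').Readable;

var rs = new Readable();
rs._read = function() {}; // no-op source; data is pushed manually below

rs.push('abc');
rs.push(null);            // signal EOF

rs.on('readable', function() {
  // only 3 bytes are buffered, but the stream has ended, so read(10)
  // returns the remaining data rather than null
  console.log(rs.read(10)); // <Buffer 61 62 63>
});
```
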
@@ -243,6 +273,9 @@ readable.on('readable', function() {
If this method returns a data chunk, then it will also trigger the
emission of a [`'data'` event][].

+Note that calling `readable.read([size])` after the `end` event has been
+triggered will return `null`. No runtime error will be raised.

#### readable.setEncoding(encoding)

* `encoding` {String} The encoding to use.
@@ -414,6 +447,9 @@ parser, which needs to "un-consume" some data that it has
optimistically pulled out of the source, so that the stream can be
passed on to some other party.

+Note that `stream.unshift(chunk)` cannot be called after the `end` event
+has been triggered; a runtime error will be raised.

If you find that you must often call `stream.unshift(chunk)` in your
programs, consider implementing a [Transform][] stream instead. (See API
for Stream Implementors, below.)
@@ -452,6 +488,13 @@ function parseHeader(stream, callback) {
}
}
```
+Note that, unlike `stream.push(chunk)`, `stream.unshift(chunk)` will not
+end the reading process by resetting the internal reading state of the
+stream. This can cause unexpected results if `unshift` is called during a
+read (i.e. from within a `_read` implementation on a custom stream). Following
+the call to `unshift` with an immediate `stream.push('')` will reset the
+reading state appropriately; however, it is best to simply avoid calling
+`unshift` while in the process of performing a read.

#### readable.wrap(stream)

@@ -883,6 +926,10 @@ SimpleProtocol.prototype._read = function(n) {
// back into the read queue so that our consumer will see it.
var b = chunk.slice(split);
this.unshift(b);
+// calling unshift by itself does not reset the reading state
+// of the stream; since we're inside _read, doing an additional
+// push('') will reset the state appropriately.
+this.push('');

// and let them know that we are done parsing the header.
this.emit('header', this.header);
@@ -922,24 +969,22 @@ initialized.

* `size` {Number} Number of bytes to read asynchronously

-Note: **Implement this function, but do NOT call it directly.**
+Note: **Implement this method, but do NOT call it directly.**

-This function should NOT be called directly. It should be implemented
-by child classes, and only called by the internal Readable class
-methods.
+This method is prefixed with an underscore because it is internal to the
+class that defines it and should only be called by the internal Readable
+class methods. All Readable stream implementations must provide a _read
+method to fetch data from the underlying resource.

-All Readable stream implementations must provide a `_read` method to
-fetch data from the underlying resource.

-This method is prefixed with an underscore because it is internal to
-the class that defines it, and should not be called directly by user
-programs. However, you **are** expected to override this method in
-your own extension classes.
+When _read is called, if data is available from the resource, `_read` should
+start pushing that data into the read queue by calling `this.push(dataChunk)`.
+`_read` should continue reading from the resource and pushing data until push
+returns false, at which point it should stop reading from the resource. Only
+when _read is called again after it has stopped should it start reading
+more data from the resource and pushing that data onto the queue.

-When data is available, put it into the read queue by calling
-`readable.push(chunk)`. If `push` returns false, then you should stop
-reading. When `_read` is called again, you should start pushing more
-data.
+Note: once the `_read()` method is called, it will not be called again until
+the `push` method is called.
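
A minimal sketch of this contract, as a hypothetical `Counter` stream that
pushes until `push` returns false:

```javascript
var Readable = require('stream').Readable;
var util = require('util');

function Counter(opt) {
  Readable.call(this, opt);
  this._n = 0;
}
util.inherits(Counter, Readable);

Counter.prototype._read = function(size) {
  while (this._n < 1000) {
    // keep pushing until push() returns false...
    if (!this.push(String(this._n++) + '\n'))
      return; // ...then stop; _read will be called again when needed
  }
  this.push(null); // all data produced: signal EOF
};

new Counter().pipe(process.stdout);
```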

The `size` argument is advisory. Implementations where a "read" is a
single call that returns data can use this to know how much data to
@@ -955,19 +1000,16 @@ becomes available. There is no need, for example to "wait" until
Buffer encoding, such as `'utf8'` or `'ascii'`
* return {Boolean} Whether or not more pushes should be performed

-Note: **This function should be called by Readable implementors, NOT
+Note: **This method should be called by Readable implementors, NOT
by consumers of Readable streams.**

-The `_read()` function will not be called again until at least one
-`push(chunk)` call is made.

-The `Readable` class works by putting data into a read queue to be
-pulled out later by calling the `read()` method when the `'readable'`
-event fires.
+If a value other than null is passed, the `push()` method adds a chunk of data
+into the queue for subsequent stream processors to consume. If `null` is
+passed, it signals the end of the stream (EOF), after which no more data
+can be written.

-The `push()` method will explicitly insert some data into the read
-queue. If it is called with `null` then it will signal the end of the
-data (EOF).
+The data added with `push` can be pulled out by calling the `read()` method
+when the `'readable'` event fires.

This API is designed to be as flexible as possible. For example,
you may be wrapping a lower-level source which has some sort of
@@ -1315,7 +1357,7 @@ for examples and testing, but there are occasionally use cases where
it can come in handy as a building block for novel sorts of streams.


-## Simplified Constructor API
+## Simplified Constructor API

<!--type=misc-->

4 changes: 2 additions & 2 deletions lib/_http_client.js
@@ -359,7 +359,7 @@ function parserOnIncomingClient(res, shouldKeepAlive) {
var req = socket._httpMessage;


-// propogate "domain" setting...
+// propagate "domain" setting...
if (req.domain && !res.domain) {
debug('setting "res.domain"');
res.domain = req.domain;
@@ -465,7 +465,7 @@ function tickOnSocket(req, socket) {
socket.parser = parser;
socket._httpMessage = req;

// Setup "drain" propogation.
// Setup "drain" propagation.
httpSocketSetup(socket);

// Propagate headers limit from request object to parser
4 changes: 2 additions & 2 deletions lib/url.js
@@ -587,7 +587,7 @@ Url.prototype.resolveObject = function(relative) {
if (psychotic) {
result.hostname = result.host = srcPath.shift();
//occationaly the auth can get stuck only in host
-//this especialy happens in cases like
+//this especially happens in cases like
//url.resolveObject('mailto:local1@domain1', 'local2@domain2')
var authInHost = result.host && result.host.indexOf('@') > 0 ?
result.host.split('@') : false;
@@ -669,7 +669,7 @@ Url.prototype.resolveObject = function(relative) {
result.hostname = result.host = isAbsolute ? '' :
srcPath.length ? srcPath.shift() : '';
//occationaly the auth can get stuck only in host
-//this especialy happens in cases like
+//this especially happens in cases like
//url.resolveObject('mailto:local1@domain1', 'local2@domain2')
var authInHost = result.host && result.host.indexOf('@') > 0 ?
result.host.split('@') : false;
2 changes: 1 addition & 1 deletion src/node.cc
@@ -2184,7 +2184,7 @@ static void OnFatalError(const char* location, const char* message) {

NO_RETURN void FatalError(const char* location, const char* message) {
OnFatalError(location, message);
-// to supress compiler warning
+// to suppress compiler warning
abort();
}

2 changes: 1 addition & 1 deletion src/node_object_wrap.h
@@ -80,7 +80,7 @@ class ObjectWrap {
* attached to detached state it will be freed. Be careful not to access
* the object after making this call as it might be gone!
* (A "weak reference" means an object that only has a
-* persistant handle.)
+* persistent handle.)
*
* DO NOT CALL THIS FROM DESTRUCTOR
*/
