Merged
4 changes: 2 additions & 2 deletions docs/add-code-flow.md
@@ -55,7 +55,7 @@ Within the function, a new `Adder` is created with the configured `Blockstore` a

1. **[`adder.add(io.Reader)`](https://github.com/ipfs/go-ipfs/blob/v0.4.18/core/coreunix/add.go#L115)** - *Create and return the **root** __DAG__ node*

-This method converts the input data (`io.Reader`) to a __DAG__ tree, by splitting the data into _chunks_ using the `Chunker` and organizing them in to a __DAG__ (with a *trickle* or *balanced* layout. See [balanced](https://github.com/ipfs/go-unixfs/blob/6b769632e7eb8fe8f302e3f96bf5569232e7a3ee/importer/balanced/builder.go) for more info).
+This method converts the input data (`io.Reader`) to a __DAG__ tree, by splitting the data into _chunks_ using the `Chunker` and organizing them into a __DAG__ (with a *trickle* or *balanced* layout. See [balanced](https://github.com/ipfs/go-unixfs/blob/6b769632e7eb8fe8f302e3f96bf5569232e7a3ee/importer/balanced/builder.go) for more info).

The method returns the **root** `ipld.Node` of the __DAG__.

@@ -70,7 +70,7 @@ Within the function, a new `Adder` is created with the configured `Blockstore` a

- **[MFS] [`PutNode(mfs.Root, path, ipld.Node)`](https://github.com/ipfs/go-mfs/blob/v0.1.18/ops.go#L86)** - *Insert node at path into given `MFS`*

-The `path` param is used to determine the `MFS Directory`, which is first looked up in the `MFS` using `lookupDir()` function. This is followed by adding the **root** __DAG__ node (`ipld.Node`) in to this `Directory` using `directory.AddChild()` method.
+The `path` param is used to determine the `MFS Directory`, which is first looked up in the `MFS` using `lookupDir()` function. This is followed by adding the **root** __DAG__ node (`ipld.Node`) into this `Directory` using `directory.AddChild()` method.

- **[MFS] Add Child To `UnixFS`**
- **[`directory.AddChild(filename, ipld.Node)`](https://github.com/ipfs/go-mfs/blob/v0.1.18/dir.go#L350)** - *Add **root** __DAG__ node under this directory*
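
The lookup-then-add flow can be sketched with a toy in-memory model. `dir`, `lookupDir`, and `addChild` here are hypothetical stand-ins that mirror the go-mfs calls referenced above, not the real API:

```go
package main

import (
	"fmt"
	"strings"
)

// dir is a toy stand-in for an MFS Directory: children are either
// sub-directories (*dir) or leaf values (here, a string for the root node's CID).
type dir struct {
	children map[string]interface{}
}

func newDir() *dir { return &dir{children: map[string]interface{}{}} }

// addChild inserts a node under this directory, like directory.AddChild().
func (d *dir) addChild(name string, node interface{}) {
	d.children[name] = node
}

// lookupDir walks the path components to the target directory,
// mirroring the lookupDir() step that PutNode performs first.
func lookupDir(root *dir, path string) (*dir, error) {
	d := root
	for _, part := range strings.Split(strings.Trim(path, "/"), "/") {
		if part == "" {
			continue
		}
		child, ok := d.children[part].(*dir)
		if !ok {
			return nil, fmt.Errorf("no such directory: %s", part)
		}
		d = child
	}
	return d, nil
}

// putNode looks up the directory for path, then adds the node under it.
func putNode(root *dir, path, filename string, node interface{}) error {
	d, err := lookupDir(root, path)
	if err != nil {
		return err
	}
	d.addChild(filename, node)
	return nil
}

func main() {
	root := newDir()
	root.addChild("docs", newDir())
	if err := putNode(root, "/docs", "file.txt", "cid-of-dag-root"); err != nil {
		panic(err)
	}
	fmt.Println(root.children["docs"].(*dir).children["file.txt"]) // cid-of-dag-root
}
```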
6 changes: 3 additions & 3 deletions docs/config.md
@@ -846,7 +846,7 @@ Options for [ZeroConf](https://github.com/libp2p/zeroconf#readme) Multicast DNS-

#### `Discovery.MDNS.Enabled`

-A boolean value for whether or not Multicast DNS-SD should be active.
+A boolean value to activate or deactivate Multicast DNS-SD.

Default: `true`

@@ -934,7 +934,7 @@ Type: `object[string -> array[string]]`

### `Gateway.RootRedirect`

-A url to redirect requests for `/` to.
+A URL to redirect requests for `/` to.

Default: `""`

@@ -1410,7 +1410,7 @@ Type: `string` (filesystem path)

### `Mounts.FuseAllowOther`

-Sets the 'FUSE allow other'-option on the mount point.
+Sets the 'FUSE allow-other' option on the mount point.

## `Pinning`

12 changes: 6 additions & 6 deletions docs/datastores.md
@@ -12,13 +12,13 @@ field in the ipfs configuration file.

## flatfs

-Stores each key value pair as a file on the filesystem.
+Stores each key-value pair as a file on the filesystem.

The shardFunc is prefixed with `/repo/flatfs/shard/v1` then followed by a descriptor of the sharding strategy. Some example values are:
- `/repo/flatfs/shard/v1/next-to-last/2`
- Shards on the two next to last characters of the key
- `/repo/flatfs/shard/v1/prefix/2`
-  - Shards based on the two character prefix of the key
+  - Shards based on the two-character prefix of the key
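
The two strategies can be illustrated with a short sketch. This is not the actual flatfs code (real flatfs also pads keys that are too short with `_`); it just shows which characters each shard function selects:

```go
package main

import "fmt"

// nextToLast models /repo/flatfs/shard/v1/next-to-last/N: the shard directory
// is the N characters just before the key's last character (key assumed long enough).
func nextToLast(key string, n int) string {
	return key[len(key)-n-1 : len(key)-1]
}

// prefix models /repo/flatfs/shard/v1/prefix/N: the first N characters of the key.
func prefix(key string, n int) string {
	return key[:n]
}

func main() {
	key := "CIQGFTQ7FSI2COUXWWLOQ45VUM2GUZCGAXLWCTOKKPGTUWPXHBNIVOY"
	fmt.Println(nextToLast(key, 2)) // VO
	fmt.Println(prefix(key, 2))     // CI
}
```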

```json
{
  ...
}
```

@@ -34,7 +34,7 @@ The shardFunc is prefixed with `/repo/flatfs/shard/v1` then followed by a descri
NOTE: flatfs must only be used as a block store (mounted at `/blocks`) as it only partially implements the datastore interface. You can mount flatfs for /blocks only using the mount datastore (described below).
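
A sketch of such a mount spec, with flatfs serving `/blocks` and levelds serving everything else. The field names follow the shape of kubo's default datastore spec, but treat the paths and shard function here as placeholders:

```json
{
  "type": "mount",
  "mounts": [
    {
      "mountpoint": "/blocks",
      "type": "flatfs",
      "path": "blocks",
      "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2"
    },
    {
      "mountpoint": "/",
      "type": "levelds",
      "path": "datastore"
    }
  ]
}
```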

## levelds
-Uses a leveldb database to store key value pairs.
+Uses a leveldb database to store key-value pairs.

```json
{
  ...
}
```

@@ -46,7 +46,7 @@ Uses a leveldb database to store key value pairs.

## pebbleds

-Uses [pebble](https://github.com/cockroachdb/pebble) as a key value store.
+Uses [pebble](https://github.com/cockroachdb/pebble) as a key-value store.

```json
{
  ...
}
```

@@ -90,7 +90,7 @@ When installing a new version of kubo when `"formatMajorVersion"` is configured,

## badgerds

-Uses [badger](https://github.com/dgraph-io/badger) as a key value store.
+Uses [badger](https://github.com/dgraph-io/badger) as a key-value store.

> [!CAUTION]
> This is based on very old badger 1.x, which has known bugs and is no longer supported by the upstream team.
@@ -99,7 +99,7 @@ Uses [badger](https://github.com/dgraph-io/badger) as a key value store.


* `syncWrites`: Flush every write to disk before continuing. Setting this to false is safe as kubo will automatically flush writes to disk before and after performing critical operations like pinning. However, you can set this to true to be extra-safe (at the cost of a 2-3x slowdown when adding files).
-* `truncate`: Truncate the DB if a partially written sector is found (defaults to true). There is no good reason to set this to false unless you want to manually recover partially written (and unpinned) blocks if kubo crashes half-way through adding a file.
+* `truncate`: Truncate the DB if a partially written sector is found (defaults to true). There is no good reason to set this to false unless you want to manually recover partially written (and unpinned) blocks if kubo crashes half-way through a write operation.

```json
{
  ...
}
```
2 changes: 1 addition & 1 deletion docs/experimental-features.md
@@ -398,7 +398,7 @@ We also support the use of protocol names of the form /x/$NAME/http where $NAME
### Road to being a real feature

- [ ] Needs p2p streams to graduate from experiments
-- [ ] Needs more people to use and report on how well it works / fits use cases
+- [ ] Needs more people to use and report on how well it works and fits use cases
- [ ] More documentation
- [ ] Need better integration with the subdomain gateway feature.

8 changes: 4 additions & 4 deletions docs/implement-api-bindings.md
@@ -39,12 +39,12 @@ function calls. For example:
#### CLI API Transport

In the commandline, IPFS uses a traditional flag and arg-based mapping, where:
-- the first arguments selects the command, as in git - e.g. `ipfs dag get`
+- the first arguments select the command, as in git - e.g. `ipfs dag get`
- the flags specify options - e.g. `--enc=protobuf -q`
- the rest are positional arguments - e.g. `ipfs key rename <name> <newName>`
- files are specified by filename, or through stdin

-(NOTE: When kubo runs the daemon, the CLI API is actually converted to HTTP
+(NOTE: When kubo runs the daemon, the CLI API is converted to HTTP
calls. otherwise, they execute in the same process)

#### HTTP API Transport
@@ -87,7 +87,7 @@ Despite all the generalization spoken about above, the IPFS API is actually very
simple. You can inspect all the requests made with `nc` and the `--api` option
(as of [this PR](https://github.com/ipfs/kubo/pull/1598), or `0.3.8`):

-```
+```sh
> nc -l 5002 &
> ipfs --api /ip4/127.0.0.1/tcp/5002 swarm addrs local --enc=json
POST /api/v0/version?enc=json&stream-channels=true HTTP/1.1
...
```

@@ -104,7 +104,7 @@ The only hard part is getting the file streaming right. It is (now) fairly easy
to stream files to kubo using multipart. Basically, we end up with HTTP
requests like this:

-```
+```sh
> nc -l 5002 &
> ipfs --api /ip4/127.0.0.1/tcp/5002 add -r ~/demo/basic/test
POST /api/v0/add?encoding=json&progress=true&r=true&stream-channels=true HTTP/1.1
...
```
8 changes: 4 additions & 4 deletions docs/releases.md
@@ -20,9 +20,9 @@

## Release Philosophy

-`kubo` aims to have release every six weeks, two releases per quarter. During these 6 week releases, we go through 4 different stages that gives us the opportunity to test the new version against our test environments (unit, interop, integration), QA in our current production environment, IPFS apps (e.g. Desktop and WebUI) and with our community and _early testers_<sup>[1]</sup> that have IPFS running in production.
+`kubo` aims to have a release every six weeks, two releases per quarter. During these 6 week releases, we go through 4 different stages that allow us to test the new version against our test environments (unit, interop, integration), QA in our current production environment, IPFS apps (e.g. Desktop and WebUI) and with our community and _early testers_<sup>[1]</sup> that have IPFS running in production.

-We might expand the six week release schedule in case of:
+We might expand the six-week release schedule in case of:

- No new updates to be added
- In case of a large community event that takes the core team availability away (e.g. IPFS Conf, Dev Meetings, IPFS Camp, etc.)
@@ -59,7 +59,7 @@ Test the release in as many non-production environments as possible. This is rel

### Stage 3 - Community Prod Testing

-At this stage, we consider the release to be "production ready" and will ask the community and our early testers to (partially) deploy the release to their production infrastructure.
+At this stage, we consider the release to be "production-ready" and will ask the community and our early testers to (partially) deploy the release to their production infrastructure.

**Goals:**

@@ -69,7 +69,7 @@ At this stage, we consider the release to be "production ready" and will ask the

### Stage 4 - Release

-At this stage, the release is "battle hardened" and ready for wide deployment.
+At this stage, the release is "battle-hardened" and ready for wide deployment.

## Release Cycle
