7 changes: 0 additions & 7 deletions hadoop-hdds/docs/content/interface/CSI.md
@@ -35,13 +35,6 @@ through goofys.

If you don't have an Ozone cluster on kubernetes, you can reference [Kubernetes]({{< ref "start/Kubernetes.md" >}}) to create one. Use the resources from `kubernetes/examples/ozone` where you can find all the required Kubernetes resources to run cluster together with the dedicated Ozone CSI daemon (check `kubernetes/examples/ozone/csi`)
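For orientation, here is a minimal sketch of how those example resources might be applied from an Ozone source or distribution checkout (the `kubernetes/examples/ozone` path comes from the paragraph above; the pod names mentioned in the comments are illustrative assumptions):

```bash
# Sketch only: apply the example manifests to bring up the Ozone cluster.
kubectl apply -f kubernetes/examples/ozone/

# Wait until the scm, om, s3g and datanode pods report Running before
# moving on to the CSI resources described below.
kubectl get pods
```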

You should check whether the `/s3v` volume already exists; if not, create it by running the following command:

```bash
kubectl exec -it scm-0 bash
[hadoop@scm-0 ~]$ ozone sh vol create s3v
```

Now, create the CSI-related resources by executing the following command.

```bash
8 changes: 1 addition & 7 deletions hadoop-hdds/docs/content/interface/S3.md
@@ -24,7 +24,7 @@ summary: Ozone supports Amazon's Simple Storage Service (S3) protocol. In fact,

Ozone provides an S3-compatible REST interface, so the object store data can be used with any S3-compatible tool.

S3 buckets are stored under the `/s3v` volume (the default is `s3v`, which can be set through `ozone.s3g.volume.name`), which needs to be created by an administrator first.
S3 buckets are stored under the `/s3v` volume. The default name `s3v` can be changed by setting the `ozone.s3g.volume.name` config property in `ozone-site.xml`.
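As an illustrative aside (not part of the original page), one way to confirm which volume backs the S3 gateway is to inspect it with the Ozone shell; the name below assumes the default `s3v`:

```bash
# Sketch: show the volume that S3 buckets are mapped into.
ozone sh volume info /s3v
```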

## Getting started

@@ -38,12 +38,6 @@ Go to the `compose/ozone` directory, and start the server:
docker-compose up -d --scale datanode=3
```

Create the `/s3v` volume:

```bash
docker-compose exec scm ozone sh volume create /s3v
```

You can access the S3 gateway at `http://localhost:9878`.
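For example, here is a hedged sketch of talking to the gateway with the AWS CLI, following the same `--endpoint` style used elsewhere in these docs (dummy credentials may first need to be configured, e.g. via `aws configure`):

```bash
# Sketch: list the buckets exposed through the local S3 gateway.
aws s3api --endpoint http://localhost:9878 list-buckets
```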

## URL Schema
1 change: 0 additions & 1 deletion hadoop-hdds/docs/content/recipe/SparkOzoneFSK8S.md
@@ -112,7 +112,6 @@ Download any text file and put it to the `/tmp/alice.txt` first.

```bash
kubectl port-forward s3g-0 9878:9878
ozone sh volume create /s3v
aws s3api --endpoint http://localhost:9878 create-bucket --bucket=test
aws s3api --endpoint http://localhost:9878 put-object --bucket test --key alice.txt --body /tmp/alice.txt
```
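As an optional sanity check (illustrative, not from the original recipe), the upload can be verified by listing the bucket through the same endpoint:

```bash
# Sketch: confirm that alice.txt is now an object in the test bucket.
aws s3api --endpoint http://localhost:9878 list-objects --bucket test
```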
6 changes: 1 addition & 5 deletions hadoop-hdds/docs/content/start/StartFromDockerHub.md
@@ -72,11 +72,7 @@ connecting to the SCM's UI at [http://localhost:9876](http://localhost:9876).

The S3 gateway endpoint will be exposed at port 9878. You can use Ozone's S3
support as if you are working against the real S3. S3 buckets are stored under
the `/s3v` volume, which needs to be created by an administrator first:

```
docker-compose exec scm ozone sh volume create /s3v
```
the `/s3v` volume.
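Since the volume is now created automatically, a quick way to confirm it from the same compose environment might look like the following (an assumption-laden sketch, mirroring the `docker-compose exec` pattern used on this page):

```bash
# Sketch: the /s3v volume should already exist without any manual step.
docker-compose exec scm ozone sh volume info /s3v
```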

Here is how you create buckets from the command line:
