diff --git a/hadoop-hdds/docs/content/interface/CSI.md b/hadoop-hdds/docs/content/interface/CSI.md
index b70572f77f5d..c7046d09f898 100644
--- a/hadoop-hdds/docs/content/interface/CSI.md
+++ b/hadoop-hdds/docs/content/interface/CSI.md
@@ -35,13 +35,6 @@ through goofys.
 
 If you don't have an Ozone cluster on kubernetes, you can reference [Kubernetes]({{< ref "start/Kubernetes.md" >}}) to create one. Use the resources from `kubernetes/examples/ozone` where you can find all the required Kubernetes resources to run cluster together with the dedicated Ozone CSI daemon (check `kubernetes/examples/ozone/csi`)
 
-You should check if you already have a name of `/s3v` volume, if not create it by execute follow command:
-
-```bash
-kubectl exec -it scm-0 bash
-[hadoop@scm-0 ~]$ ozone sh vol create s3v
-```
-
 Now, create the CSI related resources by execute the follow command.
 
 ```bash
diff --git a/hadoop-hdds/docs/content/interface/S3.md b/hadoop-hdds/docs/content/interface/S3.md
index 94e455728f95..1be0137942ef 100644
--- a/hadoop-hdds/docs/content/interface/S3.md
+++ b/hadoop-hdds/docs/content/interface/S3.md
@@ -24,7 +24,7 @@ summary: Ozone supports Amazon's Simple Storage Service (S3) protocol. In fact,
 Ozone provides S3 compatible REST interface to use the object store data with any S3 compatible tools.
 
-S3 buckets are stored under the `/s3v`(Default is s3v, which can be setted through ozone.s3g.volume.name) volume, which needs to be created by an administrator first.
+S3 buckets are stored under the `/s3v` volume. The default name `s3v` can be changed by setting the `ozone.s3g.volume.name` config property in `ozone-site.xml`.
 
 ## Getting started
 
@@ -38,12 +38,6 @@ Go to the `compose/ozone` directory, and start the server:
 docker-compose up -d --scale datanode=3
 ```
 
-Create the `/s3v` volume:
-
-```bash
-docker-compose exec scm ozone sh volume create /s3v
-```
-
 You can access the S3 gateway at `http://localhost:9878`
 
 ## URL Schema
diff --git a/hadoop-hdds/docs/content/recipe/SparkOzoneFSK8S.md b/hadoop-hdds/docs/content/recipe/SparkOzoneFSK8S.md
index 6cf8b1e8d6ba..0f0d094c8fbb 100644
--- a/hadoop-hdds/docs/content/recipe/SparkOzoneFSK8S.md
+++ b/hadoop-hdds/docs/content/recipe/SparkOzoneFSK8S.md
@@ -112,7 +112,6 @@ Download any text file and put it to the `/tmp/alice.txt` first.
 
 ```bash
 kubectl port-forward s3g-0 9878:9878
-ozone sh volume create /s3v
 aws s3api --endpoint http://localhost:9878 create-bucket --bucket=test
 aws s3api --endpoint http://localhost:9878 put-object --bucket test --key alice.txt --body /tmp/alice.txt
 ```
diff --git a/hadoop-hdds/docs/content/start/StartFromDockerHub.md b/hadoop-hdds/docs/content/start/StartFromDockerHub.md
index c4f36aff8926..6d26dfac849a 100644
--- a/hadoop-hdds/docs/content/start/StartFromDockerHub.md
+++ b/hadoop-hdds/docs/content/start/StartFromDockerHub.md
@@ -72,11 +72,7 @@ connecting to the SCM's UI at [http://localhost:9876](http://localhost:9876).
 
 The S3 gateway endpoint will be exposed at port 9878. You can use Ozone's S3
 support as if you are working against the real S3. S3 buckets are stored under
-the `/s3v` volume, which needs to be created by an administrator first:
-
-```
-docker-compose exec scm ozone sh volume create /s3v
-```
+the `/s3v` volume.
 
 Here is how you create buckets from command line:
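
The new S3.md text points readers at the `ozone.s3g.volume.name` property for renaming the bucket volume. As a hedged illustration of that setting (the property name comes from the doc text above; the value `projects` is a made-up example, not a recommended default), an override in `ozone-site.xml` would look like:

```xml
<property>
  <!-- Volume under which S3 buckets are exposed; the default is s3v. -->
  <!-- The value "projects" here is purely illustrative. -->
  <name>ozone.s3g.volume.name</name>
  <value>projects</value>
</property>
```

With the gateway now responsible for the volume, no `ozone sh volume create` step is needed for either the default or a renamed volume, which is why the manual creation commands are removed across all four docs.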