Storage Container Manager
-
+
SCM requires two Kerberos principals and the corresponding keytab files
for both of these principals.
-
-| Property | Description |
-|----------|-------------|
-| hdds.scm.kerberos.principal | The SCM service principal. e.g. scm/HOST@REALM.COM |
-| hdds.scm.kerberos.keytab.file | The keytab file used by SCM daemon to login as its service principal. |
-| hdds.scm.http.kerberos.principal | SCM http server service principal. |
-| hdds.scm.http.kerberos.keytab | The keytab file used by SCM http server to login as its service principal. |
+
+| Property | Description |
+|----------|-------------|
+| hdds.scm.kerberos.principal | The SCM service principal. e.g. scm/_HOST@REALM.COM |
+| hdds.scm.kerberos.keytab.file | The keytab file used by SCM daemon to login as its service principal. |
+| hdds.scm.http.kerberos.principal | SCM http server service principal. |
+| hdds.scm.http.kerberos.keytab | The keytab file used by SCM http server to login as its service principal. |
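+
+For example, the principals and a combined keytab could be created with MIT
+Kerberos kadmin roughly as follows (host name, realm and keytab path are
+placeholders; adjust them for your environment):
+
+```bash
+# create the SCM service principal and the principal used by the SCM HTTP server
+kadmin.local -q "addprinc -randkey scm/scm-host.example.com@REALM.COM"
+kadmin.local -q "addprinc -randkey HTTP/scm-host.example.com@REALM.COM"
+# export both principals into a keytab file readable by the SCM service user
+kadmin.local -q "ktadd -k /etc/security/keytabs/scm.keytab scm/scm-host.example.com@REALM.COM"
+kadmin.local -q "ktadd -k /etc/security/keytabs/scm.keytab HTTP/scm-host.example.com@REALM.COM"
+```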
@@ -51,7 +51,7 @@ against Ozone S3 buckets.
* Now you can proceed to set up these secrets in the AWS CLI configuration:
-```
+```bash
aws configure set default.s3.signature_version s3v4
aws configure set aws_access_key_id ${accessId}
aws configure set aws_secret_access_key ${secret}
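+
+# Optional sanity check (illustrative; assumes the S3 gateway listens on
+# http://localhost:9878): list buckets through the gateway
+aws s3 --endpoint http://localhost:9878 ls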
diff --git a/hadoop-hdds/docs/content/security/SecuringTDE.md b/hadoop-hdds/docs/content/security/SecuringTDE.md
index 080df95492774..f110006ab7204 100644
--- a/hadoop-hdds/docs/content/security/SecuringTDE.md
+++ b/hadoop-hdds/docs/content/security/SecuringTDE.md
@@ -22,20 +22,19 @@ icon: lock
limitations under the License.
-->
-## Transparent Data Encryption
Ozone TDE setup process and usage are very similar to HDFS TDE.
The major difference is that Ozone TDE is enabled at Ozone bucket level
when a bucket is created.
### Setting up the Key Management Server
-To use TDE, clients must setup a Key Management server and provide that URI to
+To use TDE, clients must set up a Key Management Server and provide that URI to
Ozone/HDFS. Since Ozone and HDFS can use the same Key Management Server, this
- configuration can be provided via *hdfs-site.xml*.
+ configuration can be provided via *core-site.xml*.
Property| Value
-----------------------------------|-----------------------------------------
-hadoop.security.key.provider.path | KMS uri. e.g. kms://http@kms-host:9600/kms
+hadoop.security.key.provider.path | KMS uri. <br> e.g. kms://http@kms-host:9600/kms
### Using Transparent Data Encryption
If this is already configured for your cluster, then you can simply proceed.
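+
+A minimal sketch of the flow (the key and bucket names below are illustrative,
+and assume the KMS configured above is reachable):
+
+```bash
+# create a bucket encryption key in the KMS, similar to HDFS encryption zones
+hadoop key create enckey
+# create a bucket that uses this key for transparent data encryption
+ozone sh bucket create -k enckey /vol/encryptedbucket
+```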
diff --git a/hadoop-hdds/docs/content/security/SecurityAcls.md b/hadoop-hdds/docs/content/security/SecurityAcls.md
index 3aa29f95c1ab6..7cf6f4239ecb3 100644
--- a/hadoop-hdds/docs/content/security/SecurityAcls.md
+++ b/hadoop-hdds/docs/content/security/SecurityAcls.md
@@ -21,9 +21,9 @@ summary: Native ACL support provides ACL functionality without Ranger integratio
limitations under the License.
-->
-Ozone supports a set of native ACLs. These ACLs cane be used independently or
- along with Ranger. If Apache Ranger is enabled, then ACL will be checked
- first with Ranger and then Ozone's internal ACLs will be evaluated.
+Ozone supports a set of native ACLs. These ACLs can be used independently or
+along with Ranger. If Apache Ranger is enabled, then ACLs will be checked
+first with Ranger and then Ozone's internal ACLs will be evaluated.
Ozone ACLs are a superset of POSIX and S3 ACLs.
@@ -31,10 +31,10 @@ The general format of an ACL is _object_:_who_:_rights_.
Where an _object_ can be:
-1. **Volume** - An Ozone volume. e.g. /volume
-2. **Bucket** - An Ozone bucket. e.g. /volume/bucket
-3. **Key** - An object key or an object. e.g. /volume/bucket/key
-4. **Prefix** - A path prefix for a specific key. e.g. /volume/bucket/prefix1/prefix2
+1. **Volume** - An Ozone volume. e.g. _/volume_
+2. **Bucket** - An Ozone bucket. e.g. _/volume/bucket_
+3. **Key** - An object key or an object. e.g. _/volume/bucket/key_
+4. **Prefix** - A path prefix for a specific key. e.g. _/volume/bucket/prefix1/prefix2_
Where a _who_ can be:
@@ -60,26 +60,26 @@ Where a _right_ can be:
1. **Create** – This ACL provides a user the ability to create buckets in a
volume and keys in a bucket. Please note: Under Ozone, only admins can create volumes.
2. **List** – This ACL allows listing of buckets and keys. This ACL is attached
- to the volume and buckets which allow listing of the child objects. Please note: The user and admins can list the volumes owned by the user.
+ to the volume and buckets which allow listing of the child objects.
+ Please note: The user and admins can list the volumes owned by the user.
3. **Delete** – Allows the user to delete a volume, bucket or key.
4. **Read** – Allows the user to read the metadata of a Volume and Bucket and
-data stream and metadata of a key(object).
+data stream and metadata of a key.
5. **Write** - Allows the user to write the metadata of a Volume and Bucket and
-allows the user to overwrite an existing ozone key(object).
+allows the user to overwrite an existing ozone key.
6. **Read_ACL** – Allows a user to read the ACL on a specific object.
7. **Write_ACL** – Allows a user to write the ACL on a specific object.
-
Ozone Native ACL APIs
-Work in progress
+
The ACLs can be manipulated by a set of APIs supported by Ozone. The APIs
supported are:
-1. **SetAcl** – This API will take user principal, the name of the object, type
- of the object and a list of ACLs.
-
-2. **GetAcl** – This API will take the name of an ozone object and type of the
-object and will return a list of ACLs.
-3. **RemoveAcl** - It is possible that we might support an API called RemoveACL
- as a convenience API, but in reality it is just a GetACL followed by SetACL
- with an etag to avoid conflicts.
+1. **SetAcl** – This API will take the user principal, the name and type
+of the ozone object, and a list of ACLs.
+2. **GetAcl** – This API will take the name and type of the ozone object
+and will return a list of ACLs.
+3. **AddAcl** – This API will take the user principal, the name and type
+of the ozone object, and an ozone ACL to add to the object's existing ACLs.
+4. **RemoveAcl** – This API will take the name and type of the ozone object
+and the ozone ACL that has to be removed.
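+
+If the Ozone shell in your release exposes these APIs, the calls look roughly
+like the following sketch (the volume name and ACL string are illustrative):
+
+```bash
+# grant user hadoop read and write on volume /vol1
+ozone sh volume addacl --acl user:hadoop:rw /vol1
+# show the ACLs currently set on the volume
+ozone sh volume getacl /vol1
+```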
diff --git a/hadoop-hdds/docs/content/shell/BucketCommands.md b/hadoop-hdds/docs/content/shell/BucketCommands.md
index 96102199f69dc..ee14dc3e63a63 100644
--- a/hadoop-hdds/docs/content/shell/BucketCommands.md
+++ b/hadoop-hdds/docs/content/shell/BucketCommands.md
@@ -29,7 +29,7 @@ Ozone shell supports the following bucket commands.
### Create
-The bucket create command allows users to create a bucket.
+The `bucket create` command allows users to create a bucket.
***Params:***
@@ -46,7 +46,7 @@ Since no scheme was specified this command defaults to O3 (RPC) protocol.
### Delete
-The bucket delete command allows users to delete a bucket. If the
+The `bucket delete` command allows users to delete a bucket. If the
bucket is not empty then this command will fail.
***Params:***
@@ -63,7 +63,8 @@ The above command will delete _jan_ bucket if it is empty.
### Info
-The bucket info commands returns the information about the bucket.
+The `bucket info` command returns the information about the bucket.
+
***Params:***
| Arguments | Comment |
@@ -78,15 +79,15 @@ The above command will print out the information about _jan_ bucket.
### List
-The bucket list command allows users to list the buckets in a volume.
+The `bucket list` command allows users to list the buckets in a volume.
***Params:***
| Arguments | Comment |
|--------------------------------|-----------------------------------------|
-| -l, --length | Maximum number of results to return. Default: 100
-| -p, --prefix | Optional, Only buckets that match this prefix will be returned.
-| -s, --start | The listing will start from key after the start key.
+| -l, \-\-length | Maximum number of results to return. Default: 100
+| -p, \-\-prefix | Optional, Only buckets that match this prefix will be returned.
+| -s, \-\-start | The listing will start from key after the start key.
| Uri | The name of the _volume_.
{{< highlight bash >}}
@@ -94,18 +95,3 @@ ozone sh bucket list /hive
{{< /highlight >}}
This command will list all buckets on the volume _hive_.
-
-
-
-
-### path
-The bucket command to provide ozone mapping for s3 bucket (Created via aws cli)
-
-{{< highlight bash >}}
-ozone s3 path <s3Bucket>
-{{< /highlight >}}
-
-The above command will print VolumeName and the mapping created for s3Bucket.
-
-You can try out these commands from the docker instance of the [Alpha
-Cluster](runningviadocker.html).
diff --git a/hadoop-hdds/docs/content/shell/KeyCommands.md b/hadoop-hdds/docs/content/shell/KeyCommands.md
index 56bc038a55e4b..b4a38c8b1b521 100644
--- a/hadoop-hdds/docs/content/shell/KeyCommands.md
+++ b/hadoop-hdds/docs/content/shell/KeyCommands.md
@@ -34,7 +34,7 @@ Ozone shell supports the following key commands.
### Get
-The key get command downloads a key from Ozone cluster to local file system.
+The `key get` command downloads a key from Ozone cluster to local file system.
***Params:***
@@ -52,7 +52,7 @@ local file sales.orc.
### Put
-Uploads a file from the local file system to the specified bucket.
+The `key put` command uploads a file from the local file system to the specified bucket.
***Params:***
@@ -61,7 +61,7 @@ Uploads a file from the local file system to the specified bucket.
|--------------------------------|-----------------------------------------|
| Uri | The name of the key in **/volume/bucket/key** format.
| FileName | Local file to upload.
-| -r, --replication | Optional, Number of copies, ONE or THREE are the options. Picks up the default from cluster configuration.
+| -r, \-\-replication | Optional, Number of copies, ONE or THREE are the options. Picks up the default from cluster configuration.
{{< highlight bash >}}
ozone sh key put /hive/jan/corrected-sales.orc sales.orc
@@ -70,7 +70,7 @@ The above command will put the sales.orc as a new key into _/hive/jan/corrected-
### Delete
-The key delete command removes the key from the bucket.
+The `key delete` command removes the key from the bucket.
***Params:***
@@ -87,7 +87,8 @@ The above command deletes the key _/hive/jan/corrected-sales.orc_.
### Info
-The key info commands returns the information about the key.
+The `key info` command returns the information about the key.
+
***Params:***
| Arguments | Comment |
@@ -103,15 +104,15 @@ key.
### List
-The key list command allows user to list all keys in a bucket.
+The `key list` command allows users to list all keys in a bucket.
***Params:***
| Arguments | Comment |
|--------------------------------|-----------------------------------------|
-| -l, --length | Maximum number of results to return. Default: 1000
-| -p, --prefix | Optional, Only buckets that match this prefix will be returned.
-| -s, --start | The listing will start from key after the start key.
+| -l, \-\-length | Maximum number of results to return. Default: 1000
+| -p, \-\-prefix | Optional, Only buckets that match this prefix will be returned.
+| -s, \-\-start | The listing will start from key after the start key.
| Uri | The name of the _bucket_.
{{< highlight bash >}}
@@ -135,7 +136,4 @@ The `key rename` command changes the name of an existing key in the specified bu
{{< highlight bash >}}
ozone sh key rename /hive/jan sales.orc new_name.orc
{{< /highlight >}}
-The above command will rename `sales.orc` to `new_name.orc` in the bucket `/hive/jan`.
-
-You can try out these commands from the docker instance of the [Alpha
-Cluster](runningviadocker.html).
+The above command will rename _sales.orc_ to _new\_name.orc_ in the bucket _/hive/jan_.
diff --git a/hadoop-hdds/docs/content/shell/VolumeCommands.md b/hadoop-hdds/docs/content/shell/VolumeCommands.md
index 55e8b76f3ff1e..47fb9852b863e 100644
--- a/hadoop-hdds/docs/content/shell/VolumeCommands.md
+++ b/hadoop-hdds/docs/content/shell/VolumeCommands.md
@@ -30,15 +30,15 @@ Volume commands generally need administrator privileges. The ozone shell support
### Create
-The volume create command allows an administrator to create a volume and
+The `volume create` command allows an administrator to create a volume and
assign it to a user.
***Params:***
| Arguments | Comment |
|--------------------------------|-----------------------------------------|
-| -q, --quota | Optional, This argument that specifies the maximum size this volume can use in the Ozone cluster. |
-| -u, --user | Required, The name of the user who owns this volume. This user can create, buckets and keys on this volume. |
+| -q, \-\-quota | Optional, This argument specifies the maximum size this volume can use in the Ozone cluster. |
+| -u, \-\-user | Required, The name of the user who owns this volume. This user can create buckets and keys on this volume. |
| Uri | The name of the volume. |
{{< highlight bash >}}
@@ -50,7 +50,7 @@ volume has a quota of 1TB, and the owner is _bilbo_.
### Delete
-The volume delete command allows an administrator to delete a volume. If the
+The `volume delete` command allows an administrator to delete a volume. If the
volume is not empty then this command will fail.
***Params:***
@@ -68,8 +68,9 @@ inside it.
### Info
-The volume info commands returns the information about the volume including
+The `volume info` command returns the information about the volume including
quota and owner information.
+
***Params:***
| Arguments | Comment |
@@ -84,7 +85,7 @@ The above command will print out the information about hive volume.
### List
-The volume list command will list the volumes owned by a user.
+The `volume list` command will list the volumes owned by a user.
{{< highlight bash >}}
ozone sh volume list --user hadoop
@@ -100,8 +101,8 @@ The volume update command allows changing of owner and quota on a given volume.
| Arguments | Comment |
|--------------------------------|-----------------------------------------|
-| -q, --quota | Optional, This argument that specifies the maximum size this volume can use in the Ozone cluster. |
-| -u, --user | Optional, The name of the user who owns this volume. This user can create, buckets and keys on this volume. |
+| -q, \-\-quota | Optional, This argument specifies the maximum size this volume can use in the Ozone cluster. |
+| -u, \-\-user | Optional, The name of the user who owns this volume. This user can create buckets and keys on this volume. |
| Uri | The name of the volume. |
{{< highlight bash >}}
@@ -109,6 +110,3 @@ ozone sh volume update --quota=10TB /hive
{{< /highlight >}}
The above command updates the volume quota to 10TB.
-
-You can try out these commands from the docker instance of the [Alpha
-Cluster](runningviadocker.html).
diff --git a/hadoop-hdds/docs/content/start/Kubernetes.md b/hadoop-hdds/docs/content/start/Kubernetes.md
index 75037230363ef..bd73b1ea8979e 100644
--- a/hadoop-hdds/docs/content/start/Kubernetes.md
+++ b/hadoop-hdds/docs/content/start/Kubernetes.md
@@ -25,7 +25,7 @@ title: Ozone on Kubernetes
{{< /requirements >}}
-As the _apache/ozone_ docker images are available from the dockerhub the deployment process is very similar Minikube deployment. The only big difference is that we have dedicated set of k8s files for hosted clusters (for example we can use one datanode per host)
+As the _apache/ozone_ docker images are available from the dockerhub, the deployment process is very similar to the Minikube deployment. The only big difference is that we have a dedicated set of k8s files for hosted clusters (for example, we can use one datanode per host).
## Deploy to kubernetes
`kubernetes/examples` folder of the ozone distribution contains kubernetes deployment resource files for multiple use cases.
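+
+A minimal deployment sketch, assuming you picked the plain ozone example (the
+exact directory name may differ between releases):
+
+```bash
+cd kubernetes/examples/ozone
+kubectl apply -f .
+```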
diff --git a/hadoop-hdds/docs/content/start/OnPrem.md b/hadoop-hdds/docs/content/start/OnPrem.md
index 4e6490a1526be..6b806b8c0c2c7 100644
--- a/hadoop-hdds/docs/content/start/OnPrem.md
+++ b/hadoop-hdds/docs/content/start/OnPrem.md
@@ -33,7 +33,7 @@ requests blocks from SCM, to which clients can write data.
## Setting up an Ozone only cluster
-* Please untar the ozone-<version> to the directory where you are going
+* Please untar the ozone-\<version\> to the directory where you are going
to run Ozone from. We need Ozone jars on all machines in the cluster. So you
need to do this on all machines in the cluster.
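+
+For example (the tarball name depends on the release you downloaded):
+
+{{< highlight bash >}}
+tar -xzf ozone-<version>.tar.gz
+cd ozone-<version>
+{{< /highlight >}}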
@@ -152,14 +152,13 @@ ozone om --init
{{< /highlight >}}
-Once Ozone manager has created the Object Store, we are ready to run the name
-services.
+Once Ozone manager is initialized, we are ready to run the name service.
{{< highlight bash >}}
ozone --daemon start om
{{< /highlight >}}
-At this point Ozone's name services, the Ozone manager, and the block service SCM is both running.
+At this point both of Ozone's services are running: the Ozone Manager (the name service) and SCM (the block service).\
**Please note**: If SCM is not running, the
```om --init``` command will fail. SCM start will fail if on-disk data structures are missing. So please make sure you have run both the ```scm --init``` and ```om --init``` commands.
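+
+Putting the steps together, a typical first-time start-up sequence looks
+roughly like this (run each command on the host of the respective service):
+
+{{< highlight bash >}}
+ozone scm --init
+ozone --daemon start scm
+ozone om --init
+ozone --daemon start om
+{{< /highlight >}}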
diff --git a/hadoop-hdds/docs/content/start/StartFromDockerHub.md b/hadoop-hdds/docs/content/start/StartFromDockerHub.md
index 7446fbdcf8048..74a3b951da4ba 100644
--- a/hadoop-hdds/docs/content/start/StartFromDockerHub.md
+++ b/hadoop-hdds/docs/content/start/StartFromDockerHub.md
@@ -30,7 +30,7 @@ The easiest way to start up an all-in-one ozone container is to use the latest
docker image from docker hub:
```bash
-docker run -P 9878:9878 -P 9876:9876 apache/ozone
+docker run -p 9878:9878 -p 9876:9876 apache/ozone
```
This command will pull down the ozone image from docker hub and start all
ozone services in a single container.
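+
+As a quick check that the container came up (a sketch; uses the ports mapped
+above), you can probe the SCM web UI:
+
+```bash
+curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9876/
+```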
@@ -40,7 +40,7 @@ Container Manager) one data node and the S3 compatible REST server
# Local multi-container cluster
-If you would like to use a more realistic pseud-cluster where each components
+If you would like to use a more realistic pseudo-cluster where each component
runs in its own container, you can start it with a docker-compose file.
We have shipped a docker-compose file and an environment file as part of the
@@ -65,7 +65,7 @@ If you need multiple datanodes, we can just scale it up:
```
# Running S3 Clients
-Once the cluster is booted up and ready, you can verify it is running by
+Once the cluster is booted up and ready, you can verify its status by
connecting to the SCM's UI at [http://localhost:9876](http://localhost:9876).
The S3 gateway endpoint will be exposed at port 9878. You can use Ozone's S3
@@ -103,7 +103,6 @@ our bucket.
aws s3 --endpoint http://localhost:9878 ls s3://bucket1/testfile
```
-.
You can also check the internal
bucket browser supported by the Ozone S3 interface by clicking on the link below.