Merged
5 changes: 3 additions & 2 deletions hadoop-hdds/docs/content/_index.md
@@ -25,10 +25,10 @@ weight: -10

{{<figure class="ozone-usage" src="/ozone-usage.png" width="60%">}}

- *_Ozone is a scalable, redundant, and distributed object store for Hadoop. <p>
+ *_Ozone is a scalable, redundant, and distributed object store for Big data workloads. <p>
Apart from scaling to billions of objects of varying sizes,
Ozone can function effectively in containerized environments
- like Kubernetes._* <p>
+ like Kubernetes._*

Applications like Apache Spark, Hive and YARN, work without any modifications when using Ozone. Ozone comes with a [Java client library]({{<ref "JavaApi.md">}}), [S3 protocol support]({{< ref "S3.md" >}}), and a [command line interface]({{< ref "Cli.md" >}}) which makes it easy to use Ozone.

@@ -38,3 +38,4 @@ Ozone consists of volumes, buckets, and keys:
* Buckets are similar to directories. A bucket can contain any number of keys, but buckets cannot contain other buckets.
* Keys are similar to files.

+ Check out the [Getting Started](start/) guide to dive right in and learn how to run Ozone on your machine or in the cloud.
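The volume/bucket/key hierarchy described in this file can be sketched with the Ozone shell. This is an illustrative sketch, not part of the changed docs: the names `vol1`, `bucket1`, `key1` and the local file are placeholders, and the commands assume a running Ozone cluster reachable from this host.

```bash
# Walk down the Ozone namespace: volumes contain buckets, buckets contain keys.
VOLUME=vol1
BUCKET=bucket1
KEY=key1

echo "hello ozone" > notes.txt

ozone sh volume create "/$VOLUME"                     # top level: a volume
ozone sh bucket create "/$VOLUME/$BUCKET"             # buckets nest inside volumes
ozone sh key put "/$VOLUME/$BUCKET/$KEY" notes.txt    # keys hold the file data
ozone sh key list "/$VOLUME/$BUCKET"                  # show keys in the bucket
```

Note that, unlike S3 buckets, an Ozone bucket cannot contain another bucket, so the namespace is always exactly three levels deep.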
7 changes: 7 additions & 0 deletions hadoop-hdds/docs/content/start/OnPrem.md
@@ -170,3 +170,10 @@ This assumes that you have set up the `workers` file correctly and ssh
configuration that allows ssh-ing to all data nodes. This is the same as the
HDFS configuration, so please refer to HDFS documentation on how to set this
up.

+ ## See Also
+
+ * [Overview](../concept/Overview.md)
+ * [Ozone Manager](../concept/OzoneManager.md)
+ * [Storage Container Manager](../concept/StorageContainerManager.md)
+ * [Datanodes](../concept/Datanodes.md)
5 changes: 5 additions & 0 deletions hadoop-hdds/docs/content/start/StartFromDockerHub.md
@@ -161,3 +161,8 @@ our bucket.
```bash
aws s3 --endpoint http://localhost:9878 ls s3://bucket1/testfile
```

+ For more information on using the S3 protocol with Ozone, S3 developers may be interested in the following pages:
+ * [S3 Protocol](../interface/S3.md)
+ * [Securing S3](../security/SecuringS3.md)
+ * [Access Ozone using Boto3 (Docker Quickstart)](../recipe/Boto3Tutorial.md)
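Building on the `aws s3 ... ls` example above, a few more S3 operations work the same way against the gateway. This is a sketch under the same assumptions as the quickstart: the S3 gateway is listening on `http://localhost:9878` and the `bucket1` bucket already exists; `localfile.txt` is a placeholder name.

```bash
# Round-trip a file through Ozone's S3 gateway with the AWS CLI.
ENDPOINT=http://localhost:9878

echo "hello ozone" > localfile.txt
aws s3 --endpoint "$ENDPOINT" cp localfile.txt s3://bucket1/localfile.txt   # upload
aws s3 --endpoint "$ENDPOINT" ls s3://bucket1/                              # list the bucket
aws s3 --endpoint "$ENDPOINT" cp s3://bucket1/localfile.txt downloaded.txt  # download
aws s3 --endpoint "$ENDPOINT" rm s3://bucket1/localfile.txt                 # delete
```

Because only the `--endpoint` flag differs from ordinary AWS usage, existing S3 tooling and scripts generally work against Ozone unchanged.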
2 changes: 1 addition & 1 deletion hadoop-hdds/docs/content/start/_index.md
@@ -56,7 +56,7 @@ You can try out Ozone using docker hub without downloading the official release.
Apache Ozone can also be run from the official release packages. Along with the official source releases, we also release a set of convenience binary packages. It is easy to run these binaries in different configurations.
<br />
{{<card title="Ozone on a physical cluster" link="start/OnPrem" link-text="On-Prem Ozone Cluster" image="start/hadoop.png">}}
- Ozone is designed to work concurrently with HDFS. The physical cluster instructions explain each component of Ozone and how to deploy with maximum control.
+ Ozone is optimized for physical hosts. The physical cluster instructions explain each component of Ozone and how to deploy with maximum control.
{{</card>}}

{{<card title="Ozone on K8s" link="start/Kubernetes" link-text="Kubernetes" image="start/k8s.png">}}