* AWS CLI (optional)
{{< /requirements >}}

# Local multi-container cluster

## Obtain the Docker Compose Configuration
First, obtain Ozone's sample Docker Compose configuration:

```bash
# Download the latest Docker Compose configuration file
curl -O https://raw.githubusercontent.com/apache/ozone-docker/refs/heads/latest/docker-compose.yaml
```
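Before starting the cluster you can sanity-check the download; the service names in this sketch match the sample file at the time of writing and may differ in newer versions:

```bash
# List the service definitions found in the downloaded Compose file.
grep -E '^ *(om|scm|datanode|recon|s3g):' docker-compose.yaml || echo "no known services found"
```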

## Start the Cluster
Start your Ozone cluster with three Datanodes using the following command:

```bash
docker compose up -d --scale datanode=3
```

This command will:

- Automatically pull required images from Docker Hub
- Create a multi-node cluster with the core Ozone services
- Start all components in detached mode
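For orientation, the Compose file defines one service per Ozone component. An illustrative sketch of its shape (not the actual file; always use the downloaded `docker-compose.yaml` as-is):

```yaml
# Illustrative sketch only; the real file also carries environment
# configuration, port mappings, and an x-common-config anchor.
services:
  om:        # Ozone Manager
    image: apache/ozone
  scm:       # Storage Container Manager
    image: apache/ozone
  datanode:  # scaled to 3 with --scale datanode=3
    image: apache/ozone
  recon:     # monitoring and management UI
    image: apache/ozone
  s3g:       # S3-compatible gateway
    image: apache/ozone
```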

## Verify the Deployment
Check the status of your Ozone cluster components:

```bash
docker compose ps
```

You should see output similar to this:

```
docker-datanode-1 apache/ozone:2.0.0 "/usr/local/bin/dumb…" datanode 14 seconds ago Up 13 seconds 0.0.0.0:32958->9864/tcp, :::32958->9864/tcp
docker-datanode-2 apache/ozone:2.0.0 "/usr/local/bin/dumb…" datanode 14 seconds ago Up 13 seconds 0.0.0.0:32957->9864/tcp, :::32957->9864/tcp
docker-datanode-3 apache/ozone:2.0.0 "/usr/local/bin/dumb…" datanode 14 seconds ago Up 12 seconds 0.0.0.0:32959->9864/tcp, :::32959->9864/tcp
docker-om-1 apache/ozone:2.0.0 "/usr/local/bin/dumb…" om 14 seconds ago Up 13 seconds 0.0.0.0:9874->9874/tcp, :::9874->9874/tcp
docker-recon-1 apache/ozone:2.0.0 "/usr/local/bin/dumb…" recon 14 seconds ago Up 13 seconds 0.0.0.0:9888->9888/tcp, :::9888->9888/tcp
docker-s3g-1 apache/ozone:2.0.0 "/usr/local/bin/dumb…" s3g 14 seconds ago Up 13 seconds 0.0.0.0:9878->9878/tcp, :::9878->9878/tcp
docker-scm-1 apache/ozone:2.0.0 "/usr/local/bin/dumb…" scm 14 seconds ago Up 13 seconds 0.0.0.0:9876->9876/tcp, :::9876->9876/tcp
```
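To check the same thing non-interactively, for example in a startup script, you can count the running `datanode` services (a sketch, assuming the cluster from this guide):

```bash
# Count compose services named "datanode" that are currently running.
# ("|| true" keeps the snippet safe to run even when no cluster is up.)
datanodes=$(docker compose ps --services --filter "status=running" 2>/dev/null | grep -c "datanode" || true)
echo "Running datanodes: ${datanodes}"
```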
## Check the Ozone version

```bash
docker compose exec om ozone version
```

Once the cluster is booted up and ready, you can verify its status by
connecting to the SCM's UI at [http://localhost:9876](http://localhost:9876).

![SCM UI Screenshot](ozone-scm.png)

Next, navigate to the Recon server home page at [http://localhost:9888](http://localhost:9888); Recon provides monitoring and management capabilities for the cluster.

![Recon UI Screenshot](ozone-recon.png)

## Other Commonly Used Commands

- **View logs from the OM:**
```bash
docker compose logs om
```
- **Stop and remove all containers:**
```bash
docker compose down
```
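Two more sketches that are often handy; the `|| true` guards simply keep them copy-paste safe when the cluster is down, and `ozone admin datanode list` assumes a healthy SCM:

```bash
# Show recent log output from every service.
docker compose logs --tail=50 || true

# List the DataNodes registered with SCM.
docker compose exec scm ozone admin datanode list || true
```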

## Configuration
You can customize your Ozone deployment by modifying the configuration parameters in the `docker-compose.yaml` file:

- **Common Configurations:** Located under the `x-common-config` section
- **Service-Specific Settings:** Found under the `environment` section of individual services

As an example, to update the port on which Recon listens, append the following configuration:

```yaml
x-common-config:
...
OZONE-SITE.XML_ozone.recon.http-address: 0.0.0.0:9090
```
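The `OZONE-SITE.XML_` prefix is a convention the Ozone images use: the container entrypoint turns such environment variables into entries of `ozone-site.xml` inside each container. The line above would therefore correspond to a property like this (a sketch of the generated file):

```xml
<property>
  <name>ozone.recon.http-address</name>
  <value>0.0.0.0:9090</value>
</property>
```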

**Note:** If you change the port Recon listens on (e.g., to 9090), you must also update the `ports` mapping in the `recon` service definition within `docker-compose.yaml`. For example, change:
```yaml
ports:
- "9888:9888"
```
to:
```yaml
ports:
- "9090:9090"
```
# Running S3 Clients

The S3 gateway endpoint will be exposed at port 9878. You can use Ozone's S3
support as if you are working against the real S3. S3 buckets are stored under
the `/s3v` volume.
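These buckets are also visible to Ozone's native shell; once a bucket such as `bucket1` (created below) exists, you can inspect it from inside a container (a sketch, assuming the cluster from this guide is running):

```bash
# Inspect an S3-created bucket with the native Ozone shell.
# ("|| true" keeps this safe to run when the cluster is down.)
docker compose exec om ozone sh bucket info /s3v/bucket1 || true
```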

First, let's configure an AWS access key and secret key. Because the cluster is not secured,
any arbitrary access key and secret key will work. For example:

```bash
export AWS_ACCESS_KEY_ID=testuser/[email protected]
export AWS_SECRET_ACCESS_KEY=c261b6ecabf7d37d5f9ded654b1c724adac9bd9f13e247a235e567e8296d2999
```

Here is how you create buckets from the command line:

```bash
aws s3api --endpoint http://localhost:9878 create-bucket --bucket bucket1
```

Now let us put a simple file into the S3 Bucket hosted by Ozone. We will
start by creating a temporary file that we can upload to Ozone via S3 support.
```bash
ls -1 > /tmp/testfile
```
This command creates a temporary file that
we can upload to Ozone. The next command actually uploads to Ozone's S3
bucket using the standard aws s3 command line interface.

```bash
aws s3 --endpoint http://localhost:9878 cp --storage-class REDUCED_REDUNDANCY /tmp/testfile s3://bucket1/testfile
```
<div class="alert alert-info" role="alert">
Note: Add --storage-class REDUCED_REDUNDANCY if only one DataNode is started.
Since this example starts three DataNodes, this parameter is optional.
</div>
We can now verify that file got uploaded by running the list command against
our bucket.
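A sketch of that list command, using the same endpoint and bucket as above:

```bash
# "testfile" should appear in the listing if the upload succeeded.
# ("|| true" keeps this safe to run when the cluster is down.)
aws s3 --endpoint http://localhost:9878 ls s3://bucket1/ || true
```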
