Merged
2 changes: 1 addition & 1 deletion hadoop-hdds/docs/content/start/OnPrem.md
```diff
@@ -165,7 +165,7 @@ ozone om --init
 start-ozone.sh
 {{< /highlight >}}
 
-This assumes that you have set up the slaves file correctly and ssh
+This assumes that you have set up the `workers` file correctly and ssh
 configuration that allows ssh-ing to all data nodes. This is the same as the
 HDFS configuration, so please refer to HDFS documentation on how to set this
 up.
```
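The `workers` file mentioned above is a plain list of datanode hostnames, one per line, and each host must be reachable over passwordless ssh. A minimal sketch (the file path and hostnames here are illustrative, not from this PR):

```shell
#!/bin/sh
# Hypothetical workers file: one datanode hostname per line.
cat > /tmp/workers.example <<'EOF'
datanode1.example.com
datanode2.example.com
datanode3.example.com
EOF

# start-ozone.sh relies on passwordless ssh to every listed host; a dry-run
# connectivity check would loop over the file like this (echo instead of
# actually ssh-ing, since these hosts are made up):
while read -r host; do
  echo "would check: ssh -o BatchMode=yes ${host} hostname"
done < /tmp/workers.example
```

In a real cluster the file lives under the Hadoop configuration directory, matching the HDFS convention the docs point to.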
2 changes: 1 addition & 1 deletion hadoop-hdds/docs/content/start/OnPrem.zh.md
```diff
@@ -151,4 +151,4 @@ ozone om --init
 start-ozone.sh
 {{< /highlight >}}
 
-This assumes that the slaves file has been written correctly and that ssh to each Datanode has been configured; this is the same as the HDFS configuration, so please refer to the HDFS documentation for details.
+This assumes that the `workers` file has been written correctly and that ssh to each Datanode has been configured; this is the same as the HDFS configuration, so please refer to the HDFS documentation for details.
```
2 changes: 1 addition & 1 deletion hadoop-ozone/dist/src/shell/hdds/hadoop-daemons.sh
```diff
@@ -16,7 +16,7 @@
 # limitations under the License.
 
 
-# Run a Hadoop command on all slave hosts.
+# Run a Hadoop command on all worker hosts.
 
 function hadoop_usage
 {
```
6 changes: 3 additions & 3 deletions hadoop-ozone/dist/src/shell/hdds/hadoop-functions.sh
```diff
@@ -999,7 +999,7 @@ function hadoop_connect_to_hosts
   # shellcheck disable=SC2124
   local params="$@"
   local worker_file
-  local tmpslvnames
+  local tmp_worker_names
 
   #
   # ssh (or whatever) to a host
@@ -1030,10 +1030,10 @@ function hadoop_connect_to_hosts
   else
     # no spaces allowed in the pdsh arg host list
     # shellcheck disable=SC2086
-    tmpslvnames=$(echo ${HADOOP_WORKER_NAMES} | tr -s ' ' ,)
+    tmp_worker_names=$(echo ${HADOOP_WORKER_NAMES} | tr -s ' ' ,)
     PDSH_SSH_ARGS_APPEND="${HADOOP_SSH_OPTS}" pdsh \
       -f "${HADOOP_SSH_PARALLEL}" \
-      -w "${tmpslvnames}" $"${@// /\\ }" 2>&1
+      -w "${tmp_worker_names}" $"${@// /\\ }" 2>&1
   fi
 else
   if [[ -z "${HADOOP_WORKER_NAMES}" ]]; then
```
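The renamed `tmp_worker_names` variable exists because `pdsh -w` takes a comma-separated host list, while `HADOOP_WORKER_NAMES` is space-separated; `tr -s ' ' ,` squeezes runs of spaces and translates them to commas. A standalone sketch of just that conversion (host names are illustrative):

```shell
#!/bin/sh
# Space-separated worker list, as hadoop_connect_to_hosts receives it.
# The doubled space is deliberate: tr -s squeezes repeats into one comma.
HADOOP_WORKER_NAMES="node1 node2  node3"

# shellcheck disable=SC2086
tmp_worker_names=$(echo ${HADOOP_WORKER_NAMES} | tr -s ' ' ,)

echo "${tmp_worker_names}"   # prints node1,node2,node3
```

The unquoted `echo` plus `tr -s` together guarantee no empty entries reach the `pdsh -w` argument, which would otherwise reject the list.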