
Conversation

@szetszwo (Contributor) commented Oct 28, 2021

What changes were proposed in this pull request?

RATIS-1406 added a new API for providing an Executor to StateMachine.DataStream. In this JIRA, we use this new API to provide an Executor for each LocalStream in ContainerStateMachine.
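To illustrate the shape of the change, here is a minimal, self-contained sketch of the RATIS-1406 pattern: a `DataStream` interface with a default `getExecutor()` method, and a `LocalStream`-style implementation that supplies a per-stream executor. The names mirror the Ratis/Ozone classes but this is an illustrative model, not the actual API.

```java
import java.nio.channels.WritableByteChannel;
import java.util.concurrent.Executor;

// Simplified model of the RATIS-1406 API: a stream may expose an Executor,
// and the server runs that stream's writes on it.
interface DataStream {
  WritableByteChannel getWritableByteChannel();

  // New API: an optional per-stream executor; null means "use the default".
  default Executor getExecutor() {
    return null;
  }
}

// LocalStream-style implementation that carries its own executor.
class LocalStream implements DataStream {
  private final WritableByteChannel channel;
  private final Executor executor;

  LocalStream(WritableByteChannel channel, Executor executor) {
    this.channel = channel;
    this.executor = executor;
  }

  @Override
  public WritableByteChannel getWritableByteChannel() {
    return channel;
  }

  @Override
  public Executor getExecutor() {
    return executor;
  }
}
```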

What is the link to the Apache JIRA?

https://issues.apache.org/jira/browse/HDDS-5763

How was this patch tested?

No new tests are needed.

@captainzmc captainzmc self-requested a review October 29, 2021 02:34
@captainzmc (Member) left a comment

Thanks @szetszwo. Overall LGTM, with only a few minor comments. Also, there is a checkstyle problem.

this.clientPoolSize = clientPoolSize;
}

@Config(key = "datastream.async.write.thread.pool.size",
@captainzmc (Member) commented:

In the previous implementation we already added datastream.write.threads, so we can simply reuse the existing key.

conf.getObject(DatanodeRatisServerConfig.class)
.getAsyncWriteThreadPoolPoolSize();
RaftServerConfigKeys.DataStream.setAsyncWriteThreadPoolSize(properties,
asyncWriteThreadPoolPoolSize);
@captainzmc (Member) commented:

This property is already set on line 245, so it does not need to be set here.

@szetszwo (Contributor, Author) replied:

Oops, you are right. We have already set it.

@szetszwo szetszwo changed the title HDDS-5763. Set raft.server.data-stream.async.write.thread.pool.size conf in datanode. HDDS-5763. Provide an Executor for each LocalStream in ContainerStateMachine Oct 29, 2021
@szetszwo (Contributor, Author) commented:
@captainzmc , thanks a lot for reviewing this and pointing out that raft.server.data-stream.async.write.thread.pool.size has already been set in XceiverServerRatis.

I have just updated the JIRA and this pull request, and pushed a new commit.

Paths.get(response.getMessage()));
final ExecutorService chunkExecutor = requestProto.hasWriteChunk() ?
getChunkExecutor(requestProto.getWriteChunk()) : null;
return new LocalStream(channel, chunkExecutor);
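The snippet above picks a chunk executor only for write-chunk requests. As a hypothetical sketch (the real `ContainerStateMachine.getChunkExecutor` may differ), one simple way to pick a per-stream executor from a fixed list is to hash the request's container id, so all writes for the same container land on the same executor:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;

class ChunkExecutorChooser {
  // Hypothetical selection logic: map a container id onto one of the
  // pre-created chunk executors. Keeping the mapping stable per container
  // serializes that container's writes on a single executor.
  static ExecutorService chooseChunkExecutor(
      long containerId, List<ExecutorService> chunkExecutors) {
    final int i = (int) (containerId % chunkExecutors.size());
    return chunkExecutors.get(i);
  }
}
```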
@captainzmc (Member) commented Nov 1, 2021:

Hi @szetszwo, the size of the chunkExecutors is determined by dfs.container.ratis.num.write.chunk.threads.per.volume. So I'm wondering if we should delete datastream.write.threads from DatanodeRatisServerConfig. Now that we pass in a chunkExecutor every time, this configuration seems to be unused.

@szetszwo (Contributor, Author) replied:

@captainzmc , you are right. Let's remove it.

@szetszwo (Contributor, Author) commented Nov 1, 2021

@captainzmc , I have just pushed a new commit. Please take a look.

BTW, I filed https://issues.apache.org/jira/browse/RATIS-1419 for changing the fixed thread pools in DataStreamManagement to cached thread pools, in order to reduce resource usage when the executors are idle.

@captainzmc (Member) commented:

> BTW, I filed https://issues.apache.org/jira/browse/RATIS-1419 for changing the fixed thread pools in DataStreamManagement to cached thread pools, in order to reduce resource usage when the executors are idle.

Good idea. Currently, ContainerStateMachine also uses newFixedThreadPool; perhaps we can replace it with newCachedThreadPool in RATIS-1419.
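The trade-off discussed here comes down to the standard `Executors` factory methods: a fixed pool keeps all of its threads alive even when idle, while a cached pool creates threads on demand and reclaims them after 60 seconds of idleness. A minimal sketch of the two pool types (the `PoolComparison` class name is illustrative):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

class PoolComparison {
  // Fixed pool: exactly n core threads, which stay alive even when the
  // pool has no work to do.
  static ThreadPoolExecutor fixedPool(int n) {
    return (ThreadPoolExecutor) Executors.newFixedThreadPool(n);
  }

  // Cached pool: zero core threads; worker threads are created on demand
  // and terminated after 60 seconds of idleness, so an idle pool holds no
  // threads. This is the behavior RATIS-1419 proposes to switch to.
  static ThreadPoolExecutor cachedPool() {
    return (ThreadPoolExecutor) Executors.newCachedThreadPool();
  }
}
```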

@captainzmc (Member) left a comment

+1, the change looks good.
Ozone's CI has been very unstable recently. I looked at the failed CI run, and the failures have nothing to do with this PR. Let's merge this PR first.

@captainzmc captainzmc merged commit 871f2cd into HDDS-4454 Nov 1, 2021
szetszwo added a commit to szetszwo/ozone that referenced this pull request May 6, 2022
captainzmc pushed a commit to captainzmc/hadoop-ozone that referenced this pull request Jul 4, 2022
szetszwo added a commit that referenced this pull request Oct 25, 2022
…Machine (#2782)

(cherry picked from commit e186bfc)
(cherry picked from commit 311b36652697155b052cd9781fd1590d22256660)
szetszwo added a commit that referenced this pull request Nov 7, 2022
…Machine (#2782)

(cherry picked from commit e186bfc)
(cherry picked from commit 311b36652697155b052cd9781fd1590d22256660)
(cherry picked from commit 7cbb3fa)
szetszwo added a commit that referenced this pull request Dec 1, 2022
szetszwo added a commit that referenced this pull request Dec 16, 2022
nishitpatira pushed a commit to nishitpatira/ozone that referenced this pull request Dec 16, 2022