Conversation

@umamaheswararao
Contributor

What changes were proposed in this pull request?

This patch proposes to write empty putBlocks on the non-writing DNs in the case of padding.

What is the link to the Apache JIRA

https://issues.apache.org/jira/browse/HDDS-6794

How was this patch tested?

Adding tests.
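
For context, a rough sketch of the partial-stripe case this patch targets is shown below. Everything in it (the class name, codec parameters, and decision logic) is illustrative only and not the actual Ozone client code; it just shows why, for a small key, some data nodes of the EC group would otherwise receive neither a chunk nor a block entry.

```java
/**
 * Illustrative only: which nodes of an EC group receive chunks for a
 * partial single stripe. Not the real Ozone EC client code.
 */
public final class PartialStripeSketch {

  public static void main(String[] args) {
    int dataCount = 3;                 // e.g. an rs-3-2-1024k layout (assumed)
    int parityCount = 2;
    long chunkSize = 1024 * 1024L;
    long keyLength = 1024 * 1024L;     // a 1 MB key fills only one data chunk

    // Data nodes that receive real chunk data for this partial stripe.
    int writingDataNodes = (int) Math.min(dataCount,
        (keyLength + chunkSize - 1) / chunkSize);

    for (int replicaIndex = 1; replicaIndex <= dataCount + parityCount; replicaIndex++) {
      boolean receivesChunks = replicaIndex <= writingDataNodes   // real data chunk
          || replicaIndex > dataCount;                            // parity of the padded stripe
      // Without this patch, nodes with receivesChunks == false never see a
      // putBlock, so they end up without the block (and possibly the container).
      System.out.printf("replicaIndex %d -> %s%n", replicaIndex,
          receivesChunks ? "WriteChunk + PutBlock" : "empty PutBlock (proposed here)");
    }
  }
}
```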

@umamaheswararao umamaheswararao marked this pull request as draft June 20, 2022 16:08
@guihecheng
Contributor

Hi @umamaheswararao, any updates on this one?
Also, I have an idea: could we create the empty container entity on the DN on the CLOSE of EC pipelines?

@umamaheswararao
Contributor Author

umamaheswararao commented Jun 28, 2022

@guihecheng, I have discovered several issues with this, so I have set it aside for now while I focus on other prioritized tasks.

An empty putBlock alone will not work here:

  1. The container may not exist. PutBlock does not create the container when it is absent; containers are usually created on the first WriteChunk. Since there is no WriteChunk in this case, the empty putBlocks will simply fail.
  2. The last putBlock with the close flag set to true will fail even when the container exists, because the chunk file may not exist. The close flag forces the content to be flushed to the OS file system on the DN.

So the best option may be for the client to create the containers when it first initializes the streams.
The problem I found is that the containers may already exist, and the existing createContainer API simply fails when the container is already present. So we may need a createContainer API that does not fail when the container exists; maybe an additional flag on the createContainer API?
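
A minimal sketch of the idempotent-create idea mentioned above, i.e. treating "container already exists" as success rather than a failure. The interface, enum, and helper below are hypothetical and only illustrate the pattern; they are not the existing Ozone createContainer API.

```java
import java.io.IOException;

/** Hypothetical sketch of an idempotent container create; not the real Ozone client API. */
public final class CreateContainerSketch {

  enum ResultCode { SUCCESS, CONTAINER_EXISTS, IO_ERROR }

  /** Stand-in for whatever client actually issues the createContainer command. */
  interface ContainerClient {
    ResultCode createContainer(long containerId);
  }

  /**
   * Create the container on the datanode if needed. An already existing
   * container is treated as success, so the call stays safe to repeat
   * (e.g. when another stripe or a retry created the container first).
   */
  static void createContainerIfNotExists(ContainerClient client, long containerId)
      throws IOException {
    ResultCode rc = client.createContainer(containerId);
    if (rc == ResultCode.SUCCESS || rc == ResultCode.CONTAINER_EXISTS) {
      return;  // idempotent: both outcomes leave the container in place
    }
    throw new IOException("createContainer failed for " + containerId + ": " + rc);
  }
}
```

With such a helper (or an equivalent flag on the server side), the client could call it for every node of the block group when initializing the streams, without worrying about which node already has the container.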

With offline recovery, we always create the container on the target nodes first. So even when no blocks need to be recovered, we may create the container; the first recovery task will solve the problem, and from then onward we will have empty containers, I guess. The only thing we need to make sure of is that empty containers do not get deleted by the RM in the EC case. With that, I felt this could be given low priority until the RM work is done.
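
As a rough illustration of the "empty EC containers should not be deleted" point, a guard like the hypothetical one below could sit in front of any empty-container cleanup; the types and method are made up for illustration and are not the actual ReplicationManager code.

```java
/** Hypothetical cleanup guard; not the actual ReplicationManager code. */
final class EmptyContainerCleanupSketch {

  enum ReplicationType { RATIS, EC }

  static boolean isDeletionCandidate(ReplicationType type, long keyCount, boolean closed) {
    if (!closed || keyCount > 0) {
      return false;                 // only closed, empty containers are candidates at all
    }
    // Empty EC container replicas can be legitimate: padding-only nodes of a
    // partial stripe hold no blocks, so they must not be cleaned up.
    return type != ReplicationType.EC;
  }
}
```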

Doing it as part of close or at the start of the input stream should not make any difference; in both cases, createContainer will fail if the container already exists.

Please note that the current code is just an experimental change, not for commit or review. Feel free to suggest alternatives, or take this up if you have other thoughts that may be simpler and lower risk.

@guihecheng
Contributor


Thanks for the info. This is really a problem that needs more discussion: since the EC on-disk container replica is different from the Ratis-replicated container replica, some old assumptions (e.g. creating the container replica on the first WriteChunk) may no longer apply.

I should say that RM won't be able to trigger offline recovery for a CLOSED tiny container with possibly only one partial stripe, since it doesn't have enough container replica reports gathered. So we need to handle this problem specially on the write path or somewhere else. The problem is likely to show up when doing offline recovery tests with small files (e.g. 1 MB) only; it may not be easy to reveal on real mixed workloads, but it is still possible.

Creating container replicas on init of the BlockStreams is a workable idea, I think, but it may have a performance impact. Maybe we could instead handle this during the OPEN/CLOSE of the container/pipeline on the SCM side, but that also has problems: we don't have a synchronous view of container replica creation, and we can't wait for the creations.

Let's dig into this more later.
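
To make the replica-report point above concrete, here is a toy sketch of counting missing EC replica indexes from whatever reports have been gathered. All names are hypothetical; this is not the real ReplicationManager code.

```java
import java.util.Set;

/** Hypothetical sketch of EC replica-index accounting; not the real ReplicationManager code. */
final class EcReplicaReportSketch {

  /**
   * @param reportedIndexes replica indexes (1..d+p) for which some DN has reported a replica
   * @param dataCount       d of the EC codec
   * @param parityCount     p of the EC codec
   * @return the number of replica indexes missing according to the reports gathered so far
   */
  static int missingIndexes(Set<Integer> reportedIndexes, int dataCount, int parityCount) {
    int missing = 0;
    for (int index = 1; index <= dataCount + parityCount; index++) {
      if (!reportedIndexes.contains(index)) {
        missing++;
      }
    }
    // If the non-writing nodes of a partial stripe never created a replica at all,
    // those indexes are never reported, so the replication manager lacks the reports
    // it would need to drive offline recovery for them -- hence creating the block
    // entry on those nodes (this PR) or handling empty EC containers specially.
    return missing;
  }
}
```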

@umamaheswararao umamaheswararao marked this pull request as ready for review July 1, 2022 04:21
@umamaheswararao umamaheswararao marked this pull request as draft July 1, 2022 14:35
@umamaheswararao umamaheswararao marked this pull request as ready for review July 1, 2022 14:35
@kaijchen
Member

kaijchen commented Jul 1, 2022

Thanks for working on this @umamaheswararao. Actually, the normal EC write path doesn't create blocks on the non-writing nodes; that's why TestContainerCommandsEC.testListBlock failed. You can remove the failing assertion if this behavior needs to be changed.

@kaijchen
Member

kaijchen commented Jul 1, 2022

> I should say that RM won't be able to trigger offline recovery for a CLOSED tiny container with possibly only one partial stripe, since it doesn't have enough container replica reports gathered. So we need to handle this problem specially on the write path or somewhere else. The problem is likely to show up when doing offline recovery tests with small files (e.g. 1 MB) only; it may not be easy to reveal on real mixed workloads, but it is still possible.

This is indeed a problem. I would suggest allowing empty EC containers and generalizing the problem (manage EC containers in groups, not as individuals).


@JacksonYao287 JacksonYao287 left a comment


Thanks @umamaheswararao for the patch, very nice work, LGTM!
I believe this patch can solve the problem caused by the first partial stripe of an EC container!

```java
DatanodeBlockID.Builder blkIDBuilder =
    DatanodeBlockID.newBuilder().setContainerID(blockID.getContainerID())
        .setLocalID(blockID.getLocalID())
        .setBlockCommitSequenceId(blockID.getBlockCommitSequenceId());
```
@JacksonYao287
Contributor


blockCommitSequenceId is used for handling QUASI_CLOSED Ratis containers, which have a non-determined state, so I don't think it has any use for EC containers. Maybe we can remove it for the EC case?

@umamaheswararao
Contributor Author


We have not really thought much about BCSID. Currently we are not using it, but we need to think about whether it could benefit other scenarios we have not covered yet, so I think it may be too early to discard it. We can eventually ignore it later if we never find any use for it; I suggest we set it since it is available now. Also, we certainly don't want to ignore it in BlockInputStream, as that is the non-EC code flow too.

@umamaheswararao
Contributor Author

Thanks a lot @JacksonYao287 for the reviews!

@umamaheswararao
Contributor Author

Since I got a +1, I am going ahead to commit this. @nandakumar131 @sodonnel @guihecheng @adoroszlai, let me know if you have any concerns about this approach or anything we missed with respect to SCM HA, etc. Thanks

@umamaheswararao umamaheswararao merged commit ab923a3 into apache:master Jul 7, 2022
errose28 added a commit to errose28/ozone that referenced this pull request Jul 12, 2022
* master: (46 commits)
  HDDS-6901. Configure HDDS volume reserved as percentage of the volume space. (apache#3532)
  HDDS-6978. EC: Cleanup RECOVERING container on DN restarts (apache#3585)
  HDDS-6982. EC: Attempt to cleanup the RECOVERING container when reconstruction failed at coordinator. (apache#3583)
  HDDS-6968. Addendum: [Multi-Tenant] Fix USER_MISMATCH error even on correct user. (apache#3578)
  HDDS-6794. EC: Analyze and add putBlock even on non writing node in the case of partial single stripe. (apache#3514)
  HDDS-6900. Propagate TimeoutException for all SCM HA Ratis calls. (apache#3564)
  HDDS-6938. handle NPE when removing prefixAcl (apache#3568)
  HDDS-6960. EC: Implement the Over-replication Handler (apache#3572)
  HDDS-6979. Remove unused plexus dependency declaration (apache#3579)
  HDDS-6957. EC: ReplicationManager - priortise under replicated containers (apache#3574)
  HDDS-6723. Close Rocks objects properly in OzoneManager (apache#3400)
  HDDS-6942. Ozone Buckets/Objects created via S3 should not allow group access (apache#3553)
  HDDS-6965. Increase timeout for basic check (apache#3563)
  HDDS-6969. Add link to compose directory in smoketest README (apache#3567)
  HDDS-6970. EC: Ensure DatanodeAdminMonitor can handle EC containers during decommission (apache#3573)
  HDDS-6977. EC: Remove references to ContainerReplicaPendingOps in TestECContainerReplicaCount (apache#3575)
  HDDS-6217. Cleanup XceiverClientGrpc TODOs, and document how the client works and should be used. (apache#3012)
  HDDS-6773. Cleanup TestRDBTableStore (apache#3434) - fix checkstyle
  HDDS-6773. Cleanup TestRDBTableStore (apache#3434)
  HDDS-6676. KeyValueContainerData#getProtoBufMessage() should set block count (apache#3371)
  ...

Conflicts:
    hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/upgrade/SCMUpgradeFinalizer.java
duongkame pushed a commit to duongkame/ozone that referenced this pull request Aug 16, 2022
HDDS-6794. EC: Analyze and add putBlock even on non writing node in the case of partial single stripe. (apache#3514)

(cherry picked from commit ab923a3)
Change-Id: I7cd3dfb4b91ee9c7e86f34eb18273d794d4731d2