HDDS-8139. Datanodes should not drop block delete transactions based on transaction ID #4384
Conversation
szetszwo
left a comment
+1 the change looks good.
...he/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
With this implementation we assume that duplicate work from retries will be minimal, so no explicit retry check is needed. If SCM is retrying the command, it means it did not get an ack back from the datanode. Since these command acks are attached to the DN heartbeats, the node should go stale or dead before retries pile up excessively. @sumitagrawl it currently looks like we are still sending commands to nodes that may be stale/dead/decommissioning; nothing significant is done with the result of this check. Could you look into this?
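To illustrate why retried delete transactions are safe to reprocess without an explicit retry check, here is a minimal sketch (hypothetical class and method names, not the actual Ozone handler): deletion is idempotent, so a retried transaction only redoes a bounded amount of work.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: a datanode-side handler that tolerates SCM retries.
// Because re-applying a delete is idempotent, duplicates cost only a small
// amount of repeated work instead of corrupting state.
public class RetryTolerantDeleteHandler {
    private final Set<Long> deletedBlocks = new HashSet<>();

    /** Applies a delete transaction; returns how many blocks were newly deleted. */
    public int apply(long txId, long[] blockIds) {
        int newlyDeleted = 0;
        for (long blockId : blockIds) {
            // add() returns false for blocks already removed by an earlier
            // attempt, so a retried transaction is effectively a no-op.
            if (deletedBlocks.add(blockId)) {
                newlyDeleted++;
            }
        }
        return newlyDeleted;
    }

    public static void main(String[] args) {
        RetryTolerantDeleteHandler handler = new RetryTolerantDeleteHandler();
        long[] blocks = {101L, 102L, 103L};
        System.out.println(handler.apply(7L, blocks)); // first delivery deletes 3
        System.out.println(handler.apply(7L, blocks)); // SCM retry deletes 0
    }
}
```

The retry in the sketch is harmless precisely because the ack loss only delays, never duplicates, the effective deletion.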
        delTX.getTxID(), containerData.getDeleteTransactionId()));
  }
  b = false;
  LOG.info(String.format("Delete blocks for containerId: %d"
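For context, a hedged sketch (hypothetical and heavily simplified, not the real DeleteBlocksCommandHandler code) of the behavior this PR changes: instead of dropping a transaction whose ID is not newer than the container's recorded delete transaction ID, the handler now only logs the out-of-order arrival and processes it anyway.

```java
// Hypothetical, simplified sketch of the ordering check this PR removes.
public class TxnOrderCheck {
    /** Old behavior: transactions arriving out of order were dropped. */
    static boolean shouldProcessOld(long txId, long containerDeleteTxId) {
        return txId > containerDeleteTxId;
    }

    /** New behavior: log out-of-order arrivals but process them anyway. */
    static boolean shouldProcessNew(long txId, long containerDeleteTxId) {
        if (txId <= containerDeleteTxId) {
            System.out.printf("Delete blocks txId %d for container with "
                + "recorded txId %d received out of order; processing anyway%n",
                txId, containerDeleteTxId);
        }
        // Block deletion is idempotent, so retries and reordering are safe.
        return true;
    }

    public static void main(String[] args) {
        System.out.println(shouldProcessOld(5, 9)); // dropped before the change
        System.out.println(shouldProcessNew(5, 9)); // processed after the change
    }
}
```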
Add a metric for this. It is a lot quicker to scan across a cluster.
@kerneltime This metric would not have much value; it only counts retries, which will not happen frequently and have no impact on the system.
You can add it as a part of a separate Jira, but a count of how many times this occurred is interesting.
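If such a count were added later under a separate Jira, a minimal sketch with a plain AtomicLong could look like this (hypothetical class; a real implementation would register the counter with Ozone's metrics framework so it can be scanned across the cluster instead of grepping logs):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a counter for out-of-order delete transactions.
public class DeleteTxnMetrics {
    private final AtomicLong outOfOrderDeleteTxns = new AtomicLong();

    /** Called whenever a delete transaction arrives out of order. */
    public void recordOutOfOrder() {
        outOfOrderDeleteTxns.incrementAndGet();
    }

    public long getOutOfOrderCount() {
        return outOfOrderDeleteTxns.get();
    }

    public static void main(String[] args) {
        DeleteTxnMetrics metrics = new DeleteTxnMetrics();
        metrics.recordOutOfOrder();
        metrics.recordOutOfOrder();
        System.out.println(metrics.getOutOfOrderCount()); // 2
    }
}
```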
Created new jira HDDS-6548 for this, to avoid caching repeated transactions when a DN is not responsive, and to clean up the cache when DNs are stale/dead/decommissioning.
errose28
left a comment
Thanks for the updates @sumitagrawl. I'm following HDDS-8155 that you filed as well. LGTM and I agree a metric might be overkill since this is a niche case that we have handling for and not something we will be tracking over time, but let's wait for consensus from @kerneltime.
* master: (43 commits)
  - HDDS-8148. Improve log for Pipeline creation failure (apache#4385)
  - HDDS-7853. Add support for RemoveSCM in SCMRatisServer. (apache#4358)
  - HDDS-8042. Display certificate issuer in cert list command. (apache#4429)
  - HDDS-8189. [Snapshot] renamedKeyTable should only track keys in buckets that has at least one active snapshot. (apache#4436)
  - HDDS-8154. Perf: Reuse Mac instances in S3 token validation (apache#4433)
  - HDDS-8245. Info log for keyDeletingService when nonzero number of keys are deleted. (apache#4451)
  - HDDS-8233. ReplicationManager: Throttle delete container commands from over-replication handlers (apache#4447)
  - HDDS-8220. [Ozone-Streaming] Trigger volume check on IOException in StreamDataChannelBase (apache#4428)
  - HDDS-8173. Fix to remove enrties from RocksDB after container gets deleted. (apache#4445)
  - HDDS-7975. Rebalance acceptance tests (apache#4437)
  - HDDS-8152. Reduce S3 acceptance test setup time (apache#4393)
  - HDDS-8172. ECUnderReplicationHandler should consider commands already sent when processing the container (apache#4435)
  - HDDS-7883. [Snapshot] Accommodate FSO, key renames and implement OMSnapshotPurgeRequest for SnapshotDeletingService (apache#4407)
  - HDDS-8168. Make deadlines inside MoveManager for move commands configurable (apache#4415)
  - HDDS-7918. EC: ECBlockReconstructedStripeInputStream should check for spare replicas before failing an index (apache#4441)
  - HDDS-8222. EndpointBase#getBucket should handle BUCKET_NOT_FOUND (apache#4431)
  - HDDS-8068. Fix Exception: JMXJsonServlet, getting attribute RatisRoles of Hadoop:service=OzoneManager. (apache#4352)
  - HDDS-8139. Datanodes should not drop block delete transactions based on transaction ID (apache#4384)
  - HDDS-8216. EC: OzoneClientConfig is overwritten in ECKeyOutputStream (apache#4425)
  - HDDS-8054. Fix NPE in metrics for failed volume (apache#4340)
  - ...
What changes were proposed in this pull request?
Remove the check that requires "delete transactionId"s to arrive in order, which previously caused any transaction arriving out of order to be skipped.
What is the link to the Apache JIRA?
https://issues.apache.org/jira/browse/HDDS-8139
How was this patch tested?
This was tested for impact using an E2E setup, observing the frequency of log messages for out-of-order arrivals or retries of the same deleteTransactionId.
Result: the message was observed only once, when there were many blocks, and it did not recur. So no impact was observed.