[GOBBLIN-1823] Improving Container Calculation and Allocation Methodology#3692
ZihanLi58 merged 7 commits into apache:master
Conversation
Codecov Report
@@ Coverage Diff @@
## master #3692 +/- ##
============================================
- Coverage 46.98% 46.98% -0.01%
- Complexity 10792 10797 +5
============================================
Files 2138 2138
Lines 84065 84129 +64
Branches 9342 9356 +14
============================================
+ Hits 39501 39530 +29
- Misses 40973 41000 +27
- Partials 3591 3599 +8
... and 10 files with indirect coverage changes
//We go through all the containers we have now and check whether the assigned participant is still alive, if not, we should put them in idle container Map
"Check whether assigned participant is still alive". There is nothing here to suggest that these instances are actually "assigned" anything.
I think a more accurate comment is instead something like:
iterate through all containers allocated and check whether the corresponding helix instance is still LIVE within the helix cluster. A container that has a bad connection to zookeeper will be dropped from the Helix cluster if the disconnection is greater than the specified timeout. In these cases, we want to release the container to get a new container because these containers won't be assigned tasks by Helix
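For illustration, here is a minimal sketch of the liveness check this comment describes. It is an assumption-laden sketch, not the PR's code: plain Strings stand in for the real Yarn ContainerId and Helix instance types, and the set of live instances is assumed to be fetched beforehand.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch only: container IDs and Helix instance names are plain
// Strings here rather than the real Yarn/Helix types.
public class IdleContainerCheck {

  /**
   * Iterate through all allocated containers and flag the ones whose Helix
   * instance is no longer LIVE in the cluster; those are candidates for release.
   */
  static List<String> findIdleContainers(Map<String, String> containerToHelixInstance,
      Set<String> liveHelixInstances) {
    List<String> idle = new ArrayList<>();
    for (Map.Entry<String, String> entry : containerToHelixInstance.entrySet()) {
      // An instance dropped from Helix (e.g. after a ZooKeeper disconnection longer
      // than the timeout) will never be assigned tasks again, so release its container.
      if (!liveHelixInstances.contains(entry.getValue())) {
        idle.add(entry.getKey());
      }
    }
    return idle;
  }

  public static void main(String[] args) {
    Map<String, String> containers = new HashMap<>();
    containers.put("container_01", "instance_A");
    containers.put("container_02", "instance_B");
    Set<String> live = new HashSet<>(List.of("instance_A"));
    System.out.println(findIdleContainers(containers, live)); // [container_02]
  }
}
```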
  // or soon after it is assigned work.
- if (numTargetContainers < totalAllocatedContainers) {
  List<Container> containersToRelease = new ArrayList<>();
+ if (containersToRelease.isEmpty() && numTargetContainers < totalAllocatedContainers) {
Why do we entirely skip this block if there are already containers to release? Hopefully the previous block doesn't happen that often, but this if statement is still a bit strange to read.
To me, this should instead be:
if (numTargetContainers < totalAllocatedContainers - containersToRelease.size())
Because at this point we still hold references to those bad containers, we might end up releasing those containers again in this block. We can consider changing the algorithm to use a hash set of container IDs to release: collect the containerIdToRelease entries first, and at the end of the call compute the containers to release.
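A minimal sketch of that set-based bookkeeping, under stated assumptions: the names (idleContainers, computeContainersToRelease) and String container IDs are illustrative, not the PR's actual code.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the hash-set approach floated above.
public class ReleaseAccounting {

  static List<String> computeContainersToRelease(List<String> allocatedContainers,
      Set<String> idleContainers, int numTargetContainers) {
    // Collect every container ID slated for release in one set so the same
    // container can never be counted, or released, twice.
    Set<String> containerIdsToRelease = new HashSet<>(idleContainers);

    // Scale down only against the containers that will actually remain,
    // i.e. totalAllocatedContainers - containersToRelease.size().
    int remaining = allocatedContainers.size() - containerIdsToRelease.size();
    for (String containerId : allocatedContainers) {
      if (remaining <= numTargetContainers) {
        break;
      }
      if (containerIdsToRelease.add(containerId)) {
        remaining--; // only newly added IDs shrink the remaining pool
      }
    }
    return new ArrayList<>(containerIdsToRelease);
  }

  public static void main(String[] args) {
    List<String> allocated = List.of("c1", "c2", "c3", "c4");
    Set<String> idle = new HashSet<>(List.of("c2"));
    // Target of 2: c2 is already idle, so only one more container gets added.
    System.out.println(computeContainersToRelease(allocated, idle, 2));
  }
}
```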
//Correct the containerMap first as there are cases where handleContainerCompletion() is called before onContainersAllocated()
for (ContainerId removedId : this.removedContainerID.keySet()) {
if (containerInfo != null) {
  String helixTag = containerInfo.getHelixTag();
  allocatedContainerCountMap.putIfAbsent(helixTag, new AtomicInteger(0));
  this.allocatedContainerCountMap.get(helixTag).decrementAndGet();
putIfAbsent with 0 and then decrementing means the resulting value would be -1. That does not seem correct to me.
I don't think we will actually put it, though, because in this case onContainerAllocated should have already been called and we definitely have an entry in the map. The worst case is if we call onContainerAllocated and this method concurrently; then we might end up decreasing the value and then increasing it immediately.
When Yarn allocates "ghost containers" without calling the onContainerAllocated() method, and onContainersCompleted() is later called when the container is eventually released, container count mismatches can occur.
In the onContainerAllocated() method, we add the container to the containerMap using the container ID as the key, and increase the count for the specific tag.
This, from the description, makes it sound like onContainerAllocated() is not guaranteed to be called. Either way, if we do need this line, we should putIfAbsent with 1 and then decrement.
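A small runnable demonstration of the difference being debated, using plain ConcurrentHashMap and AtomicInteger; the tag names are hypothetical.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// With putIfAbsent(0), a completion that arrives before the allocation leaves the
// tag count at -1; the reviewer's suggested putIfAbsent(1) leaves it at 0 instead.
public class TagCountDemo {
  public static void main(String[] args) {
    ConcurrentMap<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    // Variant in the diff: seed with 0, then decrement -> -1 for an unseen tag.
    counts.putIfAbsent("tagA", new AtomicInteger(0));
    System.out.println(counts.get("tagA").decrementAndGet()); // prints -1

    // Reviewer's suggestion: seed with 1, then decrement -> 0 for an unseen tag.
    counts.putIfAbsent("tagB", new AtomicInteger(1));
    System.out.println(counts.get("tagB").decrementAndGet()); // prints 0
  }
}
```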
homatthew left a comment:
The rest of the change looks good.
* upstream/master:
  Fix bug with total count watermark whitelist (apache#3724)
  [GOBBLIN-1858] Fix logs relating to multi-active lease arbiter (apache#3720)
  [GOBBLIN-1838] Introduce total count based completion watermark (apache#3701)
  Correct num of failures (apache#3722)
  [GOBBLIN-1856] Add flow trigger handler leasing metrics (apache#3717)
  [GOBBLIN-1857] Add override flag to force generate a job execution id based on gobbl… (apache#3719)
  [GOBBLIN-1855] Metadata writer tests do not work in isolation after upgrading to Iceberg 1.2.0 (apache#3718)
  Remove unused ORC writer code (apache#3710)
  [GOBBLIN-1853] Reduce # of Hive calls during schema related updates (apache#3716)
  [GOBBLIN-1851] Unit tests for MysqlMultiActiveLeaseArbiter with Single Participant (apache#3715)
  [GOBBLIN-1848] Add tags to dagmanager metrics for extensibility (apache#3712)
  [GOBBLIN-1849] Add Flow Group & Name to Job Config for Job Scheduler (apache#3713)
  [GOBBLIN-1841] Move disabling of current live instances to the GobblinClusterManager startup (apache#3708)
  [GOBBLIN-1840] Helix Job scheduler should not try to replace running workflow if within configured time (apache#3704)
  [GOBBLIN-1847] Exceptions in the JobLauncher should try to delete the existing workflow if it is launched (apache#3711)
  [GOBBLIN-1842] Add timers to GobblinMCEWriter (apache#3703)
  [GOBBLIN-1844] Ignore workflows marked for deletion when calculating container count (apache#3709)
  [GOBBLIN-1846] Validate Multi-active Scheduler with Logs (apache#3707)
  [GOBBLIN-1845] Changes parallelstream to stream in DatasetsFinderFilteringDecorator to avoid classloader issues in spark (apache#3706)
  [GOBBLIN-1843] Utility for detecting non optional unions should convert dataset urn to hive compatible format (apache#3705)
  [GOBBLIN-1837] Implement multi-active, non blocking for leader host (apache#3700)
  [GOBBLIN-1835] Upgrade Iceberg Version from 0.11.1 to 1.2.0 (apache#3697)
  Update CHANGELOG to reflect changes in 0.17.0
  Reserving 0.18.0 version for next release
  [GOBBLIN-1836] Ensuring Task Reliability: Handling Job Cancellation and Graceful Exits for Error-Free Completion (apache#3699)
  [GOBBLIN-1805] Check watermark for the most recent hour for quiet topics (apache#3698)
  [GOBBLIN-1825] Hive retention job should fail if deleting underlying files fail (apache#3687)
  [GOBBLIN-1823] Improving Container Calculation and Allocation Methodology (apache#3692)
  [GOBBLIN-1830] Improving Container Transition Tracking in Streaming Data Ingestion (apache#3693)
  [GOBBLIN-1833] Emit Completeness watermark information in snapshotCommitEvent (apache#3696)
Dear Gobblin maintainers,
Please accept this PR. I understand that it will not be reviewed until I have checked off all the steps below!
JIRA
Description
Problem: When Yarn allocates "ghost containers" without calling the onContainerAllocated() method, and onContainersCompleted() is later called when the container is eventually released, container count mismatches can occur.
In the onContainerAllocated() method, we add the container to the containerMap using the container ID as the key, and increase the count for the specific tag.
In the onContainersCompleted() method, we remove the container from the containerMap and decrease the count. However, in some cases the containerMap does not contain the ID; we ignore this while still decreasing the count for the allocated tag, because sometimes onContainersCompleted() is called before onContainerAllocated() for the same container.
Solution
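Based on the diff above: track completions that arrive before their allocation callback in a removedContainerID map, and correct the containerMap (and per-tag counts) before recalculating the target container count; separately, release containers whose Helix instance is no longer live. A minimal sketch of the correction pass, assuming hypothetical String-keyed maps in place of the real Yarn/Helix types:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch only: plain Strings stand in for the real Yarn/Helix types,
// and removedContainerID records completions seen before the allocation callback.
public class ContainerMapCorrection {

  // containerId -> helixTag for every container we believe is allocated
  private final Map<String, String> containerMap = new ConcurrentHashMap<>();
  // completions seen before their matching onContainersAllocated() call
  private final Map<String, Boolean> removedContainerID = new ConcurrentHashMap<>();
  private final Map<String, AtomicInteger> allocatedContainerCountMap = new ConcurrentHashMap<>();

  /** Reconcile the bookkeeping before recalculating the target container count. */
  void correctContainerMap() {
    for (String removedId : removedContainerID.keySet()) {
      String helixTag = containerMap.remove(removedId);
      if (helixTag != null) {
        // The allocation callback did arrive eventually: undo its count and
        // forget the ID so it is only corrected once.
        allocatedContainerCountMap
            .computeIfAbsent(helixTag, tag -> new AtomicInteger(0))
            .decrementAndGet();
        removedContainerID.remove(removedId);
      }
    }
  }
}
```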
Tests
Unit test for the existing function; it's hard to add a unit test for the bad Yarn container and Helix disconnection situations.
Commits