
[GOBBLIN-1823] Improving Container Calculation and Allocation Methodology#3692

Merged
ZihanLi58 merged 7 commits into apache:master from ZihanLi58:GOBBLIN-1823_new
May 22, 2023

Conversation

@ZihanLi58
Contributor

Dear Gobblin maintainers,

Please accept this PR. I understand that it will not be reviewed until I have checked off all the steps below!

JIRA

Description

  • Here are some details about my PR, including screenshots (if applicable):
    Problem: YARN sometimes allocates "ghost containers" for which onContainersAllocated() is never called; when such a container is eventually released, onContainersCompleted() still fires, so the container counts can fall out of sync.
    In onContainersAllocated(), we add the container to the containerMap using the container ID as the key and increment the count for the container's helix tag.
    In onContainersCompleted(), we remove the container from the containerMap and decrement the count. In some cases, however, the containerMap does not contain the ID; we ignore the missing entry but still decrement the count for the allocated tag, because onContainersCompleted() can be called before onContainersAllocated() for the same container.

Solution

  1. Add a removedContainerID map to track containers that are released before onContainersAllocated() is called
  2. Go through the container map to check whether each assigned helix instance is still alive, and release the container when the instance has been dead for more than 10 minutes
  3. Add TIME_OUT and COMPLETED as non-retryable partition states and log them to improve debuggability
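The bookkeeping in steps 1 and 2 can be sketched roughly as below. This is a simplified illustration, not the actual YarnService code: the class and member names (ContainerBookkeeping, removedContainerIds, reconcile) are stand-ins for the real Gobblin/YARN callbacks.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the out-of-order completion handling described above.
public class ContainerBookkeeping {
  final Map<String, String> containerMap = new ConcurrentHashMap<>();         // containerId -> helix tag
  final Map<String, Boolean> removedContainerIds = new ConcurrentHashMap<>(); // completed before allocated
  final Map<String, AtomicInteger> allocatedContainerCountMap = new ConcurrentHashMap<>();

  // Called when YARN reports a container allocation.
  public void onContainerAllocated(String containerId, String helixTag) {
    containerMap.put(containerId, helixTag);
    allocatedContainerCountMap.computeIfAbsent(helixTag, t -> new AtomicInteger(0)).incrementAndGet();
  }

  // Called when YARN reports a container completion; may arrive before allocation.
  public void onContainerCompleted(String containerId) {
    String tag = containerMap.remove(containerId);
    if (tag == null) {
      // Completion arrived first: remember the id so a late allocation can be undone later.
      removedContainerIds.put(containerId, Boolean.TRUE);
      return;
    }
    allocatedContainerCountMap.get(tag).decrementAndGet();
  }

  // Periodically reconcile: drop containers whose completion preceded their allocation.
  public void reconcile() {
    for (String removedId : removedContainerIds.keySet()) {
      String tag = containerMap.remove(removedId);
      if (tag != null) {
        allocatedContainerCountMap.get(tag).decrementAndGet();
        removedContainerIds.remove(removedId);
      }
    }
  }
}
```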

Tests

  • My PR adds the following unit tests OR does not need testing for this extremely good reason:
    Unit tests cover the existing functions; it is hard to unit-test the bad-YARN-container and Helix-disconnection scenarios.

Commits

  • My commits all reference JIRA issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "How to write a good git commit message":
    1. Subject is separated from body by a blank line
    2. Subject is limited to 50 characters
    3. Subject does not end with a period
    4. Subject uses the imperative mood ("add", not "adding")
    5. Body wraps at 72 characters
    6. Body explains "what" and "why", not "how"

@codecov-commenter

codecov-commenter commented May 5, 2023

Codecov Report

Merging #3692 (915f450) into master (7bbf676) will decrease coverage by 0.01%.
The diff coverage is 16.27%.

@@             Coverage Diff              @@
##             master    #3692      +/-   ##
============================================
- Coverage     46.98%   46.98%   -0.01%     
- Complexity    10792    10797       +5     
============================================
  Files          2138     2138              
  Lines         84065    84129      +64     
  Branches       9342     9356      +14     
============================================
+ Hits          39501    39530      +29     
- Misses        40973    41000      +27     
- Partials       3591     3599       +8     
Impacted Files Coverage Δ
...main/java/org/apache/gobblin/yarn/YarnService.java 27.59% <12.19%> (-0.92%) ⬇️
...rg/apache/gobblin/yarn/YarnAutoScalingManager.java 56.61% <100.00%> (ø)

... and 10 files with indirect coverage changes


Contributor

@homatthew homatthew left a comment


Comments

}
}

//We go through all the containers we have now and check whether the assigned participant is still alive, if not, we should put them in idle container Map
Contributor

"Check whether assigned participant is still alive". There is nothing here to suggest that these instances are actually "assigned" anything.

I think a more accurate comment is instead something like:

iterate through all containers allocated and check whether the corresponding helix instance is still LIVE within the helix cluster. A container that has a bad connection to zookeeper will be dropped from the Helix cluster if the disconnection is greater than the specified timeout. In these cases, we want to release the container to get a new container because these containers won't be assigned tasks by Helix
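The liveness sweep the reviewer describes could look roughly like the sketch below. The live-instance set stands in for the real Helix cluster query, and all names and the 10-minute threshold are illustrative, not the actual YarnService implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: release any allocated container whose Helix instance has been missing
// from the cluster's live-instance list for longer than a timeout.
public class LivenessSweep {
  static final long TIMEOUT_MS = 10 * 60 * 1000L;
  final Map<String, Long> firstSeenDeadMs = new HashMap<>(); // containerId -> when first seen dead

  // Returns container ids that should be released this round.
  public List<String> findContainersToRelease(Map<String, String> containerToInstance,
      Set<String> liveInstances, long nowMs) {
    List<String> toRelease = new ArrayList<>();
    for (Map.Entry<String, String> e : containerToInstance.entrySet()) {
      if (liveInstances.contains(e.getValue())) {
        firstSeenDeadMs.remove(e.getKey()); // instance recovered; reset the clock
        continue;
      }
      long since = firstSeenDeadMs.computeIfAbsent(e.getKey(), k -> nowMs);
      if (nowMs - since >= TIMEOUT_MS) {
        toRelease.add(e.getKey()); // dead past the timeout: release to get a fresh container
      }
    }
    return toRelease;
  }
}
```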

// or soon after it is assigned work.
if (numTargetContainers < totalAllocatedContainers) {
List<Container> containersToRelease = new ArrayList<>();
if (containersToRelease.isEmpty() && numTargetContainers < totalAllocatedContainers) {
Contributor

Why do we entirely skip this block if there are already containers to release? Hopefully the previous block doesn't happen that often, but this if statement is still a bit strange to read.

To me, this should instead be:

if (numTargetContainers < totalAllocatedContainers - containersToRelease.size())

Contributor Author

Because at this point we still hold references to those bad containers, we might end up releasing those containers again in this block. We could consider changing the algorithm to collect the container IDs to release into a hash set first, and compute the final containersToRelease at the end of the call.
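The hash-set alternative the author sketches might look like this. It is an illustrative sketch only: planRelease, the parameter names, and the scale-down rule are assumptions, not code from the PR.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch: collect ids of dead containers first, then only mark additional healthy
// containers for release while the remaining count still exceeds the target.
// Using a set means a container can never be released twice.
public class ReleasePlanner {
  public static Set<String> planRelease(List<String> deadContainers,
      List<String> allContainers, int numTargetContainers) {
    Set<String> idsToRelease = new LinkedHashSet<>(deadContainers); // bad containers first
    for (String id : allContainers) {
      // Stop once the surviving container count would reach the target.
      if (allContainers.size() - idsToRelease.size() <= numTargetContainers) {
        break;
      }
      idsToRelease.add(id); // no-op if the id was already marked as dead
    }
    return idsToRelease;
  }
}
```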

}

//Correct the containerMap first as there is cases that handleContainerCompletion() is called before onContainersAllocated()
for (ContainerId removedId :this.removedContainerID.keySet()) {
Contributor

nit: whitespace after :

if (containerInfo != null) {
String helixTag = containerInfo.getHelixTag();
allocatedContainerCountMap.putIfAbsent(helixTag, new AtomicInteger(0));
this.allocatedContainerCountMap.get(helixTag).decrementAndGet();
Contributor

put if absent 0 and then decrementing means the resulting value would be -1. That does not seem correct to me

Contributor Author

I don't think the putIfAbsent will actually insert anything, though, because in this case onContainersAllocated() should already have been called, so we definitely have an entry in the map. The worst case is calling onContainersAllocated() and this method concurrently, in which case we might decrement the value and then immediately increment it.

Contributor

YARN sometimes allocates "ghost containers" for which onContainersAllocated() is never called; when such a container is eventually released, onContainersCompleted() still fires, so the container counts can fall out of sync.
In onContainersAllocated(), we add the container to the containerMap using the container ID as the key and increment the count for the container's helix tag.

This from the description makes it sound like it is not guaranteed to be called. Either way, I think if we do need to put this line, we should put if absent 1 and then decrement
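The reviewer's point can be demonstrated directly. This minimal snippet (names illustrative) shows that defaulting the counter to 0 and then decrementing yields -1 for a tag that was never incremented:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates the undercount: putIfAbsent(tag, 0) followed by decrementAndGet()
// produces -1 when no allocation ever incremented the tag's counter.
public class CountDemo {
  public static int decrementWithDefaultZero(Map<String, AtomicInteger> counts, String tag) {
    counts.putIfAbsent(tag, new AtomicInteger(0));
    return counts.get(tag).decrementAndGet();
  }
}
```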

Contributor

@homatthew homatthew left a comment

Left a comment. The rest of the change looks good

@ZihanLi58 ZihanLi58 merged commit b439800 into apache:master May 22, 2023
phet added a commit to phet/gobblin that referenced this pull request Aug 15, 2023
* upstream/master:
  Fix bug with total count watermark whitelist (apache#3724)
  [GOBBLIN-1858] Fix logs relating to multi-active lease arbiter (apache#3720)
  [GOBBLIN-1838] Introduce total count based completion watermark (apache#3701)
  Correct num of failures (apache#3722)
  [GOBBLIN- 1856] Add flow trigger handler leasing metrics (apache#3717)
  [GOBBLIN-1857] Add override flag to force generate a job execution id based on gobbl… (apache#3719)
  [GOBBLIN-1855] Metadata writer tests do not work in isolation after upgrading to Iceberg 1.2.0 (apache#3718)
  Remove unused ORC writer code (apache#3710)
  [GOBBLIN-1853] Reduce # of Hive calls during schema related updates (apache#3716)
  [GOBBLIN-1851] Unit tests for MysqlMultiActiveLeaseArbiter with Single Participant (apache#3715)
  [GOBBLIN-1848] Add tags to dagmanager metrics for extensibility (apache#3712)
  [GOBBLIN-1849] Add Flow Group & Name to Job Config for Job Scheduler (apache#3713)
  [GOBBLIN-1841] Move disabling of current live instances to the GobblinClusterManager startup (apache#3708)
  [GOBBLIN-1840] Helix Job scheduler should not try to replace running workflow if within configured time (apache#3704)
  [GOBBLIN-1847] Exceptions in the JobLauncher should try to delete the existing workflow if it is launched (apache#3711)
  [GOBBLIN-1842] Add timers to GobblinMCEWriter (apache#3703)
  [GOBBLIN-1844] Ignore workflows marked for deletion when calculating container count (apache#3709)
  [GOBBLIN-1846] Validate Multi-active Scheduler with Logs (apache#3707)
  [GOBBLIN-1845] Changes parallelstream to stream in DatasetsFinderFilteringDecorator  to avoid classloader issues in spark (apache#3706)
  [GOBBLIN-1843] Utility for detecting non optional unions should convert dataset urn to hive compatible format (apache#3705)
  [GOBBLIN-1837] Implement multi-active, non blocking for leader host (apache#3700)
  [GOBBLIN-1835]Upgrade Iceberg Version from 0.11.1 to 1.2.0 (apache#3697)
  Update CHANGELOG to reflect changes in 0.17.0
  Reserving 0.18.0 version for next release
  [GOBBLIN-1836] Ensuring Task Reliability: Handling Job Cancellation and Graceful Exits for Error-Free Completion (apache#3699)
  [GOBBLIN-1805] Check watermark for the most recent hour for quiet topics (apache#3698)
  [GOBBLIN-1825]Hive retention job should fail if deleting underlying files fail (apache#3687)
  [GOBBLIN-1823] Improving Container Calculation and Allocation Methodology (apache#3692)
  [GOBBLIN-1830] Improving Container Transition Tracking in Streaming Data Ingestion (apache#3693)
  [GOBBLIN-1833]Emit Completeness watermark information in snapshotCommitEvent (apache#3696)
