[CORE][YARN] SPARK-6011: Used Current Working directory for sparklocaldirs instead of Application Directory so that spark-local-files gets deleted when executor exits abruptly. #4770
Closed
No, I'm pretty certain you can't make this change. You're ignoring the setting for YARN's directories and just using the user home directory? Why?
Hi Sean,
What I have understood from http://hortonworks.com/blog/resource-localization-in-yarn-deep-dive/ is that the container directory from which the executor gets launched, created by the node manager, is inside yarn-local-dirs. So it automatically fulfills that criterion.
Please correct me if I am wrong.
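(For illustration, a hedged sketch of what the patch's approach amounts to, as described in the title: treat the container's current working directory as the Spark local dir. This is an assumption about the patch based on its description, not its actual code.)

```scala
// Illustrative only (an assumption about the patch, not its code): the
// executor's working directory is the YARN container dir, which the
// NodeManager removes when the container exits, even abruptly.
object CwdLocalDirSketch {
  val sparkLocalDir: String = System.getProperty("user.dir")
}
```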
Perhaps, but that's not the directory we're looking for in this code. We want the local dirs. You can see comments about where this is coming from in the deleted comments. I don't see how this fixes the problem you reported, though. You might have a look at the conversation happening now at #4759 (comment); I think shuffle files are kept on purpose in some instances, but I am not clear whether this is one of them.
@vanzin I know I am invoking you a lot today, but your thoughts would be good here too.
No, please do not make this change; it's not correct. We do want to use those env variables, which are set by YARN and are configurable (so, for example, users can tell apps to use a fast local disk to store shuffle data instead of whatever disk hosts home directories).
And you do not want the executor's files to disappear when it dies, because you may be able to reuse shuffle data written by that executor to save the work of re-computing that data.
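(For illustration, a minimal sketch of the env-var lookup described above, modeled on what Spark 1.x does in Utils; exact variable and method names are assumptions and may vary by version.)

```scala
// Hedged sketch: YARN exports each container's local dirs (derived from the
// configurable yarn.nodemanager.local-dirs property) via an env variable.
object YarnLocalDirsSketch {
  def yarnLocalDirs(): Seq[String] = {
    val raw = Option(System.getenv("LOCAL_DIRS"))        // Hadoop 2.x
      .orElse(Option(System.getenv("YARN_LOCAL_DIRS")))  // earlier Hadoop
      .getOrElse("")
    // Comma-separated list; one entry per configured local disk.
    raw.split(",").map(_.trim).filter(_.nonEmpty).toSeq
  }
}
```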
So, e.g., in Spark run via yarn-client, I can see directory structures like:
{yarn.nodemanager.local-dirs}/nm-local-dir/usercache/admin/appcache/application_1424859293845_0003/container_1424859293845_0003_01_000001/ -- this is the current working directory, since the executor was launched from this directory,
and Spark is using {yarn.nodemanager.local-dirs}/nm-local-dir/usercache/admin/appcache/application_1424859293845_0003/ -- this directory -- to write shuffle files, which will get deleted when the application shuts down.
Also, regarding #4759 (comment): it will not work if the executor gets killed without letting the shutdown hook trigger.
-pankaj
The code is in DiskBlockManager.scala. It's the same code whether you're using the external shuffle service or not. As I said, the external service just tracks the location of shuffle files (e.g. "this block id is in file /blah"). That code is in network/shuffle.
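(For illustration, a hedged sketch of the shared file layout being described: a block's file name is hashed into one of the local dirs and a numbered subdirectory. This mirrors DiskBlockManager.getFile as I understand it; the code below is illustrative, not verbatim Spark.)

```scala
import java.io.File

// Illustrative, not verbatim Spark: both the executor's DiskBlockManager and
// the external shuffle service resolve a block to a file the same way, by
// hashing the file name into one of the local dirs and a numbered subdir.
object BlockFileLayoutSketch {
  def getFile(localDirs: Array[File], subDirsPerLocalDir: Int, filename: String): File = {
    val hash = filename.hashCode & Integer.MAX_VALUE              // non-negative
    val dirId = hash % localDirs.length
    val subDirId = (hash / localDirs.length) % subDirsPerLocalDir
    new File(new File(localDirs(dirId), "%02x".format(subDirId)), filename)
  }
}
```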
So what I understood is:
If the above is correct, how does it serve blocks if all the executors on a particular node die?
Am I wrong somewhere in my understanding?
--pankaj
The way I understand it, the shuffle service can serve the files. But the executor still writes them directly - the write does not go through the shuffle service, and those files are written to the directories set up by createLocalDirs in DiskBlockManager.scala. There's even a comment alluding to that in the doStop method:
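(For illustration, a hedged sketch of the doStop logic being referred to; names are illustrative, not verbatim Spark. The point the in-code comment makes matches this thread: local dirs are deleted only when no external shuffle service may still need the files they contain.)

```scala
import java.io.File

// Hedged reconstruction of the cleanup condition in DiskBlockManager.doStop,
// written as a standalone sketch under assumed parameter names.
object DiskCleanupSketch {
  def doStop(localDirs: Seq[File],
             externalShuffleServiceEnabled: Boolean,
             isDriver: Boolean): Unit = {
    // Only delete local dirs if an external service is not serving our files.
    if (!externalShuffleServiceEnabled || isDriver) {
      localDirs.filter(d => d.isDirectory && d.exists).foreach(deleteRecursively)
    }
  }

  private def deleteRecursively(f: File): Unit = {
    if (f.isDirectory) {
      Option(f.listFiles).getOrElse(Array.empty[File]).foreach(deleteRecursively)
    }
    f.delete()
  }
}
```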
Can you suggest what the correct way would be to delete those files when the executor dies, in the case of no ExternalShuffleService?
If the problem is with shuffle files accumulating, as I suggested before, my understanding is that ContextCleaner would take care of this. Maybe your application is not releasing RDDs for garbage collection, in which case the cleaner wouldn't be able to do much. Or maybe the cleaner has a bug, or wasn't supposed to do that in the first place. But the point here is that your patch is not correct. It breaks two existing features.
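(For illustration, a self-contained sketch of the weak-reference idea behind ContextCleaner as described above: once the application drops its last strong reference to a shuffle and a GC runs, the reference surfaces on a queue and the cleaner can remove the corresponding files. All names below are illustrative, not Spark's.)

```scala
import java.lang.ref.{ReferenceQueue, WeakReference}

object CleanerSketch {
  // Weakly track a shuffle handle; enqueued once the referent is collected.
  final class ShuffleRef(referent: AnyRef, val shuffleId: Int, q: ReferenceQueue[AnyRef])
    extends WeakReference[AnyRef](referent, q)

  def main(args: Array[String]): Unit = {
    val queue = new ReferenceQueue[AnyRef]
    var shuffle: AnyRef = new Object                 // stands in for a shuffle handle
    val ref = new ShuffleRef(shuffle, shuffleId = 0, queue)
    println(s"tracking shuffle ${ref.shuffleId}")

    shuffle = null                                   // app releases the RDD/shuffle
    System.gc()                                      // Spark triggers GC periodically
    Thread.sleep(100)

    queue.poll() match {
      case r: ShuffleRef => println(s"would delete files for shuffle ${r.shuffleId}")
      case _             => println("not collected yet; the cleaner would retry")
    }
  }
}
```

If an application holds on to its RDDs (as the comment above suggests may be happening here), the weak references never clear, and the cleaner has nothing to act on.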