TEZ-4547: Add Tez AM JobID to the JobConf #339
Changes from all commits: 9040a2f, 452241a, f9e3b45, e13d6f4, 5cef95a, 83cd25f, c0c29b4, c56e926
OutputCommitterContext.java:

@@ -78,4 +78,6 @@ public interface OutputCommitterContext
    */
   public int getVertexIndex();
 
+  public int getDagIdentifier();
+
 }
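To make the new accessor concrete, here is a minimal, hedged sketch of how an output committer implementation might combine the DAG identifier with the existing vertex index to scope its temporary output. The wrapper class and path scheme below are assumptions for illustration, not Tez code.

```java
// Illustrative sketch only: consuming the accessors shown in the diff above.
// getVertexIndex() already exists; getDagIdentifier() is what this PR adds.
final class DagScopedPaths {

  // Stand-in for org.apache.tez.runtime.api.OutputCommitterContext,
  // reduced to the two accessors relevant here.
  interface OutputCommitterContext {
    int getVertexIndex();
    int getDagIdentifier();
  }

  // Builds a work-directory suffix that is unique per DAG and per vertex,
  // so concurrent DAGs in one session cannot clobber each other's output.
  static String workDirSuffix(OutputCommitterContext ctx) {
    return "dag_" + ctx.getDagIdentifier() + "/vertex_" + ctx.getVertexIndex();
  }
}
```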
MROutputCommitter.java (org.apache.tez.mapreduce.committer):

@@ -18,6 +18,7 @@
 
 package org.apache.tez.mapreduce.committer;
 
+import org.apache.tez.mapreduce.common.Utils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
@@ -78,6 +79,7 @@ public void initialize() throws IOException {
     jobConf.getCredentials().mergeAll(UserGroupInformation.getCurrentUser().getCredentials());
     jobConf.setInt(MRJobConfig.APPLICATION_ATTEMPT_ID,
         getContext().getDAGAttemptNumber());
+    jobConf.set(MRJobConfig.JOB_COMMITTER_UUID, Utils.getDAGID(getContext()));
Comment: Is this unique across all jobs which may be writing to a table, even from other processes?

Contributor (Author): Yes. This ID is unique to a DAG + attempt number, so if we have some other job, it'll have a different application ID component, while if an attempt fails and the DAG retries, the attempt number will be different.
     jobConf.setInt(MRJobConfig.VERTEX_ID, getContext().getVertexIndex());
     committer = getOutputCommitter(getContext());
     jobContext = getJobContextFromVertexContext(getContext());
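The uniqueness argument above can be made concrete with a small sketch. This is not the actual Utils.getDAGID implementation; the ID format (application ID + DAG number + attempt number) is an assumption based on the author's description.

```java
// Hypothetical sketch of a DAG-scoped ID, for illustration only.
// Tez derives the real value via Utils.getDAGID(getContext()); the format
// below is assumed from the discussion: a different job carries a different
// application ID component, and a retried DAG carries a different attempt
// number, so no two committers share an ID.
final class DagIdSketch {
  static String dagScopedId(String applicationId, int dagNumber, int dagAttemptNumber) {
    return applicationId + "_" + dagNumber + "_" + dagAttemptNumber;
  }

  public static void main(String[] args) {
    // Prints application_1700000000000_0042_1_2 (all values made up)
    System.out.println(dagScopedId("application_1700000000000_0042", 1, 2));
  }
}
```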
MRJobConfig.java:

@@ -131,6 +131,11 @@ public interface MRJobConfig
 
   public static final String CACHE_ARCHIVES_VISIBILITIES = "mapreduce.job.cache.archives.visibilities";
 
+  /**
+   * Used by committers to set a job-wide UUID.
+   */
+  public static final String JOB_COMMITTER_UUID = "job.committer.uuid";
+
Contributor: This is not the setting used by the S3 committer, right? How will it work?

Contributor (Author): There is a corresponding change I have in my Hadoop code where it consults this property, similar to how it consults the property Spark sets for this purpose.

Contributor: So you can confirm this will work with job.committer.uuid, right?

Contributor: @VenkatSNarayanan May I ask whether the Hadoop S3 committer will work with Hive+Tez after this change?

Contributor (Author): @abstractdog I haven't publicly posted the Hadoop PR yet, but the change I have checks for this property around here: https://github.com/apache/hadoop/blob/51cb858cc8c23d873d4adfc21de5f2c1c22d346f/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java#L1372, similar to how the Spark property is checked. I have already tested these changes together alongside my Hive implementation. @zhangbutao There are corresponding changes to Hadoop and Hive that also need to be merged, which I have. Once all three PRs (Tez, Hadoop, and Hive) have been merged, the magic committer will be usable with Hive.

Contributor: Should this go into Tez 0.10.4? If so, it would be good to have it in 1-2 weeks - just FYI, regarding planning the Hadoop change.

Contributor (Author): 0.10.4 would be ideal. In that case, let me loop in the Hadoop folks to see if they have any strong opinions about this.

Contributor: @VenkatSNarayanan I found an old ticket about integrating the S3A committer, https://issues.apache.org/jira/browse/HIVE-16295, and it seems that supporting this needs a lot of Hive code change.

Contributor: https://issues.apache.org/jira/browse/HADOOP-19091 - I just saw your Hadoop ticket, and the Hive change patch is there too. Maybe you should create a PR against the latest Hive master branch once you have done the preparatory work. :)

Contributor (Author): There haven't been any objections from the Hadoop folks; I think it should be safe to go ahead with the patch as it is, @abstractdog.
 
   public static final String FILEOUTPUTCOMMITTER_ALGORITHM_VERSION = "mapreduce.fileoutputcommitter.algorithm.version";
 
   /**