[MINOR] Removed unused profiles from spark/pom.xml #1301
Conversation
Simplification looks great to me! But one thing is not clear: are the MapR profiles no longer useful? IDK, but my guess is that they were added there for a reason (compatibility with the MapR build of the Hadoop distro). We should remove those only with an explanation of why it is done and how it affects people with that distro, what do you think? Looks like a candidate for
@bzz Basically, the Spark module doesn't need a Hadoop version because we only use the Spark repl, core, and so on, without pulling in a specific Hadoop version. Only spark-dependencies needs a specific Hadoop version, so specifying one here is unnecessary.
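To make the point concrete, here is a minimal sketch of the kind of Hadoop-version profile that belongs in spark-dependencies/pom.xml rather than spark/pom.xml. The profile id and version value are illustrative assumptions, not copied from the actual Zeppelin build:

```xml
<!-- Hypothetical example: a Hadoop-version profile of the kind that
     lives in spark-dependencies/pom.xml. The spark module itself only
     depends on spark-core/spark-repl, so it never needs this. -->
<profile>
  <id>hadoop-2.6</id>
  <properties>
    <!-- pins the Hadoop artifacts pulled in by spark-dependencies -->
    <hadoop.version>2.6.0</hadoop.version>
  </properties>
</profile>
```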
Thank you for the explanation! Makes sense. Looks great to me. Maybe it's worth to
We should definitely remove the yarn profile; I got a classpath issue when enabling it. And we need to update the docs accordingly.
As for the docs, this change doesn't alter any behaviour. I think the other issue linked above is a better place to update the docs.
```xml
<!-- include sparkr in the build -->
<profile>
  <id>sparkr</id>
```
Is sparkr needed here? I notice there's another sparkr profile in spark-dependencies/pom.xml.
This profile selects which interpreter-setting.json to use depending on whether we use SparkR; we need it because we have two different SparkR versions.
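A sketch of how a profile like this can switch which interpreter-setting.json gets packaged. The resource directory paths below are assumptions for illustration, not the actual layout of the Zeppelin source tree:

```xml
<!-- Illustrative only: a sparkr profile that swaps the packaged
     interpreter-setting.json by pointing the build at a different
     resource directory. Directory names are hypothetical. -->
<profile>
  <id>sparkr</id>
  <build>
    <resources>
      <resource>
        <!-- contains the SparkR-enabled interpreter-setting.json -->
        <directory>src/main/sparkr-resources</directory>
      </resource>
    </resources>
  </build>
</profile>
```

Activating the profile (e.g. `mvn package -Psparkr`) would then package the SparkR variant of the settings file instead of the default one.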
I thought we do need
@felixcheung What I want to remove is duplicated from spark-dependencies/pom.xml. This deletion doesn't hurt any current features; it just removes duplicated code.
Merging if there's no more discussion.
Quick question, just to double-check: this should not affect users following published build instructions like https://www.mapr.com/blog/building-apache-zeppelin-mapr-using-spark-under-yarn , right? Similar to ZEPPELIN-1353.
Force-pushed 8de6cf8 to e55e307
Force-pushed e55e307 to 10c7bb7
What is this PR for?
Simplifying spark/pom.xml
What type of PR is it?
[Refactoring]
Todos
What is the Jira issue?
N/A
How should this be tested?
No tests needed; CI should stay green.
Screenshots (if appropriate)
Questions: