[SPARK-25016][BUILD][CORE] Remove support for Hadoop 2.6 #22615
```diff
@@ -30,9 +30,6 @@ Spark runs on Java 8+, Python 2.7+/3.4+ and R 3.1+. For the Scala API, Spark {{s
 uses Scala {{site.SCALA_BINARY_VERSION}}. You will need to use a compatible Scala version
 ({{site.SCALA_BINARY_VERSION}}.x).
 
-Note that support for Java 7, Python 2.6 and old Hadoop versions before 2.6.5 were removed as of Spark 2.2.0.
-Support for Scala 2.10 was removed as of 2.3.0.
-
```
Member: So we are not going to mention the supported Hadoop version?

Author (Member): Now that we are on 3.0, I figured we didn't need to keep documenting how versions 2.2 and 2.3 worked. I also felt that the particular Hadoop version was only an issue in the distant past, when we were trying to support the odd world of mutually incompatible 2.x releases before 2.2. Now it's no more of a high-level issue than anything else. Indeed, we might even just build against Hadoop 3.x in the end and de-emphasize dependence on a particular version of Hadoop. But for now I just removed this note.
```diff
 # Running the Examples and Shell
 
 Spark comes with several sample programs. Scala, Java, Python and R examples are in the
```
@vanzin what do you think of this approach? It simplifies the logic below too, avoiding repeating the main build step three times.
Looks fine. Using wildcards is a little weird, but I guess that's the cleanest way in bash. But shouldn't you initialize PIP_FLAG and R_FLAG to empty before these checks?
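For context, here is a minimal sketch of the flag-based approach being discussed: the flags are initialized to empty as suggested above, packaging options are selected with bash wildcard matching, and the main build step runs only once. This is an illustration under assumed names (`BUILD_PACKAGE`, `NAME`), not the actual release script from this PR.

```bash
#!/usr/bin/env bash
# Illustrative sketch only, not the real release script.
# Initialize both flags to empty so the unmatched case is well-defined.
PIP_FLAG=""
R_FLAG=""

# BUILD_PACKAGE is an assumed input, e.g. "withpip", "withr", or "withpip,withr".
# Bash wildcard matching turns each requested packaging option into a flag.
if [[ "$BUILD_PACKAGE" == *"withpip"* ]]; then
  PIP_FLAG="--pip"
fi
if [[ "$BUILD_PACKAGE" == *"withr"* ]]; then
  R_FLAG="--r"
fi

# Single invocation of the main build instead of repeating it per variant.
# Flags are deliberately unquoted so empty values expand to nothing.
./dev/make-distribution.sh --name "$NAME" --tgz $PIP_FLAG $R_FLAG
```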
One caveat: I'm not sure we have tested building both Python and R in one build. This could be a good thing, but if I recall, the R build changes some of the binary files under R that get shipped in the "source release" (these are required R object files).