bin/spark-class2.cmd (1 addition, 1 deletion)
@@ -36,7 +36,7 @@ if exist "%SPARK_HOME%\RELEASE" (
 )

 if not exist "%SPARK_JARS_DIR%"\ (
-  echo Failed to find Spark assembly JAR.
+  echo Failed to find Spark jars directory.
   echo You need to build Spark before running this program.
   exit /b 1
 )
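For comparison with the JVM side of the launcher, here is a minimal Scala sketch of the same fail-fast check. It assumes a release layout where the built jars live under $SPARK_HOME/jars; the names are illustrative, not the actual launcher code.

import java.io.File

// Fail fast when the jars directory produced by the build is missing,
// mirroring the batch script above.
val sparkHome = sys.env.getOrElse("SPARK_HOME", ".")
val jarsDir = new File(sparkHome, "jars")
if (!jarsDir.isDirectory) {
  System.err.println("Failed to find Spark jars directory.")
  System.err.println("You need to build Spark before running this program.")
  sys.exit(1)
}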
@@ -62,7 +62,7 @@ object PythonRunner {
     // ready to serve connections.
     thread.join()

-    // Build up a PYTHONPATH that includes the Spark assembly JAR (where this class is), the
+    // Build up a PYTHONPATH that includes the Spark assembly (where this class is), the
     // python directories in SPARK_HOME (if set), and any files in the pyFiles argument
     val pathElements = new ArrayBuffer[String]
     pathElements ++= formattedPyFiles
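A minimal sketch of how the PYTHONPATH described in that comment is assembled. The inputs here (formattedPyFiles, the py4j zip name) are illustrative stand-ins for values PythonRunner derives from the submit arguments, not the exact code.

import java.io.File
import scala.collection.mutable.ArrayBuffer

// Illustrative inputs; in PythonRunner these come from the parsed arguments.
val formattedPyFiles = Seq("/tmp/deps.zip")
val sparkPythonDirs = sys.env.get("SPARK_HOME")
  .map(home => Seq(s"$home/python", s"$home/python/lib/py4j-src.zip"))
  .getOrElse(Seq.empty)

val pathElements = new ArrayBuffer[String]
pathElements ++= formattedPyFiles    // files passed via --py-files come first
pathElements ++= sparkPythonDirs     // python directories under SPARK_HOME
pathElements += sys.env.getOrElse("PYTHONPATH", "")
val pythonPath = pathElements.mkString(File.pathSeparator)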
@@ -478,7 +478,8 @@ private[deploy] class SparkSubmitArguments(args: Seq[String], env: Map[String, S
     val command = sys.env.get("_SPARK_CMD_USAGE").getOrElse(
       """Usage: spark-submit [options] <app jar | python file> [app arguments]
         |Usage: spark-submit --kill [submission ID] --master [spark://...]
-        |Usage: spark-submit --status [submission ID] --master [spark://...]""".stripMargin)
+        |Usage: spark-submit --status [submission ID] --master [spark://...]
+        |Usage: spark-submit run-example [options] example-class [example args]""".stripMargin)
     outStream.println(command)

     val mem_mb = Utils.DEFAULT_DRIVER_MEM_MB
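Note that the usage banner is overridable: a wrapper script such as run-example can export _SPARK_CMD_USAGE, and that text is printed instead of the default. A minimal sketch of the pattern used above:

// Wrapper scripts can export _SPARK_CMD_USAGE to replace the default
// spark-submit usage text with their own.
val defaultUsage =
  "Usage: spark-submit [options] <app jar | python file> [app arguments]"
val command = sys.env.get("_SPARK_CMD_USAGE").getOrElse(defaultUsage)
Console.err.println(command)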
docs/building-spark.md (1 addition, 1 deletion)
@@ -192,7 +192,7 @@ If you have JDK 8 installed but it is not the system default, you can set JAVA_H

 # Packaging without Hadoop Dependencies for YARN

-The assembly jar produced by `mvn package` will, by default, include all of Spark's dependencies, including Hadoop and some of its ecosystem projects. On YARN deployments, this causes multiple versions of these to appear on executor classpaths: the version packaged in the Spark assembly and the version on each node, included with `yarn.application.classpath`. The `hadoop-provided` profile builds the assembly without including Hadoop-ecosystem projects, like ZooKeeper and Hadoop itself.
+The assembly directory produced by `mvn package` will, by default, include all of Spark's dependencies, including Hadoop and some of its ecosystem projects. On YARN deployments, this causes multiple versions of these to appear on executor classpaths: the version packaged in the Spark assembly and the version on each node, included with `yarn.application.classpath`. The `hadoop-provided` profile builds the assembly without including Hadoop-ecosystem projects, like ZooKeeper and Hadoop itself.

 # Building with SBT

docs/sql-programming-guide.md (2 additions, 2 deletions)
@@ -1651,7 +1651,7 @@ SELECT * FROM jsonTable

 Spark SQL also supports reading and writing data stored in [Apache Hive](http://hive.apache.org/).
 However, since Hive has a large number of dependencies, it is not included in the default Spark assembly.
 Hive support is enabled by adding the `-Phive` and `-Phive-thriftserver` flags to Spark's build.
-This command builds a new assembly jar that includes Hive. Note that this Hive assembly jar must also be present
+This command builds a new assembly directory that includes Hive. Note that this Hive assembly directory must also be present
 on all of the worker nodes, as they will need access to the Hive serialization and deserialization libraries
 (SerDes) in order to access data stored in Hive.
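With such a build on the cluster, Hive tables can be queried through a HiveContext. A minimal sketch, assuming a -Phive build and an application launched through spark-submit (which supplies the master):

import org.apache.spark.{SparkConf, SparkContext}

// Assumes a build with -Phive and the Hive libraries present on every worker.
val sc = new SparkContext(new SparkConf().setAppName("hive-example"))
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sqlContext.sql("SELECT key, value FROM src").collect().foreach(println)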

@@ -1770,7 +1770,7 @@ The following options can be used to configure the version of Hive that is used
 property can be one of three options:
 <ol>
 <li><code>builtin</code></li>
-Use Hive 1.2.1, which is bundled with the Spark assembly jar when <code>-Phive</code> is
+Use Hive 1.2.1, which is bundled with the Spark assembly when <code>-Phive</code> is
 enabled. When this option is chosen, <code>spark.sql.hive.metastore.version</code> must be
 either <code>1.2.1</code> or not defined.
 <li><code>maven</code></li>
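A minimal configuration sketch for the builtin option described above. The spark.sql.hive.metastore.* keys are the real property names; the values shown are just one valid combination.

import org.apache.spark.SparkConf

// "builtin" uses the Hive 1.2.1 classes bundled by the -Phive build, so the
// metastore version must be 1.2.1 (or left unset).
val conf = new SparkConf()
  .set("spark.sql.hive.metastore.version", "1.2.1")
  .set("spark.sql.hive.metastore.jars", "builtin")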
@@ -784,7 +784,7 @@ private[sql] object ParquetRelation extends Logging {
       // scalastyle:on classforname
       redirect(JLogger.getLogger("parquet"))
     } catch { case _: Throwable =>
-      // SPARK-9974: com.twitter:parquet-hadoop-bundle:1.6.0 is not packaged into the assembly jar
+      // SPARK-9974: com.twitter:parquet-hadoop-bundle:1.6.0 is not packaged into the assembly
       // when Spark is built with SBT. So `parquet.Log` may not be found. This try/catch block
       // should be removed after this issue is fixed.
     }
@@ -429,7 +429,7 @@ private[hive] object HiveContext extends Logging {
       | Location of the jars that should be used to instantiate the HiveMetastoreClient.
       | This property can be one of three options: "
       | 1. "builtin"
-      |   Use Hive ${hiveExecutionVersion}, which is bundled with the Spark assembly jar when
+      |   Use Hive ${hiveExecutionVersion}, which is bundled with the Spark assembly when
       |   <code>-Phive</code> is enabled. When this option is chosen,
      |   <code>spark.sql.hive.metastore.version</code> must be either
      |   <code>${hiveExecutionVersion}</code> or not defined.