21 changes: 16 additions & 5 deletions core/src/main/scala/org/apache/spark/internal/config/package.scala
@@ -60,8 +60,12 @@ package object config {
.createWithDefaultString("1g")

private[spark] val DRIVER_MEMORY_OVERHEAD = ConfigBuilder("spark.driver.memoryOverhead")
.doc("The amount of off-heap memory to be allocated per driver in cluster mode, " +
"in MiB unless otherwise specified.")
.doc("Amount of non-heap memory to be allocated per driver process in cluster mode" +
" (e.g YARN and Kubernetes), in MiB unless otherwise specified." +
"Note: Non-heap memory including off-heap memory (when spark.memory.offHeap.enabled=true)" +
" and memory used by other non-driver processes running in the same container." +
"The maximum memory size of container to running driver is determined by the sum of " +
"spark.driver.memoryOverhead and spark.driver.memory.")
.bytesConf(ByteUnit.MiB)
.createOptional
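
For context, a minimal sketch (not part of this patch; the object name and values are invented) of how the two driver settings combine into the container request described above:

```scala
// Illustrative only: the documented rule is
// container size = spark.driver.memory + spark.driver.memoryOverhead.
import org.apache.spark.SparkConf

object DriverSizingSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .set("spark.driver.memory", "4g")
      .set("spark.driver.memoryOverhead", "512m") // explicit value, overriding the 10% default

    val heapMiB = conf.getSizeAsMb("spark.driver.memory")
    val overheadMiB = conf.getSizeAsMb("spark.driver.memoryOverhead")

    // The cluster manager (YARN/Kubernetes) sizes the driver container by the sum.
    println(s"driver container ≈ ${heapMiB + overheadMiB} MiB") // 4096 + 512 = 4608
  }
}
```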

@@ -185,8 +189,12 @@
.createWithDefaultString("1g")

private[spark] val EXECUTOR_MEMORY_OVERHEAD = ConfigBuilder("spark.executor.memoryOverhead")
.doc("The amount of off-heap memory to be allocated per executor in cluster mode, " +
"in MiB unless otherwise specified.")
.doc("Amount of non-heap memory to be allocated per executor process in cluster mode " +
"(e.g YARN and Kubernetes), in MiB unless otherwise specified." +
"Note: Non-heap memory including off-heap memory (when spark.memory.offHeap.enabled=true)" +
" and memory used by other non-executor processes running in the same container." +
"The maximum memory size of container to running executor is determined by the sum of " +
"spark.executor.memoryOverhead and spark.executor.memory.")
.bytesConf(ByteUnit.MiB)
.createOptional
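
To make the default concrete, a small sketch (assumptions: the 10% factor and 384 MiB floor quoted in the docs below; the object and values are illustrative):

```scala
// Illustrative only: when spark.executor.memoryOverhead is unset, the
// documented default is max(executorMemory * 0.10, 384 MiB).
object ExecutorOverheadSketch {
  def defaultOverheadMiB(executorMemoryMiB: Long): Long =
    math.max((executorMemoryMiB * 0.10).toLong, 384L)

  def main(args: Array[String]): Unit = {
    val executorMemoryMiB = 8192L // spark.executor.memory=8g
    val overheadMiB = defaultOverheadMiB(executorMemoryMiB) // 819 MiB
    println(s"executor container ≈ ${executorMemoryMiB + overheadMiB} MiB") // ≈ 9011 MiB
  }
}
```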

@@ -201,7 +209,10 @@

private[spark] val MEMORY_OFFHEAP_ENABLED = ConfigBuilder("spark.memory.offHeap.enabled")
.doc("If true, Spark will attempt to use off-heap memory for certain operations. " +
"If off-heap memory use is enabled, then spark.memory.offHeap.size must be positive.")
"If off-heap memory use is enabled, then spark.memory.offHeap.size must be positive." +
"Note: If off-heap memory use is enabled or off-heap memory size is increased, " +
"recommend raising the non-heap memory size (e.g increase spark.driver.memoryOverhead " +
"or spark.executor.memoryOverhead).")
.withAlternative("spark.unsafe.offHeap")
.booleanConf
.createWithDefault(false)
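
A hedged example of the recommendation above (values are illustrative; the overhead actually needed depends on the workload). Off-heap allocations count against the container limit but not against the JVM heap, so they should be budgeted into the overhead:

```scala
import org.apache.spark.SparkConf

object OffHeapSizingSketch {
  // Sketch: enable off-heap memory and grow the overhead to cover it.
  val conf = new SparkConf()
    .set("spark.memory.offHeap.enabled", "true")
    .set("spark.memory.offHeap.size", "2g")      // must be positive when enabled
    .set("spark.executor.memory", "4g")          // JVM heap
    // default overhead would be max(4096 * 0.10, 384) = 410 MiB;
    // budget the 2 GiB of off-heap on top of that.
    .set("spark.executor.memoryOverhead", "2458m")
}
```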
30 changes: 22 additions & 8 deletions docs/configuration.md
@@ -181,10 +181,15 @@ of the most common options to set are:
<td><code>spark.driver.memoryOverhead</code></td>
<td>driverMemory * 0.10, with minimum of 384 </td>
<td>
The amount of off-heap memory to be allocated per driver in cluster mode, in MiB unless
otherwise specified. This is memory that accounts for things like VM overheads, interned strings,
other native overheads, etc. This tends to grow with the container size (typically 6-10%).
This option is currently supported on YARN, Mesos and Kubernetes.
Amount of non-heap memory to be allocated per driver process in cluster mode
(e.g. YARN, Mesos and Kubernetes), in MiB unless otherwise specified. This is memory that
accounts for things like VM overheads, interned strings, other native overheads, etc.
This tends to grow with the container size (typically 6-10%).
<br/>
<em>Note:</em> Non-heap memory includes off-heap memory
(when <code>spark.memory.offHeap.enabled=true</code>) and memory used by other non-driver
processes running in the same container. The maximum memory size of the container running
the driver is determined by the sum of <code>spark.driver.memoryOverhead</code>
and <code>spark.driver.memory</code>.
</td>
</tr>
<tr>
@@ -215,10 +220,16 @@ of the most common options to set are:
<td><code>spark.executor.memoryOverhead</code></td>
<td>executorMemory * 0.10, with minimum of 384 </td>
<td>
The amount of off-heap memory to be allocated per executor, in MiB unless otherwise specified.
This is memory that accounts for things like VM overheads, interned strings, other native
overheads, etc. This tends to grow with the executor size (typically 6-10%).
This option is currently supported on YARN and Kubernetes.
Amount of non-heap memory to be allocated per executor process in cluster mode
(e.g. YARN and Kubernetes), in MiB unless otherwise specified. This is memory that accounts for
things like VM overheads, interned strings, other native overheads, etc. This tends to grow with
the executor size (typically 6-10%).
<br/>
<em>Note:</em> Non-heap memory includes off-heap memory
(when <code>spark.memory.offHeap.enabled=true</code>) and memory used by other non-executor
processes running in the same container. The maximum memory size of the container running the
executor is determined by the sum of <code>spark.executor.memoryOverhead</code> and
<code>spark.executor.memory</code>.
</td>
</tr>
<tr>
@@ -1233,6 +1244,9 @@ Apart from these, the following properties are also available, and may be useful
<td>
If true, Spark will attempt to use off-heap memory for certain operations. If off-heap memory
use is enabled, then <code>spark.memory.offHeap.size</code> must be positive.
<em>Note:</em> If off-heap memory use is enabled or off-heap memory size is increased,
recommend raising the non-heap memory size(e.g increase <code>spark.driver.memoryOverhead</code>
or <code>spark.executor.memoryOverhead</code>).
</td>
</tr>
<tr>