9 changes: 6 additions & 3 deletions docs/configuration.md
@@ -190,8 +190,10 @@ of the most common options to set are:
and it is up to the application to avoid exceeding the overhead memory space
shared with other non-JVM processes. When PySpark is run in YARN or Kubernetes, this memory
is added to executor resource requests.

NOTE: Python memory usage may not be limited on platforms that do not support resource limiting, such as Windows.
<br/>
<em>Note:</em> This feature depends on Python's `resource` module; therefore, its behaviors and
limitations are inherited from that module. For instance, Windows does not support resource limiting,
and on MacOS the resource is not actually limited.
</td>
</tr>
<tr>
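
The note in the hunk above refers to Python's `resource` module. As a rough illustration of the mechanism only (this is not PySpark's worker code; the function name and the choice of `RLIMIT_AS` here are assumptions), a per-process memory cap can be applied along these lines:

```python
try:
    import resource  # POSIX only; unavailable on Windows
except ImportError:
    resource = None


def cap_address_space(limit_mb):
    """Illustrative sketch: cap this process's address space at limit_mb MiB."""
    if resource is None:
        return  # e.g. Windows: no resource module, so nothing is limited
    new_limit = limit_mb * 1024 * 1024
    soft, _hard = resource.getrlimit(resource.RLIMIT_AS)
    # Only tighten the limit; never loosen an existing soft limit.
    if soft == resource.RLIM_INFINITY or new_limit < soft:
        # On MacOS the call may succeed without the kernel enforcing the
        # limit, which mirrors the caveat in the note above.
        resource.setrlimit(resource.RLIMIT_AS, (new_limit, new_limit))
```
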
@@ -223,7 +225,8 @@ of the most common options to set are:
stored on disk. This should be on a fast, local disk in your system. It can also be a
comma-separated list of multiple directories on different disks.

NOTE: In Spark 1.0 and later this will be overridden by SPARK_LOCAL_DIRS (Standalone), MESOS_SANDBOX (Mesos) or
<br/>
<em>Note:</em> This will be overridden by the SPARK_LOCAL_DIRS (Standalone), MESOS_SANDBOX (Mesos), or
LOCAL_DIRS (YARN) environment variable set by the cluster manager.
</td>
</tr>
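
For `spark.local.dir`, a minimal PySpark sketch of setting the option when building a session; the application name and directory paths are hypothetical, and under a cluster manager the environment variables listed above take precedence:

```python
from pyspark.sql import SparkSession

# Hypothetical paths: two fast local disks used for shuffle and spill files.
# Under a cluster manager this setting is overridden by SPARK_LOCAL_DIRS
# (Standalone), MESOS_SANDBOX (Mesos), or LOCAL_DIRS (YARN).
spark = (
    SparkSession.builder
    .appName("local-dir-example")
    .config("spark.local.dir", "/mnt/disk1/spark-tmp,/mnt/disk2/spark-tmp")
    .getOrCreate()
)
```
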