
Commit 798ab88

Update documentation and add to SparkContext
1 parent 34c523c commit 798ab88

File tree

3 files changed: +21 −6 lines changed


core/src/main/scala/org/apache/spark/SparkContext.scala

Lines changed: 14 additions & 0 deletions
```diff
@@ -2263,6 +2263,20 @@ object SparkContext extends Logging {
    */
   def jarOfObject(obj: AnyRef): Option[String] = jarOfClass(obj.getClass)
 
+  /**
+   * :: DeveloperApi ::
+   * Estimate the number of bytes that the given object takes up on the JVM heap. The estimate
+   * includes space taken up by objects referenced by the given object, their references, and so on
+   * and so forth.
+   *
+   * This is useful for determining the amount of heap space a broadcast variable will occupy on
+   * each executor or the amount of space each object will take when caching objects in
+   * deserialized form. This is not the same as the serialized size of the object, which will
+   * typically be much smaller.
+   */
+  @DeveloperApi
+  def estimateSizeOf(obj: AnyRef): Long = SizeEstimator.estimate(obj)
+
   /**
    * Creates a modified version of a SparkConf with the parameters that can be passed separately
    * to SparkContext, to make it easier to write SparkContext's constructors. This ignores
```
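
A minimal usage sketch of the method added above, assuming a running `SparkContext` bound to `sc` (as in the Spark shell); the `Person` case class and the lookup-table contents are invented for illustration:

```scala
// Illustrative only: estimate the heap footprint of a lookup table
// before broadcasting it to executors.
case class Person(name: String, age: Int)

val table: Map[String, Person] =
  (1 to 100000).map(i => (s"user$i", Person(s"user$i", i % 90))).toMap

// Walks the object graph (the map, its entries, keys, values, strings)
// and returns an estimated on-heap size in bytes.
val bytes: Long = sc.estimateSizeOf(table)
println(f"Estimated heap size: ${bytes / 1024.0 / 1024.0}%.1f MB")
```

Because the estimate covers the deserialized, in-memory form, it will typically be larger than the size of the same data serialized for transfer.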

core/src/main/scala/org/apache/spark/util/SizeEstimator.scala

Lines changed: 1 addition & 2 deletions
```diff
@@ -36,8 +36,7 @@ import org.apache.spark.util.collection.OpenHashSet
  * Based on the following JavaWorld article:
  * http://www.javaworld.com/javaworld/javaqa/2003-12/02-qa-1226-sizeof.html
  */
-@DeveloperApi
-object SizeEstimator extends Logging {
+private[spark] object SizeEstimator extends Logging {
 
   // Sizes of primitive types
   private val BYTE_SIZE = 1
```

docs/tuning.md

Lines changed: 6 additions & 4 deletions
```diff
@@ -94,11 +94,13 @@ We will then cover tuning Spark's cache size and the Java garbage collector.
 
 ## Determining Memory Consumption
 
-The best way to size the amount of memory consumption your dataset will require is to create an RDD, put it into cache, and look at the SparkContext logs on your driver program. The logs will tell you how much memory each partition is consuming, which you can aggregate to get the total size of the RDD. You will see messages like this:
+The best way to size the amount of memory consumption a dataset will require is to create an RDD, put it
+into cache, and look at the "Storage" page in the web UI. The page will tell you how much memory the RDD
+is occupying.
 
-    INFO BlockManagerMasterActor: Added rdd_0_1 in memory on mbk.local:50311 (size: 717.5 KB, free: 332.3 MB)
-
-This means that partition 1 of RDD 0 consumed 717.5 KB.
+To estimate the memory consumption of a particular object, use the `SparkContext`'s `estimateSizeOf`
+method. This is useful for experimenting with different data layouts to trim memory usage, as well as
+determining the amount of space a broadcast variable will occupy on each executor heap.
 
 ## Tuning Data Structures
 
```
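
To make the "experimenting with different data layouts" point concrete, here is a hedged sketch comparing two representations of the same values (again assuming a `SparkContext` named `sc`; the arrays are invented for the example):

```scala
// Illustrative only: the same 10,000 values stored two ways.
val asStrings: Array[String] = Array.tabulate(10000)(i => i.toString)
val asInts: Array[Int] = Array.tabulate(10000)(identity)

// The String layout pays for an object header and a char array per
// element; the primitive array stores bare 4-byte ints.
println(s"Array[String]: ${sc.estimateSizeOf(asStrings)} bytes")
println(s"Array[Int]:    ${sc.estimateSizeOf(asInts)} bytes")
```

The primitive layout should report a much smaller estimate, which is the kind of comparison the updated section suggests running before caching or broadcasting a large structure.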
