@@ -123,14 +123,12 @@ object SQLConf {
     .createWithDefault(10)
 
   val COMPRESS_CACHED = buildConf("spark.sql.inMemoryColumnarStorage.compressed")
-    .internal()
     .doc("When set to true Spark SQL will automatically select a compression codec for each " +
       "column based on statistics of the data.")
     .booleanConf
     .createWithDefault(true)
 
   val COLUMN_BATCH_SIZE = buildConf("spark.sql.inMemoryColumnarStorage.batchSize")
-    .internal()
     .doc("Controls the size of batches for columnar caching. Larger batch sizes can improve " +
       "memory utilization and compression, but risk OOMs when caching data.")
     .intConf
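
For context, dropping .internal() makes these two options behave like any other public SQL conf. A minimal PySpark sketch of setting them before caching (the session, values, and DataFrame here are illustrative, not part of the PR):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Both keys are externalized by this PR; the defaults are shown in the diff above.
    spark.conf.set("spark.sql.inMemoryColumnarStorage.compressed", "true")
    spark.conf.set("spark.sql.inMemoryColumnarStorage.batchSize", "10000")

    df = spark.range(1000000)
    df.cache()   # columnar cache built with the settings above
    df.count()   # materializes the cache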
@@ -1043,17 +1041,16 @@ object SQLConf {
 
   val ARROW_EXECUTION_ENABLE =
     buildConf("spark.sql.execution.arrow.enabled")
-      .internal()
-      .doc("Make use of Apache Arrow for columnar data transfers. Currently available " +
-        "for use with pyspark.sql.DataFrame.toPandas with the following data types: " +
-        "StringType, BinaryType, BooleanType, DoubleType, FloatType, ByteType, IntegerType, " +
-        "LongType, ShortType")
+      .doc("When true, make use of Apache Arrow for columnar data transfers. Currently available " +
+        "for use with pyspark.sql.DataFrame.toPandas, and " +
+        "pyspark.sql.SparkSession.createDataFrame when its input is a Pandas DataFrame. " +
+        "The following data types are unsupported: " +
+        "MapType, ArrayType of TimestampType, and nested StructType.")
       .booleanConf
       .createWithDefault(false)
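
A hedged usage sketch of the conf documented above (requires Pandas and PyArrow on the driver; the calls follow the names given in the new doc string):

    import pandas as pd
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Externalized by this PR; defaults to false.
    spark.conf.set("spark.sql.execution.arrow.enabled", "true")

    # Arrow-backed conversion to a Pandas DataFrame.
    pdf = spark.range(100).toPandas()

    # Arrow-backed creation from a Pandas DataFrame.
    df = spark.createDataFrame(pd.DataFrame({"id": [1, 2, 3]}))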
Member commented:
spark.sql.execution.arrow.maxRecordsPerBatch is also mentioned in the doc change at #19575. Shall we also externalize it?

Member Author (@HyukjinKwon) commented on Jan 28, 2018:
Yup. Let me update spark.sql.inMemoryColumnarStorage.compressed and spark.sql.inMemoryColumnarStorage.batchSize too. These are also exposed but internal.


   val ARROW_EXECUTION_MAX_RECORDS_PER_BATCH =
     buildConf("spark.sql.execution.arrow.maxRecordsPerBatch")
-      .internal()
Member commented:
I am not sure about this conf; see https://github.com/apache/spark/pull/19575/files#r164252424.

If we want to merge this PR now, maybe revert this change?

Member commented:
We can externalize this conf in PR #19575, if we believe it is the one we will use in the long term.

Member Author commented:
Makes sense. Let me take this out. Is this the only one you are concerned about for now?

.doc("When using Apache Arrow, limit the maximum number of records that can be written " +
"to a single ArrowRecordBatch in memory. If set to zero or negative there is no limit.")
.intConf
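
A short sketch of tuning this limit, assuming the conf is kept external as discussed above (the value 5000 is arbitrary, for illustration only):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    spark.conf.set("spark.sql.execution.arrow.enabled", "true")

    # Cap each ArrowRecordBatch at 5,000 records; per the doc string,
    # zero or a negative value disables the cap.
    spark.conf.set("spark.sql.execution.arrow.maxRecordsPerBatch", "5000")

    # Each partition is now transferred in Arrow batches of at most 5,000 records.
    pdf = spark.range(20000).toPandas()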