Commit 99b2336

[SPARK-4916] Update SQL programming guide

1 parent 6ee6aa7
1 file changed: +1 −2 lines changed

docs/sql-programming-guide.md

```diff
@@ -835,8 +835,7 @@ Spark SQL can cache tables using an in-memory columnar format by calling `sqlCon
 Then Spark SQL will scan only required columns and will automatically tune compression to minimize
 memory usage and GC pressure. You can call `sqlContext.uncacheTable("tableName")` to remove the table from memory.
 
-Note that if you call `schemaRDD.cache()` rather than `sqlContext.cacheTable(...)`, tables will _not_ be cached using
-the in-memory columnar format, and therefore `sqlContext.cacheTable(...)` is strongly recommended for this use case.
+Note that as of the 1.2 release of Spark, calling `schemaRDD.cache()`, like `sqlContext.cacheTable(...)`, will cache tables using the in-memory columnar format.
 
 Configuration of in-memory caching can be done using the `setConf` method on SQLContext or by running
 `SET key=value` commands using SQL.
```
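For context, the workflow the updated note describes can be sketched as follows. This is a minimal sketch against the Spark 1.2-era API; it assumes an existing `SparkContext` named `sc` and a SchemaRDD already registered as the temp table `people` (both are assumptions for illustration, not part of the commit).

```scala
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)  // sc: an existing SparkContext
// Assumes a SchemaRDD was registered earlier, e.g.
//   schemaRDD.registerTempTable("people")

sqlContext.cacheTable("people")      // cache using the in-memory columnar format
sqlContext.sql("SELECT name FROM people")  // scans only the required column

// As of Spark 1.2, calling cache() on the SchemaRDD itself also uses the
// in-memory columnar format (the point of this doc change):
//   schemaRDD.cache()

// In-memory caching can be tuned via setConf, e.g.:
sqlContext.setConf("spark.sql.inMemoryColumnarStorage.compressed", "true")

sqlContext.uncacheTable("people")    // remove the table from memory
```

The same configuration can also be set from SQL with `SET key=value` commands, as the surrounding guide text notes.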
