docs/sql-programming-guide.md (1 addition, 1 deletion)
@@ -835,7 +835,7 @@ Spark SQL can cache tables using an in-memory columnar format by calling `sqlCon
Then Spark SQL will scan only required columns and will automatically tune compression to minimize memory usage and GC pressure. You can call `sqlContext.uncacheTable("tableName")` to remove the table from memory.
- Note that you call schemaRDD.cache() alike sqlContext.cacheTable(...) in 1.2 release of Spark, tables will be cached using the in-memory columnar format.
+ Note that in the Spark 1.2 release, calling `schemaRDD.cache()`, like `sqlContext.cacheTable(...)`, will cache tables using the in-memory columnar format.
Configuration of in-memory caching can be done using the `setConf` method on SQLContext or by running
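The caching calls discussed in this change can be sketched as follows. This is a minimal sketch against the Spark 1.2-era `SQLContext` API; the table name `people`, the `people.json` input path, and the app name are illustrative assumptions, not taken from the guide.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Set up a local SparkContext and SQLContext (Spark 1.2-era API).
val sc = new SparkContext(new SparkConf().setAppName("cache-demo").setMaster("local"))
val sqlContext = new SQLContext(sc)

// Hypothetical table registration; "people.json" is an illustrative path.
val people = sqlContext.jsonFile("people.json")
people.registerTempTable("people")

// Cache the table in the in-memory columnar format: queries will scan
// only the required columns, with compression tuned automatically.
sqlContext.cacheTable("people")

// As of Spark 1.2, caching the SchemaRDD directly has the same effect
// as cacheTable(...), per the diff above.
people.cache()

// In-memory caching can also be configured through setConf.
sqlContext.setConf("spark.sql.inMemoryColumnarStorage.compressed", "true")

// Remove the table from memory when it is no longer needed.
sqlContext.uncacheTable("people")
```

A usage note: `uncacheTable` frees the columnar buffers, so it is worth calling once a cached table is no longer queried.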