1 file changed, +1 −1 lines changed

@@ -14,7 +14,7 @@ title: Spark SQL Programming Guide
 Spark SQL allows relational queries expressed in SQL, HiveQL, or Scala to be executed using
 Spark. At the core of this component is a new type of RDD,
 [SchemaRDD](api/scala/index.html#org.apache.spark.sql.SchemaRDD). SchemaRDDs are composed of
-[Row](api/scala/index.html#org.apache.spark.sql.catalyst.expressions.Row) objects, along with
+[Row](api/scala/index.html#org.apache.spark.sql.package@Row:org.apache.spark.sql.catalyst.expressions.Row.type) objects, along with
 a schema that describes the data types of each column in the row. A SchemaRDD is similar to a table
 in a traditional relational database. A SchemaRDD can be created from an existing RDD, a [Parquet](http://parquet.io)
 file, a JSON dataset, or by running HiveQL against data stored in [Apache Hive](http://hive.apache.org/).
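For context, here is a minimal sketch (not part of this diff) of creating a SchemaRDD from an existing RDD and querying it, roughly following the Spark 1.0-era API the guide describes. The `Person` case class, the input file path, and the query are illustrative assumptions.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Case class used to infer the schema of the SchemaRDD via reflection.
case class Person(name: String, age: Int)

object SchemaRDDExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SchemaRDDExample").setMaster("local"))
    val sqlContext = new SQLContext(sc)
    // Brings in the implicit conversion from an RDD of case classes to a SchemaRDD.
    import sqlContext.createSchemaRDD

    // Build an ordinary RDD of Person objects from a hypothetical "name,age" text file.
    val people = sc.textFile("people.txt")
      .map(_.split(","))
      .map(p => Person(p(0), p(1).trim.toInt))

    // Register it as a table and query it with SQL; results come back as Row objects.
    people.registerAsTable("people")
    val teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
    teenagers.map(row => "Name: " + row(0)).collect().foreach(println)

    sc.stop()
  }
}
```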