
Commit ab91750

Improve Java SQL API:
* Change JavaRow => Row
* Add support for querying RDDs of JavaBeans
* Docs
* Tests
* Hive support
1 parent 0b859c8 commit ab91750

9 files changed: 489 additions & 32 deletions


docs/sql-programming-guide.md

Lines changed: 173 additions & 10 deletions
@@ -22,7 +22,10 @@ file, or by running HiveQL against data stored in [Apache Hive](http://hive.apac

# Getting Started

-The entry point into all relational functionallity in Spark is the
+<div class="codetabs">
+<div data-lang="scala" markdown="1">
+
+The entry point into all relational functionality in Spark is the
[SQLContext](api/sql/core/index.html#org.apache.spark.sql.SQLContext) class, or one of its
descendants. To create a basic SQLContext, all you need is a SparkContext.

@@ -34,8 +37,30 @@ val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._
{% endhighlight %}

+</div>
+
+<div data-lang="java" markdown="1">
+
+The entry point into all relational functionality in Spark is the
+[JavaSQLContext](api/sql/core/index.html#org.apache.spark.sql.api.java.JavaSQLContext) class, or one
+of its descendants. To create a basic JavaSQLContext, all you need is a JavaSparkContext.
+
+{% highlight java %}
+JavaSparkContext ctx = ...; // An existing JavaSparkContext.
+JavaSQLContext sqlCtx = new org.apache.spark.sql.api.java.JavaSQLContext(ctx);
+{% endhighlight %}
+
+</div>
+
+</div>
+
## Running SQL on RDDs
-One type of table that is supported by Spark SQL is an RDD of Scala case classetees. The case class
+
+<div class="codetabs">
+
+<div data-lang="scala" markdown="1">
+
+One type of table that is supported by Spark SQL is an RDD of Scala case classes. The case class
defines the schema of the table. The names of the arguments to the case class are read using
reflection and become the names of the columns. Case classes can also be nested or contain complex
types such as Sequences or Arrays. This RDD can be implicitly converted to a SchemaRDD and then be
@@ -60,7 +85,80 @@ val teenagers = sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
{% endhighlight %}
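
The Scala code that defines the `people` table is elided from the context above. A minimal sketch of what that setup looks like, assuming the bundled `examples/src/main/resources/people.txt` file and a simple `Person(name, age)` case class (the names here are illustrative, not taken from the elided lines):

{% highlight scala %}
// A case class whose constructor arguments become the column names of the table.
case class Person(name: String, age: Int)

// Parse each line of the text file into a Person and register the RDD as a table.
// The implicit conversion to a SchemaRDD comes from `import sqlContext._` above.
val people = sc.textFile("examples/src/main/resources/people.txt")
  .map(_.split(","))
  .map(p => Person(p(0), p(1).trim.toInt))
people.registerAsTable("people")

// SQL can then be run over the registered table.
val teenagers = sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
{% endhighlight %}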

-**Note that Spark SQL currently uses a very basic SQL parser, and the keywords are case sensitive.**
+</div>
+
+<div data-lang="java" markdown="1">
+
+One type of table that is supported by Spark SQL is an RDD of [JavaBeans](http://stackoverflow.com/questions/3295496/what-is-a-javabean-exactly). The BeanInfo
+defines the schema of the table. Currently, Spark SQL does not support JavaBeans that contain
+nested or complex types such as Lists or Arrays. You can create a JavaBean by creating a
+class that implements Serializable and has getters and setters for all of its fields.
+
+{% highlight java %}
+
+public class Person implements Serializable {
+  private String _name;
+  String getName() {
+    return _name;
+  }
+  void setName(String name) {
+    _name = name;
+  }
+
+  private int _age;
+  int getAge() {
+    return _age;
+  }
+  void setAge(int age) {
+    _age = age;
+  }
+}
+
+{% endhighlight %}
+
+
+A schema can be applied to an existing RDD by calling `applySchema` and providing the Class object
+for the JavaBean.
+
+{% highlight java %}
+JavaSQLContext sqlCtx = new org.apache.spark.sql.api.java.JavaSQLContext(sc);
+
+// Load a text file and convert each line to a JavaBean.
+JavaRDD<Person> people = sc.textFile("examples/src/main/resources/people.txt").map(
+  new Function<String, Person>() {
+    public Person call(String line) throws Exception {
+      String[] parts = line.split(",");
+
+      Person person = new Person();
+      person.setName(parts[0]);
+      person.setAge(Integer.parseInt(parts[1].trim()));
+
+      return person;
+    }
+  });
+
+// Apply a schema to an RDD of JavaBeans and register it as a table.
+JavaSchemaRDD schemaPeople = sqlCtx.applySchema(people, Person.class);
+schemaPeople.registerAsTable("people");
+
+// SQL can be run over RDDs that have been registered as tables.
+JavaSchemaRDD teenagers = sqlCtx.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19");
+
+// The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
+// The columns of a row in the result can be accessed by ordinal.
+List<String> teenagerNames = teenagers.map(new Function<Row, String>() {
+  public String call(Row row) {
+    return "Name: " + row.getString(0);
+  }
+}).collect();
+
+{% endhighlight %}
+
+</div>
+
+</div>
+
+**Note that Spark SQL currently uses a very basic SQL parser.**
Users that want a more complete dialect of SQL should look at the HiveQL support provided by
`HiveContext`.

@@ -70,17 +168,21 @@ Parquet is a columnar format that is supported by many other data processing sys
provides support for both reading and writing parquet files that automatically preserves the schema
of the original data. Using the data from the above example:

+<div class="codetabs">
+
+<div data-lang="scala" markdown="1">
+
{% highlight scala %}
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._

-val people: RDD[Person] // An RDD of case class objects, from the previous example.
+val people: RDD[Person] = ... // An RDD of case class objects, from the previous example.

// The RDD is implicitly converted to a SchemaRDD, allowing it to be stored using parquet.
people.saveAsParquetFile("people.parquet")

// Read in the parquet file created above. Parquet files are self-describing so the schema is preserved.
-// The result of loading a parquet file is also a SchemaRDD.
+// The result of loading a parquet file is also a JavaSchemaRDD.
val parquetFile = sqlContext.parquetFile("people.parquet")

//Parquet files can also be registered as tables and then used in SQL statements.
@@ -89,15 +191,48 @@ val teenagers = sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19"
teenagers.collect().foreach(println)
{% endhighlight %}

+</div>
+
+<div data-lang="java" markdown="1">
+
+The same Parquet support is available from Java, using the JavaSchemaRDD from the previous example:
+
+{% highlight java %}
+
+JavaSchemaRDD schemaPeople = ... // The JavaSchemaRDD from the previous example.
+
+// JavaSchemaRDDs can be saved as parquet files, maintaining the schema information.
+schemaPeople.saveAsParquetFile("people.parquet");
+
+// Read in the parquet file created above. Parquet files are self-describing so the schema is preserved.
+// The result of loading a parquet file is also a JavaSchemaRDD.
+JavaSchemaRDD parquetFile = sqlCtx.parquetFile("people.parquet");
+
+//Parquet files can also be registered as tables and then used in SQL statements.
+parquetFile.registerAsTable("parquetFile");
+JavaSchemaRDD teenagers = sqlCtx.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19");
+
+
+{% endhighlight %}
+
+</div>
+
+</div>
+
## Writing Language-Integrated Relational Queries

+**Language-Integrated queries are currently only supported in Scala.**
+
Spark SQL also supports a domain specific language for writing queries. Once again,
using the data from the above examples:

{% highlight scala %}
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._
-val people: RDD[Person] // An RDD of case class objects, from the first example.
+val people: RDD[Person] = ... // An RDD of case class objects, from the first example.

// The following is the same as 'SELECT name FROM people WHERE age >= 10 AND age <= 19'
val teenagers = people.where('age >= 10).where('age <= 19).select('name)
@@ -114,14 +249,17 @@ evaluated by the SQL execution engine. A full list of the functions supported c

Spark SQL also supports reading and writing data stored in [Apache Hive](http://hive.apache.org/).
However, since Hive has a large number of dependencies, it is not included in the default Spark assembly.
-In order to use Hive you must first run '`sbt/sbt hive/assembly`'. This command builds a new assembly
-jar that includes Hive. When this jar is present, Spark will use the Hive
-assembly instead of the normal Spark assembly. Note that this Hive assembly jar must also be present
+In order to use Hive you must first run '`SPARK_HIVE=true sbt/sbt assembly/assembly`'. This command builds a new assembly
+jar that includes Hive. Note that this Hive assembly jar must also be present
on all of the worker nodes, as they will need access to the Hive serialization and deserialization libraries
(SerDes) in order to access data stored in Hive.

Configuration of Hive is done by placing your `hive-site.xml` file in `conf/`.

+<div class="codetabs">
+
+<div data-lang="scala" markdown="1">
+
When working with Hive one must construct a `HiveContext`, which inherits from `SQLContext`, and
adds support for finding tables in the MetaStore and writing queries using HiveQL. Users who do
not have an existing Hive deployment can also experiment with the `LocalHiveContext`,
@@ -140,4 +278,29 @@ sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src

// Queries are expressed in HiveQL
sql("SELECT key, value FROM src").collect().foreach(println)
-{% endhighlight %}
+{% endhighlight %}
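
The construction of the Hive context is elided from the context above. A minimal sketch of that setup, assuming the `LocalHiveContext` mentioned in the paragraph above (the class path and variable name are assumptions, not taken from the elided lines):

{% highlight scala %}
// LocalHiveContext is suggested above for users without an existing Hive deployment;
// a plain HiveContext is used the same way against a real deployment.
val hiveContext = new org.apache.spark.sql.hive.LocalHiveContext(sc)
import hiveContext._

sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")
{% endhighlight %}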
+
+</div>
+
+<div data-lang="java" markdown="1">
+
+When working with Hive one must construct a `JavaHiveContext`, which inherits from `JavaSQLContext`, and
+adds support for finding tables in the MetaStore and writing queries using HiveQL. In addition to
+the `sql` method a `JavaHiveContext` also provides an `hql` method, which allows queries to be
+expressed in HiveQL.
+
+{% highlight java %}
+JavaSparkContext ctx = ...; // An existing JavaSparkContext.
+JavaHiveContext hiveCtx = new org.apache.spark.sql.hive.api.java.JavaHiveContext(ctx);
+
+hiveCtx.hql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)");
+hiveCtx.hql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src");
+
+// Queries are expressed in HiveQL.
+List<Row> results = hiveCtx.hql("FROM src SELECT key, value").collect();
+
+{% endhighlight %}
+
+</div>
+
+</div>

examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSQL.java

Lines changed: 66 additions & 10 deletions
@@ -17,28 +17,84 @@

package org.apache.spark.examples.sql;

+import java.io.Serializable;
+import java.util.List;
+
+import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
+import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.VoidFunction;

import org.apache.spark.sql.api.java.JavaSQLContext;
import org.apache.spark.sql.api.java.JavaSchemaRDD;
-import org.apache.spark.sql.api.java.JavaRow;
+import org.apache.spark.sql.api.java.Row;
+
+public class JavaSparkSQL {
+  public static class Person implements Serializable {
+    private String _name;
+
+    String getName() {
+      return _name;
+    }
+
+    void setName(String name) {
+      _name = name;
+    }
+
+    private int _age;
+
+    int getAge() {
+      return _age;
+    }
+
+    void setAge(int age) {
+      _age = age;
+    }
+  }

-public final class JavaSparkSQL {
  public static void main(String[] args) throws Exception {
    JavaSparkContext ctx = new JavaSparkContext("local", "JavaSparkSQL",
      System.getenv("SPARK_HOME"), JavaSparkContext.jarOfClass(JavaSparkSQL.class));
    JavaSQLContext sqlCtx = new JavaSQLContext(ctx);

-    JavaSchemaRDD parquetFile = sqlCtx.parquetFile("pair.parquet");
-    parquetFile.registerAsTable("parquet");
+    // Load a text file and convert each line to a Java Bean.
+    JavaRDD<Person> people = ctx.textFile("examples/src/main/resources/people.txt").map(
+      new Function<String, Person>() {
+        public Person call(String line) throws Exception {
+          String[] parts = line.split(",");

-    JavaSchemaRDD queryResult = sqlCtx.sql("SELECT * FROM parquet");
-    queryResult.foreach(new VoidFunction<JavaRow>() {
-      @Override
-      public void call(JavaRow row) throws Exception {
-        System.out.println(row.get(0) + " " + row.get(1));
+          Person person = new Person();
+          person.setName(parts[0]);
+          person.setAge(Integer.parseInt(parts[1].trim()));
+
+          return person;
        }
-    });
+      });
+
+    // Apply a schema to an RDD of Java Beans and register it as a table.
+    JavaSchemaRDD schemaPeople = sqlCtx.applySchema(people, Person.class);
+    schemaPeople.registerAsTable("people");
+
+    // SQL can be run over RDDs that have been registered as tables.
+    JavaSchemaRDD teenagers = sqlCtx.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19");
+
+    // The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
+    // The columns of a row in the result can be accessed by ordinal.
+    List<String> teenagerNames = teenagers.map(new Function<Row, String>() {
+      public String call(Row row) {
+        return "Name: " + row.getString(0);
+      }
+    }).collect();
+
+    // JavaSchemaRDDs can be saved as parquet files, maintaining the schema information.
+    schemaPeople.saveAsParquetFile("people.parquet");
+
+    // Read in the parquet file created above. Parquet files are self-describing so the schema is preserved.
+    // The result of loading a parquet file is also a JavaSchemaRDD.
+    JavaSchemaRDD parquetFile = sqlCtx.parquetFile("people.parquet");
+
+    //Parquet files can also be registered as tables and then used in SQL statements.
+    parquetFile.registerAsTable("parquetFile");
+    JavaSchemaRDD teenagers2 = sqlCtx.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19");
  }
}
