
Commit b8f5341

marmbrus authored and mateiz committed
[SQL] SPARK-1333 First draft of java API
WIP: Some work remains...
 * [x] Hive support
 * [x] Tests
 * [x] Update docs

Feedback welcome!

Author: Michael Armbrust <[email protected]>

Closes #248 from marmbrus/javaSchemaRDD and squashes the following commits:

b393913 [Michael Armbrust] @srowen 's java style suggestions.
f531eb1 [Michael Armbrust] Address matei's comments.
33a1b1a [Michael Armbrust] Ignore JavaHiveSuite.
822f626 [Michael Armbrust] improve docs.
ab91750 [Michael Armbrust] Improve Java SQL API:
 * Change JavaRow => Row
 * Add support for querying RDDs of JavaBeans
 * Docs
 * Tests
 * Hive support
0b859c8 [Michael Armbrust] First draft of java API.
1 parent c1ea3af commit b8f5341

File tree

11 files changed: +750 −50 lines


docs/sql-programming-guide.md

Lines changed: 191 additions & 13 deletions
@@ -8,6 +8,10 @@ title: Spark SQL Programming Guide
 {:toc}

 # Overview
+
+<div class="codetabs">
+<div data-lang="scala" markdown="1">
+
 Spark SQL allows relational queries expressed in SQL, HiveQL, or Scala to be executed using
 Spark. At the core of this component is a new type of RDD,
 [SchemaRDD](api/sql/core/index.html#org.apache.spark.sql.SchemaRDD). SchemaRDDs are composed
@@ -18,11 +22,27 @@ file, or by running HiveQL against data stored in [Apache Hive](http://hive.apac

 **All of the examples on this page use sample data included in the Spark distribution and can be run in the spark-shell.**

+</div>
+
+<div data-lang="java" markdown="1">
+Spark SQL allows relational queries expressed in SQL, HiveQL, or Scala to be executed using
+Spark. At the core of this component is a new type of RDD,
+[JavaSchemaRDD](api/sql/core/index.html#org.apache.spark.sql.api.java.JavaSchemaRDD). JavaSchemaRDDs are composed
+[Row](api/sql/catalyst/index.html#org.apache.spark.sql.api.java.Row) objects along with
+a schema that describes the data types of each column in the row. A JavaSchemaRDD is similar to a table
+in a traditional relational database. A JavaSchemaRDD can be created from an existing RDD, parquet
+file, or by running HiveQL against data stored in [Apache Hive](http://hive.apache.org/).
+</div>
+</div>
+
 ***************************************************************************************************

 # Getting Started

-The entry point into all relational functionallity in Spark is the
+<div class="codetabs">
+<div data-lang="scala" markdown="1">
+
+The entry point into all relational functionality in Spark is the
 [SQLContext](api/sql/core/index.html#org.apache.spark.sql.SQLContext) class, or one of its
 decendents. To create a basic SQLContext, all you need is a SparkContext.

@@ -34,8 +54,30 @@ val sqlContext = new org.apache.spark.sql.SQLContext(sc)
 import sqlContext._
 {% endhighlight %}

+</div>
+
+<div data-lang="java" markdown="1">
+
+The entry point into all relational functionality in Spark is the
+[JavaSQLContext](api/sql/core/index.html#org.apache.spark.sql.api.java.JavaSQLContext) class, or one
+of its decendents. To create a basic JavaSQLContext, all you need is a JavaSparkContext.
+
+{% highlight java %}
+JavaSparkContext ctx = ...; // An existing JavaSparkContext.
+JavaSQLContext sqlCtx = new org.apache.spark.sql.api.java.JavaSQLContext(ctx);
+{% endhighlight %}
+
+</div>
+
+</div>
+
 ## Running SQL on RDDs
-One type of table that is supported by Spark SQL is an RDD of Scala case classetees. The case class
+
+<div class="codetabs">
+
+<div data-lang="scala" markdown="1">
+
+One type of table that is supported by Spark SQL is an RDD of Scala case classes. The case class
 defines the schema of the table. The names of the arguments to the case class are read using
 reflection and become the names of the columns. Case classes can also be nested or contain complex
 types such as Sequences or Arrays. This RDD can be implicitly converted to a SchemaRDD and then be
@@ -60,7 +102,83 @@ val teenagers = sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
 teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
 {% endhighlight %}

-**Note that Spark SQL currently uses a very basic SQL parser, and the keywords are case sensitive.**
+</div>
+
+<div data-lang="java" markdown="1">
+
+One type of table that is supported by Spark SQL is an RDD of [JavaBeans](http://stackoverflow.com/questions/3295496/what-is-a-javabean-exactly). The BeanInfo
+defines the schema of the table. Currently, Spark SQL does not support JavaBeans that contain
+nested or contain complex types such as Lists or Arrays. You can create a JavaBean by creating a
+class that implements Serializable and has getters and setters for all of its fields.
+
+{% highlight java %}
+
+public static class Person implements Serializable {
+  private String name;
+  private int age;
+
+  String getName() {
+    return name;
+  }
+
+  void setName(String name) {
+    this.name = name;
+  }
+
+  int getAge() {
+    return age;
+  }
+
+  void setAge(int age) {
+    this.age = age;
+  }
+}
+
+{% endhighlight %}
+
+
+A schema can be applied to an existing RDD by calling `applySchema` and providing the Class object
+for the JavaBean.
+
+{% highlight java %}
+JavaSQLContext ctx = new org.apache.spark.sql.api.java.JavaSQLContext(sc)
+
+// Load a text file and convert each line to a JavaBean.
+JavaRDD<Person> people = ctx.textFile("examples/src/main/resources/people.txt").map(
+  new Function<String, Person>() {
+    public Person call(String line) throws Exception {
+      String[] parts = line.split(",");
+
+      Person person = new Person();
+      person.setName(parts[0]);
+      person.setAge(Integer.parseInt(parts[1].trim()));
+
+      return person;
+    }
+  });
+
+// Apply a schema to an RDD of JavaBeans and register it as a table.
+JavaSchemaRDD schemaPeople = sqlCtx.applySchema(people, Person.class);
+schemaPeople.registerAsTable("people");
+
+// SQL can be run over RDDs that have been registered as tables.
+JavaSchemaRDD teenagers = sqlCtx.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
+
+// The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
+// The columns of a row in the result can be accessed by ordinal.
+List<String> teenagerNames = teenagers.map(new Function<Row, String>() {
+  public String call(Row row) {
+    return "Name: " + row.getString(0);
+  }
+}).collect();
+
+{% endhighlight %}
+
+</div>
+
+</div>
+
+**Note that Spark SQL currently uses a very basic SQL parser.**
 Users that want a more complete dialect of SQL should look at the HiveQL support provided by
 `HiveContext`.

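The Java snippet above collects the query results into `teenagerNames` but stops there. A minimal follow-on sketch (not part of this patch) showing one way to print the collected names, assuming the `teenagerNames` list from the snippet above:

    // Hypothetical continuation: print each name collected by the query above.
    for (String name : teenagerNames) {
      System.out.println(name);
    }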
@@ -70,17 +188,21 @@ Parquet is a columnar format that is supported by many other data processing sys
 provides support for both reading and writing parquet files that automatically preserves the schema
 of the original data. Using the data from the above example:

+<div class="codetabs">
+
+<div data-lang="scala" markdown="1">
+
 {% highlight scala %}
 val sqlContext = new org.apache.spark.sql.SQLContext(sc)
 import sqlContext._

-val people: RDD[Person] // An RDD of case class objects, from the previous example.
+val people: RDD[Person] = ... // An RDD of case class objects, from the previous example.

 // The RDD is implicitly converted to a SchemaRDD, allowing it to be stored using parquet.
 people.saveAsParquetFile("people.parquet")

 // Read in the parquet file created above. Parquet files are self-describing so the schema is preserved.
-// The result of loading a parquet file is also a SchemaRDD.
+// The result of loading a parquet file is also a JavaSchemaRDD.
 val parquetFile = sqlContext.parquetFile("people.parquet")

 //Parquet files can also be registered as tables and then used in SQL statements.
@@ -89,15 +211,43 @@ val teenagers = sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19"
 teenagers.collect().foreach(println)
 {% endhighlight %}

+</div>
+
+<div data-lang="java" markdown="1">
+
+{% highlight java %}
+
+JavaSchemaRDD schemaPeople = ... // The JavaSchemaRDD from the previous example.
+
+// JavaSchemaRDDs can be saved as parquet files, maintaining the schema information.
+schemaPeople.saveAsParquetFile("people.parquet");
+
+// Read in the parquet file created above. Parquet files are self-describing so the schema is preserved.
+// The result of loading a parquet file is also a JavaSchemaRDD.
+JavaSchemaRDD parquetFile = sqlCtx.parquetFile("people.parquet");
+
+//Parquet files can also be registered as tables and then used in SQL statements.
+parquetFile.registerAsTable("parquetFile");
+JavaSchemaRDD teenagers = sqlCtx.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19");
+
+
+{% endhighlight %}
+
+</div>
+
+</div>
+
 ## Writing Language-Integrated Relational Queries

+**Language-Integrated queries are currently only supported in Scala.**
+
 Spark SQL also supports a domain specific language for writing queries. Once again,
 using the data from the above examples:

 {% highlight scala %}
 val sqlContext = new org.apache.spark.sql.SQLContext(sc)
 import sqlContext._
-val people: RDD[Person] // An RDD of case class objects, from the first example.
+val people: RDD[Person] = ... // An RDD of case class objects, from the first example.

 // The following is the same as 'SELECT name FROM people WHERE age >= 10 AND age <= 19'
 val teenagers = people.where('age >= 10).where('age <= 19).select('name)
@@ -114,14 +264,17 @@ evaluated by the SQL execution engine. A full list of the functions supported c

 Spark SQL also supports reading and writing data stored in [Apache Hive](http://hive.apache.org/).
 However, since Hive has a large number of dependencies, it is not included in the default Spark assembly.
-In order to use Hive you must first run '`sbt/sbt hive/assembly`'. This command builds a new assembly
-jar that includes Hive. When this jar is present, Spark will use the Hive
-assembly instead of the normal Spark assembly. Note that this Hive assembly jar must also be present
+In order to use Hive you must first run '`SPARK_HIVE=true sbt/sbt assembly/assembly`'. This command builds a new assembly
+jar that includes Hive. Note that this Hive assembly jar must also be present
 on all of the worker nodes, as they will need access to the Hive serialization and deserialization libraries
 (SerDes) in order to acccess data stored in Hive.

 Configuration of Hive is done by placing your `hive-site.xml` file in `conf/`.

+<div class="codetabs">
+
+<div data-lang="scala" markdown="1">
+
 When working with Hive one must construct a `HiveContext`, which inherits from `SQLContext`, and
 adds support for finding tables in in the MetaStore and writing queries using HiveQL. Users who do
 not have an existing Hive deployment can also experiment with the `LocalHiveContext`,
@@ -135,9 +288,34 @@ val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
 // Importing the SQL context gives access to all the public SQL functions and implicit conversions.
 import hiveContext._

-sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
-sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")
+hql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
+hql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

 // Queries are expressed in HiveQL
-sql("SELECT key, value FROM src").collect().foreach(println)
-{% endhighlight %}
+hql("FROM src SELECT key, value").collect().foreach(println)
+{% endhighlight %}
+
+</div>
+
+<div data-lang="java" markdown="1">
+
+When working with Hive one must construct a `JavaHiveContext`, which inherits from `JavaSQLContext`, and
+adds support for finding tables in in the MetaStore and writing queries using HiveQL. In addition to
+the `sql` method a `JavaHiveContext` also provides an `hql` methods, which allows queries to be
+expressed in HiveQL.
+
+{% highlight java %}
+JavaSparkContext ctx = ...; // An existing JavaSparkContext.
+JavaHiveContext hiveCtx = new org.apache.spark.sql.hive.api.java.HiveContext(ctx);
+
+hiveCtx.hql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)");
+hiveCtx.hql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src");
+
+// Queries are expressed in HiveQL.
+Row[] results = hiveCtx.hql("FROM src SELECT key, value").collect();
+
+{% endhighlight %}
+
+</div>
+
+</div>
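A short sketch (not part of the patch) of one way to consume the `results` array from the Java Hive snippet above; it assumes the `Row.getString` accessor already shown in this guide, with the STRING `value` column at ordinal 1:

    // Hypothetical continuation: results is the Row[] returned by hiveCtx.hql(...).collect().
    for (Row row : results) {
      // Column 0 is the INT key; column 1 is the STRING value.
      System.out.println("Value: " + row.getString(1));
    }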
examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSQL.java

Lines changed: 99 additions & 0 deletions
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.examples.sql;
+
+import java.io.Serializable;
+import java.util.List;
+
+import org.apache.spark.api.java.JavaRDD;
+import org.apache.spark.api.java.JavaSparkContext;
+import org.apache.spark.api.java.function.Function;
+import org.apache.spark.api.java.function.VoidFunction;
+
+import org.apache.spark.sql.api.java.JavaSQLContext;
+import org.apache.spark.sql.api.java.JavaSchemaRDD;
+import org.apache.spark.sql.api.java.Row;
+
+public class JavaSparkSQL {
+  public static class Person implements Serializable {
+    private String name;
+    private int age;
+
+    String getName() {
+      return name;
+    }
+
+    void setName(String name) {
+      this.name = name;
+    }
+
+    int getAge() {
+      return age;
+    }
+
+    void setAge(int age) {
+      this.age = age;
+    }
+  }
+
+  public static void main(String[] args) throws Exception {
+    JavaSparkContext ctx = new JavaSparkContext("local", "JavaSparkSQL",
+      System.getenv("SPARK_HOME"), JavaSparkContext.jarOfClass(JavaSparkSQL.class));
+    JavaSQLContext sqlCtx = new JavaSQLContext(ctx);
+
+    // Load a text file and convert each line to a Java Bean.
+    JavaRDD<Person> people = ctx.textFile("examples/src/main/resources/people.txt").map(
+      new Function<String, Person>() {
+        public Person call(String line) throws Exception {
+          String[] parts = line.split(",");
+
+          Person person = new Person();
+          person.setName(parts[0]);
+          person.setAge(Integer.parseInt(parts[1].trim()));
+
+          return person;
+        }
+      });
+
+    // Apply a schema to an RDD of Java Beans and register it as a table.
+    JavaSchemaRDD schemaPeople = sqlCtx.applySchema(people, Person.class);
+    schemaPeople.registerAsTable("people");
+
+    // SQL can be run over RDDs that have been registered as tables.
+    JavaSchemaRDD teenagers = sqlCtx.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19");
+
+    // The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
+    // The columns of a row in the result can be accessed by ordinal.
+    List<String> teenagerNames = teenagers.map(new Function<Row, String>() {
+      public String call(Row row) {
+        return "Name: " + row.getString(0);
+      }
+    }).collect();
+
+    // JavaSchemaRDDs can be saved as parquet files, maintaining the schema information.
+    schemaPeople.saveAsParquetFile("people.parquet");
+
+    // Read in the parquet file created above. Parquet files are self-describing so the schema is preserved.
+    // The result of loading a parquet file is also a JavaSchemaRDD.
+    JavaSchemaRDD parquetFile = sqlCtx.parquetFile("people.parquet");
+
+    //Parquet files can also be registered as tables and then used in SQL statements.
+    parquetFile.registerAsTable("parquetFile");
+    JavaSchemaRDD teenagers2 = sqlCtx.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19");
+  }
+}
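The example ends without using `teenagers2`. A possible extension (not in the commit) that exercises the parquet-backed table with the same map/collect pattern used for `teenagerNames` above:

    // Hypothetical addition to main(): collect names from the parquet-backed query.
    List<String> teenagerNames2 = teenagers2.map(new Function<Row, String>() {
      public String call(Row row) {
        return "Name: " + row.getString(0);
      }
    }).collect();
    for (String name : teenagerNames2) {
      System.out.println(name);
    }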
