Merged

111 commits
6f867ce
[ZEPPELIN-1255] Add cast to string in z.show() for Pandas DataFrame
bustios Aug 1, 2016
c1935e1
[ZEPPELIN-1264] [HOTFIX] Fix CI test failure with Failed to create in…
Leemoonsoo Aug 2, 2016
52b3cbf
[ZEPPELIN-1260] R interpreter doesn't work with Spark 2.0
Leemoonsoo Aug 2, 2016
6b6f1cb
minor doc fix for r.md
zjffdu Aug 2, 2016
9eac20d
[ZEPPELIN-1261] Bug fix in z.show() for matplotlib graphs
bustios Aug 2, 2016
16b320f
[DOC][ZEPPELIN-1209] Remove a useless sentence about default interpre…
AhyoungRyu Jul 20, 2016
b965503
[ZEPPELIN-1198][Spark Standalone] Documents for running zeppelin on p…
Jul 28, 2016
b885f43
ZEPPELIN-1197. Should print output directly without invoking function…
zjffdu Jul 28, 2016
b254564
Fix logger class name to correct one
khalidhuseynov Jul 25, 2016
e6f51e7
[ZEPPELIN-1164] ZeppelinHub Realm
anthonycorbacho Jul 22, 2016
9efbcd1
[HOTFIX][ZEPPELIN-1240] Removed interpreter properties are restored
jongyoul Aug 2, 2016
1d0028b
[ZEPPELIN-1237] Auto-suggestion of notebook permissions should list r…
prabhjyotsingh Jul 30, 2016
161dd0e
ZEPPELIN-1267. PySparkInterpreter doesn't work in spark 2.0
zjffdu Aug 2, 2016
cf327f8
Small cleanup of zeppelin-server tests
bzz Aug 2, 2016
a922fd2
Small refactoring of Python interpreter
bzz Aug 3, 2016
adf3355
[ ZEPPELIN-1266 ] Code editor Optimization
cloverhearts Aug 3, 2016
6773d04
[Zeppelin-1276] Fix Notebook Title Input
corneadoug Aug 4, 2016
e8860cf
[ZEPPELIN-1256][BUILD] Build distribution package with Spark 2.0 and …
minahlee Aug 5, 2016
8fe914b
[HOTFIX] Copy spark profiles from spark-dependencies to spark module …
Leemoonsoo Aug 6, 2016
ea76ca9
[Optimization] Code editor key binding event optimization.
cloverhearts Aug 5, 2016
e2dcbfa
[ ZEPPELIN-1296 Optimization ] optimize to Paragraph move logic
cloverhearts Aug 5, 2016
b007924
[HOTFIX] After hotfix #1292, build fails with -Pyarn
Leemoonsoo Aug 6, 2016
55695c9
[BugFix] Show checkbox for "Connect to existing process" on interpret…
Aug 6, 2016
8a179f4
[ZEPPELIN-1289] Update the default value of 'spark.executor.memory' p…
sarutak Aug 4, 2016
471025d
[ZEPPELIN-1288] Use new Spark logo in the document.
sarutak Aug 4, 2016
b6310ad
ZEPPELIN-1270. Remove getting SQLContext from SparkSession.wrapped()
zjffdu Aug 3, 2016
01beb54
[ZEPPELIN-1300] Implement SparkInterpreter.completion for scala 2.11
Leemoonsoo Aug 6, 2016
045b2d2
[ZEPPELIN-1274]Write "Spark SQL" in docs rather than "SparkSQL"
sarutak Aug 3, 2016
61f4ce8
[ZEPPELIN-1273] Use Math.abs to determine if custom formatter should …
Aug 3, 2016
f71d09c
ZEPPELIN-1254 Make get and save Interpreter bindings calls via websocket
r-kamath Aug 3, 2016
80735bc
ZEPPELIN-1305. Fix bug of ZEPPELIN-1215
zjffdu Aug 7, 2016
36a7e38
[ZEPPELIN- 1298] Log instead of throwing trace for ping messages
khalidhuseynov Aug 8, 2016
68c43e2
[BUILD] Remove bigquery interpreter from netinst package
minahlee Aug 9, 2016
3bd94dd
[ZEPPELIN-1312] Hotfix - consistent getNoteRevision in websocket call
khalidhuseynov Aug 9, 2016
293993c
[ZEPPELIN-1246] In JDBCInterpreter.getScheduler, use getMaxConcurrent…
Aug 8, 2016
4178089
ZEPPELIN-1311. Typo in ZEPPELIN-1197
zjffdu Aug 9, 2016
7e491f8
[ZEPPELIN-1304] Show popup when interpreter name is empty
minahlee Aug 7, 2016
3a1ab28
[ZEPPELIN-1290] Refactor Navbar Controller
corneadoug Aug 8, 2016
85d4df4
[ZEPPELIN-1219] Add searching feature to Zeppelin docs site
AhyoungRyu Aug 6, 2016
0c5a03e
[HOTFIX] Bring zeppelin-display back to dependency of spark interpret…
Leemoonsoo Aug 11, 2016
8734890
ZEPPELIN-1308 Apache Ignite version upgraded up to 1.7
Aug 8, 2016
3dec4d7
ZEPPELIN-1287. No need to call print to display output in PythonInter…
zjffdu Aug 8, 2016
46ae6fe
Updated path to point to latest Maven binaries
jojurajan Aug 2, 2016
bccd5f9
Change maven version from 3.3.3 to 3.3.9 at vagrant script and its do…
yoonjs2 Aug 7, 2016
74c9756
[BUILD][HOTFIX] Add -DskipTests property to reduce build time
minahlee Aug 11, 2016
100b978
[MINOR] Update outdated contents in zeppelin-distribution/*.md files
AhyoungRyu Aug 10, 2016
8b40268
ZEPPELIN-1318 - Add support for matplotlib displaying png images in p…
agoodm Aug 13, 2016
b619699
[ZEPPELIN-1258] Add Spark packages support to Livy interpreter
mfelgamal Aug 8, 2016
6deb792
[ZEPPELIN-1257] storage - fix get note revision api
khalidhuseynov Aug 16, 2016
43dc32b
[Zeppelin-945] Interpreter authorization
Aug 16, 2016
d794f6f
[ZEPPELIN-1316] Zeppelin can not start due to an incorrect Interprete…
cloverhearts Aug 16, 2016
37696ea
[ZEPPELIN-1294] Implement one-way sync for notebook repos
Aug 5, 2016
c9d2a2c
[ZEPPELIN-1302] fix rinterpreter default prop values init error
WeichenXu123 Aug 6, 2016
47b1931
[ZEPPELIN-1220] Add geographical map as visualization option
Aug 11, 2016
051929d
[ZEPPELIN-1192] Block pyspark paragraph hang.
Aug 12, 2016
bba6ddd
[ZEPPELIN-1268] As an enduser, I would like to embed paragraph and re…
kavinkumarks Aug 4, 2016
49b3df6
[ZEPPELIN-913] Apply new mechanism to HbaseInterpreter
ggdupont Aug 9, 2016
ff2465b
[ZEPPELIN-1323] Add contribution guide for Zeppelin documentation
AhyoungRyu Aug 13, 2016
371fa76
[ZEPPELIN-1333] prevent calling runParagraph() on shift-enter event
nazgul33 Aug 17, 2016
8a29eb2
[ZEPPELIN-1335] bug fixed y axis label for scatterChart and stackedAr…
cloverhearts Aug 18, 2016
54d4353
[ZEPPELIN-1245] Focus first paragraph after notebook creation
raja-imaginea Aug 17, 2016
d5528f0
[ZEPPELIN-1162] Fix rawType in NotebookRestApi
raja-imaginea Aug 18, 2016
5829816
ZEPPELIN-1328 - z.show in python interpreter does not display PNG ima…
agoodm Aug 19, 2016
377dc4e
[ZEPPELIN-1191] Supported legacy way to run paragraph with group name…
jongyoul Jul 18, 2016
d028b38
ZEPPELIN-1324: Make paragraph code selectable when running
karuppayya Aug 20, 2016
7f733ff
Zeppelin 1307 - Implement notebook revision in Zeppelinhub repo
anthonycorbacho Aug 23, 2016
88c257a
[ZEPPELIN-1359] Commit correctly ordered karma.conf file
corneadoug Aug 23, 2016
47ac1d4
[ZEPPELIN-728] Can't POST interpreter setting (CorsFilter?)
Aug 23, 2016
a9b4835
Update Utils.java
oeegee Aug 19, 2016
ce56188
[MINOR][DOC] Update available interpreters' image in index.html
AhyoungRyu Aug 22, 2016
5ac3fae
[ZEPPELIN-530] Added changes for Credential Provider, using hadoop co…
rconline Aug 23, 2016
e2d0ca3
[ZEPPELIN-1301] fix potential encoding problem in RInterpreter proces…
WeichenXu123 Aug 6, 2016
42e3a14
[ZEPPELIN-960] When there is no interpreter, paragraph runJobapi mod…
cloverhearts Aug 23, 2016
8064c54
[ZEPPELIN-1327] Fix bug in z.show for Python interpreter
bustios Aug 23, 2016
32f35e2
[MINOR] Remove unnecessary question mark
zjffdu Aug 26, 2016
b7f918a
[ZEPPELIN-1178] Tooltip: Show chart type when hovering over chart icon
vensant Aug 26, 2016
c1999ea
ZEPPELIN-1342. Adding dependencies via SPARK_SUBMIT_OPTIONS doesn't w…
zjffdu Aug 27, 2016
5f1208b
ZEPPELIN-1284. Unable to run paragraph with default interpreter
zjffdu Aug 19, 2016
11bdd71
[ZEPPELIN/1356] The graph legend truncates at the nearest period (.) …
Peilin-Yang Aug 26, 2016
c4319b7
ZEPPELIN-1326: make profile to select dependency of hadoop-common for…
prabhjyotsingh Aug 24, 2016
eccfe00
[ZEPPELIN-1280][Spark on Yarn] Documents for running zeppelin on prod…
Aug 18, 2016
d11221f
Revert "ZEPPELIN-1326: make profile to select dependency of hadoop-co…
prabhjyotsingh Aug 29, 2016
223d225
[ZEPPELIN-1383][ Interpreters][r-interpreter] remove SparkInterpreter…
WeichenXu123 Aug 28, 2016
d371d96
[MINOR] Removed unused profiles from spark/pom.xml
jongyoul Aug 8, 2016
e3e19ec
[zeppelin] add temp directories generated by zeppelin-Rinterpreter to…
WeichenXu123 Aug 6, 2016
f1a2471
[ZEPPELIN-1040] Show the time when the result is updated
raja-imaginea Aug 26, 2016
83469be
[ZEPPELIN-1313] NullPointerException when using Clone notebook REST API
Aug 23, 2016
93e3762
ZEPPELIN-1185. ZEPPELIN_INTP_JAVA_OPTS should not use ZEPPELIN_JAVA_OPTS
zjffdu Aug 4, 2016
dad72ce
[ZEPPELIN-1217] Remove horizontal scrollbar in Zeppelin conf table
AhyoungRyu Jul 29, 2016
9dc9c75
[DOC]fix some spelling mistakes
sloth2012 Aug 30, 2016
7e2a1b5
[ZEPPELIN-699] Add new synchronous paragraph run REST API
doanduyhai Jun 2, 2016
df2e77d
[ZEPPELIN-1379] Flink interpreter is missing scala libraries
lresende Aug 26, 2016
d93fb73
ZEPPELIN-1384. Spark interpreter binary compatibility to scala 2.10 /…
zjffdu Aug 29, 2016
c580a82
[ZEPPELIN-1365] Error of Zeppelin Application in development mode.
Aug 24, 2016
323aa18
[ZEPPELIN-1391][Interpreters] print error while existing registedInte…
WeichenXu123 Aug 28, 2016
da8857f
[MINOR] Add new line before logging paragraph content
lresende Aug 28, 2016
51b8792
Cache zeppelin-web/node_modules on travis ci and see if it reduces CI…
Leemoonsoo Aug 5, 2016
fe3dbdb
[ZEPPELIN-1366] Removed legacy JDBC alias
jongyoul Aug 31, 2016
922364f
ZEPPELIN-1319 Use absolute path for ssl truststore and keystore when …
r-kamath Aug 25, 2016
b89e35e
[ZEPPELIN-1069]Ignore implicit interpreter when user enter wrong inte…
mwkang Aug 17, 2016
33ddc00
ZEPPELIN-1374. Should prevent use dot in interpreter name
zjffdu Aug 29, 2016
cee58aa
[ZEPPELIN-1279] Spark on Mesos Docker.
Sep 2, 2016
11becde
Buffer append output results + fix extra incorrect results
Aug 23, 2016
d497348
rename r directory to 2BWJFTXKJ
prabhjyotsingh Sep 2, 2016
6c6097a
[ZEPPELIN-1372]Automatically Detect the data type in table and sort t…
Peilin-Yang Sep 1, 2016
9eeaf49
HotFix for ZEPPELIN-1374
zjffdu Sep 5, 2016
f9bc7a9
[ZEPPELIN-1409] Refactor RAT build on Travis.CI configuration
lresende Sep 5, 2016
20f0e5b
[ZEPPELIN-1398] Use relative path for search_data.json
AhyoungRyu Sep 1, 2016
09870cc
[ZEPPELIN-1116]send out more exception msg
passionke Aug 29, 2016
b805197
[MINOR] Remove duplicated dependency declaration
lresende Sep 5, 2016
66d5811
[ZEPPELIN-1412] add support multiline for pythonErrorIn method on pyt…
cloverhearts Sep 6, 2016
4 changes: 4 additions & 0 deletions .gitignore
@@ -47,6 +47,7 @@ zeppelin-web/bower_components
 # R
 /r/lib/
 .Rhistory
+/R/

 # project level
 /logs/
@@ -108,3 +109,6 @@ tramp

 # Generated by zeppelin-examples
 /helium
+
+# tmp files
+/tmp/
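A quick way to confirm the two new root-anchored rules match what they should — a sketch using git's own matcher; the file names here are arbitrary:

```sh
# From the repository root: both paths should report the new patterns as their match
mkdir -p tmp R && touch tmp/scratch R/placeholder
git check-ignore -v tmp/scratch R/placeholder   # expects /tmp/ and /R/ as the matching rules
```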
28 changes: 18 additions & 10 deletions .travis.yml
@@ -21,6 +21,7 @@ cache:
   directories:
     - .spark-dist
     - ${HOME}/.m2/repository/.cache/maven-download-plugin
+    - .node_modules

 addons:
   apt:
@@ -33,44 +34,49 @@ addons:

 matrix:
   include:
-    # Test all modules with spark-2.0.0-preview and scala 2.11
+    # Test License compliance using RAT tool
     - jdk: "oraclejdk7"
-      env: SCALA_VER="2.11" SPARK_VER="2.0.0" HADOOP_VER="2.3" PROFILE="-Pspark-2.0 -Dspark.version=2.0.0-preview -Phadoop-2.3 -Ppyspark -Psparkr -Pscalding -Pexamples -Pscala-2.11" BUILD_FLAG="package -Pbuild-distr" TEST_FLAG="verify -Pusing-packaged-distr" TEST_PROJECTS=""
+      env: SCALA_VER="2.11" SPARK_VER="2.0.0" HADOOP_VER="2.3" PROFILE="-Prat" BUILD_FLAG="clean" TEST_FLAG="org.apache.rat:apache-rat-plugin:check" TEST_PROJECTS=""
+
+    # Test all modules with spark 2.0.0 and scala 2.11
+    - jdk: "oraclejdk7"
+      env: SCALA_VER="2.11" SPARK_VER="2.0.0" HADOOP_VER="2.3" PROFILE="-Pspark-2.0 -Phadoop-2.3 -Ppyspark -Psparkr -Pscalding -Pexamples -Pscala-2.11" BUILD_FLAG="package -Pbuild-distr -DskipRat" TEST_FLAG="verify -Pusing-packaged-distr -DskipRat" TEST_PROJECTS=""

     # Test all modules with scala 2.10
     - jdk: "oraclejdk7"
-      env: SCALA_VER="2.10" SPARK_VER="1.6.1" HADOOP_VER="2.3" PROFILE="-Pspark-1.6 -Pr -Phadoop-2.3 -Ppyspark -Psparkr -Pscalding -Pexamples -Pscala-2.10" BUILD_FLAG="package -Pbuild-distr" TEST_FLAG="verify -Pusing-packaged-distr" TEST_PROJECTS=""
+      env: SCALA_VER="2.10" SPARK_VER="1.6.1" HADOOP_VER="2.3" PROFILE="-Pspark-1.6 -Pr -Phadoop-2.3 -Ppyspark -Psparkr -Pscalding -Pexamples -Pscala-2.10" BUILD_FLAG="package -Pbuild-distr -DskipRat" TEST_FLAG="verify -Pusing-packaged-distr -DskipRat" TEST_PROJECTS=""

     # Test all modules with scala 2.11
     - jdk: "oraclejdk7"
-      env: SCALA_VER="2.11" SPARK_VER="1.6.1" HADOOP_VER="2.3" PROFILE="-Pspark-1.6 -Pr -Phadoop-2.3 -Ppyspark -Psparkr -Pscalding -Pexamples -Pscala-2.11" BUILD_FLAG="package -Pbuild-distr" TEST_FLAG="verify -Pusing-packaged-distr" TEST_PROJECTS=""
+      env: SCALA_VER="2.11" SPARK_VER="1.6.1" HADOOP_VER="2.3" PROFILE="-Pspark-1.6 -Pr -Phadoop-2.3 -Ppyspark -Psparkr -Pscalding -Pexamples -Pscala-2.11" BUILD_FLAG="package -Pbuild-distr -DskipRat" TEST_FLAG="verify -Pusing-packaged-distr -DskipRat" TEST_PROJECTS=""

     # Test spark module for 1.5.2
     - jdk: "oraclejdk7"
-      env: SCALA_VER="2.10" SPARK_VER="1.5.2" HADOOP_VER="2.3" PROFILE="-Pspark-1.5 -Pr -Phadoop-2.3 -Ppyspark -Psparkr" BUILD_FLAG="package -DskipTests" TEST_FLAG="verify" TEST_PROJECTS="-pl zeppelin-interpreter,zeppelin-zengine,zeppelin-server,zeppelin-display,spark-dependencies,spark,r -Dtest=org.apache.zeppelin.rest.*Test,org.apache.zeppelin.spark* -DfailIfNoTests=false"
+      env: SCALA_VER="2.10" SPARK_VER="1.5.2" HADOOP_VER="2.3" PROFILE="-Pspark-1.5 -Pr -Phadoop-2.3 -Ppyspark -Psparkr" BUILD_FLAG="package -DskipTests -DskipRat" TEST_FLAG="verify -DskipRat" TEST_PROJECTS="-pl zeppelin-interpreter,zeppelin-zengine,zeppelin-server,zeppelin-display,spark-dependencies,spark,r -Dtest=org.apache.zeppelin.rest.*Test,org.apache.zeppelin.spark* -DfailIfNoTests=false"

     # Test spark module for 1.4.1
     - jdk: "oraclejdk7"
-      env: SCALA_VER="2.10" SPARK_VER="1.4.1" HADOOP_VER="2.3" PROFILE="-Pspark-1.4 -Pr -Phadoop-2.3 -Ppyspark -Psparkr" BUILD_FLAG="package -DskipTests" TEST_FLAG="verify" TEST_PROJECTS="-pl zeppelin-interpreter,zeppelin-zengine,zeppelin-server,zeppelin-display,spark-dependencies,spark,r -Dtest=org.apache.zeppelin.rest.*Test,org.apache.zeppelin.spark* -DfailIfNoTests=false"
+      env: SCALA_VER="2.10" SPARK_VER="1.4.1" HADOOP_VER="2.3" PROFILE="-Pspark-1.4 -Pr -Phadoop-2.3 -Ppyspark -Psparkr" BUILD_FLAG="package -DskipTests -DskipRat" TEST_FLAG="verify -DskipRat" TEST_PROJECTS="-pl zeppelin-interpreter,zeppelin-zengine,zeppelin-server,zeppelin-display,spark-dependencies,spark,r -Dtest=org.apache.zeppelin.rest.*Test,org.apache.zeppelin.spark* -DfailIfNoTests=false"

     # Test spark module for 1.3.1
     - jdk: "oraclejdk7"
-      env: SCALA_VER="2.10" SPARK_VER="1.3.1" HADOOP_VER="2.3" PROFILE="-Pspark-1.3 -Phadoop-2.3 -Ppyspark" BUILD_FLAG="package -DskipTests" TEST_FLAG="verify" TEST_PROJECTS="-pl zeppelin-interpreter,zeppelin-zengine,zeppelin-server,zeppelin-display,spark-dependencies,spark -Dtest=org.apache.zeppelin.rest.*Test,org.apache.zeppelin.spark* -DfailIfNoTests=false"
+      env: SCALA_VER="2.10" SPARK_VER="1.3.1" HADOOP_VER="2.3" PROFILE="-Pspark-1.3 -Phadoop-2.3 -Ppyspark" BUILD_FLAG="package -DskipTests -DskipRat" TEST_FLAG="verify -DskipRat" TEST_PROJECTS="-pl zeppelin-interpreter,zeppelin-zengine,zeppelin-server,zeppelin-display,spark-dependencies,spark -Dtest=org.apache.zeppelin.rest.*Test,org.apache.zeppelin.spark* -DfailIfNoTests=false"

     # Test spark module for 1.2.2
     - jdk: "oraclejdk7"
-      env: SCALA_VER="2.10" SPARK_VER="1.2.2" HADOOP_VER="2.3" PROFILE="-Pspark-1.2 -Phadoop-2.3 -Ppyspark" BUILD_FLAG="package -DskipTests" TEST_FLAG="verify" TEST_PROJECTS="-pl zeppelin-interpreter,zeppelin-zengine,zeppelin-server,zeppelin-display,spark-dependencies,spark -Dtest=org.apache.zeppelin.rest.*Test,org.apache.zeppelin.spark* -DfailIfNoTests=false"
+      env: SCALA_VER="2.10" SPARK_VER="1.2.2" HADOOP_VER="2.3" PROFILE="-Pspark-1.2 -Phadoop-2.3 -Ppyspark" BUILD_FLAG="package -DskipTests -DskipRat" TEST_FLAG="verify -DskipRat" TEST_PROJECTS="-pl zeppelin-interpreter,zeppelin-zengine,zeppelin-server,zeppelin-display,spark-dependencies,spark -Dtest=org.apache.zeppelin.rest.*Test,org.apache.zeppelin.spark* -DfailIfNoTests=false"

     # Test spark module for 1.1.1
     - jdk: "oraclejdk7"
-      env: SCALA_VER="2.10" SPARK_VER="1.1.1" HADOOP_VER="2.3" PROFILE="-Pspark-1.1 -Phadoop-2.3 -Ppyspark" BUILD_FLAG="package -DskipTests" TEST_FLAG="verify" TEST_PROJECTS="-pl zeppelin-interpreter,zeppelin-zengine,zeppelin-server,zeppelin-display,spark-dependencies,spark -Dtest=org.apache.zeppelin.rest.*Test,org.apache.zeppelin.spark* -DfailIfNoTests=false"
+      env: SCALA_VER="2.10" SPARK_VER="1.1.1" HADOOP_VER="2.3" PROFILE="-Pspark-1.1 -Phadoop-2.3 -Ppyspark" BUILD_FLAG="package -DskipTests -DskipRat" TEST_FLAG="verify -DskipRat" TEST_PROJECTS="-pl zeppelin-interpreter,zeppelin-zengine,zeppelin-server,zeppelin-display,spark-dependencies,spark -Dtest=org.apache.zeppelin.rest.*Test,org.apache.zeppelin.spark* -DfailIfNoTests=false"

     # Test selenium with spark module for 1.6.1
     - jdk: "oraclejdk7"
-      env: TEST_SELENIUM="true" SCALA_VER="2.10" SPARK_VER="1.6.1" HADOOP_VER="2.3" PROFILE="-Pspark-1.6 -Phadoop-2.3 -Ppyspark -Pexamples" BUILD_FLAG="package -DskipTests" TEST_FLAG="verify" TEST_PROJECTS="-pl zeppelin-interpreter,zeppelin-zengine,zeppelin-server,zeppelin-display,spark-dependencies,spark -Dtest=org.apache.zeppelin.AbstractFunctionalSuite -DfailIfNoTests=false"
+      env: TEST_SELENIUM="true" SCALA_VER="2.10" SPARK_VER="1.6.1" HADOOP_VER="2.3" PROFILE="-Pspark-1.6 -Phadoop-2.3 -Ppyspark -Pexamples" BUILD_FLAG="package -DskipTests -DskipRat" TEST_FLAG="verify -DskipRat" TEST_PROJECTS="-pl zeppelin-interpreter,zeppelin-zengine,zeppelin-server,zeppelin-display,spark-dependencies,spark -Dtest=org.apache.zeppelin.AbstractFunctionalSuite -DfailIfNoTests=false"

 before_install:
   - "ls -la .spark-dist ${HOME}/.m2/repository/.cache/maven-download-plugin"
+  - ls .node_modules && cp -r .node_modules zeppelin-web/node_modules || echo "node_modules are not cached"
   - mkdir -p ~/R
   - echo 'R_LIBS=~/R' > ~/.Renviron
   - R -e "install.packages('knitr', repos = 'http://cran.us.r-project.org', lib='~/R')"
@@ -89,6 +95,7 @@ before_script:

 script:
   - mvn $TEST_FLAG $PROFILE -B $TEST_PROJECTS
+  - rm -rf .node_modules; cp -r zeppelin-web/node_modules .node_modules

 after_success:
   - echo "Travis exited with ${TRAVIS_TEST_RESULT}"
@@ -104,3 +111,4 @@ after_failure:

 after_script:
   - ./testing/stopSparkCluster.sh $SPARK_VER $HADOOP_VER
+
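The matrix now isolates the Apache RAT license check in its own entry and passes -DskipRat everywhere else. Run locally, the two modes would look roughly like this (profile, goal, and flags taken from the entries above):

```sh
# License-check-only pass, mirroring the new first matrix entry
mvn clean org.apache.rat:apache-rat-plugin:check -Prat -B

# Regular distribution build that skips the RAT check, as the remaining entries now do
mvn package -Pbuild-distr -DskipTests -DskipRat -B
```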

3 changes: 2 additions & 1 deletion LICENSE
@@ -242,7 +242,8 @@ The following components are provided under the MIT-style license. See project l
 The text of each license is also included at licenses/LICENSE-[project]-[version].txt.

 (MIT Style) jekyll-table-of-contents (https://github.com/ghiculescu/jekyll-table-of-contents) - https://github.com/ghiculescu/jekyll-table-of-contents/blob/master/LICENSE.txt
-
+(MIT Style) lunr.js (https://github.com/olivernn/lunr.js) - https://github.com/olivernn/lunr.js/blob/v0.7.1/LICENSE
+
 ========================================================================
 Apache licenses
 ========================================================================
10 changes: 6 additions & 4 deletions README.md
@@ -1,4 +1,4 @@
-#Zeppelin
+# Apache Zeppelin

 **Documentation:** [User Guide](http://zeppelin.apache.org/docs/latest/index.html)<br/>
 **Mailing Lists:** [User and Dev mailing list](http://zeppelin.apache.org/community.html)<br/>
@@ -93,9 +93,9 @@ _Notes:_

 #### Install maven
 ```
-wget http://www.eu.apache.org/dist/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz
-sudo tar -zxf apache-maven-3.3.3-bin.tar.gz -C /usr/local/
-sudo ln -s /usr/local/apache-maven-3.3.3/bin/mvn /usr/local/bin/mvn
+wget http://www.eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz
+sudo tar -zxf apache-maven-3.3.9-bin.tar.gz -C /usr/local/
+sudo ln -s /usr/local/apache-maven-3.3.9/bin/mvn /usr/local/bin/mvn
 ```

 _Notes:_
@@ -217,6 +217,7 @@ Here're some examples:

 ```sh
 # build with spark-2.0, scala-2.11
+./dev/change_scala_version.sh 2.11
 mvn clean package -Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pscala-2.11

 # build with spark-1.6, scala-2.10
@@ -306,6 +307,7 @@ For configuration details check __`./conf`__ subdirectory.
 To produce a Zeppelin package compiled with Scala 2.11, use the -Pscala-2.11 profile:

 ```
+./dev/change_scala_version.sh 2.11
 mvn clean package -Pspark-1.6 -Phadoop-2.4 -Pyarn -Ppyspark -Pscala-2.11 -DskipTests clean install
 ```
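Two practical follow-ups to the README changes, sketched under the assumption that the symlink step succeeded and that change_scala_version.sh toggles the POMs between Scala versions:

```sh
mvn -version                         # should now report Apache Maven 3.3.9

./dev/change_scala_version.sh 2.11   # run before any -Pscala-2.11 build
# ...build with -Pscala-2.11...
./dev/change_scala_version.sh 2.10   # switch back before a Scala 2.10 build
```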
7 changes: 0 additions & 7 deletions bin/common.cmd
@@ -81,13 +81,6 @@ if not defined JAVA_OPTS (
   set JAVA_OPTS=%JAVA_OPTS% %ZEPPELIN_JAVA_OPTS%
 )

-if not defined ZEPPELIN_INTP_JAVA_OPTS (
-  set ZEPPELIN_INTP_JAVA_OPTS=%ZEPPELIN_JAVA_OPTS%
-)
-
-if not defined ZEPPELIN_INTP_MEM (
-  set ZEPPELIN_INTP_MEM=%ZEPPELIN_MEM%
-)

 set JAVA_INTP_OPTS=%ZEPPELIN_INTP_JAVA_OPTS% -Dfile.encoding=%ZEPPELIN_ENCODING%
9 changes: 0 additions & 9 deletions bin/common.sh
@@ -121,15 +121,6 @@ JAVA_OPTS+=" ${ZEPPELIN_JAVA_OPTS} -Dfile.encoding=${ZEPPELIN_ENCODING} ${ZEPPEL
 JAVA_OPTS+=" -Dlog4j.configuration=file://${ZEPPELIN_CONF_DIR}/log4j.properties"
 export JAVA_OPTS

-# jvm options for interpreter process
-if [[ -z "${ZEPPELIN_INTP_JAVA_OPTS}" ]]; then
-  export ZEPPELIN_INTP_JAVA_OPTS="${ZEPPELIN_JAVA_OPTS}"
-fi
-
-if [[ -z "${ZEPPELIN_INTP_MEM}" ]]; then
-  export ZEPPELIN_INTP_MEM="${ZEPPELIN_MEM}"
-fi
-
 JAVA_INTP_OPTS="${ZEPPELIN_INTP_JAVA_OPTS} -Dfile.encoding=${ZEPPELIN_ENCODING}"
 JAVA_INTP_OPTS+=" -Dlog4j.configuration=file://${ZEPPELIN_CONF_DIR}/log4j.properties"
 export JAVA_INTP_OPTS
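With this fallback gone (ZEPPELIN-1185), the interpreter process no longer inherits ZEPPELIN_JAVA_OPTS or ZEPPELIN_MEM. Anyone relying on the old behavior now has to set the interpreter-side variables explicitly, e.g. in conf/zeppelin-env.sh; the values below are illustrative only:

```sh
export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=8g"       # Zeppelin server JVM only
export ZEPPELIN_INTP_JAVA_OPTS="-Dspark.executor.memory=8g"  # interpreter JVM, no longer defaulted
export ZEPPELIN_INTP_MEM="-Xmx1024m"                         # interpreter heap, no longer copied from ZEPPELIN_MEM
```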
8 changes: 8 additions & 0 deletions bin/interpreter.cmd
@@ -46,6 +46,14 @@ if exist "%ZEPPELIN_HOME%\zeppelin-interpreter\target\classes" (
   set ZEPPELIN_CLASSPATH=%ZEPPELIN_CLASSPATH%;"!ZEPPELIN_INTERPRETER_JAR!"
 )

+REM add test classes for unittest
+if exist "%ZEPPELIN_HOME%\zeppelin-interpreter\target\test-classes" (
+  set ZEPPELIN_CLASSPATH=%ZEPPELIN_CLASSPATH%;"%ZEPPELIN_HOME%\zeppelin-interpreter\target\test-classes"
+)
+if exist "%ZEPPELIN_HOME%\zeppelin-zengine\target\test-classes" (
+  set ZEPPELIN_CLASSPATH=%ZEPPELIN_CLASSPATH%;"%ZEPPELIN_HOME%\zeppelin-zengine\target\test-classes"
+)
+
 call "%bin%\functions.cmd" ADDJARINDIR "%ZEPPELIN_HOME%\zeppelin-interpreter\target\lib"
 call "%bin%\functions.cmd" ADDJARINDIR "%INTERPRETER_DIR%"
9 changes: 9 additions & 0 deletions bin/interpreter.sh
@@ -63,6 +63,15 @@ else
   ZEPPELIN_INTP_CLASSPATH+=":${ZEPPELIN_INTERPRETER_JAR}"
 fi

+# add test classes for unittest
+if [[ -d "${ZEPPELIN_HOME}/zeppelin-interpreter/target/test-classes" ]]; then
+  ZEPPELIN_INTP_CLASSPATH+=":${ZEPPELIN_HOME}/zeppelin-interpreter/target/test-classes"
+fi
+if [[ -d "${ZEPPELIN_HOME}/zeppelin-zengine/target/test-classes" ]]; then
+  ZEPPELIN_INTP_CLASSPATH+=":${ZEPPELIN_HOME}/zeppelin-zengine/target/test-classes"
+fi
+
+
 addJarInDirForIntp "${ZEPPELIN_HOME}/zeppelin-interpreter/target/lib"
 addJarInDirForIntp "${INTERPRETER_DIR}"
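The new blocks only fire in a source checkout where a test build has produced these directories; a minimal probe of what would be appended:

```sh
# Sketch: list which of the new test-classes directories exist and would be added
for m in zeppelin-interpreter zeppelin-zengine; do
  d="${ZEPPELIN_HOME}/${m}/target/test-classes"
  [[ -d "${d}" ]] && echo "would append to ZEPPELIN_INTP_CLASSPATH: ${d}"
done
```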
8 changes: 8 additions & 0 deletions conf/shiro.ini
@@ -28,7 +28,10 @@ user3 = password4, role2
 ### A sample for configuring Active Directory Realm
 #activeDirectoryRealm = org.apache.zeppelin.server.ActiveDirectoryGroupRealm
 #activeDirectoryRealm.systemUsername = userNameA
+
+#use either systemPassword or hadoopSecurityCredentialPath, more details in http://zeppelin.apache.org/docs/latest/security/shiroauthentication.html
 #activeDirectoryRealm.systemPassword = passwordA
+#activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://file/user/zeppelin/zeppelin.jceks
 #activeDirectoryRealm.searchBase = CN=Users,DC=SOME_GROUP,DC=COMPANY,DC=COM
 #activeDirectoryRealm.url = ldap://ldap.test.com:389
 #activeDirectoryRealm.groupRolesMap = "CN=admin,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"admin","CN=finance,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"finance","CN=hr,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"hr"
@@ -42,6 +45,11 @@ user3 = password4, role2
 #ldapRealm.userDnTemplate = uid={0},ou=Users,dc=COMPANY,dc=COM
 #ldapRealm.contextFactory.authenticationMechanism = SIMPLE

+### A sample for configuring ZeppelinHub Realm
+#zeppelinHubRealm = org.apache.zeppelin.realm.ZeppelinHubRealm
+## Url of ZeppelinHub
+#zeppelinHubRealm.zeppelinhubUrl = https://www.zeppelinhub.com
+#securityManager.realms = $zeppelinHubRealm

 sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
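systemPassword and hadoopSecurityCredentialPath are alternatives; set exactly one. A rough sketch of provisioning the JCEKS store with the Hadoop credential CLI (the alias name is an assumption — verify it against the linked Shiro authentication docs) and reloading the config:

```sh
# Assumed alias: store the AD system password in the JCEKS file referenced above
hadoop credential create activeDirectoryRealm.systempassword \
  -provider jceks://file/user/zeppelin/zeppelin.jceks

# shiro.ini is read at startup, so restart Zeppelin afterwards
./bin/zeppelin-daemon.sh restart
```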
7 changes: 4 additions & 3 deletions conf/zeppelin-env.cmd.template
@@ -20,8 +20,8 @@ REM set JAVA_HOME=
 REM set MASTER= REM Spark master url. eg. spark://master_addr:7077. Leave empty if you want to use local mode.
 REM set ZEPPELIN_JAVA_OPTS REM Additional jvm options. for example, set ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=8g -Dspark.cores.max=16"
 REM set ZEPPELIN_MEM REM Zeppelin jvm mem options Default -Xmx1024m -XX:MaxPermSize=512m
-REM set ZEPPELIN_INTP_MEM REM zeppelin interpreter process jvm mem options. Default = ZEPPELIN_MEM
-REM set ZEPPELIN_INTP_JAVA_OPTS REM zeppelin interpreter process jvm options. Default = ZEPPELIN_JAVA_OPTS
+REM set ZEPPELIN_INTP_MEM REM zeppelin interpreter process jvm mem options.
+REM set ZEPPELIN_INTP_JAVA_OPTS REM zeppelin interpreter process jvm options.

 REM set ZEPPELIN_LOG_DIR REM Where log files are stored. PWD by default.
 REM set ZEPPELIN_PID_DIR REM The pid files are stored. /tmp by default.
@@ -35,6 +35,7 @@ REM set ZEPPELIN_IDENT_STRING REM A string representing this instance of zep
 REM set ZEPPELIN_NICENESS REM The scheduling priority for daemons. Defaults to 0.
 REM set ZEPPELIN_INTERPRETER_LOCALREPO REM Local repository for interpreter's additional dependency loading
 REM set ZEPPELIN_NOTEBOOK_STORAGE REM Refers to pluggable notebook storage class, can have two classes simultaneously with a sync between them (e.g. local and remote).
+REM set ZEPPELIN_NOTEBOOK_ONE_WAY_SYNC REM If there are multiple notebook storages, should we treat the first one as the only source of truth?


 REM Spark interpreter configuration
@@ -62,7 +63,7 @@ REM
 REM set ZEPPELIN_SPARK_USEHIVECONTEXT REM Use HiveContext instead of SQLContext if set true. true by default.
 REM set ZEPPELIN_SPARK_CONCURRENTSQL REM Execute multiple SQL concurrently if set true. false by default.
 REM set ZEPPELIN_SPARK_IMPORTIMPLICIT REM Import implicits, UDF collection, and sql if set true. true by default.
-REM set ZEPPELIN_SPARK_MAXRESULT REM Max number of SparkSQL result to display. 1000 by default.
+REM set ZEPPELIN_SPARK_MAXRESULT REM Max number of Spark SQL result to display. 1000 by default.

 REM ZeppelinHub connection configuration
 REM
7 changes: 4 additions & 3 deletions conf/zeppelin-env.sh.template
@@ -20,8 +20,8 @@
 # export MASTER= # Spark master url. eg. spark://master_addr:7077. Leave empty if you want to use local mode.
 # export ZEPPELIN_JAVA_OPTS # Additional jvm options. for example, export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=8g -Dspark.cores.max=16"
 # export ZEPPELIN_MEM # Zeppelin jvm mem options Default -Xmx1024m -XX:MaxPermSize=512m
-# export ZEPPELIN_INTP_MEM # zeppelin interpreter process jvm mem options. Default = ZEPPELIN_MEM
-# export ZEPPELIN_INTP_JAVA_OPTS # zeppelin interpreter process jvm options. Default = ZEPPELIN_JAVA_OPTS
+# export ZEPPELIN_INTP_MEM # zeppelin interpreter process jvm mem options.
+# export ZEPPELIN_INTP_JAVA_OPTS # zeppelin interpreter process jvm options.

 # export ZEPPELIN_LOG_DIR # Where log files are stored. PWD by default.
 # export ZEPPELIN_PID_DIR # The pid files are stored. ${ZEPPELIN_HOME}/run by default.
@@ -36,6 +36,7 @@
 # export ZEPPELIN_NICENESS # The scheduling priority for daemons. Defaults to 0.
 # export ZEPPELIN_INTERPRETER_LOCALREPO # Local repository for interpreter's additional dependency loading
 # export ZEPPELIN_NOTEBOOK_STORAGE # Refers to pluggable notebook storage class, can have two classes simultaneously with a sync between them (e.g. local and remote).
+# export ZEPPELIN_NOTEBOOK_ONE_WAY_SYNC # If there are multiple notebook storages, should we treat the first one as the only source of truth?

 #### Spark interpreter configuration ####
@@ -62,7 +63,7 @@
 # export ZEPPELIN_SPARK_USEHIVECONTEXT # Use HiveContext instead of SQLContext if set true. true by default.
 # export ZEPPELIN_SPARK_CONCURRENTSQL # Execute multiple SQL concurrently if set true. false by default.
 # export ZEPPELIN_SPARK_IMPORTIMPLICIT # Import implicits, UDF collection, and sql if set true. true by default.
-# export ZEPPELIN_SPARK_MAXRESULT # Max number of SparkSQL result to display. 1000 by default.
+# export ZEPPELIN_SPARK_MAXRESULT # Max number of Spark SQL result to display. 1000 by default.
 # export ZEPPELIN_WEBSOCKET_MAX_TEXT_MESSAGE_SIZE # Size in characters of the maximum text message to be received by websocket. Defaults to 1024000

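The new flag only matters when ZEPPELIN_NOTEBOOK_STORAGE lists more than one storage class. A hypothetical two-storage setup (the repo class names are assumptions, not taken from this diff):

```sh
# Hypothetical: local Git storage as the source of truth, synced one-way to the second storage
export ZEPPELIN_NOTEBOOK_STORAGE="org.apache.zeppelin.notebook.repo.GitNotebookRepo,org.apache.zeppelin.notebook.repo.zeppelinhub.ZeppelinHubRepo"
export ZEPPELIN_NOTEBOOK_ONE_WAY_SYNC="true"   # treat the first storage as the only source of truth
```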
8 changes: 7 additions & 1 deletion conf/zeppelin-site.xml.template
@@ -164,6 +164,12 @@
   <description>notebook persistence layer implementation</description>
 </property>

+<property>
+  <name>zeppelin.notebook.one.way.sync</name>
+  <value>false</value>
+  <description>If there are multiple notebook storages, should we treat the first one as the only source of truth?</description>
+</property>
+
 <property>
   <name>zeppelin.interpreter.dir</name>
   <value>interpreter</value>
@@ -184,7 +190,7 @@
 <property>
   <name>zeppelin.interpreter.group.order</name>
-  <value>"spark,md,angular,sh,livy,alluxio,file,psql,flink,python,ignite,lens,cassandra,geode,kylin,elasticsearch,scalding,jdbc,hbase</value>
+  <value>spark,md,angular,sh,livy,alluxio,file,psql,flink,python,ignite,lens,cassandra,geode,kylin,elasticsearch,scalding,jdbc,hbase,bigquery</value>
   <description></description>
 </property>
5 changes: 3 additions & 2 deletions dev/create_release.sh
@@ -66,6 +66,7 @@ function make_binary_release() {

   cp -r "${WORKING_DIR}/zeppelin" "${WORKING_DIR}/zeppelin-${RELEASE_VERSION}-bin-${BIN_RELEASE_NAME}"
   cd "${WORKING_DIR}/zeppelin-${RELEASE_VERSION}-bin-${BIN_RELEASE_NAME}"
+  ./dev/change_scala_version.sh 2.11
   echo "mvn clean package -Pbuild-distr -DskipTests ${BUILD_FLAGS}"
   mvn clean package -Pbuild-distr -DskipTests ${BUILD_FLAGS}
   if [[ $? -ne 0 ]]; then
@@ -102,8 +103,8 @@

 git_clone
 make_source_package
-make_binary_release all "-Pspark-1.6 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pr"
-make_binary_release netinst "-Pspark-1.6 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pr -pl !alluxio,!angular,!cassandra,!elasticsearch,!file,!flink,!hbase,!ignite,!jdbc,!kylin,!lens,!livy,!markdown,!postgresql,!python,!shell"
+make_binary_release all "-Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pr -Pscala-2.11"
+make_binary_release netinst "-Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pr -Pscala-2.11 -pl !alluxio,!angular,!cassandra,!elasticsearch,!file,!flink,!hbase,!ignite,!jdbc,!kylin,!lens,!livy,!markdown,!postgresql,!python,!shell,!bigquery"

 # remove non release files and dirs
 rm -rf "${WORKING_DIR}/zeppelin"
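Unrolled, the updated release flow now effectively builds the full binary package like this (flags copied from the lines above; the netinst variant additionally excludes the listed interpreter modules, now including bigquery):

```sh
# Sketch of what make_binary_release all now runs
./dev/change_scala_version.sh 2.11
mvn clean package -Pbuild-distr -DskipTests \
  -Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pr -Pscala-2.11
```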