@@ -26,8 +26,8 @@
-SparkR by default uses Apache Spark 1.3.0. You can switch to a different Spark
+SparkR by default uses Apache Spark 1.1.0. You can switch to a different Spark
 version by setting the environment variable `SPARK_VERSION`. For example, to
-use Apache Spark 1.2.0, you can run
+use Apache Spark 1.3.0, you can run
 
-    SPARK_VERSION=1.2.0 ./install-dev.sh
+    SPARK_VERSION=1.3.0 ./install-dev.sh
 
 SparkR by default links to Hadoop 1.0.4. To use SparkR with other Hadoop
 versions, you will need to rebuild SparkR with the same version that [Spark is
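To make the build switches in the hunk above concrete, here is a minimal sketch of both rebuild commands. It assumes `install-dev.sh` reads `SPARK_VERSION` exactly as documented above; the `SPARK_HADOOP_VERSION` variable and the `2.4.0` value are assumptions modeled on Apache Spark's own build flag, since the Hadoop rebuild command itself is not shown in this diff.

    # Build SparkR against Spark 1.3.0 instead of the 1.1.0 default
    # (this is the command from the hunk above).
    SPARK_VERSION=1.3.0 ./install-dev.sh

    # Rebuild against a different Hadoop version; SPARK_HADOOP_VERSION is an
    # assumption mirroring Spark's build variable, not shown in this diff.
    SPARK_VERSION=1.3.0 SPARK_HADOOP_VERSION=2.4.0 ./install-dev.sh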
@@ -91,8 +91,9 @@ To run one of them, use `./sparkR <filename> <args>`. For example:
 
     ./sparkR examples/pi.R local[2]
 
-You can also run the unit-tests for SparkR by running. You need to install the [testthat](http://cran.r-project.org/web/packages/testthat/index.html) package first.
+You can also run the unit-tests for SparkR by running (you need to install the [testthat](http://cran.r-project.org/web/packages/testthat/index.html) package first):
 
+    R -e 'install.packages("testthat", repos="http://cran.us.r-project.org")'
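For completeness, a sketch of the full test workflow implied by the added lines. Only the `testthat` install command appears in the hunk; the `./run-tests.sh` script name is an assumption, inferred from the repository's other helper scripts (`install-dev.sh`, `sparkR`).

    # One-time dependency install, as added in the hunk above.
    R -e 'install.packages("testthat", repos="http://cran.us.r-project.org")'

    # Then run the SparkR unit tests (script name assumed, not shown in this diff).
    ./run-tests.sh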