Merged: changes from all 58 commits

Commits:
fe07de9  [SPARK-19673][SQL] "ThriftServer default app name is changed wrong" (lvdongr, Feb 25, 2017)
410392e  [SPARK-15288][MESOS] Mesos dispatcher should handle gracefully when a… (Feb 25, 2017)
6ab6054  [MINOR][ML][DOC] Document default value for GeneralizedLinearRegressi… (jkbradley, Feb 26, 2017)
89608cf  [SPARK-17075][SQL][FOLLOWUP] fix some minor issues and clean up the code (cloud-fan, Feb 26, 2017)
68f2142  [SQL] Duplicate test exception in SQLQueryTestSuite due to meta files… (dilipbiswal, Feb 26, 2017)
9f8e392  [SPARK-19594][STRUCTURED STREAMING] StreamingQueryListener fails to h… (Feb 26, 2017)
4ba9c6c  [MINOR][BUILD] Fix lint-java breaks in Java (HyukjinKwon, Feb 27, 2017)
8a5a585  [SPARK-15615][SQL][BUILD][FOLLOW-UP] Replace deprecated usage of json… (HyukjinKwon, Feb 27, 2017)
16d8472  [SPARK-19746][ML] Faster indexing for logistic aggregator (sethah, Feb 28, 2017)
7353038  [SPARK-19749][SS] Name socket source with a meaningful name (uncleGen, Feb 28, 2017)
a350bc1  [SPARK-19748][SQL] refresh function has a wrong order to do cache inv… (windpiger, Feb 28, 2017)
9b8eca6  [SPARK-19660][CORE][SQL] Replace the configuration property names tha… (wangyum, Feb 28, 2017)
b405466  [SPARK-14489][ML][PYSPARK] ALS unknown user/item prediction strategy (Feb 28, 2017)
7c7fc30  [SPARK-19678][SQL] remove MetastoreRelation (cloud-fan, Feb 28, 2017)
9734a92  [SPARK-19677][SS] Committing a delta file atop an existing one should… (vitillo, Feb 28, 2017)
ce233f1  [SPARK-19463][SQL] refresh cache after the InsertIntoHadoopFsRelation… (windpiger, Feb 28, 2017)
7e5359b  [SPARK-19610][SQL] Support parsing multiline CSV files (HyukjinKwon, Feb 28, 2017)
d743ea4  [MINOR][DOC] Update GLM doc to include tweedie distribution (actuaryzhang, Feb 28, 2017)
bf5987c  [SPARK-19769][DOCS] Update quickstart instructions (elmiko, Feb 28, 2017)
ca3864d  [SPARK-19373][MESOS] Base spark.scheduler.minRegisteredResourceRatio … (Feb 28, 2017)
0fe8020  [SPARK-14503][ML] spark.ml API for FPGrowth (YY-OnCall, Feb 28, 2017)
7315880  [SPARK-19572][SPARKR] Allow to disable hive in sparkR shell (zjffdu, Mar 1, 2017)
89cd384  [SPARK-19460][SPARKR] Update dataset used in R documentation, example… (wangmiao1981, Mar 1, 2017)
4913c92  [SPARK-19633][SS] FileSource read from FileSink (lw-lin, Mar 1, 2017)
38e7835  [SPARK-19736][SQL] refreshByPath should clear all cached plans with t… (viirya, Mar 1, 2017)
5502a9c  [SPARK-19766][SQL] Constant alias columns in INNER JOIN should not be… (stanzhai, Mar 1, 2017)
8aa560b  [SPARK-19761][SQL] create InMemoryFileIndex with an empty rootPaths w… (windpiger, Mar 1, 2017)
417140e  [SPARK-19787][ML] Changing the default parameter of regParam. (datumbox, Mar 1, 2017)
2ff1467  [DOC][MINOR][SPARKR] Update SparkR doc for names, columns and colnames (actuaryzhang, Mar 1, 2017)
db0ddce  [SPARK-19775][SQL] Remove an obsolete `partitionBy().insertInto()` te… (dongjoon-hyun, Mar 1, 2017)
51be633  [SPARK-19777] Scan runningTasksSet when check speculatable tasks in T… (Mar 2, 2017)
89990a0  [SPARK-13931] Stage can hang if an executor fails while speculated ta… (Mar 2, 2017)
de2b53d  [SPARK-19583][SQL] CTAS for data source table with a created location… (windpiger, Mar 2, 2017)
3bd8ddf  [MINOR][ML] Fix comments in LSH Examples and Python API (Mar 2, 2017)
d2a8797  [SPARK-19734][PYTHON][ML] Correct OneHotEncoder doc string to say dro… (markgrover, Mar 2, 2017)
8d6ef89  [SPARK-18352][DOCS] wholeFile JSON update doc and programming guide (felixcheung, Mar 2, 2017)
625cfe0  [SPARK-19733][ML] Removed unnecessary castings and refactored checked… (datumbox, Mar 2, 2017)
50c08e8  [SPARK-19704][ML] AFTSurvivalRegression should support numeric censorCol (zhengruifeng, Mar 2, 2017)
9cca3db  [SPARK-19345][ML][DOC] Add doc for "coldStartStrategy" usage in ALS (Mar 2, 2017)
5ae3516  [SPARK-19720][CORE] Redact sensitive information from SparkSubmit con… (markgrover, Mar 2, 2017)
433d9eb  [SPARK-19631][CORE] OutputCommitCoordinator should not allow commits … (Mar 2, 2017)
8417a7a  [SPARK-19276][CORE] Fetch Failure handling robust to user error handling (squito, Mar 3, 2017)
93ae176  [SPARK-19745][ML] SVCAggregator captures coefficients in its closure (sethah, Mar 3, 2017)
f37bb14  [SPARK-19602][SQL][TESTS] Add tests for qualified column names (skambha, Mar 3, 2017)
e24f21b  [SPARK-19779][SS] Delete needless tmp file after restart structured s… (gf53520, Mar 3, 2017)
982f322  [SPARK-18726][SQL] resolveRelation for FileFormat DataSource don't ne… (windpiger, Mar 3, 2017)
d556b31  [SPARK-18699][SQL][FOLLOWUP] Add explanation in CSV parser and minor … (HyukjinKwon, Mar 3, 2017)
fa50143  [SPARK-19739][CORE] propagate S3 session token to cluser (uncleGen, Mar 3, 2017)
0bac3e4  [SPARK-19797][DOC] ML pipeline document correction (ymwdalex, Mar 3, 2017)
776fac3  [SPARK-19801][BUILD] Remove JDK7 from Travis CI (dongjoon-hyun, Mar 3, 2017)
98bcc18  [SPARK-19758][SQL] Resolving timezone aware expressions with time zon… (viirya, Mar 3, 2017)
37a1c0e  [SPARK-19710][SQL][TESTS] Fix ordering of rows in query results (robbinspg, Mar 3, 2017)
9314c08  [SPARK-19774] StreamExecution should call stop() on sources when a st… (brkyvz, Mar 3, 2017)
ba186a8  [MINOR][DOC] Fix doc for web UI https configuration (jerryshao, Mar 3, 2017)
2a7921a  [SPARK-18939][SQL] Timezone support in partition values. (ueshin, Mar 4, 2017)
44281ca  [SPARK-19348][PYTHON] PySpark keyword_only decorator is not thread-safe (BryanCutler, Mar 4, 2017)
f5fdbe0  [SPARK-13446][SQL] Support reading data from Hive 2.0.1 metastore (gatorsmile, Mar 4, 2017)
a6a7a95  [SPARK-19718][SS] Handle more interrupt cases properly for Hadoop (zsxwing, Mar 4, 2017)
1 change: 0 additions & 1 deletion .travis.yml
@@ -28,7 +28,6 @@ dist: trusty
# 2. Choose language and target JDKs for parallel builds.
language: java
jdk:
- oraclejdk7
- oraclejdk8

# 3. Setup cache directory for SBT and Maven.
2 changes: 1 addition & 1 deletion R/WINDOWS.md
@@ -38,6 +38,6 @@ To run the SparkR unit tests on Windows, the following steps are required —ass

```
R -e "install.packages('testthat', repos='http://cran.us.r-project.org')"
.\bin\spark-submit2.cmd --conf spark.hadoop.fs.default.name="file:///" R\pkg\tests\run-all.R
.\bin\spark-submit2.cmd --conf spark.hadoop.fs.defaultFS="file:///" R\pkg\tests\run-all.R
```
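The command above switches the Windows test instructions from the deprecated Hadoop key spark.hadoop.fs.default.name to spark.hadoop.fs.defaultFS. As a rough sketch only (not part of this change), the same renamed key can also be passed when starting a SparkR session programmatically; the master setting and the use of sparkConfig below are illustrative assumptions:

```
library(SparkR)

# Sketch: hand the non-deprecated Hadoop key to the session via sparkConfig.
sparkR.session(master = "local[1]",
               sparkConfig = list(spark.hadoop.fs.defaultFS = "file:///"))
```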

4 changes: 2 additions & 2 deletions R/pkg/R/DataFrame.R
@@ -280,7 +280,7 @@ setMethod("dtypes",

#' Column Names of SparkDataFrame
#'
#' Return all column names as a list.
#' Return a vector of column names.
#'
#' @param x a SparkDataFrame.
#'
@@ -338,7 +338,7 @@ setMethod("colnames",
})

#' @param value a character vector. Must have the same length as the number
#' of columns in the SparkDataFrame.
#' of columns to be renamed.
#' @rdname columns
#' @aliases colnames<-,SparkDataFrame-method
#' @name colnames<-
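The doc tweak above, together with the new test added to R/pkg/inst/tests/testthat/test_sparkSQL.R later in this diff, covers renaming a subset of columns through `colnames<-` and `names<-`. A minimal sketch of that pattern, assuming an active SparkR session; the dataset and new names are only illustrative:

```
df <- createDataFrame(faithful)
colnames(df)                          # "eruptions" "waiting"

# Rename a single column via subset assignment.
colnames(df)[2] <- "wait_minutes"
names(df)[1] <- "eruption_minutes"    # names() follows the same rules
colnames(df)
```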
10 changes: 7 additions & 3 deletions R/pkg/R/SQLContext.R
@@ -332,8 +332,10 @@ setMethod("toDF", signature(x = "RDD"),

#' Create a SparkDataFrame from a JSON file.
#'
#' Loads a JSON file (\href{http://jsonlines.org/}{JSON Lines text format or newline-delimited JSON}
#' ), returning the result as a SparkDataFrame
#' Loads a JSON file, returning the result as a SparkDataFrame
#' By default, (\href{http://jsonlines.org/}{JSON Lines text format or newline-delimited JSON}
#' ) is supported. For JSON (one record per file), set a named property \code{wholeFile} to
#' \code{TRUE}.
#' It goes through the entire dataset once to determine the schema.
#'
#' @param path Path of file to read. A vector of multiple paths is allowed.
@@ -346,6 +348,7 @@ setMethod("toDF", signature(x = "RDD"),
#' sparkR.session()
#' path <- "path/to/file.json"
#' df <- read.json(path)
#' df <- read.json(path, wholeFile = TRUE)
#' df <- jsonFile(path)
#' }
#' @name read.json
@@ -778,14 +781,15 @@ dropTempView <- function(viewName) {
#' @return SparkDataFrame
#' @rdname read.df
#' @name read.df
#' @seealso \link{read.json}
#' @export
#' @examples
#'\dontrun{
#' sparkR.session()
#' df1 <- read.df("path/to/file.json", source = "json")
#' schema <- structType(structField("name", "string"),
#' structField("info", "map<string,double>"))
#' df2 <- read.df(mapTypeJsonPath, "json", schema)
#' df2 <- read.df(mapTypeJsonPath, "json", schema, wholeFile = TRUE)
#' df3 <- loadDF("data/test_table", "parquet", mergeSchema = "true")
#' }
#' @name read.df
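Both read.json and read.df above gain a wholeFile flag for JSON input where a single record spans multiple lines. A brief usage sketch; the paths are hypothetical and an active SparkR session is assumed:

```
# Default: JSON Lines, one record per line.
people <- read.json("path/to/people.jsonl")

# wholeFile = TRUE: each file contains one (possibly pretty-printed) JSON record.
prettyPeople <- read.json("path/to/pretty-printed-json/", wholeFile = TRUE)
prettyPeople2 <- read.df("path/to/pretty-printed-json/", source = "json", wholeFile = TRUE)

printSchema(prettyPeople)
head(prettyPeople)
```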
15 changes: 7 additions & 8 deletions R/pkg/R/mllib_classification.R
@@ -75,9 +75,9 @@ setClass("NaiveBayesModel", representation(jobj = "jobj"))
#' @examples
#' \dontrun{
#' sparkR.session()
#' df <- createDataFrame(iris)
#' training <- df[df$Species %in% c("versicolor", "virginica"), ]
#' model <- spark.svmLinear(training, Species ~ ., regParam = 0.5)
#' t <- as.data.frame(Titanic)
#' training <- createDataFrame(t)
#' model <- spark.svmLinear(training, Survived ~ ., regParam = 0.5)
#' summary <- summary(model)
#'
#' # fitted values on training data
@@ -220,9 +220,9 @@ function(object, path, overwrite = FALSE) {
#' \dontrun{
#' sparkR.session()
#' # binary logistic regression
#' df <- createDataFrame(iris)
#' training <- df[df$Species %in% c("versicolor", "virginica"), ]
#' model <- spark.logit(training, Species ~ ., regParam = 0.5)
#' t <- as.data.frame(Titanic)
#' training <- createDataFrame(t)
#' model <- spark.logit(training, Survived ~ ., regParam = 0.5)
#' summary <- summary(model)
#'
#' # fitted values on training data
@@ -239,8 +239,7 @@ function(object, path, overwrite = FALSE) {
#'
#' # multinomial logistic regression
#'
#' df <- createDataFrame(iris)
#' model <- spark.logit(df, Species ~ ., regParam = 0.5)
#' model <- spark.logit(training, Class ~ ., regParam = 0.5)
#' summary <- summary(model)
#'
#' }
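The spark.logit examples above move from iris to the Titanic contingency table (as.data.frame(Titanic) has columns Class, Sex, Age, Survived, Freq). A short sketch of how the binomial and multinomial cases look with that data; the predictor choices and regParam value are illustrative:

```
t <- as.data.frame(Titanic)
training <- createDataFrame(t)

# Survived has two levels, so Spark fits a binomial logistic regression.
binModel <- spark.logit(training, Survived ~ Class + Sex + Age, regParam = 0.5)
summary(binModel)

# Class has four levels (1st, 2nd, 3rd, Crew), so this becomes multinomial.
multiModel <- spark.logit(training, Class ~ Survived + Sex, regParam = 0.5)
head(predict(multiModel, training))
```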
15 changes: 8 additions & 7 deletions R/pkg/R/mllib_clustering.R
@@ -72,8 +72,9 @@ setClass("LDAModel", representation(jobj = "jobj"))
#' @examples
#' \dontrun{
#' sparkR.session()
#' df <- createDataFrame(iris)
#' model <- spark.bisectingKmeans(df, Sepal_Length ~ Sepal_Width, k = 4)
#' t <- as.data.frame(Titanic)
#' df <- createDataFrame(t)
#' model <- spark.bisectingKmeans(df, Class ~ Survived, k = 4)
#' summary(model)
#'
#' # get fitted result from a bisecting k-means model
@@ -82,7 +83,7 @@ setClass("LDAModel", representation(jobj = "jobj"))
#'
#' # fitted values on training data
#' fitted <- predict(model, df)
#' head(select(fitted, "Sepal_Length", "prediction"))
#' head(select(fitted, "Class", "prediction"))
#'
#' # save fitted model to input path
#' path <- "path/to/model"
@@ -338,14 +339,14 @@ setMethod("write.ml", signature(object = "GaussianMixtureModel", path = "charact
#' @examples
#' \dontrun{
#' sparkR.session()
#' data(iris)
#' df <- createDataFrame(iris)
#' model <- spark.kmeans(df, Sepal_Length ~ Sepal_Width, k = 4, initMode = "random")
#' t <- as.data.frame(Titanic)
#' df <- createDataFrame(t)
#' model <- spark.kmeans(df, Class ~ Survived, k = 4, initMode = "random")
#' summary(model)
#'
#' # fitted values on training data
#' fitted <- predict(model, df)
#' head(select(fitted, "Sepal_Length", "prediction"))
#' head(select(fitted, "Class", "prediction"))
#'
#' # save fitted model to input path
#' path <- "path/to/model"
14 changes: 7 additions & 7 deletions R/pkg/R/mllib_regression.R
@@ -68,14 +68,14 @@ setClass("IsotonicRegressionModel", representation(jobj = "jobj"))
#' @examples
#' \dontrun{
#' sparkR.session()
#' data(iris)
#' df <- createDataFrame(iris)
#' model <- spark.glm(df, Sepal_Length ~ Sepal_Width, family = "gaussian")
#' t <- as.data.frame(Titanic)
#' df <- createDataFrame(t)
#' model <- spark.glm(df, Freq ~ Sex + Age, family = "gaussian")
#' summary(model)
#'
#' # fitted values on training data
#' fitted <- predict(model, df)
#' head(select(fitted, "Sepal_Length", "prediction"))
#' head(select(fitted, "Freq", "prediction"))
#'
#' # save fitted model to input path
#' path <- "path/to/model"
@@ -137,9 +137,9 @@ setMethod("spark.glm", signature(data = "SparkDataFrame", formula = "formula"),
#' @examples
#' \dontrun{
#' sparkR.session()
#' data(iris)
#' df <- createDataFrame(iris)
#' model <- glm(Sepal_Length ~ Sepal_Width, df, family = "gaussian")
#' t <- as.data.frame(Titanic)
#' df <- createDataFrame(t)
#' model <- glm(Freq ~ Sex + Age, df, family = "gaussian")
#' summary(model)
#' }
#' @note glm since 1.5.0
18 changes: 10 additions & 8 deletions R/pkg/R/mllib_tree.R
@@ -143,14 +143,15 @@ print.summary.treeEnsemble <- function(x) {
#'
#' # fit a Gradient Boosted Tree Classification Model
#' # label must be binary - Only binary classification is supported for GBT.
#' df <- createDataFrame(iris[iris$Species != "virginica", ])
#' model <- spark.gbt(df, Species ~ Petal_Length + Petal_Width, "classification")
#' t <- as.data.frame(Titanic)
#' df <- createDataFrame(t)
#' model <- spark.gbt(df, Survived ~ Age + Freq, "classification")
#'
#' # numeric label is also supported
#' iris2 <- iris[iris$Species != "virginica", ]
#' iris2$NumericSpecies <- ifelse(iris2$Species == "setosa", 0, 1)
#' df <- createDataFrame(iris2)
#' model <- spark.gbt(df, NumericSpecies ~ ., type = "classification")
#' t2 <- as.data.frame(Titanic)
#' t2$NumericGender <- ifelse(t2$Sex == "Male", 0, 1)
#' df <- createDataFrame(t2)
#' model <- spark.gbt(df, NumericGender ~ ., type = "classification")
#' }
#' @note spark.gbt since 2.1.0
setMethod("spark.gbt", signature(data = "SparkDataFrame", formula = "formula"),
@@ -351,8 +352,9 @@ setMethod("write.ml", signature(object = "GBTClassificationModel", path = "chara
#' summary(savedModel)
#'
#' # fit a Random Forest Classification Model
#' df <- createDataFrame(iris)
#' model <- spark.randomForest(df, Species ~ Petal_Length + Petal_Width, "classification")
#' t <- as.data.frame(Titanic)
#' df <- createDataFrame(t)
#' model <- spark.randomForest(df, Survived ~ Freq + Age, "classification")
#' }
#' @note spark.randomForest since 2.1.0
setMethod("spark.randomForest", signature(data = "SparkDataFrame", formula = "formula"),
6 changes: 6 additions & 0 deletions R/pkg/inst/tests/testthat/test_sparkSQL.R
@@ -898,6 +898,12 @@ test_that("names() colnames() set the column names", {
expect_equal(names(z)[3], "c")
names(z)[3] <- "c2"
expect_equal(names(z)[3], "c2")

# Test subset assignment
colnames(df)[1] <- "col5"
expect_equal(colnames(df)[1], "col5")
names(df)[2] <- "col6"
expect_equal(names(df)[2], "col6")
})

test_that("head() and first() return the correct data", {
47 changes: 25 additions & 22 deletions R/pkg/vignettes/sparkr-vignettes.Rmd
@@ -565,11 +565,10 @@ We use a simple example to demonstrate `spark.logit` usage. In general, there ar
and 3). Obtain the coefficient matrix of the fitted model using `summary` and use the model for prediction with `predict`.

Binomial logistic regression
```{r, warning=FALSE}
df <- createDataFrame(iris)
# Create a DataFrame containing two classes
training <- df[df$Species %in% c("versicolor", "virginica"), ]
model <- spark.logit(training, Species ~ ., regParam = 0.00042)
```{r}
t <- as.data.frame(Titanic)
training <- createDataFrame(t)
model <- spark.logit(training, Survived ~ ., regParam = 0.04741301)
summary(model)
```

@@ -579,10 +578,11 @@ fitted <- predict(model, training)
```

Multinomial logistic regression against three classes
```{r, warning=FALSE}
df <- createDataFrame(iris)
```{r}
t <- as.data.frame(Titanic)
training <- createDataFrame(t)
# Note in this case, Spark infers it is multinomial logistic regression, so family = "multinomial" is optional.
model <- spark.logit(df, Species ~ ., regParam = 0.056)
model <- spark.logit(training, Class ~ ., regParam = 0.07815179)
summary(model)
```

@@ -609,11 +609,12 @@ MLPC employs backpropagation for learning the model. We use the logistic loss fu

`spark.mlp` requires at least two columns in `data`: one named `"label"` and the other one `"features"`. The `"features"` column should be in libSVM-format.

We use iris data set to show how to use `spark.mlp` in classification.
```{r, warning=FALSE}
df <- createDataFrame(iris)
We use Titanic data set to show how to use `spark.mlp` in classification.
```{r}
t <- as.data.frame(Titanic)
training <- createDataFrame(t)
# fit a Multilayer Perceptron Classification Model
model <- spark.mlp(df, Species ~ ., blockSize = 128, layers = c(4, 3), solver = "l-bfgs", maxIter = 100, tol = 0.5, stepSize = 1, seed = 1, initialWeights = c(0, 0, 0, 0, 0, 5, 5, 5, 5, 5, 9, 9, 9, 9, 9))
model <- spark.mlp(training, Survived ~ Age + Sex, blockSize = 128, layers = c(2, 3), solver = "l-bfgs", maxIter = 100, tol = 0.5, stepSize = 1, seed = 1, initialWeights = c( 0, 0, 0, 5, 5, 5, 9, 9, 9))
```

To avoid lengthy display, we only present partial results of the model summary. You can check the full result from your sparkR shell.
@@ -630,7 +631,7 @@ options(ops)
```
```{r}
# make predictions use the fitted model
predictions <- predict(model, df)
predictions <- predict(model, training)
head(select(predictions, predictions$prediction))
```

@@ -769,12 +770,13 @@ predictions <- predict(rfModel, df)

`spark.bisectingKmeans` is a kind of [hierarchical clustering](https://en.wikipedia.org/wiki/Hierarchical_clustering) using a divisive (or "top-down") approach: all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.

```{r, warning=FALSE}
df <- createDataFrame(iris)
model <- spark.bisectingKmeans(df, Sepal_Length ~ Sepal_Width, k = 4)
```{r}
t <- as.data.frame(Titanic)
training <- createDataFrame(t)
model <- spark.bisectingKmeans(training, Class ~ Survived, k = 4)
summary(model)
fitted <- predict(model, df)
head(select(fitted, "Sepal_Length", "prediction"))
fitted <- predict(model, training)
head(select(fitted, "Class", "prediction"))
```

#### Gaussian Mixture Model
@@ -912,9 +914,10 @@ testSummary

### Model Persistence
The following example shows how to save/load an ML model by SparkR.
```{r, warning=FALSE}
irisDF <- createDataFrame(iris)
gaussianGLM <- spark.glm(irisDF, Sepal_Length ~ Sepal_Width + Species, family = "gaussian")
```{r}
t <- as.data.frame(Titanic)
training <- createDataFrame(t)
gaussianGLM <- spark.glm(training, Freq ~ Sex + Age, family = "gaussian")

# Save and then load a fitted MLlib model
modelPath <- tempfile(pattern = "ml", fileext = ".tmp")
@@ -925,7 +928,7 @@ gaussianGLM2 <- read.ml(modelPath)
summary(gaussianGLM2)

# Check model prediction
gaussianPredictions <- predict(gaussianGLM2, irisDF)
gaussianPredictions <- predict(gaussianGLM2, training)
head(gaussianPredictions)

unlink(modelPath)
2 changes: 1 addition & 1 deletion R/run-tests.sh
@@ -23,7 +23,7 @@ FAILED=0
LOGFILE=$FWDIR/unit-tests.out
rm -f $LOGFILE

SPARK_TESTING=1 $FWDIR/../bin/spark-submit --driver-java-options "-Dlog4j.configuration=file:$FWDIR/log4j.properties" --conf spark.hadoop.fs.default.name="file:///" $FWDIR/pkg/tests/run-all.R 2>&1 | tee -a $LOGFILE
SPARK_TESTING=1 $FWDIR/../bin/spark-submit --driver-java-options "-Dlog4j.configuration=file:$FWDIR/log4j.properties" --conf spark.hadoop.fs.defaultFS="file:///" $FWDIR/pkg/tests/run-all.R 2>&1 | tee -a $LOGFILE
FAILED=$((PIPESTATUS[0]||$FAILED))

NUM_TEST_WARNING="$(grep -c -e 'Warnings ----------------' $LOGFILE)"
2 changes: 1 addition & 1 deletion appveyor.yml
@@ -46,7 +46,7 @@ build_script:
- cmd: mvn -DskipTests -Psparkr -Phive -Phive-thriftserver package

test_script:
- cmd: .\bin\spark-submit2.cmd --conf spark.hadoop.fs.default.name="file:///" R\pkg\tests\run-all.R
- cmd: .\bin\spark-submit2.cmd --conf spark.hadoop.fs.defaultFS="file:///" R\pkg\tests\run-all.R

notifications:
- provider: Email
@@ -26,7 +26,6 @@
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.apache.spark.network.buffer.ManagedBuffer;
import org.apache.spark.network.buffer.NioManagedBuffer;
import org.apache.spark.network.client.ChunkReceivedCallback;
import org.apache.spark.network.client.RpcResponseCallback;
@@ -153,7 +153,8 @@ public void writeTo(ByteBuffer buffer) {
*
* Unlike getBytes this will not create a copy the array if this is a slice.
*/
public @Nonnull ByteBuffer getByteBuffer() {
@Nonnull
public ByteBuffer getByteBuffer() {
if (base instanceof byte[] && offset >= BYTE_ARRAY_OFFSET) {
final byte[] bytes = (byte[]) base;
