1 change: 1 addition & 0 deletions R/pkg/NAMESPACE
@@ -293,6 +293,7 @@ export("as.DataFrame",
"read.json",
"read.parquet",
"read.text",
"sparkLapply",
Contributor:
Can we just call it lapply? (PS: I'm not an R expert)

Contributor:
I like spark.lapply better. cc @shivaram on naming.

Member:
That would conflict with base::lapply. In other words, it would prevent the user from calling lapply on native R data in the same session, even when it has nothing to do with Spark.
(Longer explanation: since this is S3, method routing is by name, so a function with the same name in a package loaded later (SparkR) would override the one in the base package, which is loaded earlier.)

I'd like lapply better though ;)
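A minimal sketch of the masking issue described above (the override shown here is hypothetical and not part of this PR):

# If a later-attached package (or the global environment) defines a function
# named lapply, it masks base::lapply on the search path, so plain R code
# breaks even when no Spark is involved.
lapply <- function(X, FUN, ...) {
  stop("this hypothetical lapply expects a Spark context")
}
lapply(1:3, sqrt)        # now errors for ordinary R lists
base::lapply(1:3, sqrt)  # users would have to fully qualify the base version
rm(lapply)               # removing the mask restores normal behavior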

Member:
spark.lapply is nice too.

Contributor:
@felixcheung Thanks!

Contributor:
Yeah, I think let's stick to sparkr.lapply or spark.lapply. As we saw in SPARK-12148, overloading names can cause unforeseen conflicts.

Contributor:
+1 for spark.lapply

Contributor:
or lapply.spark? :)

Contributor (Author):
I am going for spark.lapply

Member:
Hi, @thunterdb. It's not updated yet.

"sql",
"str",
"tableToDF",
41 changes: 41 additions & 0 deletions R/pkg/R/context.R
@@ -226,6 +226,47 @@ setCheckpointDir <- function(sc, dirName) {
invisible(callJMethod(sc, "setCheckpointDir", suppressWarnings(normalizePath(dirName))))
}

#' @title Run a function over a list of elements, distributing the computations with Spark.
#'
#' @description
#' Applies a function to the elements of a list in a manner similar to doParallel or lapply.
#' The computations are distributed using Spark. It is conceptually the same as the following code:
#' lapply(list, func)
#'
#' Known limitations:
#' - variable scoping and capture: compared to R's rich support for variable resolution, the
#'   distributed nature of SparkR limits how variables are resolved at runtime. All the variables
#'   that are available through lexical scoping are embedded in the closure of the function and
#'   are available as read-only variables within the function. Environment variables should be
#'   stored into temporary variables outside the function, and not accessed directly within the
#'   function.
#'
#' - loading external packages: In order to use a package, you need to load it inside the
#' closure. For example, if you rely on the MASS package, here is how you would use it:
#'
#'\dontrun{
#' train <- function(hyperparam) {
#' library(MASS)
#'   model <- lm.ridge(y ~ x + z, data = data, lambda = hyperparam)
#'   model
#' }
#'}
#'
#' @param list the list of elements
#' @param func a function that takes one argument.
#' @examples
#' # A trivial example that doubles the values in a list
#'\dontrun{
#' doubled <- sparkLapply(1:10, function(x){2 * x})
Member:
Here, too.

#'}
spark.lapply <- function(list, func) {
sc <- get(".sparkRjsc", envir = .sparkREnv)
Contributor:
One minor thing: all the existing functions like parallelize take in a Spark context as the first argument. We've discussed removing this in the past (see #9192), but we didn't reach a resolution on it.

So, to be consistent, it'd be better to take in sc as the first argument here?

Contributor (Author):
Sure, I thought it was part of the design, but I am happy to do that as it simplifies that piece of code.

rdd <- parallelize(sc, list, length(list))
Member:
I'm guessing people could get confused about when to call this vs. when to call the newly proposed dapply (#12493). Perhaps we need to explain this more and check class(list) in case someone passes a Spark DataFrame to this function.

Contributor:
dapply and spark.lapply have different semantics. No need to check class(list) here, as a data frame can be treated as a list of columns. parallelize() will issue a warning for a data frame here: https://github.com/apache/spark/blob/master/R/pkg/R/context.R#L110

Member (@felixcheung, Apr 20, 2016):
It actually fails here instead: https://github.com/apache/spark/blob/master/R/pkg/R/context.R#L116
A Spark DataFrame does not satisfy is.data.frame().
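A small illustration of the distinction being discussed (illustrative only; the objects here are hypothetical and not part of the diff):

local_df <- data.frame(x = 1:3, y = 4:6)
is.data.frame(local_df)   # TRUE: parallelize() warns and treats it as a list of columns
as.list(local_df)         # a list of two columns, which spark.lapply can iterate over
# A Spark DataFrame, by contrast, is an S4 object: is.data.frame() returns FALSE,
# so the data-frame check discussed above does not catch it and the later
# coercion is roughly where the comment above says it fails.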

  # Apply func to each element of the RDD and collect the results back to the driver
  results <- map(rdd, func)
  local <- collect(results)
  local
}
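As a usage illustration of the scoping note in the documentation above, here is a hedged sketch (not part of the diff; the variable names and the environment variable are made up, and it assumes a SparkR session is already initialized):

# Read the environment variable once on the driver and store it in a local
# variable; the closure captures this local copy as a read-only value, rather
# than reading the environment on the workers.
base_seed <- as.integer(Sys.getenv("BASE_SEED", unset = "42"))

means <- spark.lapply(1:4, function(i) {
  set.seed(base_seed + i)   # base_seed arrives through the captured closure
  mean(rnorm(100))
})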

#' Set new log level
#'
#' Set new log level: "ALL", "DEBUG", "ERROR", "FATAL", "INFO", "OFF", "TRACE", "WARN"
5 changes: 5 additions & 0 deletions R/pkg/inst/tests/testthat/test_context.R
@@ -141,3 +141,8 @@ test_that("sparkJars sparkPackages as comma-separated strings", {
expect_that(processSparkJars(f), not(gives_warning()))
expect_match(processSparkJars(f), f)
})

test_that("sparkLapply should perform simple transforms", {
Member:
And, here. :)

doubled <- spark.lapply(1:10, function(x){2 * x})
expect_equal(doubled, as.list(2 * 1:10))
})
Contributor:
Would be good to add a test where we capture some environment variables and/or use a package. Also we should update https://github.com/apache/spark/blob/master/docs/sparkr.md but we can open another JIRA for that I guess.
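A hedged sketch of what such a test could look like (illustrative only, not part of this PR; it exercises closure capture of a local variable and loading a package inside the closure):

test_that("spark.lapply captures local variables and loads packages in the closure", {
  offset <- 100L   # captured through lexical scoping and shipped with the closure
  shifted <- spark.lapply(1:3, function(x) {
    library(stats)   # attach a package inside the closure before using it
    x + offset
  })
  expect_equal(shifted, as.list(101:103))
})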