Commit 92dda99

Add migration note, fix new (?) doc error
1 parent 98ef77e commit 92dda99

File tree

2 files changed: +6, −1 lines changed

R/pkg/R/SQLContext.R

Lines changed: 1 addition & 0 deletions

@@ -388,6 +388,7 @@ read.orc <- function(path, ...) {
 #' Loads a Parquet file, returning the result as a SparkDataFrame.
 #'
 #' @param path path of file to read. A vector of multiple paths is allowed.
+#' @param ... additional external data source specific named properties.
 #' @return SparkDataFrame
 #' @rdname read.parquet
 #' @name read.parquet
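The `...` parameter documented above is forwarded to the underlying data source as named options. A minimal SparkR sketch of how it is used (assumes a running Spark session; the `mergeSchema` option name comes from Spark's Parquet source, and the paths are hypothetical):

```r
library(SparkR)

# Start a local Spark session (assumes SparkR and Spark are installed).
sparkR.session(master = "local[1]")

# The `...` parameter forwards named options to the Parquet source,
# e.g. mergeSchema, which merges the schemas of all part files.
df <- read.parquet("/tmp/example.parquet", mergeSchema = "true")

# A vector of multiple paths is also allowed, per the @param path doc:
# df2 <- read.parquet(c("/tmp/p1.parquet", "/tmp/p2.parquet"))

sparkR.session.stop()
```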

docs/sparkr.md

Lines changed: 5 additions & 1 deletion

@@ -667,8 +667,12 @@ You can inspect the search path in R with [`search()`](https://stat.ethz.ch/R-ma

 ## Upgrading to SparkR 2.3.1 and above

-- In SparkR 2.3.0 and earlier, the `start` parameter of `substr` method was wrongly subtracted by one and considered as 0-based. This can lead to inconsistent substring results and also does not match with the behaviour with `substr` in R. In version 2.3.1 and later, it has been fixed so the `start` parameter of `substr` method is now 1-base. As an example, `substr(lit('abcdef'), 2, 4))` would result to `abc` in SparkR 2.3.0, and the result would be `bcd` in SparkR 2.3.1.
+- In SparkR 2.3.0 and earlier, the `start` parameter of `substr` method was wrongly subtracted by one and considered as 0-based. This can lead to inconsistent substring results and also does not match with the behaviour with `substr` in R. In version 2.3.1 and later, it has been fixed so the `start` parameter of `substr` method is now 1-based. As an example, `substr(lit('abcdef'), 2, 4))` would result to `abc` in SparkR 2.3.0, and the result would be `bcd` in SparkR 2.3.1.

 ## Upgrading to SparkR 2.4.0

 - Previously, we don't check the validity of the size of the last layer in `spark.mlp`. For example, if the training data only has two labels, a `layers` param like `c(1, 3)` doesn't cause an error previously, now it does.
+
+## Upgrading to SparkR 3.0.0
+
+- The deprecated methods `parquetFile`, `jsonRDD` and `jsonFile` in `SQLContext` have been removed. Use `read.parquet` and `read.json`.
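The 1-based behaviour that the first migration note describes can be checked against base R, whose `substr` has always been 1-based. A short sketch (plain R, no Spark needed; the SparkR calls in the comments are illustrative only):

```r
# Base R substr is 1-based: characters 2 through 4 of "abcdef".
substr("abcdef", 2, 4)
# [1] "bcd"

# SparkR 2.3.1+ matches this: substr(lit("abcdef"), 2, 4) on a column
# selects "bcd", whereas SparkR 2.3.0 would have returned "abc" for
# the same call because start was wrongly treated as 0-based.
```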
